Sample records for typical values obtained

  1. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Through comparing to relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.

  2. The amount effect and marginal value.

    PubMed

    Rachlin, Howard; Arfer, Kodi B; Safin, Vasiliy; Yen, Ming

    2015-07-01

    The amount effect of delay discounting (by which the value of larger reward amounts is discounted by delay at a lower rate than that of smaller amounts) strictly implies that value functions (value as a function of amount) are steeper at greater delays than they are at lesser delays. That is, the amount effect and the difference in value functions at different delays are actually a single empirical finding. Amount effects of delay discounting are typically found with choice experiments. Value functions for immediate rewards have been empirically obtained by direct judgment. (Value functions for delayed rewards have not been previously obtained.) The present experiment obtained value functions for both immediate and delayed rewards by direct judgment and found them to be steeper when the rewards were delayed--hence, finding an amount effect with delay discounting. © Society for the Experimental Analysis of Behavior.
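
    As a hedged aside, the amount effect is usually formalized with the hyperbolic discounting model that is standard in this literature; the specific functional form below is an illustration and is not taken from this abstract.

```latex
% Hyperbolic delay discounting of an amount A at delay D
V(A, D) = \frac{A}{1 + k(A)\, D},
% Amount effect: the fitted discount parameter k decreases with the reward amount,
% e.g. k(A_{\mathrm{large}}) < k(A_{\mathrm{small}}), so larger amounts lose
% proportionally less value per unit of delay.
```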

  3. A concept for improved fire-safety through coated fillers

    NASA Technical Reports Server (NTRS)

    Ramohalli, K.

    1977-01-01

    A possible method is examined for obtaining a high value of thermal conductivity before ignition and a low value after ignition in standard composite materials. The idea is to coat fiberglass, alumina trihydrate, and similar fillers with specially selected chemicals prior to using polymer resins. The amount of the coat constitutes typically less than 5% of the material's total weight. The experimental results obtained are consistent with the basic concept.

  4. Asymptotic Equivalence of Probability Measures and Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Touchette, Hugo

    2018-03-01

    Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
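
    A hedged sketch of the type of sufficient condition involved (a paraphrase of the general idea in terms of the Radon-Nikodym derivative, not the paper's exact statement):

```latex
% If the Radon-Nikodym derivative grows sub-exponentially in n, i.e.
\frac{1}{n}\,\ln \frac{dP_n}{dQ_n} \;\xrightarrow[n\to\infty]{}\; 0
\quad \text{(in an appropriate probabilistic sense)},
% then the concentration (large-deviation) behaviour of M_n is the same under
% P_n and Q_n, so both measures predict the same typical values of M_n.
```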

  5. Typical entanglement

    NASA Astrophysics Data System (ADS)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2013-05-01

Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
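
    A minimal numerical sketch of the quantity studied here, not the authors' Coulomb-gas calculation: sample Haar-random pure states and compute the reduced-density-matrix spectrum directly; the dimensions (N=8, M=64) and the sample size are arbitrary choices.

```python
import numpy as np

def reduced_spectrum(N, M, rng=np.random.default_rng()):
    # Draw |psi> Haar-uniformly in the N*M-dimensional Hilbert space:
    # complex Gaussian amplitudes, then normalize.
    psi = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
    psi /= np.linalg.norm(psi)
    # Reduced density matrix of the N-dimensional subsystem A.
    rho_A = psi @ psi.conj().T
    evals = np.linalg.eigvalsh(rho_A)
    purity = float(np.sum(evals**2))
    return np.sort(evals)[::-1], purity

# Empirical "typical" spectrum: average the ordered eigenvalues over many draws.
spectra = [reduced_spectrum(8, 64)[0] for _ in range(1000)]
print(np.mean(spectra, axis=0))
```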

  6. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T=1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.

  7. The Statistics of wood assays for preservative retention

    Treesearch

    Patricia K. Lebow; Scott W. Conklin

    2011-01-01

    This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pestehe, S. J., E-mail: sjpest@tabrizu.ac.ir; Mohammadnejad, M.; Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz

A theoretical model is developed to study the signals from a typical dynamic Faraday cup, and using this model the output signals from this structure are obtained. A detailed discussion on the signal structure, using different experimental conditions, is also given. It is argued that there is a possibility of determining the total charge of the generated ion pulse, the maximum velocity of the ions, ion velocity distribution, and the number of ion species for mixed working gases, under certain conditions. In addition, the number of different ionization stages, the number of different pinches in one shot, and the number of different existing acceleration mechanisms can also be determined, provided that the mentioned conditions are satisfied. An experiment is carried out on the Filippov type 90 kJ Sahand plasma focus using Ar as the working gas at the pressure of 0.25 Torr. The data from a typical shot are fitted to a signal from the model and the total charge of the related energetic ion pulse is deduced using the values of the obtained fit parameters. Good agreement between the obtained amount of the total charge and the values obtained during other experiments on the same plasma focus device is observed.

  9. Energy levels distribution in supersaturated silicon with titanium for photovoltaic applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pérez, E., E-mail: eduper@ele.uva.es; Castán, H.; García, H.

    2015-01-12

In the attempt to form an intermediate band in the bandgap of silicon substrates to give them the capability to absorb infrared radiation, we studied the deep levels in supersaturated silicon with titanium. The technique used to characterize the energy levels was thermal admittance spectroscopy. Our experimental results showed that in samples with titanium concentration just under the Mott limit there was a relationship between the activation energy value and the capture cross section value. This relationship obeys the well-known Meyer-Neldel rule, which typically appears in processes involving multiple excitations, like carrier capture/emission in deep levels, and it is generally observed in disordered systems. The obtained characteristic Meyer-Neldel parameters were Tmn = 176 K and kTmn = 15 meV. The energy value could be associated with the typical energy of the phonons in the substrate. The almost perfect fit of all experimental data to the same straight line provides further evidence of the validity of the Meyer-Neldel rule, and may contribute to obtaining a deeper insight into the ultimate meaning of this phenomenon.
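
    As a hedged aside, the Meyer-Neldel (compensation) rule is commonly written as an exponential dependence of the prefactor on the activation energy; in that standard form the two quoted parameters are mutually consistent:

```latex
% Meyer-Neldel rule (one common form): prefactor grows exponentially with E_a
X_0(E_a) = X_{00}\, \exp\!\left(\frac{E_a}{k_B T_{MN}}\right),
\qquad
k_B T_{MN} = (8.617\times 10^{-5}\ \mathrm{eV\,K^{-1}})\,(176\ \mathrm{K}) \approx 15\ \mathrm{meV}.
```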

  10. A Quantitative Comparison of Single-Dye Tracking Analysis Tools Using Monte Carlo Simulations

    PubMed Central

    McColl, James; Irvine, Kate L.; Davis, Simon J.; Gay, Nicholas J.; Bryant, Clare E.; Klenerman, David

    2013-01-01

    Single-particle tracking (SPT) is widely used to study processes from membrane receptor organization to the dynamics of RNAs in living cells. While single-dye labeling strategies have the benefit of being minimally invasive, this comes at the expense of data quality; typically a data set of short trajectories is obtained and analyzed by means of the mean square displacements (MSD) or the distribution of the particles’ displacements in a set time interval (jump distance, JD). To evaluate the applicability of both approaches, a quantitative comparison of both methods under typically encountered experimental conditions is necessary. Here we use Monte Carlo simulations to systematically compare the accuracy of diffusion coefficients (D-values) obtained for three cases: one population of diffusing species, two populations with different D-values, and a population switching between two D-values. For the first case we find that the MSD gives more or equally accurate results than the JD analysis (relative errors of D-values <6%). If two diffusing species are present or a particle undergoes a motion change, the JD analysis successfully distinguishes both species (relative error <5%). Finally we apply the JD analysis to investigate the motion of endogenous LPS receptors in live macrophages before and after treatment with methyl-β-cyclodextrin and latrunculin B. PMID:23737978

  11. A quantitative comparison of single-dye tracking analysis tools using Monte Carlo simulations.

    PubMed

    Weimann, Laura; Ganzinger, Kristina A; McColl, James; Irvine, Kate L; Davis, Simon J; Gay, Nicholas J; Bryant, Clare E; Klenerman, David

    2013-01-01

    Single-particle tracking (SPT) is widely used to study processes from membrane receptor organization to the dynamics of RNAs in living cells. While single-dye labeling strategies have the benefit of being minimally invasive, this comes at the expense of data quality; typically a data set of short trajectories is obtained and analyzed by means of the mean square displacements (MSD) or the distribution of the particles' displacements in a set time interval (jump distance, JD). To evaluate the applicability of both approaches, a quantitative comparison of both methods under typically encountered experimental conditions is necessary. Here we use Monte Carlo simulations to systematically compare the accuracy of diffusion coefficients (D-values) obtained for three cases: one population of diffusing species, two populations with different D-values, and a population switching between two D-values. For the first case we find that the MSD gives more or equally accurate results than the JD analysis (relative errors of D-values <6%). If two diffusing species are present or a particle undergoes a motion change, the JD analysis successfully distinguishes both species (relative error <5%). Finally we apply the JD analysis to investigate the motion of endogenous LPS receptors in live macrophages before and after treatment with methyl-β-cyclodextrin and latrunculin B.
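
    A minimal sketch of the two estimators compared in this abstract, applied to synthetic 2D Brownian trajectories. The diffusion coefficient, time step and track lengths below are arbitrary placeholders, and the authors' full JD analysis fits the jump-distance distribution rather than the simple median-based estimate used here.

```python
import numpy as np

def simulate_tracks(n_tracks=500, n_steps=20, D=0.5, dt=0.05, seed=0):
    """Synthetic 2D Brownian trajectories; per-axis step variance is 2*D*dt."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_tracks, n_steps, 2))
    return np.cumsum(steps, axis=1)

def D_from_msd(tracks, dt, lag=1):
    """Mean square displacement at one lag: MSD(tau) = 4*D*tau in 2D."""
    disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
    return np.mean(np.sum(disp**2, axis=-1)) / (4 * lag * dt)

def D_from_jd(tracks, dt, lag=1):
    """Jump-distance view: r^2 is exponentially distributed with mean 4*D*tau,
    so median(r^2) = 4*D*tau*ln(2) for a single diffusing species."""
    disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
    r2 = np.sum(disp**2, axis=-1).ravel()
    return np.median(r2) / (4 * lag * dt * np.log(2))

tracks = simulate_tracks()
print("D (MSD):", D_from_msd(tracks, dt=0.05))
print("D (JD): ", D_from_jd(tracks, dt=0.05))
```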

  12. Magnetic resonance imaging DTI-FT study on schizophrenic patients with typical negative first symptoms.

    PubMed

    Gu, Chengyu; Zhang, Ying; Wei, Fuquan; Cheng, Yougen; Cao, Yulin; Hou, Hongtao

    2016-09-01

Magnetic resonance imaging (MRI) with diffusion-tensor imaging (DTI) together with a white matter fiber tracking (FT) technique was used to assess different brain white matter structures and functionalities in schizophrenic patients with typical first negative symptoms. In total, 30 schizophrenic patients with typical first negative symptoms, comprising the observation group, were paired 1:1 according to gender, age, right-handedness, and education, with 30 healthy individuals in a control group. Individuals in each group underwent routine MRI and DTI examination of the brain, and diffusion-tensor tractography (DTT) data were obtained through whole brain analysis based on voxel and tractography. The results were expressed by fractional anisotropy (FA) values. The schizophrenic patients were evaluated using a positive and negative symptom scale (PANSS) as well as a Global Assessment Scale (GAS). The results of the study showed that routine MRIs identified no differences between the two groups. However, compared with the control group, the FA values obtained by DTT from the deep left prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus and part of the corpus callosum were significantly lower in the observation group (P<0.05). The PANSS positive scale value in the observation group averaged 7.7±1.5, and the negative scale averaged 46.6±5.9, while the general psychopathology scale averaged 65.4±10.3, and GAS averaged 53.8±19.2. According to the Pearson statistical analysis, the FA values of the left deep prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus and part of the corpus callosum in the observation group were negatively correlated with the negative scale (P<0.05), and positively correlated with GAS (P<0.05). In conclusion, a decrease in the FA values of the left deep prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus and part of the corpus callosum may be associated with schizophrenia with typical first negative symptoms and the application of MRI DTI-FT can improve diagnostic accuracy.

  13. Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate

    PubMed Central

    2013-01-01

Background: The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings: Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions: Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. PMID:23497145

  14. Incorporating geographical factors with artificial neural networks to predict reference values of erythrocyte sedimentation rate.

    PubMed

    Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao

    2013-03-12

The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.
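
    A minimal sketch of the kind of multi-layer feed-forward model described. The synthetic data, the target relationship and all hyperparameters below are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder geographical predictors: altitude (m), sunshine hours,
# relative humidity (%), temperature (C), precipitation (mm).
X = rng.uniform([0, 1000, 20, -10, 50], [4500, 3200, 90, 30, 2000], size=(300, 5))
# Placeholder "local normal ESR" target with an arbitrary dependence plus noise.
y = 10 + 0.002 * X[:, 0] + 0.05 * X[:, 3] + rng.normal(0, 1, 300)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted reference ESR values for new sites
```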

  15. The zeta potential of extended dielectrics and conductors in terms of streaming potential and streaming current measurements.

    PubMed

    Gallardo-Moreno, Amparo M; Vadillo-Rodríguez, Virginia; Perera-Núñez, Julia; Bruque, José M; González-Martín, M Luisa

    2012-07-21

    The electrical characterization of surfaces in terms of the zeta potential (ζ), i.e., the electric potential contributing to the interaction potential energy, is of major importance in a wide variety of industrial, environmental and biomedical applications in which the integration of any material with the surrounding media is initially mediated by the physico-chemical properties of its outer surface layer. Among the different existing electrokinetic techniques for obtaining ζ, streaming potential (V(str)) and streaming current (I(str)) are important when dealing with flat-extended samples. Mostly dielectric materials have been subjected to this type of analysis and only a few papers can be found in the literature regarding the electrokinetic characterization of conducting materials. Nevertheless, a standardized procedure is typically followed to calculate ζ from the measured data and, importantly, it is shown in this paper that such a procedure leads to incorrect zeta potential values when conductors are investigated. In any case, assessment of a reliable numerical value of ζ requires careful consideration of the origin of the input data and the characteristics of the experimental setup. In particular, it is shown that the cell resistance (R) typically obtained through a.c. signals (R(a.c.)), and needed for the calculations of ζ, always underestimates the zeta potential values obtained from streaming potential measurements. The consideration of R(EK), derived from the V(str)/I(str) ratio, leads to reliable values of ζ when dielectrics are investigated. For metals, the contribution of conductivity of the sample to the cell resistance provokes an underestimation of R(EK), which leads to unrealistic values of ζ. For the electrical characterization of conducting samples I(str) measurements constitute a better choice. In general, the findings gathered in this manuscript establish a measurement protocol for obtaining reliable zeta potentials of dielectrics and conductors based on the intrinsic electrokinetic behavior of both types of samples.
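
    For context, a hedged sketch of the standard Helmholtz-Smoluchowski relations that connect the measured quantities to ζ. Symbols are assumptions of this sketch: η is the electrolyte viscosity, ε_r ε_0 the permittivity, L and A the length and cross-section of the streaming channel, and R the cell resistance whose choice (R(a.c.) versus R(EK)) the paper discusses.

```latex
% Streaming potential: zeta from the slope of V_str versus the applied pressure difference
\zeta_{V} = \frac{dV_{str}}{d\Delta p}\,\frac{\eta}{\varepsilon_r \varepsilon_0}\,\frac{L}{A\,R},
\qquad
% Streaming current: zeta without the cell-resistance term
\zeta_{I} = \frac{dI_{str}}{d\Delta p}\,\frac{\eta}{\varepsilon_r \varepsilon_0}\,\frac{L}{A}.
```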

  16. Use of computer code for dose distribution studies in a 60Co industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point kernel technique; build-up factor fitting is done by geometrical progression, and combinatorial geometry is used for system description. The main modifications to the code were related to source simulation: point sources were used instead of pencils, and energy and anisotropic emission spectra were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
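
    A toy point-kernel sketch in the spirit of the QAD-type calculation described. The source discretization, the attenuation coefficient and the linear buildup approximation are illustrative assumptions, not the code or data used in the paper, and the result is left in arbitrary units.

```python
import numpy as np

MU = 0.063  # assumed linear attenuation coefficient (1/cm) for ~1.25 MeV photons in water

def buildup(mu_r):
    """Crude linear buildup approximation; QAD-CGGP uses fitted geometric-progression coefficients."""
    return 1.0 + mu_r

def relative_dose(point, sources, strengths):
    """Point-kernel sum: S * B(mu*r) * exp(-mu*r) / (4*pi*r^2), in arbitrary units.
    An absolute value in kGy would additionally need a flux-to-dose conversion factor."""
    point = np.asarray(point, dtype=float)
    total = 0.0
    for src, s in zip(sources, strengths):
        r = np.linalg.norm(point - np.asarray(src, dtype=float))
        mu_r = MU * r
        total += s * buildup(mu_r) * np.exp(-mu_r) / (4.0 * np.pi * r**2)
    return total

# A source pencil approximated by a line of point sources (coordinates in cm).
sources = [(0.0, 0.0, z) for z in np.linspace(-25.0, 25.0, 11)]
print(relative_dose((30.0, 0.0, 0.0), sources, strengths=[1.0] * len(sources)))
```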

  17. Demystifying Introductory Chemistry. Part 3: Ionization Energies, Electronegativity, Polar Bonds, and Partial Charges.

    ERIC Educational Resources Information Center

    Spencer, James; And Others

    1996-01-01

    Shows how ionization energies provide a convenient method for obtaining electronegativity values that is simpler than the conventional methods. Demonstrates how approximate atomic charges can be calculated for polar molecules and how this method of determining electronegativities may lead to deeper insights than are typically possible for the…

  18. Let the market help prescribe forest management practices

    Treesearch

    Gary W. Zinn; Edward Pepke

    1989-01-01

    To obtain the best economic returns from a hardwood forest, you must consider markets. Management decisions made now will affect a stand's future character and value, whether or not the decision results in immediate timber sales. Progressive forest landowners will have a management plan for their woodlots. Typically, such plans are largely land- and resource-...

  19. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface

    PubMed Central

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2016-01-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor. PMID:26840321

  20. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface.

    PubMed

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A; Castellanos-Ramos, Julián; Hidalgo-López, José A

    2016-02-01

    Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need of analog-to-digital converters to obtain resistance values in the sensor and where the conditioning circuit is reduced to the use of a capacitor in each of the columns of the matrix. The circuit allows parallel measurements of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require external elements to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining for a new model of calibration, a maximum relative error of 0.066% in a range of resistor values which correspond to a tactile sensor.
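
    A hedged sketch of the underlying direct-interface principle: the FPGA times the discharge of the column capacitor through the sensor resistance down to its input threshold voltage, so the resistance follows from the measured time. The component values, supply voltage, threshold and the single-exponential model are illustrative assumptions, not the paper's circuit.

```python
import math

def resistance_from_discharge(t_discharge, C=100e-9, v0=3.3, v_threshold=1.4):
    """RC discharge to a logic-low threshold: v(t) = v0*exp(-t/(R*C)),
    so R = t / (C * ln(v0 / v_threshold))."""
    return t_discharge / (C * math.log(v0 / v_threshold))

def discharge_time(R, C=100e-9, v0=3.3, v_threshold=1.4):
    """Inverse relation, useful for checking a calibration model."""
    return R * C * math.log(v0 / v_threshold)

t = discharge_time(R=10e3)            # ~0.86 ms for the assumed component values
print(resistance_from_discharge(t))   # recovers 10 kOhm
```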

  1. Kinematic analysis of articulatory coupling in acquired apraxia of speech post-stroke.

    PubMed

    Bartle-Meyer, Carly J; Goozée, Justine V; Murdoch, Bruce E; Green, Jordan R

    2009-02-01

    Electromagnetic articulography was employed to investigate the strength of articulatory coupling and hence the degree of functional movement independence between individual articulators in apraxia of speech (AOS). Tongue-tip, tongue-back and jaw movement was recorded from five speakers with AOS and a concomitant aphasia (M = 53.6 years; SD = 12.60) during /ta, sa, la, ka/ syllable repetitions, spoken at typical and fast rates of speech. Covariance values were calculated for each articulatory pair to gauge the strength of articulatory coupling. The results obtained for each of the participants with AOS were individually compared to those obtained by a control group (n = 12; M = 52.08 years; SD = 12.52). Comparisons were made between the typical rate productions of the control group and the typical and fast rate productions of the participants with AOS. In comparison to the control group, four speakers with AOS exhibited significantly stronger articulatory coupling for alveolar and/or velar speech targets, during typical and/or fast rate conditions, suggesting decreased functional movement independence. The reduction in functional movement independence might have reflected an attempt to simplify articulatory control or a decrease in the ability to differentially control distinct articulatory regions.

  2. Sensitivity of solar-cell performance to atmospheric variables. 1: Single cell

    NASA Technical Reports Server (NTRS)

    Klucher, T. M.

    1976-01-01

The short circuit current of a typical silicon solar cell under direct solar radiation was measured for a range of turbidity, water vapor content, and air mass to determine the relation of the solar cell calibration value (current-to-intensity ratio) to those atmospheric variables. A previously developed regression equation was modified to describe the relation between calibration value, turbidity, water vapor content, and air mass. Based on the value of the constants obtained by a least squares fit of the data to the equation, it was found that turbidity lowers the calibration value, while an increase in water vapor increases it. Cell calibration values exhibited a change of about 6% over the range of atmospheric conditions experienced.

  3. Magnetic field `flyby' measurement using a smartphone's magnetometer and accelerometer simultaneously

    NASA Astrophysics Data System (ADS)

    Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2017-12-01

    The spatial dependence of magnetic fields in simple configurations is a common topic in introductory electromagnetism lessons, both in high school and in university courses. In typical experiments, magnetic fields and distances are obtained taking point-by-point values using a Hall sensor and a ruler, respectively. Here, we show how to take advantage of the smartphone capabilities to get simultaneous measures with the built-in accelerometer and magnetometer and to obtain the spatial dependence of magnetic fields. We consider a simple setup consisting of a smartphone mounted on a track whose direction coincides with the axis of a coil. While the smartphone is moving on the track, both the magnetic field and the distance from the center of the coil (integrated numerically from the acceleration values) are simultaneously obtained. This methodology can easily be extended to more complicated setups.
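
    A hedged sketch of the data processing described: integrate the accelerometer signal twice to get the position along the track, then compare the magnetometer reading with the on-axis field of a circular coil. The coil parameters, sampling rate and the constant-acceleration input are placeholders, not the values from the experiment.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def positions_from_acceleration(a, dt, x0=0.0, v0=0.0):
    """Twice-integrated acceleration (cumulative trapezoid) -> position along the track."""
    v = v0 + np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    x = x0 + np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return x

def coil_axial_field(x, N=1000, I=1.0, R=0.1):
    """On-axis field of a circular coil: B(x) = mu0*N*I*R^2 / (2*(R^2 + x^2)^(3/2))."""
    return MU0 * N * I * R**2 / (2.0 * (R**2 + x**2) ** 1.5)

dt = 0.01                                      # assumed 100 Hz sampling
a = np.full(200, 0.05)                         # placeholder: constant 0.05 m/s^2 push
x = positions_from_acceleration(a, dt) - 1.0   # start 1 m before the coil centre
print(np.c_[x[:5], coil_axial_field(x[:5])])   # position vs. predicted field
```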

  4. Evaluation of electrical conductivity of Cu and Al through sub microsecond underwater electrical wire explosion

    NASA Astrophysics Data System (ADS)

    Sheftman, D.; Shafer, D.; Efimov, S.; Krasik, Ya. E.

    2012-03-01

    Sub-microsecond timescale underwater electrical wire explosions using Cu and Al materials have been conducted. Current and voltage waveforms and time-resolved streak images of the discharge channel, coupled to 1D magneto-hydrodynamic simulations, have been used to determine the electrical conductivity of the metals for the range of conditions between hot liquid metal and strongly coupled non-ideal plasma, in the temperature range of 10-60 KK. The results of these studies showed that the conductivity values obtained are typically lower than those corresponding to modern theoretical electrical conductivity models and provide a transition between the conductivity values obtained in microsecond time scale explosions and those obtained in nanosecond time scale wire explosions. In addition, the measured wire expansion shows good agreement with equation of state tables.

  5. Magnetic Field "Flyby" Measurement Using a Smartphone's Magnetometer and Accelerometer Simultaneously

    ERIC Educational Resources Information Center

    Monteiro, Martin; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2017-01-01

    The spatial dependence of magnetic fields in simple configurations is a common topic in introductory electromagnetism lessons, both in high school and in university courses. In typical experiments, magnetic fields and distances are obtained taking point-by-point values using a Hall sensor and a ruler, respectively. Here, we show how to take…

  6. Influence of season and type of restaurants on sashimi microbiota.

    PubMed

    Miguéis, S; Moura, A T; Saraiva, C; Esteves, A

    2016-10-01

In recent years, an increase in the consumption of Japanese food in European countries has been observed, including in Portugal. These specialities made with raw fish, typical Japanese meals, have been prepared in both typical and non-typical restaurants, and represent a challenge to risk analysis in HACCP plans. The aim of this study was to evaluate the influence of the type of restaurant, season and type of fish used on sashimi microbiota. Sashimi samples (n = 114) were directly collected from 23 sushi restaurants and were classified as winter and summer samples. They were also categorized according to the type of restaurant where they were obtained: typical or non-typical. The samples were processed using international standard procedures. A moderate seasonal influence on the microbiota was observed based on counts of mesophilic aerobic bacteria, psychrotrophic microorganisms, lactic acid bacteria, Pseudomonas spp., H2S-positive bacteria, moulds and Bacillus cereus. During the summer season, samples classified as unacceptable or potentially hazardous were observed. Non-typical restaurants had most of the unacceptable/potentially hazardous samples (83.33%). These unacceptable results were due to high counts of pathogenic bacteria such as Listeria monocytogenes and Staphylococcus aureus. No significant differences were observed in microbiota counts from different fish species. The need to implement more accurate food safety systems was quite evident, especially in the warmer season, as well as in restaurants where other kinds of food, apart from Japanese meals, were prepared. © Crown copyright 2016.

  7. Sum-over-states density functional perturbation theory: Prediction of reliable 13C, 15N, and 17O nuclear magnetic resonance chemical shifts

    NASA Astrophysics Data System (ADS)

    Olsson, Lars; Cremer, Dieter

    1996-11-01

    Sum-over-states density functional perturbation theory (SOS-DFPT) has been used to calculate 13C, 15N, and 17O NMR chemical shifts of 20 molecules, for which accurate experimental gas-phase values are available. Compared to Hartree-Fock (HF), SOS-DFPT leads to improved chemical shift values and approaches the degree of accuracy obtained with second order Møller-Plesset perturbation theory (MP2). This is particularly true in the case of 15N chemical shifts where SOS-DFPT performs even better than MP2. Additional improvements of SOS-DFPT chemical shifts can be obtained by empirically correcting diamagnetic and paramagnetic contributions to compensate for deficiencies which are typical of DFT.

  8. Environmental degradation and remediation: is economics part of the problem?

    PubMed

    Dore, Mohammed H I; Burton, Ian

    2003-01-01

It is argued that standard environmental economics and 'ecological economics' have the same fundamentals of valuation in terms of money, based on a demand curve derived from utility maximization. But this approach leads to three different measures of value. An invariant measure of value exists only if the consumer has 'homothetic preferences'. In order to obtain a numerical estimate of value, specific functional forms are necessary, but typically these estimates do not converge. This is due to the fact that the underlying economic model is not structurally stable. According to neoclassical economics, any environmental remediation can be justified only in terms of increases in consumer satisfaction, balancing marginal gains against marginal costs. It is not surprising that the optimal policy obtained from this approach suggests only small reductions in greenhouse gases. We show that a unidimensional metric of consumer's utility measured in dollar terms can only trivialize the problem of global climate change.

  9. On-line sulfur isotope analysis of organic material by direct combustion: Preliminary results and potential applications

    USGS Publications Warehouse

    Kester, C.L.; Rye, R.O.; Johnson, C.A.; Schwartz, C.H.; Holmes, C.H.

    2001-01-01

Sulfur isotopes have received little attention in ecology studies because plant and animal materials typically have low sulfur concentrations (< 1 wt.%) necessitating labor-intensive chemical extraction prior to analysis. To address the potential of direct combustion of organic material in an elemental analyzer coupled with a mass spectrometer, we compared results obtained by direct combustion to results obtained by sulfur extraction with Eschka's mixture. Direct combustion of peat and animal tissue gave reproducibility of better than 0.5‰ and, on average, values are 0.8‰ higher than values obtained by Eschka extraction. Successful direct combustion of organic material appears to be a function of sample matrix and sulfur concentration. Initial results indicate that direct combustion provides fast, reliable results with minimal preparation. Pilot studies underway include defining bear diets and examining fluctuations between freshwater and brackish water in coastal environments.

  10. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missings are typically lowered with respect to inbreeding coefficients estimated by discarding the missings. Accounting for missings by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
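
    A minimal sketch of the complete-case quantities that the paper argues can be biased by non-random missingness: the chi-square test for Hardy-Weinberg proportions and the inbreeding coefficient. The genotype counts below are made up, and the multinomial-logit imputation model itself is not reproduced here.

```python
from scipy.stats import chi2

def hwe_chi2(n_AA, n_AB, n_BB):
    """Chi-square test of Hardy-Weinberg proportions from observed genotype counts."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)          # allele A frequency
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    observed = [n_AA, n_AB, n_BB]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

def inbreeding_coefficient(n_AA, n_AB, n_BB):
    """F = 1 - observed/expected heterozygosity; positive when heterozygotes are lacking."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)
    return 1.0 - (n_AB / n) / (2 * p * (1 - p))

print(hwe_chi2(298, 489, 213))               # made-up counts after discarding missings
print(inbreeding_coefficient(298, 489, 213))
```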

  11. Spheres settling in an Oldroyd-B fluid

    NASA Astrophysics Data System (ADS)

    Pan, Tsorng-Whay; Glowinski, Roland

    2017-11-01

In this talk we present a numerical study of the dynamics of balls settling in a vertical channel with a square cross-section filled with an Oldroyd-B fluid. For the case of two balls, two typical kinds of particle dynamics are obtained: (i) periodic interaction between the two balls and (ii) the formation of a vertical chain of two balls. In the periodic-interaction regime, which occurs at lower values of the elasticity number, the two balls draft, kiss and break away periodically, and no chain forms because the elastic force is not strong enough. For slightly higher values of the elasticity number, the two balls draft, kiss and break away a couple of times first and then form a chain. This chain finally becomes vertical after the oscillation damps out. For higher values of the elasticity number, the two balls draft, kiss and form a vertical chain right away. The formation of a three-ball chain can be obtained at higher values of the elasticity number. This work was supported by NSF (Grant DMS-1418308).
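
    A hedged note on the governing parameter mentioned here: the elasticity number is commonly defined as the ratio of the Weissenberg and Reynolds numbers, which removes the settling velocity from the parameter. The symbols (λ₁ relaxation time, ν kinematic viscosity, d ball diameter, U settling speed) and this particular definition are assumptions of this sketch and may differ in detail from the authors' convention.

```latex
E = \frac{\mathrm{Wi}}{\mathrm{Re}} = \frac{\lambda_1 \nu}{d^{2}},
\qquad
\mathrm{Wi} = \frac{\lambda_1 U}{d}, \quad \mathrm{Re} = \frac{U d}{\nu}.
```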

  12. Use of passive scalar tagging for the study of coherent structures in the plane mixing layer

    NASA Technical Reports Server (NTRS)

    Ramaprian, B. R.; Sandham, N. D.; Mungal, M. G.; Reynolds, W. C.

    1988-01-01

    Data obtained from the numerical simulation of a 2-D mixing layer were used to study the feasibility of using the instantaneous concentration of a passive scalar for detecting the typical coherent structures in the flow. The study showed that this technique works quite satisfactorily and yields results similar to those that can be obtained by using the instantaneous vorticity for structure detection. Using the coherent events educed by the scalar conditioning technique, the contribution of the coherent events to the total turbulent momentum and scalar transport was estimated. It is found that the contribution from the typical coherent events is of the same order as that of the time-mean value. However, the individual contributions become very large during the pairing of these structures. The increase is particularly spectacular in the case of the Reynolds shear stress.

  13. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
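
    A hedged sketch comparing two of the initialization strategies discussed, using scikit-learn's GaussianMixture on synthetic data; the five specific techniques and software packages compared in the paper are not reproduced here, and the data and number of restarts are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-component Gaussian mixture data.
X = np.vstack([rng.normal(0.0, 1.0, size=(300, 2)),
               rng.normal(4.0, 1.5, size=(200, 2))])

for init in ("kmeans", "random"):
    # Ten EM runs per strategy; keep the run with the best log-likelihood bound.
    best = max(
        (GaussianMixture(n_components=2, init_params=init, n_init=1,
                         random_state=s).fit(X) for s in range(10)),
        key=lambda gm: gm.lower_bound_,
    )
    print(init, round(best.lower_bound_, 4), np.round(np.sort(best.weights_), 3))
```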

  14. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.

  15. Atmospheric components of the surface energy budget over young sea ice: Results from the N-ICE2015 campaign

    NASA Astrophysics Data System (ADS)

    Walden, Von P.; Hudson, Stephen R.; Cohen, Lana; Murphy, Sarah Y.; Granskog, Mats A.

    2017-08-01

    The Norwegian young sea ice campaign obtained the first measurements of the surface energy budget over young, thin Arctic sea ice through the seasonal transition from winter to summer. This campaign was the first of its kind in the North Atlantic sector of the Arctic. This study describes the atmospheric and surface conditions and the radiative and turbulent heat fluxes over young, thin sea ice. The shortwave albedo of the snow surface ranged from about 0.85 in winter to 0.72-0.80 in early summer. The near-surface atmosphere was typically stable in winter, unstable in spring, and near neutral in summer once the surface skin temperature reached 0°C. The daily average radiative and turbulent heat fluxes typically sum to negative values (-40 to 0 W m-2) in winter but then transition toward positive values of up to nearly +60 W m-2 as solar radiation contributes significantly to the surface energy budget. The sensible heat flux typically ranges from +20-30 W m-2 in winter (into the surface) to negative values between 0 and -20 W m-2 in spring and summer. A winter case study highlights the significant effect of synoptic storms and demonstrates the complex interplay of wind, clouds, and heat and moisture advection on the surface energy components over sea ice in winter. A spring case study contrasts a rare period of 24 h of clear-sky conditions with typical overcast conditions and highlights the impact of clouds on the surface radiation and energy budgets over young, thin sea ice.

  16. Generation and evaluation of typical meteorological year datasets for greenhouse and external conditions on the Mediterranean coast.

    PubMed

    Fernández, M D; López, J C; Baeza, E; Céspedes, A; Meca, D E; Bailey, B

    2015-08-01

    A typical meteorological year (TMY) represents the typical meteorological conditions over many years but still contains the short term fluctuations which are absent from long-term averaged data. Meteorological data were measured at the Experimental Station of Cajamar 'Las Palmerillas' (Cajamar Foundation) in Almeria, Spain, over 19 years at the meteorological station and in a reference greenhouse which is typical of those used in the region. The two sets of measurements were subjected to quality control analysis and then used to create TMY datasets using three different methodologies proposed in the literature. Three TMY datasets were generated for the external conditions and two for the greenhouse. They were assessed by using each as input to seven horticultural models and comparing the model results with those obtained by experiment in practical trials. In addition, the models were used with the meteorological data recorded during the trials. A scoring system was used to identify the best performing TMY in each application and then rank them in overall performance. The best methodology was that of Argiriou for both greenhouse and external conditions. The average relative errors between the seasonal values estimated using the 19-year dataset and those using the Argiriou greenhouse TMY were 2.2 % (reference evapotranspiration), -0.45 % (pepper crop transpiration), 3.4 % (pepper crop nitrogen uptake) and 0.8 % (green bean yield). The values obtained using the Argiriou external TMY were 1.8 % (greenhouse reference evapotranspiration), 0.6 % (external reference evapotranspiration), 4.7 % (greenhouse heat requirement) and 0.9 % (loquat harvest date). Using the models with the 19 individual years in the historical dataset showed that the year to year weather variability gave results which differed from the average values by ± 15 %. By comparison with results from other greenhouses it was shown that the greenhouse TMY is applicable to greenhouses which have a solar radiation transmission of approximately 65 % and rely on manual control of ventilation which constitute the majority in the south-east of Spain and in most Mediterranean greenhouse areas.

  17. Generation and evaluation of typical meteorological year datasets for greenhouse and external conditions on the Mediterranean coast

    NASA Astrophysics Data System (ADS)

    Fernández, M. D.; López, J. C.; Baeza, E.; Céspedes, A.; Meca, D. E.; Bailey, B.

    2015-08-01

    A typical meteorological year (TMY) represents the typical meteorological conditions over many years but still contains the short term fluctuations which are absent from long-term averaged data. Meteorological data were measured at the Experimental Station of Cajamar `Las Palmerillas' (Cajamar Foundation) in Almeria, Spain, over 19 years at the meteorological station and in a reference greenhouse which is typical of those used in the region. The two sets of measurements were subjected to quality control analysis and then used to create TMY datasets using three different methodologies proposed in the literature. Three TMY datasets were generated for the external conditions and two for the greenhouse. They were assessed by using each as input to seven horticultural models and comparing the model results with those obtained by experiment in practical trials. In addition, the models were used with the meteorological data recorded during the trials. A scoring system was used to identify the best performing TMY in each application and then rank them in overall performance. The best methodology was that of Argiriou for both greenhouse and external conditions. The average relative errors between the seasonal values estimated using the 19-year dataset and those using the Argiriou greenhouse TMY were 2.2 % (reference evapotranspiration), -0.45 % (pepper crop transpiration), 3.4 % (pepper crop nitrogen uptake) and 0.8 % (green bean yield). The values obtained using the Argiriou external TMY were 1.8 % (greenhouse reference evapotranspiration), 0.6 % (external reference evapotranspiration), 4.7 % (greenhouse heat requirement) and 0.9 % (loquat harvest date). Using the models with the 19 individual years in the historical dataset showed that the year to year weather variability gave results which differed from the average values by ± 15 %. By comparison with results from other greenhouses it was shown that the greenhouse TMY is applicable to greenhouses which have a solar radiation transmission of approximately 65 % and rely on manual control of ventilation which constitute the majority in the south-east of Spain and in most Mediterranean greenhouse areas.
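
    A hedged sketch of the Finkelstein-Schafer (FS) statistic that underlies the TMY selection methodologies compared here (including Argiriou's); the weighting across meteorological variables and the month-selection rules differ between methods and are not shown, and the data below are synthetic.

```python
import numpy as np

def fs_statistic(candidate_daily, longterm_daily):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    empirical CDF of one candidate month and the long-term CDF of that
    calendar month, evaluated at the candidate's daily values."""
    longterm_sorted = np.sort(longterm_daily)
    lt_cdf = lambda x: np.searchsorted(longterm_sorted, x, side="right") / longterm_sorted.size
    cand_sorted = np.sort(candidate_daily)
    cand_cdf = np.arange(1, cand_sorted.size + 1) / cand_sorted.size
    return np.mean(np.abs(cand_cdf - lt_cdf(cand_sorted)))

rng = np.random.default_rng(1)
longterm = rng.normal(20.0, 5.0, size=19 * 30)   # 19 years of daily means for one month
candidate = rng.normal(20.5, 5.0, size=30)       # one candidate month
print(fs_statistic(candidate, longterm))         # smaller = more "typical"
```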

  18. Influence of source batch S{sub K} dispersion on dosimetry for prostate cancer treatment with permanent implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.

    2015-08-15

Purpose: In clinical practice, specific air kerma strength (S{sub K}) value is used in treatment planning system (TPS) permanent brachytherapy implant calculations with {sup 125}I and {sup 103}Pd sources; in fact, commercial TPS provide only one S{sub K} input value for all implanted sources and the certified shipment average is typically used. However, the value for S{sub K} is dispersed: this dispersion is not only due to the manufacturing process and variation between different source batches but also due to the classification of sources into different classes according to their S{sub K} values. The purpose of this work is to examine the impact of S{sub K} dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S{sub K} values for each source. Three different prostate volumes (20, 30, and 40 cm{sup 3}) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for {sup 125}I sources; for the palladium, typical implants were simulated. To assess the many different possible S{sub K} values for each source belonging to a class, the authors assigned an S{sub K} value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions comparing with dose distributions obtained using a uniform S{sub K} value for all the implanted sources. The authors analyzed several dose coverage (V{sub 100} and D{sub 90}) and overdosage parameters for prostate and PTV and also the limiting and overdosage parameters for OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries. PTV and prostate V{sub 100} and D{sub 90} variations ranged between 0.2% and 1.78% for both sources. Variations for the overdosage parameters V{sub 150} and V{sub 200} compared to dose coverage parameters were observed and, in general, variations were larger for parameters related to {sup 125}I sources than {sup 103}Pd sources. For OAR dosimetry, variations with respect to the reference D{sub 0.1cm{sup 3}} were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for prostate and PTV was practically unaffected by S{sub K} dispersion, as was the maximum dose deposited in the urethra due to the implant technique geometry. However, the authors observed larger variations for the PTV V{sub 150}, rectum V{sub 100}, and rectum D{sub 0.1cm{sup 3}} values. The variations in rectum parameters were caused by the specific location of sources with S{sub K} value that differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for {sup 125}I than for {sup 103}Pd. This is because for {sup 103}Pd, a greater number of sources were used to obtain a valid dose distribution than for {sup 125}I, resulting in a lower variation for each S{sub K} value for each source (because the variations become averaged out, statistically speaking).

  19. Improved measurements of turbulence in the hot gaseous atmospheres of nearby giant elliptical galaxies

    DOE PAGES

    Ogorzalek, A.; Zhuravleva, I.; Allen, S. W.; ...

    2017-08-12

Here, we present significantly improved measurements of turbulent velocities in the hot gaseous haloes of nearby giant elliptical galaxies. Using deep XMM-Newton Reflection Grating Spectrometer (RGS) observations and a combination of resonance scattering and direct line broadening methods, we obtain well bounded constraints for 13 galaxies. Assuming that the turbulence is isotropic, we obtain a best-fitting mean 1D turbulent velocity of 110 km s^-1. This implies a typical 3D Mach number ~0.45 and a typical non-thermal pressure contribution of ~6 per cent in the cores of nearby massive galaxies. The intrinsic scatter around these values is modest – consistent with zero, albeit with large statistical uncertainty – hinting at a common and quasi-continuous mechanism sourcing the velocity structure in these objects. Using conservative estimates of the spatial scales associated with the observed turbulent motions, we find that turbulent heating can be sufficient to offset radiative cooling in the inner regions of these galaxies (<10 kpc, typically 2–3 kpc). The full potential of our analysis methods will be enabled by future X-ray micro-calorimeter observations.
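
    A hedged back-of-the-envelope check of the quoted Mach number, assuming isotropic motions and a representative gas temperature of kT ≈ 0.7-0.8 keV for these galaxies (the temperature value is an assumption of this sketch, not a number taken from the paper):

```latex
v_{3D} = \sqrt{3}\, v_{1D} \approx \sqrt{3}\times 110 \approx 190\ \mathrm{km\,s^{-1}},
\qquad
c_s = \sqrt{\frac{\gamma k T}{\mu m_p}} \approx 430\text{--}460\ \mathrm{km\,s^{-1}}
\ \ (\gamma = 5/3,\ \mu \approx 0.6),
\qquad
\mathcal{M}_{3D} = \frac{v_{3D}}{c_s} \approx 0.4\text{--}0.45 .
```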

  20. Source rock potential of an Eocene carbonate slope: The Armancies Formation of the south-Pyrenean basin, northeast Spain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permanyer, A.; Valles, D.; Dorronsorro, C.

    1988-08-01

The Armancies Formation is an Eocene carbonate slope succession in the Catalonian South Pyrenean basin. It ranges from 500 to 700 m in thickness. The first 200 m are made of a thin-bedded facies of wackestones alternating with dark pelagic fauna of miliolid, ostracods, bryozoans, and planktonic foraminifers and show significant bioturbation. They also show a low organic content (< 0.5% TOC). The lime-mudstone beds show a massive structure or planar millimeter laminations. They may contain sparse pelagic fossils of planktonic foraminifers, ostracods, and dinoflagellates; they do not show any bioturbation, and have high TOC values, which can reach individual scores of about 14%. They qualify, therefore, as a typical oil shale. Rock-Eval Pyrolysis analysis affords a mean S{sub 2} value of 25 mg HC/g. Mean S{sub 1} value is around 1.0 mg HC/g. As is typical of an initial oil window, T{sub max} maturity parameter ranges from 432 to 440{degree}C (mean = 434{degree}C). This degree of evolution is in accordance with the very low value of carbonyl and carboxyl groups, as determined by IR spectrometry and NMR on Fischer assay extract. The proton NMR shows an aromatic/aliphatic hydrocarbon ratio of 1:4, as expected in earlier stages of catagenesis. N-alkane gas chromatography profiles show n-C{sub 15} to n-C{sub 19} prevalence and that neither even nor odd carbon numbers prevail. This distribution perfectly matches that of typical sediments of marine origin and also agrees with obtained hydrogen index values (mean HI = 500 mg HC/g TOC). Sedimentological and geochemical results indicate an autochthonous marine organic matter and the potential of these slope shales is good oil-prone source beds.

  1. Ion/molecule reactions to chemically deconvolute the electrospray ionization mass spectra of synthetic polymers.

    PubMed

    Lennon, John D; Cole, Scott P; Glish, Gary L

    2006-12-15

    A new approach has been developed to analyze synthetic polymers via electrospray ionization mass spectrometry. Ion/molecule reactions, a unique feature of trapping instruments such as quadrupole ion trap mass spectrometers, can be used to chemically deconvolute the molecular mass distribution of polymers from the charge-state distribution generated by electrospray ionization. The reaction involves stripping charge from multiply charged oligomers to reduce the number of charge states. This reduces or eliminates the overlapping of oligomers from adjacent charge states. 15-Crown-5 was used to strip alkali cations (Na+) from several narrow polydisperse poly(ethylene glycol) standards. The charge-state distribution of each oligomer is reduced to primarily one charge state. Individual oligomers can be resolved, and the average molecular mass and polydispersities can be calculated for the polymers examined here. In most cases, the measured number-average molecular mass values are within 10% of the manufacturers' reported values obtained by gel permeation chromatography. The polydispersity was typically underestimated compared to values reported by the suppliers. Mn values were obtained with 0.5% RSD and are independent, over several orders of magnitude, of the polymer and cation concentration. The distributions that were obtained fit quite well to the Gaussian distribution indicating no high- or low-mass discriminations.
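
    For reference, the standard moment definitions used when computing average molecular masses and polydispersity from resolved oligomer peaks, where N_i is the abundance inferred from the intensity of the oligomer of mass M_i:

```latex
M_n = \frac{\sum_i N_i M_i}{\sum_i N_i},
\qquad
M_w = \frac{\sum_i N_i M_i^{2}}{\sum_i N_i M_i},
\qquad
\mathrm{PDI} = \frac{M_w}{M_n} \;\ge\; 1 .
```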

  2. Environmental monitoring through use of silica-based TLD.

    PubMed

    Rozaila, Z Siti; Khandaker, M U; Abdul Sani, S F; Sabtu, Siti Norbaini; Amin, Y M; Maah, M J; Bradley, D A

    2017-09-25

    The sensitivity of a novel silica-based fibre-form thermoluminescence dosimeter was tested off-site of a rare-earths processing plant, investigating the potential for obtaining baseline measurements of naturally occurring radioactive materials. The dosimeter, a Ge-doped collapsed photonic crystal fibre (PCFc) co-doped with B, was calibrated against commercially available thermoluminescent dosimeters (TLD-200 and TLD-100) using a bremsstrahlung (tube-based) x-ray source. Eight sampling sites within 1 to 20 km of the perimeter of the rare-earth facility were identified, the TLDs (silica-based as well as TLD-200 and TLD-100) in each case being buried within the soil at a fixed depth, allowing measurements to be obtained over protracted periods of exposure of between two and eight months. The dose values were then compared against values projected on the basis of radioactivity measurements of the associated soils, obtained via high-purity germanium gamma-ray spectrometry. Accord was found in relative terms between the TL evaluations at each site and the associated spectroscopic results. That said, in absolute terms the TL-evaluated doses were typically less than those derived from gamma-ray spectroscopy, by ∼50% in the case of PCFc-Ge. Gamma spectrometry analysis typically provided an upper limit to the projected dose, the Marinelli beaker contents having been sieved to provide a homogeneous, well-packed medium. However, because radioactivity per unit mass is typically greater for smaller particles (adsorption occurs preferentially on the surface, and surface area per unit volume increases with decreasing radius), this tended to elevate the dose estimate. Prevailing concentrations of the key naturally occurring radionuclides in soil, 226Ra, 232Th and 40K, were also determined, together with a radiological dose evaluation. To date, the area under investigation, although including a rare-earth processing facility, gives no cause for concern from radiological impact. The current study reveals the suitability of the optical-fibre-based micro-dosimeter for all-weather monitoring of low-level environmental radioactivity.
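    The "projected dose" comparison described above rests on converting measured soil activity concentrations into an absorbed dose rate in air. The sketch below uses the commonly quoted UNSCEAR-style conversion coefficients; the coefficients, occupancy and dose-conversion factors, and soil concentrations are standard illustrative assumptions, not values taken from this study.

        # Sketch: outdoor absorbed dose rate in air projected from soil activity
        # concentrations, using commonly quoted UNSCEAR-style coefficients
        # (nGy/h per Bq/kg). All numbers are illustrative, not from this study.
        def dose_rate_nGy_per_h(c_ra226, c_th232, c_k40):
            return 0.462 * c_ra226 + 0.604 * c_th232 + 0.0417 * c_k40

        d = dose_rate_nGy_per_h(c_ra226=35.0, c_th232=40.0, c_k40=400.0)  # Bq/kg (assumed)
        # Annual effective dose with the usual 0.2 outdoor occupancy and 0.7 Sv/Gy factors
        annual_mSv = d * 8760 * 0.2 * 0.7 * 1e-6
        print(f"dose rate ≈ {d:.1f} nGy/h, outdoor annual effective dose ≈ {annual_mSv:.3f} mSv")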

  3. Prediction of Hot Tearing Using a Dimensionless Niyama Criterion

    NASA Astrophysics Data System (ADS)

    Monroe, Charles; Beckermann, Christoph

    2014-08-01

    The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
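    For orientation, the classical Niyama value on which the dimensionless extension builds is simply the local thermal gradient divided by the square root of the local cooling rate, Ny = G / sqrt(dT/dt). The dimensionless form described in the record additionally folds in alloy properties, the critical pressure drop and the applied strain rate; those terms are not reproduced in the minimal sketch below, and the input values are hypothetical.

        import math

        def niyama(thermal_gradient_K_per_m, cooling_rate_K_per_s):
            """Classical Niyama criterion Ny = G / sqrt(dT/dt).

            Low Ny indicates a higher risk of shrinkage porosity; the dimensionless
            extension rescales this by alloy properties and the applied strain rate.
            """
            return thermal_gradient_K_per_m / math.sqrt(cooling_rate_K_per_s)

        # Hypothetical local values taken from a thermal simulation of a casting
        print(f"Ny = {niyama(3000.0, 1.5):.0f} (K*s)^0.5 / m")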

  4. Faying Surface Lubrication Effects on Nut Factors

    NASA Technical Reports Server (NTRS)

    Taylor, Deneen M.; Morrison, Raymond F.

    2006-01-01

    Bolted joint analysis is typically performed using nut factors derived from textbooks and procedures from program requirement documents. Joint-specific testing was performed for a critical International Space Station (ISS) joint. Test results indicate that for some configurations the nut factor may be significantly different from accepted textbook values. This paper presents results of joint-specific testing to aid in determining whether joint-specific testing should be performed to ensure that required preloads are obtained.
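    The sensitivity at issue comes from the short-form torque relation T = K * D * F (installation torque = nut factor x nominal fastener diameter x preload), so a shift in the nut factor K changes the achieved preload in inverse proportion. The numbers in the sketch below are illustrative only and are not ISS test data.

        # Sketch of the short-form bolted-joint relation T = K * D * F.
        # A joint-specific (tested) nut factor changes the preload achieved for the
        # same installation torque. All values below are illustrative.
        def preload_N(torque_Nm, nut_factor, diameter_m):
            return torque_Nm / (nut_factor * diameter_m)

        torque = 50.0        # N*m installation torque (assumed)
        diameter = 0.00953   # m, 3/8-inch fastener (assumed)
        for K in (0.20, 0.15):   # textbook value vs. a hypothetical lubricated-joint value
            print(f"K = {K:.2f}: preload ≈ {preload_N(torque, K, diameter) / 1000:.1f} kN")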

  5. Plasma Thruster Development: Magnetoplasmadynamic Propulsion, Status and Basic Problems.

    DTIC Science & Technology

    1986-02-01

    [Excerpt from the report's front matter; the list of figures and tables is partly garbled.] Figures include "Sublimation Rates vs. Temperature for Typical Electrode Materials" and "Time to Reach Melting vs. Surface Heat Load (One-Dimensional, Large-Area Approximation) for Different Electrode Materials and Initial Temperatures"; the tables include "Models of Thruster Types". The accompanying text notes that specific impulse values much higher than the minimum must be achieved in order to obtain acceptable efficiencies, e.g. for 30% efficiency with argon.

  6. Effect of Axially Staged Fuel Introduction on Performance of One-quarter Sector of Annular Turbojet Combustor

    NASA Technical Reports Server (NTRS)

    Zettle, Eugene V; Mark, Herman

    1953-01-01

    The design principle of injecting liquid fuel at more than one axial station in an annular turbojet combustor was investigated. Fuel was injected into the combustor as much as 5 inches downstream of the primary fuel injectors. Many fuel-injection configurations were examined, and performance results are presented for the 11 configurations that best demonstrate the trends in performance obtained. The performance investigations were made at a constant combustor-inlet pressure of 15 inches of mercury absolute and at air flows up to 70 percent higher than values typical of current design practice. At these higher air flows, staging the fuel introduction improved the combustion efficiency considerably over that obtained in the combustor when no fuel staging was employed. At air flows currently encountered in turbojet engines, fuel staging was of minor value. Radial temperature distribution seemed relatively unaffected by the location of the fuel-injection stations.

  7. Fabrication of polymer microlenses on single mode optical fibers for light coupling

    NASA Astrophysics Data System (ADS)

    Zaboub, Monsef; Guessoum, Assia; Demagh, Nacer-Eddine; Guermat, Abdelhak

    2016-05-01

    In this paper, we present a technique for producing fiber-optic micro-collimators composed of polydimethylsiloxane (PDMS) microlenses of different radii of curvature. The waist and working distance values obtained enable the optimization of optical coupling between optical fibers, between fibers and optical sources, and between fibers and detectors. The principle is based on the injection of PDMS into a conical micro-cavity chemically etched at the end of an optical fiber. A spherical microlens is then formed that is self-centered with respect to the axis of the fiber. Typically, an optimal radius of curvature of 10.08 μm is obtained. This optimized micro-collimator is characterized by a working distance of 19.27 μm and a waist equal to 2.28 μm for an SMF 9/125 μm fiber. The simulation and experimental results reveal an optical coupling efficiency that can reach a value of 99.75%.

  8. Effect of mesa structure formation on the electrical properties of zinc oxide thin film transistors.

    PubMed

    Singh, Shaivalini; Chakrabarti, P

    2014-05-01

    A ZnO-based bottom-gate thin-film transistor (TFT) with SiO2 as the insulating layer has been fabricated in two different structures. The effect of the formation of a mesa structure on the electrical characteristics of the TFTs has been studied. Forming a mesa structure in the ZnO channel region gives better control over the channel region and enhances the channel mobility of the ZnO TFT. As a result, by fabricating a mesa-structured TFT, better values of mobility and on-state current are achieved at low voltages. A typical saturation current of 1.85 x 10^-7 A under a gate bias of 50 V is obtained for the non-mesa-structured TFT, while for the mesa-structured TFT a saturation current of 5 x 10^-5 A can be obtained at a comparatively very low gate bias of 6.4 V.
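    Saturation currents like those quoted above are usually interpreted through the square-law TFT expression I_Dsat = 0.5 * mu * Cox * (W/L) * (Vgs - Vt)^2. The back-calculation sketched below assumes the oxide capacitance, geometry, and threshold voltage (they are not given in the record); only the saturation current and gate bias echo the quoted mesa-TFT values.

        # Sketch: apparent saturation mobility from the square-law TFT model,
        # I_Dsat = 0.5 * mu * Cox * (W/L) * (Vgs - Vt)**2.
        # Cox, W/L and Vt are assumed for illustration; I_Dsat and Vgs echo the record.
        def saturation_mobility(i_dsat, cox_F_per_cm2, w_over_l, vgs, vt):
            return 2.0 * i_dsat / (cox_F_per_cm2 * w_over_l * (vgs - vt) ** 2)

        cox = 3.45e-8      # F/cm^2 for ~100 nm SiO2 (assumed)
        mu = saturation_mobility(i_dsat=5e-5, cox_F_per_cm2=cox, w_over_l=10.0, vgs=6.4, vt=1.0)
        print(f"apparent saturation mobility ≈ {mu:.1f} cm^2/(V*s)")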

  9. The emission-line regions in the nucleus of NGC 1313 probed with GMOS-IFU: a supergiant/hypergiant candidate and a kinematically cold nucleus

    NASA Astrophysics Data System (ADS)

    Menezes, R. B.; Steiner, J. E.

    2017-04-01

    NGC 1313 is a bulgeless nearby galaxy, classified as SB(s)d. Its proximity allows high spatial resolution observations. We performed the first detailed analysis of the emission-line properties in the nuclear region of NGC 1313, using an optical data cube obtained with the Gemini Multi-object Spectrograph. We detected four main emitting areas, three of them (regions 1, 2 and 3) having spectra typical of H II regions. Region 1 is located very close to the stellar nucleus and shows broad spectral features characteristic of Wolf-Rayet stars. Our analysis revealed the presence of one or two WC4-5 stars in this region, which is compatible with results obtained by previous studies. Region 4 shows spectral features (such as a strong Hα emission line with a broad component) typical of a massive emission-line star, such as a luminous blue variable, a B[e] supergiant or a B hypergiant. The radial velocity map of the ionized gas shows a pattern consistent with rotation. A significant drop in the values of the gas velocity dispersion was detected very close to region 1, which suggests that the young stars there were formed from this cold gas, possibly retaining its low velocity dispersion. Therefore, although detailed measurements of the stellar kinematics were not possible (due to the weak stellar absorption spectrum of this galaxy), we predict that NGC 1313 may also show a drop in the values of the stellar velocity dispersion in its nuclear region.

  10. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiment results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extended this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using the default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method to the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically relevant patterns. PMID:26230274
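    The threshold-selection idea described above (choose the cut-off that minimizes the combined proportions of false-positive and false-negative similarity calls, given the score distributions of similar and non-similar gene pairs) can be sketched in a few lines. The scores below are synthetic, not GO or ChEBI similarity values.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic similarity scores for pairs of similar and non-similar genes
        similar = rng.beta(6, 2, size=5000)        # skewed toward 1
        non_similar = rng.beta(2, 6, size=5000)    # skewed toward 0

        def combined_error(threshold):
            fn = np.mean(similar < threshold)        # similar pairs called non-similar
            fp = np.mean(non_similar >= threshold)   # non-similar pairs called similar
            return fn + fp

        thresholds = np.linspace(0.0, 1.0, 1001)
        best = thresholds[np.argmin([combined_error(t) for t in thresholds])]
        print(f"optimal threshold ≈ {best:.3f} (vs. the arbitrary default of 0.5)")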

  11. Classification of typical and atypical antipsychotic drugs on the basis of dopamine D-1, D-2 and serotonin2 pKi values.

    PubMed

    Meltzer, H Y; Matsubara, S; Lee, J C

    1989-10-01

    The pKi values of 13 reference typical and 7 reference atypical antipsychotic drugs (APDs) for rat striatal dopamine D-1 and D-2 receptor binding sites and cortical serotonin (5-HT2) receptor binding sites were determined. The atypical antipsychotics had significantly lower pKi values for the D-2 but not the 5-HT2 binding sites. There was a trend toward a lower pKi value for the D-1 binding site for the atypical APDs. The 5-HT2 and D-1 pKi values were correlated for the typical APDs, whereas the 5-HT2 and D-2 pKi values were correlated for the atypical APDs. A stepwise discriminant function analysis to determine the independent contribution of each binding site's pKi value to the classification as a typical or atypical APD entered the D-2 pKi value first, followed by the 5-HT2 pKi value; the D-1 pKi value was not entered. A discriminant function analysis correctly classified 19 of 20 of these compounds plus 14 of 17 additional test compounds as typical or atypical APDs, for an overall correct classification rate of 89.2%. The major contributors to the discriminant function were the D-2 and 5-HT2 pKi values. A cluster analysis based only on the 5-HT2/D-2 ratio grouped 15 of 17 atypical + one typical APD in one cluster and 19 of 20 typical + two atypical APDs in a second cluster, for an overall correct classification rate of 91.9%. When the stepwise discriminant function was repeated for all 37 compounds, only the D-2 and 5-HT2 pKi values were entered into the discriminant function.(ABSTRACT TRUNCATED AT 250 WORDS)
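    The two classification steps described here, a discriminant function on the pKi values and a simple cut on the 5-HT2/D-2 ratio, can be sketched as below. The pKi data are synthetic stand-ins generated to mimic the reported tendency (lower D-2 pKi for atypical drugs), not the published measurements, and the ratio cut-off is hypothetical.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # Synthetic pKi values in the column order [D-1, D-2, 5-HT2];
        # label 1 = typical APD, 0 = atypical APD. Not the published data set.
        typical = rng.normal(loc=[6.0, 7.5, 6.5], scale=0.4, size=(20, 3))
        atypical = rng.normal(loc=[5.7, 6.3, 6.6], scale=0.4, size=(17, 3))
        X = np.vstack([typical, atypical])
        y = np.array([1] * len(typical) + [0] * len(atypical))

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("discriminant-function training accuracy:", lda.score(X, y))

        # The simpler cluster rule based on the 5-HT2/D-2 ratio amounts to a cut on the
        # difference of the two pKi columns (a log-scale ratio); the cut-off is hypothetical.
        pred_atypical = (X[:, 2] - X[:, 1]) > 0.1
        print("ratio-rule accuracy:", np.mean(pred_atypical == (y == 0)))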

  12. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic datasets and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.
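    The structure of the error model described above (per-method systematic error as a polynomial in the true values, random errors modeled jointly across methods so that correlations are allowed) can be illustrated with a small generative sketch; the numbers are invented, and the MCMC inference step that the framework performs on the measurements alone is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)
        n_cases, n_methods = 200, 4

        # Unknown true values of the measurand (e.g., total lesion load), invented here
        truth = rng.gamma(shape=2.0, scale=5.0, size=n_cases)

        # Per-method systematic error (bias) as a first-order polynomial in the truth
        bias = [(0.5, 1.10), (-0.3, 0.95), (1.0, 1.00), (0.2, 1.05)]   # (offset, slope)

        # Random errors modeled jointly across methods, allowing correlation between
        # methods based on similar principles (methods 1-2 and 3-4 here)
        cov = np.array([[1.0, 0.6, 0.0, 0.0],
                        [0.6, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 0.2],
                        [0.0, 0.0, 0.2, 1.0]])
        noise = rng.multivariate_normal(np.zeros(n_methods), cov, size=n_cases)

        measurements = np.column_stack([a + b * truth for a, b in bias]) + noise
        # The framework in the record estimates `bias`, `cov` and `truth` from
        # `measurements` alone via Markov chain Monte-Carlo (not shown here).
        print(measurements.shape)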

  13. Dynamic Photoelectrochemical Device Using an Electrolyte-Permeable NiOx/SiO2/Si Photocathode with an Open-Circuit Potential of 0.75 V.

    PubMed

    Jung, Jin-Young; Yu, Jin-Young; Lee, Jung-Ho

    2018-03-07

    As a thermodynamic driving force obtained from sunlight, the open-circuit potential (OCP) in photoelectrochemical cells is typically limited by the photovoltage (Vph). In this work, we establish that the OCP can exceed the value of Vph when an electrolyte-permeable NiOx thin film is employed as an electrocatalyst in a Si photocathode. The built-in potential developed at the NiOx/Si junction is adjusted in situ according to the progress of the NiOx hydration for the hydrogen evolution reaction (HER). As a result of decoupling of the OCP from Vph, a high OCP value of 0.75 V (vs reversible hydrogen electrode) is obtained after 1 h operation of HER in an alkaline electrolyte (pH = 14), thus outperforming the highest value (0.64 V) reported to date with conventional Si photoelectrodes. This finding might offer insight into novel photocathode designs such as those based on tandem water-splitting systems.

  14. Electrical and switching properties of the Se90Te10-xAgx (0 ⩽ x ⩽ 6) films

    NASA Astrophysics Data System (ADS)

    Afifi, M. A.; Hegab, N. A.; Bekheet, A. E.; Sharaf, E. R.

    2009-08-01

    Amorphous Se90Te10-xAgx (0 ⩽ x ⩽ 6) films were obtained by the thermal evaporation technique under vacuum from the synthesized bulk materials on pyrographite and glass substrates. X-ray analysis shows the amorphous nature of the obtained films. The dc electrical conductivity was studied for different thicknesses (165-711 nm) as a function of temperature in the range 298-323 K, below the corresponding Tg for the studied films. The results show that the conduction activation energy has a single value throughout the investigated temperature range, which can be explained in accordance with the Mott and Davis model. The I-V characteristic curves for the film compositions are found to be typical of a memory switch. The mean value of the threshold voltage increases linearly with increasing film thickness (165-711 nm), while it decreases exponentially with increasing temperature in the investigated range for the studied compositions. The results are explained in accordance with the electrothermal model for the switching process. The effect of Ag on the studied parameters is also investigated.
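    The statement that the conduction activation energy has a single value over the whole temperature range corresponds to an Arrhenius law, sigma = sigma0 * exp(-dE / kT), i.e. a straight line of ln(sigma) against 1/T. The sketch below extracts the activation energy from such a line using synthetic conductivity data, not the measured values.

        import numpy as np

        k_B = 8.617e-5   # Boltzmann constant, eV/K

        # Synthetic dc conductivity data over 298-323 K (S/cm), generated with an
        # assumed activation energy of 0.45 eV purely for illustration
        T = np.array([298.0, 303.0, 308.0, 313.0, 318.0, 323.0])
        sigma = 1e-8 * np.exp(-0.45 / k_B * (1.0 / T - 1.0 / 298.0))

        # Single activation energy -> linear ln(sigma) vs 1/T; slope = -dE/k_B
        slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
        print(f"activation energy ≈ {-slope * k_B:.2f} eV")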

  15. Thickness of the Magnetic Crust of Mars from Magneto-Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    2006-01-01

    Previous analysis of the magnetic spectrum of Mars showed only a crustal source field. The observational spectrum was fairly well fitted by the spectrum expected from random dipolar sources scattered on a spherical shell about 46 plus or minus 10 km below Mars' 3389.5 km mean radius. This de-correlation depth overestimates the typical depth of extended magnetized structures, and so was judged closer to mean source layer thickness than twice its value. To better estimate the thickness of the magnetic crust of Mars, six different magnetic spectra were fitted with the theoretical spectrum expected from a novel, bimodal distribution of magnetic sources. This theoretical spectrum represents both compact and extended, laterally correlated sources, so source shell depth is doubled to obtain layer thickness. The typical magnetic crustal thickness is put at 47.8 plus or minus 8.2 km. The extended sources are enormous, typically 650 km across, and account for over half the magnetic energy at low degrees. How did such vast regions form?

  16. Radionuclide concentration ratios in Australian terrestrial wildlife and livestock: data compilation and analysis.

    PubMed

    Johansen, M P; Twining, J R

    2010-11-01

    Radionuclide concentrations in Australian terrestrial fauna, including indigenous kangaroos and lizards, as well as introduced sheep and water buffalo, are of interest when considering doses to human receptors and doses to the biota itself. Here, concentration ratio (CR) values for a variety of endemic and introduced Australian animals with a focus on wildlife and livestock inhabiting open rangeland are derived and reported. The CR values are based on U- and Th-series concentration data obtained from previous studies at mining sites and (241)Am and (239/240)Pu data from a former weapons testing site. Soil-to-muscle CR values of key natural-series radionuclides for grazing Australian kangaroo and sheep are one to two orders of magnitude higher than those of grazing cattle in North and South America, and for (210)Po, (230)Th, and (238)U are one to two orders of magnitude higher than the ERICA tool reference values. When comparing paired kangaroo and sheep CR values, results are linearly correlated (r = 0.81) for all tissue types. However, kidney and liver CR values for kangaroo are typically higher than those of sheep, particularly for (210)Pb, and (210)Po, with values in kangaroo liver more than an order of magnitude higher than those in sheep liver. Concentration ratios for organs are typically higher than those for muscle including those for (241)Am and (239/240)Pu in cooked kangaroo and rabbit samples. This study provides CR values for Australian terrestrial wildlife and livestock and suggests higher accumulation rates for select radionuclides in semi-arid Australian conditions compared with those associated with temperate conditions.

  17. On the precision of experimentally determined protein folding rates and φ-values

    PubMed Central

    De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo

    2006-01-01

    φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: Because φ is a ratio of differences between experimental observables it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise due to differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226
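    The quantity under discussion is the ratio of the mutational change in the folding activation free energy to the change in equilibrium stability, phi = RT ln(kf_wt / kf_mut) / ddG_eq, which is precisely why small ddG_eq values make phi so sensitive to rate errors. The sketch below uses made-up rates and stability changes to illustrate that amplification; it is not data from the study.

        import math

        R, T = 1.987e-3, 298.0   # kcal/(mol*K), K

        def phi_value(kf_wt, kf_mut, ddG_eq_kcal):
            """phi = RT * ln(kf_wt / kf_mut) / ddG_eq (folding-rate convention)."""
            return R * T * math.log(kf_wt / kf_mut) / ddG_eq_kcal

        # Hypothetical wild-type/mutant folding rates (s^-1) and stability changes (kcal/mol)
        print(phi_value(kf_wt=100.0, kf_mut=25.0, ddG_eq_kcal=2.0))   # large ddG_eq: well defined
        print(phi_value(kf_wt=100.0, kf_mut=90.0, ddG_eq_kcal=0.3))   # small ddG_eq: small rate
        # errors propagate into large phi uncertainty, as the record emphasises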

  18. Stability of Gradient Field Corrections for Quantitative Diffusion MRI.

    PubMed

    Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A

    2017-02-11

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, predicted b-value was 1-6% different than intended due to gradient nonlinearity, and predicted gradient directions were in error by up to 1 degree. Over the course of one month the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values, and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.

  19. Numerical investigation of the relationship between magnetic stiffness and minor loop size in the HTS levitation system

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Li, Chengshan

    2017-10-01

    The effect of minor loop size on the magnetic stiffness has received little attention in experimental and theoretical studies of the high temperature superconductor (HTS) magnetic levitation system. In this work, we numerically investigate the average magnetic stiffness obtained with minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero-field-cooling and field-cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained from any given minor loop traverse, these approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the usual minor loop traverses (1 or 2 mm) are presented as the ratios of the approximate values to the average stiffness for different moving processes and the two typical cooling conditions. The results show that most of the average magnetic stiffness values are strongly influenced by the size of the minor loop, which indicates that the magnetic stiffness obtained from a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can involve a large deviation.
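    The linear-extrapolation step described above, estimating a zero-traverse stiffness from average stiffness values computed at several minor-loop traverses, can be sketched as follows; the stiffness numbers are invented for illustration and are not the simulated values.

        import numpy as np

        # Minor-loop traverses (mm) and corresponding average magnetic stiffness values
        # (N/mm); the stiffness numbers are invented for illustration.
        traverse = np.array([0.1, 0.25, 0.5, 1.0, 2.0])
        avg_stiffness = np.array([14.8, 14.1, 13.0, 11.1, 7.5])

        slope, intercept = np.polyfit(traverse, avg_stiffness, 1)
        k0 = intercept   # approximate zero-traverse stiffness by linear extrapolation
        print(f"extrapolated zero-traverse stiffness ≈ {k0:.1f} N/mm")
        print(f"relative deviation of the 2 mm traverse value: {(k0 - avg_stiffness[-1]) / k0:.1%}")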

  20. Mechanochemical synthesis of high thermoelectric performance bulk Cu2X (X = S, Se) materials

    DOE PAGES

    Yang, Dongwang; Su, Xianli; Yan, Yonggao; ...

    2016-11-01

    We devised a single-step mechanochemical synthesis/densification procedure for Cu2X (X = S, Se) thermoelectric materials via applying a pressure of 3 GPa to a stoichiometric admixture of elemental Cu and X for 3 min at room temperature. The obtained bulk materials were single-phase, nearly stoichiometric structures with a relative packing density of 97% or higher. The structures contained a high concentration of atomic-scale defects and pores of 20-200 nm diameter. These attributes gave rise to a high thermoelectric performance: at 873 K, the ZT value of Cu2S reached 1.07, about 2.1 times the value typical of samples grown from the melt. The ZT value of the Cu2Se samples reached in excess of 1.2, close to the state-of-the-art value.
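    The figure of merit quoted here follows the standard definition ZT = S^2 * sigma * T / kappa. A minimal sketch of the arithmetic is given below with placeholder transport values that are roughly typical of a good Cu chalcogenide; they are not the measured Cu2X data.

        def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m, thermal_cond_W_per_mK, T_K):
            """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
            return seebeck_V_per_K ** 2 * conductivity_S_per_m * T_K / thermal_cond_W_per_mK

        # Placeholder transport properties at 873 K (assumed, not measured values)
        zt = figure_of_merit(seebeck_V_per_K=250e-6, conductivity_S_per_m=3.0e4,
                             thermal_cond_W_per_mK=1.4, T_K=873.0)
        print(f"ZT ≈ {zt:.2f}")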

  1. The dielectric properties of human pineal gland tissue and RF absorption due to wireless communication devices in the frequency range 400-1850 MHz.

    PubMed

    Schmid, Gernot; Uberbacher, Richard; Samaras, Theodoros; Tschabitscher, Manfred; Mazal, Peter R

    2007-09-07

    In order to enable a detailed analysis of radio frequency (RF) absorption in the human pineal gland, the dielectric properties of a sample of 20 freshly removed pineal glands were measured less than 20 h after death. Furthermore, a corresponding high resolution numerical model of the brain region surrounding the pineal gland was developed, based on a real human tissue sample. After inserting this model into a commercially available numerical head model, FDTD-based computations for exposure scenarios with generic models of handheld devices operated close to the head in the frequency range 400-1850 MHz were carried out. For typical output power values of real handheld mobile communication devices, the obtained results showed only very small amounts of absorbed RF power in the pineal gland when compared to SAR limits according to international safety standards. The highest absorption was found for the 400 MHz irradiation. In this case the RF power absorbed inside the pineal gland (organ mass 96 mg) was as low as 11 microW, when considering a device of 500 mW output power operated close to the ear. For typical mobile phone frequencies (900 MHz and 1850 MHz) and output power values (250 mW and 125 mW) the corresponding values of absorbed RF power in the pineal gland were found to be lower by a factor of 4.2 and 36, respectively. These results indicate that temperature-related biologically relevant effects on the pineal gland induced by the RF emissions of typical handheld mobile communication devices are unlikely.

  2. Individual Fit Testing of Hearing Protection Devices Based on Microphone in Real Ear.

    PubMed

    Biabani, Azam; Aliabadi, Mohsen; Golmohammadi, Rostam; Farhadian, Maryam

    2017-12-01

    Labeled noise reduction (NR) data presented by manufacturers are considered one of the main challenges for occupational experts in employing hearing protection devices (HPDs). This study aimed to determine the actual NR data of typical HPDs using an objective fit testing method with a microphone in real ear (MIRE). Five commercially available earmuff protectors were investigated in 30 workers exposed to a reference noise source according to the standard method ISO 11904-1. The personal attenuation rating (PAR) of the earmuffs was measured with the MIRE method using a noise dosimeter (SVANTEK, model SV 102). The results showed that the mean PARs of the earmuffs are 49% to 86% of the nominal NR rating. The PAR values of the earmuffs differed significantly when typical safety eyewear was worn (p < 0.05); typical safety eyewear reduced the mean PAR value by approximately 2.5 dB. The results also showed that measurements based on the MIRE method had low variability: the variability in NR values between individuals, within individuals, and within earmuffs was not statistically significant (p > 0.05). This study could provide local individual fit data. Ergonomic aspects of the earmuffs and differing levels of user experience and awareness can be considered the main factors affecting individual fit compared with the laboratory conditions under which the labeled NR data are acquired. Based on the obtained fit testing results, the field application of MIRE can be employed for complementary studies in real workstations while workers perform their regular work duties.

  3. Australian aerosol backscatter survey

    NASA Technical Reports Server (NTRS)

    Gras, John L.; Jones, William D.

    1989-01-01

    This paper describes measurements of the atmospheric backscatter coefficient in and around Australia during May and June 1986. One set of backscatter measurements was made with a CO2 lidar operating at 10.6 microns; the other set was obtained from calculations using measured aerosol parameters. Despite the two quite different data collection techniques, there is quite good agreement between the two methods. Backscatter values range from near 1 x 10^-8 m^-1 sr^-1 near the surface to 4-5 x 10^-11 m^-1 sr^-1 in the free troposphere at 5-7 km altitude. The values in the free troposphere are somewhat lower than those typically measured at the same height in the Northern Hemisphere.

  4. Dynamic characteristics of organic bulk-heterojunction solar cells

    NASA Astrophysics Data System (ADS)

    Babenko, S. D.; Balakai, A. A.; Moskvin, Yu. L.; Simbirtseva, G. V.; Troshin, P. A.

    2010-12-01

    Transient characteristics of organic bulk-heterojunction solar cells have been studied using pulsed laser probing. An analysis of the photoresponse waveforms of a typical solar cell, measured by varying the load resistance over a broad range at different values of the bias voltage, provided detailed information on the photocell parameters that characterize the electron-transport properties of the active layers. It is established that the charge carrier mobility is sufficient to ensure high values of the fill factor (˜0.6) in the obtained photocells. On approaching the no-load voltage, the differential capacitance of the photocell exhibits a sixfold increase as compared to the geometric capacitance. A possible mechanism of recombination losses in the active medium is proposed.

  5. The effect of surface anisotropy and viewing geometry on the estimation of NDVI from AVHRR

    USGS Publications Warehouse

    Meyer, David; Verstraete, M.; Pinty, B.

    1995-01-01

    Since terrestrial surfaces are anisotropic, all spectral reflectance measurements obtained with a small instantaneous field of view instrument are specific to these angular conditions, and the value of the corresponding NDVI, computed from these bidirectional reflectances, is relative to the particular geometry of illumination and viewing at the time of the measurement. This paper documents the importance of these geometric effects through simulations of the AVHRR data acquisition process, and investigates the systematic biases that result from the combination of ecosystem-specific anisotropies with instrument-specific sampling capabilities. Typical errors in the value of NDVI are estimated, and strategies to reduce these effects are explored. -from Authors

  6. Production of a large, quiescent, magnetized plasma

    NASA Technical Reports Server (NTRS)

    Landt, D. L.; Ajmera, R. C.

    1976-01-01

    An experimental device is described which produces a large homogeneous quiescent magnetized plasma. In this device, the plasma is created in an evacuated brass cylinder by ionizing collisions between electrons emitted from a large-diameter electron gun and argon atoms in the chamber. Typical experimentally measured values of the electron temperature and density are presented which were obtained with a glass-insulated planar Langmuir probe. It is noted that the present device facilitates the study of phenomena such as waves and diffusion in magnetized plasmas.

  7. Measurement of Transmission Loss Using an Inexpensive Mobile Source on the Upper Slope of the South China Sea

    DTIC Science & Technology

    2015-09-01

    [Record excerpt garbled in extraction.] Transmission loss (TL) is the reduction of SPL in dB as sound travels from a source to a receiver (Urick 1983). The basic equation to obtain TL from measurements in a tonal transmission [...]; TL is attributed to the sum of losses due to spreading, multipath effects, scattering, and attenuation (Urick 1983). Typical values for TL in different areas [...]. Reference: Urick, R. J., 1983: Principles of Underwater Sound. 3rd ed. Peninsula Publishing, 423 pp.

  8. Monolithic integration of a GaAlAs buried-heterostructure laser and a bipolar phototransistor

    NASA Technical Reports Server (NTRS)

    Bar-Chaim, N.; Harder, CH.; Margalit, S.; Yariv, A.; Katz, J.; Ury, I.

    1982-01-01

    A GaAlAs buried-heterostructure laser has been monolithically integrated with a bipolar phototransistor. The heterojunction transistor was formed by the regrowth of the burying layers of the laser. Typical threshold current values for the lasers were 30 mA. Common-emitter current gains for the phototransistor of 100-400 and a light responsivity of 75 A/W (at a wavelength of 0.82 micron) at collector current levels of 15 mA were obtained.

  9. Kinematics of SNRs CTB 109 and G206.9+2.3

    NASA Astrophysics Data System (ADS)

    Rosado, Margarita; Sánchez-Cruces, Mónica; Ambrocio-Cruz, Patricia

    2017-11-01

    We present results of optical observations in the lines of Hα and [SII] (λ 6717 and 6731 Å) obtained with the UNAM Scanning Fabry-Perot Interferometer PUMA (Rosado et al. 1995, RMxAASC, 3, 263), aimed at obtaining the kinematical distance, shock velocity, and other important parameters of two supernova remnants (SNRs) with optical counterparts. We discuss how the kinematical distances thus obtained fit with other distance determinations. The studied SNRs are CTB 109 (SNR G109.1-1.0), which hosts a magnetar (Sánchez-Cruces et al. 2017, in preparation), and the SNR G206.9+2.3 (Ambrocio-Cruz et al. 2014, RMxAA, 50, 323), a typical supernova remnant, included for comparison. Fig. 1 depicts the [SII] line emission of two filaments of the optical counterpart of SNR CTB 109. We find complex radial velocity profiles obtained with the Fabry-Perot interferometer, revealing the presence of different velocity components. From these velocity profiles we obtain the kinematical distance, an expansion velocity of 188 km/s, and an initial energy of 8.1 x 10^50 ergs. These values are rather typical of other SNRs, despite the fact that SNR CTB 109 hosts a magnetar. Thus, the mechanical energy delivered in the supernova explosion that formed the magnetar does not seem to impact the interstellar medium more than other SN explosions do. This work has been funded by grants IN103116 and 253085 from DGAPA-UNAM and CONACYT, respectively.

  10. Experimental clean combustor program, phase 1

    NASA Technical Reports Server (NTRS)

    Bahr, D. W.; Gleason, C. C.

    1975-01-01

    Full annular versions of advanced combustor designs, sized to fit within the CF6-50 engine, were defined, manufactured, and tested at high pressure conditions. Configurations were screened, and significant reductions in CO, HC, and NOx emissions levels were achieved with two of these advanced combustor design concepts. Emissions and performance data at a typical AST cruise condition were also obtained, along with combustor noise data, as part of an addendum to the basic program. The two promising combustor design approaches evolved in these efforts were the Double Annular Combustor and the Radial/Axial Combustor. With versions of these two basic combustor designs, CO and HC emissions levels at or near the target levels were obtained. Although the low target NOx emissions level was not obtained with these two advanced combustor designs, significant reductions were achieved relative to the NOx levels of current-technology combustors. Smoke emission levels below the target value were obtained.

  11. Optional contributions have positive effects for volunteering public goods games

    NASA Astrophysics Data System (ADS)

    Song, Qi-Qing; Li, Zhen-Peng; Fu, Chang-He; Wang, Lai-Sheng

    2011-11-01

    Public goods (PG) games with the volunteering mechanism are referred to as volunteering public goods (VPG) games, in which loners are introduced to the PG games; a loner obtains a constant payoff without participating in the game. Considering that small contributions may have positive effects in encouraging more players with bounded rationality to contribute, this paper introduces optional contributions (high value or low value) to these typical VPG games: a cooperator can contribute either a high or a low amount to the public pool. With the low contribution, the logit dynamics show that cooperation can be promoted in a well-mixed population compared to the typical VPG games; furthermore, when the multiplication factor is greater than a threshold, the average payoff of the population is also enhanced. In spatial VPG games, we introduce a new adjusting mechanism that is an approximation to best response. Some results in agreement with the predictions of the logit dynamics are found. These simulation results reveal that for VPG games the option of low contributions may be a better way to stimulate the growth of the cooperation frequency and the average payoff of the population.
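    A rough sketch of the setup, a well-mixed optional public goods game with loners and two contribution levels relaxed toward the logit choice distribution, is given below. It is a crude discrete-time approximation with invented parameter values (loner payoff, contribution levels, multiplication factor, group size, rationality), not the model or code of the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Strategies: 0 = loner, 1 = defector, 2 = low contributor, 3 = high contributor
        SIGMA = 1.0             # loner's constant payoff (assumed)
        C_LOW, C_HIGH = 0.5, 1.0
        R_FACTOR = 3.0          # multiplication factor (assumed)
        N, BETA = 5, 2.0        # group size and logit rationality (assumed)
        SAMPLES = 4000

        def mean_payoffs(freq):
            """Monte-Carlo estimate of each strategy's expected payoff."""
            pay = np.empty(4)
            pay[0] = SIGMA
            own_contrib = {1: 0.0, 2: C_LOW, 3: C_HIGH}
            for s in (1, 2, 3):
                others = rng.choice(4, size=(SAMPLES, N - 1), p=freq)
                contrib = np.where(others == 2, C_LOW, np.where(others == 3, C_HIGH, 0.0))
                participants = 1 + np.sum(others != 0, axis=1)
                pot = contrib.sum(axis=1) + own_contrib[s]
                pg = R_FACTOR * pot / participants - own_contrib[s]
                # A participant left alone falls back on the loner payoff (standard rule)
                pay[s] = np.mean(np.where(participants == 1, SIGMA, pg))
            return pay

        freq = np.full(4, 0.25)
        for _ in range(150):   # crude discrete-time relaxation toward the logit distribution
            w = np.exp(BETA * mean_payoffs(freq))
            freq = 0.9 * freq + 0.1 * w / w.sum()
        print(dict(zip(["loner", "defector", "low", "high"], np.round(freq, 3))))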

  12. Contact angles of wetting and water stability of soil structure

    NASA Astrophysics Data System (ADS)

    Kholodov, V. A.; Yaroslavtseva, N. V.; Yashin, M. A.; Frid, A. S.; Lazarev, V. I.; Tyugai, Z. N.; Milanovskiy, E. Yu.

    2015-06-01

    From the soddy-podzolic soils and typical chernozems of different texture and land use, dry 3-1 mm aggregates were isolated and sieved in water. As a result, water-stable aggregates and water-unstable particles composing dry 3-1 mm aggregates were obtained. These preparations were ground, and contact angles of wetting were determined by the static sessile drop method. The angles varied from 11° to 85°. In most cases, the values of the angles for the water-stable aggregates significantly exceeded those for the water-unstable components. In terms of carbon content in structural units, there was no correlation between these parameters. When analyzing the soil varieties separately, the significant positive correlation between the carbon content and contact angle of aggregates was revealed only for the loamy-clayey typical chernozem. Based on the multivariate analysis of variance, the value of contact wetting angle was shown to be determined by the structural units belonging to water-stable or water-unstable components of macroaggregates and by the land use type. In addition, along with these parameters, the texture has an indirect effect.

  13. Full-vector geomagnetic field records from the East Eifel, Germany

    NASA Astrophysics Data System (ADS)

    Monster, Marilyn W. L.; Langemeijer, Jaap; Wiarda, Laura R.; Dekkers, Mark J.; Biggin, Andy J.; Hurst, Elliot A.; Groot, Lennart V. de

    2018-01-01

    To create meaningful models of the geomagnetic field, high-quality directional and intensity input data are needed. However, while it is fairly straightforward to obtain directional data, intensity data are much scarcer, especially for periods before the Holocene. Here, we present data from twelve flows (age range ∼ 200 to ∼ 470 ka) in the East Eifel volcanic field (Germany). These sites had been previously studied and are resampled to further test the recently proposed multi-method palaeointensity approach. Samples are first subjected to classic palaeomagnetic and rock magnetic analyses to optimise the subsequent palaeointensity experiments. Four different palaeointensity methods - IZZI-Thellier, the multispecimen method, calibrated pseudo-Thellier, and microwave-Thellier - are being used in the present study. The latter should be considered as supportive because only one or two specimens per site could be processed. Palaeointensities obtained for ten sites pass our selection criteria: two sites are successful with a single approach, four sites with two approaches, three more sites work with three approaches, and one site with all four approaches. Site-averaged intensity values typically range between 30 and 35 μT. No typically low palaeointensity values are found, in line with paleodirectional results which are compatible with regular palaeosecular variation of the Earth's magnetic field. Results from different methods are remarkably consistent and generally agree well with the values previously reported. They appear to be below the average for the Brunhes chron; there are no indications for relatively higher palaeointensities for units younger than 300 ka. However, our young sites could be close in age, and therefore may not represent the average intensity of the paleofield. Three of our sites are even considered coeval; encouragingly, these do yield the same palaeointensity within uncertainty bounds.

  14. How Knowledge Organisations Work: The Case of Software Firms

    ERIC Educational Resources Information Center

    Gottschalk, Petter

    2007-01-01

    Knowledge workers in software firms solve client problems in sequential and cyclical work processes. Sequential and cyclical work takes place in the value configuration of a value shop. While typical examples of value chains are manufacturing industries such as paper and car production, typical examples of value shops are law firms and medical…

  15. Night Sky Brightness at San Pedro Martir Observatory

    NASA Astrophysics Data System (ADS)

    Plauchu-Frayn, I.; Richer, M. G.; Colorado, E.; Herrera, J.; Córdova, A.; Ceseña, U.; Ávila, F.

    2017-03-01

    We present optical UBVRI zenith night sky brightness measurements collected on 18 nights during 2013 to 2016 and SQM measurements obtained daily over 20 months during 2014 to 2016 at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir (OAN-SPM) in México. The UBVRI data are based upon CCD images obtained with the 0.84 m and 2.12 m telescopes, while the SQM data are obtained with a high-sensitivity, low-cost photometer. The typical moonless night sky brightness at zenith averaged over the whole period is U = 22.68, B = 23.10, V = 21.84, R = 21.04, I = 19.36, and SQM = 21.88 mag arcsec^-2, once corrected for zodiacal light. We find no seasonal variation of the night sky brightness measured with the SQM. The typical night sky brightness values found at OAN-SPM are similar to those reported for other astronomical dark sites at a similar phase of the solar cycle. We find a trend of decreasing night sky brightness with decreasing solar activity during the period of the observations. This trend implies that the sky has become darker by ΔU = 0.7, ΔB = 0.5, ΔV = 0.3, ΔR = 0.5 mag arcsec^-2 since early 2014 due to the present solar cycle.

  16. Manual or automated measuring of antipsychotics' chemical oxygen demand.

    PubMed

    Pereira, Sarah A P; Costa, Susana P F; Cunha, Edite; Passos, Marieta L C; Araújo, André R S T; Saraiva, M Lúcia M F S

    2018-05-15

    Antipsychotic (AP) drugs are accumulating in terrestrial and aqueous resources due to their current levels of consumption. Thus, the search for methods to assess the contamination load of these drugs is mandatory. The COD is a key parameter used for monitoring water quality by assessing the effect of polluting agents on the oxygen level. The present work therefore aims to assess the chemical oxygen demand (COD) levels of several typical and atypical antipsychotic drugs in order to obtain structure-activity relationships. The titrimetric method was implemented with potassium dichromate as the oxidant and a digestion step of 2 h, followed by measurement of the remaining unreduced dichromate by titration. An automated sequential injection analysis (SIA) method was then also used, aiming to overcome some drawbacks of the titrimetric method. The results showed a relationship between the chemical structures of the antipsychotic drugs and their COD values, with the presence of aromatic rings and oxidizable groups giving higher COD values. Good agreement was obtained between the results of the reference batch procedure and the SIA system, and the APs were clustered in two groups, with a ratio between the methodologies of 2 or 4 in the case of lower or higher COD values, respectively. The SIA methodology is capable of operating as a screening method at any stage of a synthetic process, while also being more environmentally friendly and cost-effective. Besides, the studies presented open promising perspectives for improving the effectiveness of pharmaceutical removal from waste effluents by assessing COD values. Copyright © 2018 Elsevier Inc. All rights reserved.
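    For reference, the back-titration arithmetic behind a dichromate COD figure is the standard relation COD (mg O2/L) = (V_blank - V_titrant_sample) x M_FAS x 8000 / V_sample, where the titrant is ferrous ammonium sulfate (FAS) and 8000 is the milliequivalent mass of oxygen in mg. The volumes and molarity in the sketch below are illustrative, not the study's measurements.

        # Sketch of the standard back-titration calculation for dichromate COD.
        # COD (mg O2/L) = (V_blank - V_titrant_sample) * M_FAS * 8000 / V_sample
        def cod_mg_O2_per_L(v_blank_mL, v_titrant_sample_mL, m_fas_mol_per_L, v_sample_mL):
            return (v_blank_mL - v_titrant_sample_mL) * m_fas_mol_per_L * 8000.0 / v_sample_mL

        # Illustrative volumes and molarity (not measured data)
        print(cod_mg_O2_per_L(v_blank_mL=24.0, v_titrant_sample_mL=18.5,
                              m_fas_mol_per_L=0.10, v_sample_mL=20.0))   # -> 220.0 mg O2/L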

  17. The ultrastructure and flexibility of thylakoid membranes in leaves and isolated chloroplasts as revealed by small-angle neutron scattering.

    PubMed

    Unnep, R; Zsiros, O; Solymosi, K; Kovács, L; Lambrev, P H; Tóth, T; Schweins, R; Posselt, D; Székely, N K; Rosta, L; Nagy, G; Garab, G

    2014-09-01

    We studied the periodicity of the multilamellar membrane system of granal chloroplasts in different isolated plant thylakoid membranes, using different suspension media, as well as in detached leaves and isolated protoplasts, using small-angle neutron scattering. Freshly isolated thylakoid membranes suspended in isotonic or hypertonic media containing sorbitol supplemented with cations displayed Bragg peaks typically between 0.019 and 0.023 Å⁻¹, corresponding to spatially and statistically averaged repeat distance values of about 275-330 Å. Similar data obtained earlier led us in previous work to propose an origin from the periodicity of stroma thylakoid membranes. However, detached leaves of eleven different species, infiltrated with or soaked in D2O in dim laboratory light or transpired with D2O prior to measurements, exhibited considerably smaller repeat distances, typically between 210 and 230 Å, ruling out a stromal membrane origin. Similar values were obtained on isolated tobacco and spinach protoplasts. When NaCl was used as the osmoticum, the Bragg peaks of isolated thylakoid membranes almost coincided with those in the same batch of leaves, and the repeat distances were very close to the electron microscopically determined values in the grana. Although neutron scattering and electron microscopy yield somewhat different values, which is not fully understood, we can conclude that small-angle neutron scattering is a suitable technique to study the periodic organization of granal thylakoid membranes in intact leaves under physiological conditions and with a time resolution of minutes or shorter. We also show here, for the first time on leaves, that the periodicity of thylakoid membranes in situ responds dynamically to moderately strong illumination. This article is part of a special issue entitled: photosynthesis research for sustainability: keys to produce clean energy. Copyright © 2014 Elsevier B.V. All rights reserved.
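    The conversion between the quoted Bragg peak positions and repeat distances is d = 2*pi/q: peaks at 0.019-0.023 Å^-1 correspond to roughly 273-331 Å, while repeat distances of 210-230 Å correspond to peaks near 0.027-0.030 Å^-1, as the short sketch below shows.

        import math

        def repeat_distance_angstrom(q_inv_angstrom):
            """Lamellar repeat distance from the Bragg peak position, d = 2*pi/q."""
            return 2.0 * math.pi / q_inv_angstrom

        for q in (0.019, 0.023, 0.027, 0.030):   # peak positions, Å^-1
            print(f"q = {q:.3f} Å^-1  ->  d ≈ {repeat_distance_angstrom(q):.0f} Å")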

  18. The origin of blue-green window and the propagation of radiation in ocean waters

    NASA Astrophysics Data System (ADS)

    Reghunath, A. T.; Venkataramanan, V.; Suviseshamuthu, D. Victor; Krishnamohan, R.; Prasad, B. Raghavendra

    1991-01-01

    A review of the present knowledge about the origin of the blue-green window in the attenuation spectrum of ocean waters is presented. The various physical mechanisms which contribute to the formation of the window are dealt with separately and discussed. The typical values of the attenuation coefficient arising from the various processes are compiled to obtain the total beam attenuation coefficient. These values are then compared with measured values of the attenuation coefficient for ocean waters collected from the Arabian Sea and the Bay of Bengal. The region of minimum attenuation in pure particle-free sea water is found to be at 450 to 500 nm. It is shown that in the presence of suspended particles and chlorophyll, the window shifts to the longer wavelength side. Some suggestions for future work in this area are also given in the concluding section.

  19. The variance of the locally measured Hubble parameter explained with different estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk

    We study the expected variance of measurements of the Hubble constant, H_0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent determination of the Hubble constant in the local universe.

  20. Methane and carbon dioxide fluxes in the waterlogged forests of south and middle taiga of Western Siberia

    NASA Astrophysics Data System (ADS)

    Glagolev, M. V.; Ilyasov, D. V.; Terentieva, I. E.; Sabrekov, A. F.; Mochenov, S. Yu; Maksutov, S. S.

    2018-03-01

    Field measurements of methane and carbon dioxide flux were carried out using portable static chambers in south (ST) and middle taiga subzones (MT) of Western Siberia (WS) from 16 to 24 August 2015. Two sites were investigated: Bakchar bog in the Tomsk region (in typical ecosystems for this area: oligotrophic bog/forest border and waterlogged forest) and Shapsha in Khanty-Mansiysk region (in waterlogged forest). The highest values of methane fluxes (mgC·m-2·h-1) were obtained in burnt wet birch forest (median 6.96; first quartile 3.12; third quartile 8.95). The lowest values of methane fluxes (among the sites mentioned above) were obtained in seasonally waterlogged forests (median -0.08; first and third quartiles are -0.14 and -0.03 mgC·m-2·h-1 respectively). These data will help to estimate the regional methane flux from the waterlogged and periodically flooded forests and to improve its prediction.

  1. Gradient boride layers formed by diffusion carburizing and laser boriding

    NASA Astrophysics Data System (ADS)

    Kulka, M.; Makuch, N.; Dziarski, P.; Mikołajczak, D.; Przestacki, D.

    2015-04-01

    Laser boriding, instead of diffusion boriding, was proposed for the formation of gradient borocarburized layers. The microstructure and properties of these layers were compared to those obtained after typical diffusion borocarburizing. The first treatment consisted of diffusion carburizing and laser boriding only. Three zones are present in the microstructure: the laser-borided zone, the hardened carburized zone, and the carburized layer without heat treatment. However, a sharp decrease in microhardness was observed below the laser-borided zone. Additionally, these layers were characterized by a changing value of the mass wear intensity factor and thus by a changing abrasive wear resistance. Although very low values of the mass wear intensity factor Imw were obtained at the beginning of friction, these values increased during the later stages of friction. This may be caused by fluctuations in the microhardness of the hardened carburized zone (HAZ). The use of through hardening after carburizing and laser boriding eliminated these fluctuations. The microstructure of this layer consisted of two zones: the laser-borided zone and the hardened carburized zone. The mass wear intensity factor reached a constant value for this layer, comparable to that obtained in the case of diffusion borocarburizing followed by through hardening. Therefore, diffusion boriding could be replaced by laser boriding when high abrasive wear resistance is required. However, the possibilities of applying laser boriding instead of the diffusion process are limited. For components requiring high fatigue strength, substituting laser boriding for diffusion boriding was not advisable. The surface cracks formed during laser re-melting caused the first fatigue crack to appear relatively early. Preheating the surface before the laser beam action would prevent the surface cracks and improve the fatigue strength. Although the cohesion of the laser-borided carburized layer was sufficient, the diffusion-borocarburized layer showed better cohesion.

  2. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
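    The marginal building block named here, a generalized Pareto distribution fitted to threshold exceedances, can be sketched with standard tools; the gust data below are synthetic, and the spatial Student's t-process part of the framework is not reproduced.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(4)
        # Synthetic peak wind-gust observations at one site (m/s), not real windstorm data
        gusts = rng.gumbel(loc=22.0, scale=4.0, size=3000)

        threshold = np.quantile(gusts, 0.95)
        exceedances = gusts[gusts > threshold] - threshold

        # Generalized Pareto fit to the threshold exceedances (marginal model only)
        shape, _, scale = genpareto.fit(exceedances, floc=0.0)
        q99 = threshold + genpareto.ppf(0.99, shape, loc=0.0, scale=scale)
        print(f"threshold = {threshold:.1f} m/s, GPD shape = {shape:.2f}, scale = {scale:.2f}")
        print(f"99th-percentile exceedance level ≈ {q99:.1f} m/s")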

  3. Application of laser Doppler velocimeter to chemical vapor laser system

    NASA Technical Reports Server (NTRS)

    Gartrell, Luther R.; Hunter, William W., Jr.; Lee, Ja H.; Fletcher, Mark T.; Tabibi, Bagher M.

    1993-01-01

    A laser Doppler velocimeter (LDV) system was used to measure iodide vapor flow fields inside two different-sized tubes. Typical velocity profiles across the laser tubes were obtained with an estimated +/-1 percent bias and +/-0.3 to 0.5 percent random uncertainty in the mean values and +/-2.5 percent random uncertainty in the turbulence-intensity values. Centerline velocities and turbulence intensities for various longitudinal locations ranged from 13 to 17.5 m/sec and 6 to 20 percent, respectively. In view of these findings, the effects of turbulence should be considered for flow field modeling. The LDV system provided calibration data for pressure and mass flow systems used routinely to monitor the research laser gas flow velocity.

  4. Morphology and solubility of multiple crystal forms of Taka-amylase A

    NASA Astrophysics Data System (ADS)

    Ninomiya, Kumiko; Yamamoto, Tenyu; Oheda, Tadashi; Sato, Kiyotaka; Sazaki, Gen; Matsuura, Yoshiki

    2001-01-01

An α-amylase originating from the mold Aspergillus oryzae, Taka-amylase A (Mr of 52 kDa, pI of 3.8), has been purified to an electrophoretically single-band grade. Crystallization behaviors were investigated using ammonium sulfate and polyethylene glycol 8000 as precipitants. The variations in the morphology of the crystals obtained with changing crystallization parameters are described. Five apparently different crystal forms were obtained, and their morphology and crystallographic data have been determined. Solubility values of four typical forms were measured using a Michelson-type two-beam interferometer. The results of these experiments showed that this protein can be a potentially interesting and useful model for crystal growth studies, with a gram-amount availability of pure protein sample.

  5. Adjusting Estimates of the Expected Value of Information for Implementation: Theoretical Framework and Practical Application.

    PubMed

    Andronis, Lazaros; Barton, Pelham M

    2016-04-01

Value of information (VoI) calculations give the expected benefits of decision making under perfect information (EVPI) or sample information (EVSI), typically on the premise that any treatment recommendations made in light of this information will be implemented instantly and fully. This assumption is unlikely to hold in health care; evidence shows that obtaining further information typically leads to "improved" rather than "perfect" implementation. The aim of this work is to present a method of calculating the expected value of further research that accounts for the reality of improved implementation. This work extends an existing conceptual framework by introducing additional states of the world regarding information (sample information, in addition to current and perfect information) and implementation (improved implementation, in addition to current and optimal implementation). The extension allows calculating the "implementation-adjusted" EVSI (IA-EVSI), a measure that accounts for different degrees of implementation. Calculations of implementation-adjusted estimates are illustrated under different scenarios through a stylized case study in non-small cell lung cancer. In the particular case study, the population values for EVSI and IA-EVSI were £25 million and £8 million, respectively; thus, a decision assuming perfect implementation would have overestimated the expected value of research by about £17 million. IA-EVSI was driven by the assumed time horizon and, importantly, the specified rate of change in implementation: the higher the rate, the greater the IA-EVSI and the lower the difference between IA-EVSI and EVSI. Traditionally calculated measures of population VoI rely on unrealistic assumptions about implementation. This article provides a simple framework that accounts for improved, rather than perfect, implementation and offers more realistic estimates of the expected value of research. © The Author(s) 2015.
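
    As a rough numerical illustration of the adjustment, the sketch below weights the per-period value of further research by the gain in implementation that the research is assumed to produce, instead of assuming instant and full uptake. The linear-uptake rule, population figures and discount rate are illustrative assumptions, not the authors' model or case-study inputs.

      def population_evsi(evsi_per_person, population_per_year, horizon_years, discount=0.035):
          # conventional calculation: instant, full implementation assumed
          return sum(evsi_per_person * population_per_year / (1 + discount) ** t
                     for t in range(1, horizon_years + 1))

      def ia_evsi(evsi_per_person, population_per_year, horizon_years,
                  uptake_current=0.4, uptake_with_info=0.9, years_to_reach=5,
                  discount=0.035):
          # implementation-adjusted value: only the improvement in uptake counts,
          # and it is assumed to grow linearly until it plateaus
          total = 0.0
          for t in range(1, horizon_years + 1):
              frac = min(t / years_to_reach, 1.0)
              uptake_gain = (uptake_with_info - uptake_current) * frac
              total += (evsi_per_person * uptake_gain * population_per_year
                        / (1 + discount) ** t)
          return total

      print(population_evsi(100, 10_000, 10))   # analogous to EVSI
      print(ia_evsi(100, 10_000, 10))           # analogous to IA-EVSI, always smaller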

  6. Meeting Expanding Needs to Collect Food Intake Specificity: The Nutrition Data System for Research (NDS-R)

    NASA Technical Reports Server (NTRS)

    VanHeel, Nancy; Pettit, Janet; Rice, Barbara; Smith, Scott M.

    2003-01-01

    Food and nutrient databases are populated with data obtained from a variety of sources including USDA Reference Tables, scientific journals, food manufacturers and foreign food tables. The food and nutrient database maintained by the Nutrition Coordinating Center (NCC) at the University of Minnesota is continually updated with current nutrient data and continues to be expanded with additional nutrient fields to meet diverse research endeavors. Data are strictly evaluated for reliability and relevance before incorporation into the database; however, the values are obtained from various sources and food samples rather than from direct chemical analysis of specific foods. Precise nutrient values for specific foods are essential to the nutrition program at the National Aeronautics and Space Administration (NASA). Specific foods to be included in the menus of astronauts are chemically analyzed at the Johnson Space Center for selected nutrients. A request from NASA for a method to enter the chemically analyzed nutrient values for these space flight food items into the Nutrition Data System for Research (NDS-R) software resulted in modification of the database and interview system for use by NASA, with further modification to extend the method for related uses by more typical research studies.

  7. Extending the range of turbidity measurement using polarimetry

    DOEpatents

    Baba, Justin S.

    2017-11-21

Turbidity measurements are obtained by directing a polarized optical beam at a scattering sample. Scattered portions of the beam are measured in orthogonal polarization states to determine a scattering minimum and a scattering maximum. These values are used to determine the degree of polarization of the scattered portions of the beam, and concentrations of scattering materials or turbidity can be estimated from the degree of polarization. Typically, linear polarizations are used, and scattering is measured along an axis that is orthogonal to the direction of propagation of the polarized optical beam.
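
    A minimal sketch of the computation described in the abstract: the degree of polarization from the scattering maximum and minimum measured in orthogonal polarization states, followed by a turbidity estimate through a calibration curve. The calibration function and its coefficients are hypothetical placeholders, not values from the patent.

      import math

      def degree_of_polarization(i_max, i_min):
          # standard definition from the maximum and minimum scattered intensities
          return (i_max - i_min) / (i_max + i_min)

      def turbidity_from_dop(dop, a=50.0, b=-60.0):
          # hypothetical monotonic calibration: stronger scattering depolarizes more
          return a + b * math.log(dop)

      dop = degree_of_polarization(i_max=0.82, i_min=0.18)
      print(dop, turbidity_from_dop(dop))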

  8. Ground-based determination of atmospheric radiance for correction of ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Peacock, K.

    1974-01-01

    A technique is described for estimating the atmospheric radiance observed by a downward sensor (ERTS) using ground-based measurements. A formula is obtained for the sky radiance at the time of the ERTS overpass from the radiometric measurement of the sky radiance made at a particular solar zenith angle and air mass. A graph illustrates ground-based sky radiance measurements as a function of the scattering angle for a range of solar air masses. Typical values for sky radiance at a solar zenith angle of 48 degrees are given.

  9. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    PubMed

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).

  10. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    PubMed Central

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ²) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569
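
    A quick numerical check of the averaging principle stated above: for a smooth, symmetric outcome function, the heterogeneous and homogeneous (averaged) models differ by O(ε²), so halving ε cuts the gap by roughly a factor of four. The outcome function below is an arbitrary smooth example, not one of the queuing, auction or network models treated in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      delta = rng.uniform(-1, 1, size=1000)
      delta -= delta.mean()                    # zero-mean heterogeneity profile

      def outcome(x):
          # symmetric, differentiable outcome of the "model"
          return np.mean(1.0 / (1.0 + x))

      x_bar = 2.0
      for eps in [0.4, 0.2, 0.1, 0.05]:
          het = outcome(x_bar + eps * delta)   # heterogeneous model
          hom = outcome(np.array([x_bar]))     # homogeneous model (average value)
          print(f"eps={eps:5.2f}  |difference| = {abs(het - hom):.2e}")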

  11. Analysis and Development of Finite Element Methods for the Study of Nonlinear Thermomechanical Behavior of Structural Components

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1995-01-01

Underintegrated methods are investigated with respect to their stability and convergence properties. The focus was on identifying regions where they work and regions where techniques such as hourglass viscosity and hourglass control can be used. The results obtained show that underintegrated methods typically lead to finite element stiffness matrices with spurious modes in the solution. However, problems exist (scalar elliptic boundary value problems) where underintegration with hourglass control yields convergent solutions. Also, stress averaging in underintegrated stiffness calculations does not necessarily lead to stable or convergent stress states.

  12. [Risk groups as related to gastric cancer].

    PubMed

    Vartan'ian, M G; Zhandarova, L F; Korzhenskiĭ, F P

    1979-01-01

The features of life, labour, habits, inheritance pattern, type of diet, and course of the disease were examined in 440 gastric cancer patients. The most typical and frequently observed factors were singled out, and the material obtained was processed by an electronic computer. The informative value of the risk factors was checked by selection, using questionnaires of patients irrespective of the reason for their referral to the clinic. Patient age over 40 and the character of work should become the basic indications for limiting the number of persons subject to gastrological examination.

  13. Radiative transfer in the surfaces of atmosphereless bodies. III - Interpretation of lunar photometry

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Irvine, W. M.

    1982-01-01

Narrowband and UBV photoelectric phase curves of the entire lunar disk and surface photometry of some craters have been interpreted using a newly developed generalized radiative transfer theory for planetary regoliths. The data are well fitted by the theory, yielding information on both macroscopic and microscopic lunar properties. Derived values for the integrated disk geometric albedo are considerably higher than quoted previously, because of the present inclusion of an accurately determined opposition effect. The mean surface roughness, defined as the ratio of the height to the radius of a typical irregularity, is found to be 0.9 ± 0.1, or somewhat less than the mean value of 1.2 obtained for the asteroids. From the phase curves, wavelength-dependent values of the single scattering albedo and the Henyey-Greenstein asymmetry factor for the average surface particle are derived.

  14. Electron Affinity of Phenyl-C61-Butyric Acid Methyl Ester (PCBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Bryon W.; Whitaker, James B.; Wang, Xue B.

    2013-07-25

The gas-phase electron affinity (EA) of phenyl-C61-butyric acid methyl ester (PCBM), one of the best-performing electron acceptors in organic photovoltaic devices, is measured by low-temperature photoelectron spectroscopy for the first time. The obtained value of 2.63(1) eV is only ca. 0.05 eV lower than that of C60 (2.68(1) eV), compared to a 0.09 V difference in their E1/2 values measured in this work by cyclic voltammetry. Literature E(LUMO) values for PCBM, which are typically estimated from cyclic voltammetry and commonly used as a quantitative measure of acceptor properties, are dispersed over a wide range between -4.3 and -3.62 eV; the reasons for such a large discrepancy are analyzed here, and a protocol for reliable and consistent estimation of relative fullerene-based acceptor strength in solution is proposed.

  15. The effect of addition of selected milk protein preparations on the growth of Lactobacillus acidophilus and physicochemical properties of fermented milk.

    PubMed

    Gustaw, Waldemar; Kozioł, Justyna; Radzki, Wojciech; Skrzypczak, Katarzyna; Michalak-Majewska, Monika; Sołowiej, Bartosz; Sławińska, Aneta; Jabłońska-Ryś, Ewa

    2016-01-01

The intake of fermented milk products, especially yoghurts, has been increasing systematically for a few decades. The purpose of this work was to obtain milk products fermented with a mix of bacterial cultures (yoghurt bacteria and Lactobacillus acidophilus LA-5) and enriched with selected milk protein preparations, and to determine the physicochemical and rheological properties of the obtained products. The following additives were applied in the experiment: whey protein concentrate (WPC 65), whey protein isolate (WPI), demineralised whey powder (SPD), caseinoglycomacropeptide (CGMP), α-lactalbumin (α-la), sodium caseinate (KNa) and calcium caseinate (KCa). Milk was fermented using the probiotic strain Lactobacillus acidophilus LA-5 and a typical yoghurt culture. The products were analysed in terms of the survivability of bacterial cells during refrigerated storage, rheological properties and syneresis. Fermented milk products were obtained using blends of bacterial strains: ST-B01:Lb-12 (1:1) and ST-B01:Lb-12:LA-5 (1:1:2). Milk beverages fermented with typical yoghurt bacteria and the LA-5 strain showed intensive syneresis. The addition of the LA-5 strain caused the formation of harder acid gels compared with typical yoghurts. Milk products prepared from skimmed milk had higher values of hardness and consistency coefficient. Increasing the concentration of the milk preparations (except for WPI) did not cause significant differences in the hardness of the acid gels obtained by fermentation of the mixed culture with a probiotic strain. The applied preparations improved the physicochemical properties of the milk beverages prepared with a probiotic strain. Increasing the concentration of the milk protein preparations resulted in a gradual decrease of the secreted whey. Among the products made of full milk powder and subjected to three weeks of refrigerated storage, the highest survivability of Lb. acidophilus LA-5 was noticed in the samples fortified with 1% WPC.

  16. Cryogenic temperature effects on sting-balance deflections in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Popernack, Thomas G., Jr.; Adcock, Jerry B.

    1990-01-01

    An investigation was conducted at the National Transonic Facility (NTF) to document the change in sting-balance deflections from ambient to cryogenic temperatures. Space limitations in some NTF models do not allow the use of on-board angle of attack instrumentation. In order to obtain angle of attack data, pre-determined sting-balance bending data must be combined with arc sector angle measurements. Presently, obtaining pretest sting-balance data requires several cryogenic cycles and cold loadings over a period of several days. A method of reducing the calibration time required is to obtain only ambient temperature sting-balance bending data and correct for changes in material properties at cryogenic temperatures. To validate this method, two typical NTF sting-balance combinations were tested. The test results show excellent agreement with the predicted values and the repeatability of the data was 0.01 degree.

  17. Correcting the initialization of models with fractional derivatives via history-dependent conditions

    NASA Astrophysics Data System (ADS)

    Du, Maolin; Wang, Zaihua

    2016-04-01

Fractional differential equations are increasingly used to model memory (history-dependent, non-local, or hereditary) phenomena. Conventional initial values of fractional differential equations are defined at a point, while recent works define initial conditions over histories. We prove, with a simple counter-example, that the conventional initialization of fractional differential equations with a Riemann-Liouville derivative is incorrect. The initial values were assumed to be arbitrarily given for a typical fractional differential equation, but we find that one of these values can only be zero. We show that fractional differential equations are of infinite dimension, and that the initial conditions, the initial histories, are defined as functions over intervals. We obtain the equivalent integral equation for the Caputo case. With a simple fractional model of materials, we illustrate that the recovery behavior is correct with the initial creep history, but is wrong with initial values given only at the starting point of the recovery. We demonstrate the application of initial histories by solving a forced fractional Lorenz system numerically.

  18. Precise Measurements of the Masses of Cs, Rb and Na A New Route to the Fine Structure Constant

    NASA Astrophysics Data System (ADS)

    Rainville, Simon; Bradley, Michael P.; Porto, James V.; Thompson, James K.; Pritchard, David E.

    2001-01-01

We report new values for the atomic masses of the alkali 133Cs, 87Rb, 85Rb, and 23Na with uncertainties ≤ 0.2 ppb. These results, obtained using Penning trap single ion mass spectrometry, are typically two orders of magnitude more accurate than previously measured values. Combined with values of h/m_atom from atom interferometry measurements and accurate wavelength measurements for different atoms, these values will lead to new ppb-level determinations of the molar Planck constant N_A h and the fine structure constant α. This route to α is based on simple physics. It can potentially achieve the several-ppb level of accuracy needed to test the QED determination of α extracted from measurements of the electron g factor. We also demonstrate an electronic cooling technique that cools our detector and ion below the 4 K ambient temperature. This technique improves by about a factor of three our ability to measure the ion's axial motion.

  19. Analysis of zenith tropospheric delay in tropical latitudes

    NASA Astrophysics Data System (ADS)

    Zablotskyj, Fedir; Zablotska, Alexandra

    2010-05-01

The paper studies some peculiarities of the zenith tropospheric delay in tropical latitudes. The values of the dry and wet components of the zenith tropospheric delay obtained by integrating radiosonde data at 9 stations are presented: Guam, Seychelles, Singapore, Pago Pago, Hilo, Koror, San Cristobal, San Juan and Belem. A total of 350 atmospheric models were constructed for the period from the 11th to the 20th of January, April, July and October 2008 at 0h and 12h UT (Universal Time). The dry dd(aer) and wet dw(aer) components of the zenith tropospheric delay were determined by integration for each atmospheric model. The dry dd(SA), dd(HO) and wet dw(SA), dw(HO) components (Saastamoinen and Hopfield analytical models) were then calculated from the surface values of pressure P0, temperature t0 and relative air humidity U0 at the height H0, and from the geographic latitude φ. The analysis of the averaged quantities and of the differences δdd(SA), δdd(HO), δdw(SA), δdw(HO) between the corresponding components obtained from the radiosonde data and from the analytical models shows the following: the zenith tropospheric delay obtained from radiosonde data reaches considerably larger values in the equatorial zone than in high and middle latitudes, mainly because of the wet component; the dry component amounts on average to 2290 mm and the wet component to 290 mm. For the Saastamoinen and Hopfield models, the dry component differences δdd(SA) and δdd(HO) are negative in all cases and average -20 mm, which is typical of neither high nor middle latitudes. The differences between the wet components obtained from radiosonde data and from the Saastamoinen and Hopfield models are generally positive, with the δdw(HO) values larger than the corresponding δdw(SA) values by 20-30 mm. This is because the tropospheric height assumed in the Hopfield determination of the wet component does not correspond to the mean real tropospheric height typical of tropical latitudes. There are also considerable differences in the average zenith tropospheric delay between stations of the equatorial zone; from the radiosonde data they can amount to 100 mm or more. These differences are caused by the different vertical distribution of air humidity: in the lower half of the troposphere, for example, the mean partial pressure of water vapour at the Singapore station is about 2 to 2.5 times larger than at the Hilo station. Recommendations on modifying the Saastamoinen and Hopfield models for tropical latitudes are given in the conclusion of the paper.
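
    For reference, analytical values of the kind discussed above can be reproduced approximately with the commonly quoted Saastamoinen expressions for the dry (hydrostatic) and wet zenith delays, computed from surface pressure P0 [hPa], temperature T0 [K], water-vapour pressure e0 [hPa], latitude and station height. The sketch below uses textbook coefficients and illustrative equatorial surface values; treat both as assumptions rather than the exact inputs of the paper.

      import math

      def saastamoinen_zenith_delays_mm(p0_hpa, t0_k, e0_hpa, lat_deg, h_km=0.0):
          f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00028 * h_km
          dry = 0.002277 * p0_hpa / f                        # metres
          wet = 0.002277 * (1255.0 / t0_k + 0.05) * e0_hpa   # metres
          return 1000.0 * dry, 1000.0 * wet

      # roughly 2300 mm dry and 290 mm wet for typical equatorial surface values,
      # of the same order as the radiosonde-integrated results quoted above
      print(saastamoinen_zenith_delays_mm(1010.0, 300.0, 30.0, lat_deg=1.0))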

  20. Conductivity independent scaling laws for convection and magnetism in fast rotating planets

    NASA Astrophysics Data System (ADS)

    Starchenko, S.

    2012-09-01

In the limit of negligible molecular diffusivity, viscosity and magnetic diffusivity effects, I derive scaling laws for convection and magnetism from first principles for fast rotating planets. For the Earth, Jupiter, Saturn and the ancient dynamo-active Mars it is reasonable to suppose the domination of magnetic energy over kinetic energy, which results in a typical magnetic field B proportional to the cube root of the buoyancy flux F [3] driving the convection, while B is independent of the conductivity σ and the angular rotation rate Ω. The same scaling law was previously obtained via a compilation of many numerical planetary dynamo simulations [1-3]. Scaling laws are also obtained for the typical hydrodynamic scale h, velocity V, Archimedean acceleration A, electromagnetic scale d and the sine s of the angle between the magnetic and velocity vectors. In Uranus, Neptune and Ganymede the local magnetic Reynolds number rm = μσVd ~ 1, with μ the magnetic permeability of vacuum. The corresponding magnetic energy could be of the order of the kinetic energy, resulting in a relatively lower magnetic field strength B = (μρ)^(1/2) V, with density ρ. This may explain the magnetic field values and non-dipolar structures of Uranus, Neptune and Ganymede.
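
    The equipartition-type estimate quoted above, B = (μρ)^(1/2) V, can be evaluated directly; the density and velocity below are illustrative placeholders for an icy-giant dynamo region, not values taken from the abstract.

      import math

      mu0 = 4.0e-7 * math.pi      # vacuum magnetic permeability [H/m]
      rho = 1.0e3                 # assumed fluid density [kg/m^3]
      v = 1.0e-3                  # assumed convective velocity [m/s]
      b = math.sqrt(mu0 * rho) * v
      print(f"B ~ {b:.1e} T")     # about 3.5e-5 T for these assumed numbers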

  1. Image quality, meteorological optical range, and fog particulate number evaluation using the Sandia National Laboratories fog chamber

    DOE PAGES

    Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.; ...

    2017-08-24

The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.

  2. Image quality, meteorological optical range, and fog particulate number evaluation using the Sandia National Laboratories fog chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.

The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.

  3. Mean glandular dose to patients from stereotactic breast biopsy procedures.

    PubMed

    Paixão, Lucas; Chevalier, Margarita; Hurtado-Romero, Antonio E; Garayoa, Julia

    2018-06-07

The aim of this work is to study the radiation doses delivered to a group of patients who underwent a stereotactic breast biopsy (SBB) procedure. Mean glandular doses (MGD) were estimated from the air kerma measured at the breast surface entrance multiplied by specific conversion coefficients (DgN) that were calculated using Monte Carlo simulations. DgN values were calculated for the 0° and ±15° projections used in SBB and for the particular beam quality. Data on 61 patients were collected, showing that a typical SBB procedure comprises 10 images. MGD was on average (4 ± 2) mGy, with (0.38 ± 0.06) mGy per image. Using specific conversion coefficients instead of the typical DgN for mammography/tomosynthesis yields MGD values for SBB that are, on average, around 65% lower. © 2018 Institute of Physics and Engineering in Medicine.
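
    The dose estimate described above reduces to a simple product per image: entrance air kerma times a DgN conversion coefficient, summed over the roughly ten exposures of a biopsy. The numbers below are placeholders chosen only to reproduce the reported order of magnitude, not the Monte Carlo DgN values or patient data of the study.

      air_kerma_mGy = [2.0] * 10    # assumed entrance air kerma per image
      dgn = 0.19                    # assumed DgN conversion coefficient [mGy MGD per mGy air kerma]

      mgd_per_image = [k * dgn for k in air_kerma_mGy]
      print(sum(mgd_per_image), mgd_per_image[0])   # a few mGy total, ~0.4 mGy per image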

  4. Identification of branched-chain amino acid aminotransferases active towards (R)-(+)-1-phenylethylamine among PLP fold type IV transaminases.

    PubMed

    Bezsudnova, Ekaterina Yu; Dibrova, Daria V; Nikolaeva, Alena Yu; Rakitina, Tatiana V; Popov, Vladimir O

    2018-04-10

    New class IV transaminases with activity towards L-Leu, which is typical of branched-chain amino acid aminotransferases (BCAT), and with activity towards (R)-(+)-1-phenylethylamine ((R)-PEA), which is typical of (R)-selective (R)-amine:pyruvate transaminases, were identified by bioinformatics analysis, obtained in recombinant form, and analyzed. The values of catalytic activities in the reaction with L-Leu and (R)-PEA are comparable to those measured for characteristic transaminases with the corresponding specificity. Earlier, (R)-selective class IV transaminases were found to be active, apart from (R)-PEA, only with some other (R)-primary amines and D-amino acids. Sequences encoding new transaminases with mixed type of activity were found by searching for changes in the conserved motifs of sequences of BCAT by different bioinformatics tools. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Bias Reduction in Short Records of Satellite Soil Moisture

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Koster, Randal D.

    2004-01-01

Although surface soil moisture data from different sources (satellite retrievals, ground measurements, and land model integrations of observed meteorological forcing data) have been shown to contain consistent and useful information in their seasonal cycle and anomaly signals, they typically exhibit very different mean values and variability. These biases pose a severe obstacle to exploiting the useful information contained in satellite retrievals through data assimilation. A simple method of bias removal is to match the cumulative distribution functions (cdf) of the satellite and model data. However, accurate cdf estimation typically requires a long record of satellite data. We demonstrate here that by using spatial sampling with a 2 degree moving window we can obtain local statistics based on a one-year satellite record that are a good approximation to those that would be derived from a much longer time series. This result should increase the usefulness of relatively short satellite data records.
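
    The bias-removal step itself is simple quantile (cdf) matching: each satellite retrieval is replaced by the model value with the same cumulative probability. The sketch below uses synthetic gamma-distributed series as stand-ins for the satellite and model records.

      import numpy as np

      rng = np.random.default_rng(2)
      model = rng.gamma(shape=4.0, scale=0.06, size=365)                    # model soil moisture
      satellite = 0.6 * rng.gamma(shape=4.0, scale=0.06, size=365) + 0.05   # biased retrievals

      def cdf_match(x, reference):
          # empirical CDF rank of each value of x, mapped onto the reference quantiles
          ranks = np.searchsorted(np.sort(x), x, side="right") / len(x)
          return np.quantile(reference, ranks)

      matched = cdf_match(satellite, model)
      print(satellite.mean(), model.mean(), matched.mean())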

  6. Comparison of non-invasive MRI measurements of cerebral blood flow in a large multisite cohort.

    PubMed

    Dolui, Sudipto; Wang, Ze; Wang, Danny Jj; Mattay, Raghav; Finkel, Mack; Elliott, Mark; Desiderio, Lisa; Inglis, Ben; Mueller, Bryon; Stafford, Randall B; Launer, Lenore J; Jacobs, David R; Bryan, R Nick; Detre, John A

    2016-07-01

Arterial spin labeling and phase contrast magnetic resonance imaging provide independent non-invasive methods for measuring cerebral blood flow. We compared global cerebral blood flow measurements obtained using pseudo-continuous arterial spin labeling and phase contrast in 436 middle-aged subjects acquired at two sites in the NHLBI CARDIA multisite study. Cerebral blood flow measured by phase contrast (CBF-PC: 55.76 ± 12.05 ml/100 g/min) was systematically higher (p < 0.001) and more variable than cerebral blood flow measured by pseudo-continuous arterial spin labeling (CBF-PCASL: 47.70 ± 9.75). The correlation between global cerebral blood flow values obtained from the two modalities was 0.59 (p < 0.001), explaining less than half of the observed variance in cerebral blood flow estimates. Well-established correlations of global cerebral blood flow with age and sex were similarly observed in both CBF-PCASL and CBF-PC. CBF-PC also demonstrated statistically significant site differences, whereas no such differences were observed in CBF-PCASL. No consistent velocity-dependent effects on pseudo-continuous arterial spin labeling were observed, suggesting that pseudo-continuous labeling efficiency does not vary substantially across typical adult carotid and vertebral velocities, as has previously been suggested. Although CBF-PCASL and CBF-PC values show substantial similarity across the entire cohort, these data do not support calibration of CBF-PCASL using CBF-PC in individual subjects. The wide-ranging cerebral blood flow values obtained by both methods suggest that cerebral blood flow values are highly variable in the general population. © The Author(s) 2016.

  7. Improving xylem hydraulic conductivity measurements by correcting the error caused by passive water uptake.

    PubMed

    Torres-Ruiz, José M; Sperry, John S; Fernández, José E

    2012-10-01

    Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
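
    The correction amounts to estimating the conductivity as the slope of the flow versus pressure-gradient line rather than as a single-point ratio. Below is a sketch with made-up numbers, where the passive-uptake flow F0 is negative so that assuming F0 = 0 underestimates K, as reported above.

      import numpy as np

      grad = np.array([0.0, 2.0, 4.0, 6.0])     # applied pressure gradient P/L
      flow = np.array([-0.8, 1.2, 3.2, 5.2])    # measured flow F (true K = 1.0, F0 = -0.8)

      K, F0 = np.polyfit(grad, flow, 1)         # slope = K, intercept = F0
      K_naive = flow[-1] / grad[-1]             # assumes a zero intercept
      print(K, F0, K_naive)                     # 1.0, -0.8, ~0.87 (underestimate)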

  8. Line-driven winds revisited in the context of Be stars: Ω-slow solutions with high k values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silaj, J.; Jones, C. E.; Curé, M.

    2014-11-01

The standard, or fast, solutions of m-CAK line-driven wind theory cannot account for slowly outflowing disks like the ones that surround Be stars. It has been previously shown that there exists another family of solutions—the Ω-slow solutions—that is characterized by much slower terminal velocities and higher mass-loss rates. We have solved the one-dimensional m-CAK hydrodynamical equation of rotating radiation-driven winds for this latter solution, starting from standard values of the line force parameters (α, k, and δ), and then systematically varying the values of α and k. Terminal velocities and mass-loss rates that are in good agreement with those found in Be stars are obtained from the solutions with lower α and higher k values. Furthermore, the equatorial densities of such solutions are comparable to those that are typically assumed in ad hoc models. For very high values of k, we find that the wind solutions exhibit a new kind of behavior.

  9. Structure of thermal pair clouds around gamma-ray-emitting black holes

    NASA Technical Reports Server (NTRS)

    Liang, Edison P.

    1991-01-01

    Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.

  10. Discovery of a suspected giant radio galaxy with the KAT-7 array

    NASA Astrophysics Data System (ADS)

    Colafrancesco, S.; Mhlahlo, N.; Jarrett, T.; Oozeer, N.; Marchegiani, P.

    2016-02-01

We report the detection of a new suspected giant radio galaxy (GRG) discovered with KAT-7. The GRG core is identified with the Wide-field Infrared Survey Explorer source J013313.50-130330.5, an extragalactic source based on its infrared colours and consistent with a misaligned active galactic nucleus (AGN)-type spectrum at z ≈ 0.3. The multi-ν spectral energy distribution (SED) of the object associated with the GRG core shows a synchrotron peak at ν ≈ 10^14 Hz, consistent with the SED of a blazar-like radio galaxy core. The angular sizes of the lobes are ~4 arcmin for the NW lobe and ~1.2 arcmin for the SE lobe, corresponding to projected linear distances of ~1078 kpc and ~324 kpc, respectively. The best-fitting parameters for the SED of the GRG core and the value of the jet boosting parameter δ = 2 indicate that the GRG jet has a maximum inclination θ ≈ 30 deg with respect to the line of sight, a value obtained for δ = Γ, while the minimum value of θ is not constrained owing to the degeneracy with the value of the Lorentz factor Γ. Given the photometric redshift z ≈ 0.3, this GRG has a core luminosity of P1.4 GHz ≈ 5.52 × 10^24 W Hz-1, and luminosities of P1.4 GHz ≈ 1.29 × 10^25 W Hz-1 for the NW lobe and P1.4 GHz ≈ 0.46 × 10^25 W Hz-1 for the SE lobe, consistent with typical GRG luminosities. The radio lobes show a fractional linear polarization of ≈9 per cent, consistent with typical values found in other GRG lobes.

  11. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
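
    The convergence and screening checks described above can be emulated with a bootstrap on the sensitivity estimates: the analysis is regarded as converged when the width of the confidence intervals falls below a chosen tolerance. The toy model and the simple correlation-based sensitivity measure below are placeholders for the hydrological models and the EET/RSA/variance-based indices used in the study.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          # toy model: the first two inputs matter, the third does not
          return np.sin(x[:, 0]) + 0.3 * x[:, 1] + 0.0 * x[:, 2]

      def sensitivity(x, y):
          return np.abs(np.array([np.corrcoef(x[:, i], y)[0, 1] for i in range(x.shape[1])]))

      n, n_boot, tol = 500, 200, 0.05
      x = rng.uniform(-1, 1, size=(n, 3))
      y = model(x)

      boot = np.empty((n_boot, x.shape[1]))
      for b in range(n_boot):
          idx = rng.integers(0, n, n)
          boot[b] = sensitivity(x[idx], y[idx])

      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      print("indices  :", sensitivity(x, y).round(2))
      print("CI widths:", (hi - lo).round(2), "converged:", bool(np.all(hi - lo < tol)))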

  12. Preparation of Authigenic Pyrite from Methane-bearing Sediments for In Situ Sulfur Isotope Analysis Using SIMS.

    PubMed

    Lin, Zhiyong; Sun, Xiaoming; Peckmann, Jörn; Lu, Yang; Strauss, Harald; Xu, Li; Lu, Hongfeng; Teichert, Barbara M A

    2017-08-31

Different sulfur isotope compositions of authigenic pyrite typically result from the sulfate-driven anaerobic oxidation of methane (SO4-AOM) and from organiclastic sulfate reduction (OSR) in marine sediments. However, unravelling the complex pyritization sequence is a challenge because of the coexistence of different sequentially formed pyrite phases. This manuscript describes a sample preparation procedure that enables the use of secondary ion mass spectrometry (SIMS) to obtain in situ δ34S values of various pyrite generations. This allows researchers to constrain how SO4-AOM affects pyritization in methane-bearing sediments. SIMS analysis revealed an extreme range in δ34S values, spanning from -41.6 to +114.8‰, which is much wider than the range of δ34S values obtained by traditional bulk sulfur isotope analysis of the same samples. Pyrite in the shallow sediment mainly consists of 34S-depleted framboids, suggesting early diagenetic formation by OSR. Deeper in the sediment, more pyrite occurs as overgrowths and euhedral crystals, which display much higher SIMS δ34S values than the framboids. Such 34S-enriched pyrite is related to enhanced SO4-AOM at the sulfate-methane transition zone, postdating OSR. High-resolution in situ SIMS sulfur isotope analyses allow for the reconstruction of the pyritization processes, which cannot be resolved by bulk sulfur isotope analysis.
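
    For readers unfamiliar with the per-mil notation used above, δ34S is the relative deviation of a sample's 34S/32S ratio from the V-CDT reference standard, in parts per thousand. The standard ratio below is the commonly quoted value and the sample ratios are invented to land near the end-members of the reported range; treat both as assumptions rather than the SIMS calibration of the study.

      R_VCDT = 0.0441626                      # commonly quoted 34S/32S of the V-CDT standard

      def delta_34s(r_sample, r_std=R_VCDT):
          return (r_sample / r_std - 1.0) * 1000.0

      print(delta_34s(0.0423), delta_34s(0.0492))   # roughly -42 and +114 per mil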

  13. Year rather than farming system influences protein utilization and energy value of vegetables when measured in a rat model.

    PubMed

    Jørgensen, Henry; Brandt, Kirsten; Lauridsen, Charlotte

    2008-12-01

    The aim of the study was to measure protein utilization and energy value of dried apple, carrot, kale, pea, and potato prepared for human consumption and grown in 2 consecutive years with 3 different farming systems: (1) low input of fertilizer without pesticides (LIminusP), (2) low input of fertilizers and high input of pesticides (LIplusP), (3) and high input of fertilizers and high input of pesticides (HIplusP). In addition, the study goal was to verify the nutritional values, taking into consideration the physiologic state. In experiment 1, the nutritive values, including protein digestibility-corrected amino acid score, were determined in single ingredients in trials with young rats (3-4 weeks) as recommended by the Food and Agriculture Organization of the United Nations/World Health Organization for all age groups. A second experiment was carried out with adult rats to assess the usefulness of digestibility values to predict the digestibility and nutritive value of mixed diets and study the age aspect. Each plant material was included in the diet with protein-free basal mixtures or casein to contain 10% dietary protein. The results showed that variations in protein utilization and energy value determined on single ingredients between cultivation strategies were inconsistent and smaller than between harvest years. Overall, dietary crude fiber was negatively correlated with energy digestibility. The energy value of apple, kale, and pea was lower than expected from literature values. A mixture of plant ingredients fed to adult rats showed lower protein digestibility and higher energy digestibility than predicted. The protein digestibility data obtained using young rats in the calculation of protein digestibility-corrected amino acid score overestimates protein digestibility and quality and underestimates energy value for mature rats. The present study provides new data on protein utilization and energy digestibility of some typical plant foods that may contribute new information for databases on food quality. Growing year but not cultivation system influenced the protein quality and energy value of the vegetables and fruit typical for human consumption.

  14. Bio-optical characteristics of a red tide induced by Mesodinium rubrum in the Cariaco Basin, Venezuela

    NASA Astrophysics Data System (ADS)

    Guzmán, Laurencia; Varela, Ramón; Muller-Karger, Frank; Lorenzoni, Laura

    2016-08-01

The bio-optical changes of the water induced by red tides depend on the type of organism present, and the spectral characterization of such changes can provide useful information on the organism, its abundance and its distribution. Here we present results from the bio-optical characterization of a non-toxic red tide induced by the autotrophic ciliate Mesodinium rubrum. Particle absorption was high [ap(440) = 1.78 m-1] compared to previous measurements in the same region [ap(440) = 0.09 ± 0.06 m-1], with detrital components contributing roughly 11% [ad(440) = 0.19 m-1]. The remainder was attributed to absorption by phytoplankton pigments [aph(440) = 1.60 m-1]. These aph values were ~15 times higher than typical values for these waters. High chlorophyll a concentrations were also measured (52.73 μg L-1), together with alloxanthin (9.52 μg L-1) and chlorophyll c (6.25 μg L-1). This suite of pigments is typical of the algal class Cryptophyceae, from which Mesodinium obtains its chloroplasts. Remote sensing reflectance showed relatively low values [Rrs(440) = 0.0007 sr-1] compared to other Rrs values for the region under high bloom conditions [Rrs(440) = 0.0028 sr-1], with maxima at 388, 484, 520, 596 and 688 nm. Based on the low reflection in the green-yellow relative to other red tides, we propose a new band ratio [Rrs(688)/Rrs(564)] to identify blooms of this particular group of organisms.

Titan Crossing a 5:1 MMR with Iapetus: Constraining the Tidal Recession of Titan and Giving an Explanation for Iapetus' Current Orbit

    NASA Astrophysics Data System (ADS)

    POLYCARPE, William; Lainey, Valery; Vienne, Alain; Noyelles, Benoît; Saillenfest, Melaine; Rambaux, Nicolas

    2018-04-01

Iapetus orbits Saturn with an orbital eccentricity of 3% and possesses a constant tilt to its local Laplace plane of around 7°; both elements are poorly explained today. The objective of this work is to investigate whether these orbital characteristics may be explained in the framework of rapid tidal migration in the Saturnian system [Lainey et al., 2012, 2017; Fuller et al., 2016]. We present several sets of numerical simulations of a past 5:1 mean motion resonance crossing between Titan and Iapetus. Iapetus was placed initially on a circular orbit in its local Laplace plane. The simulations show that the outcome of this resonance is very dependent on the migration speed of Titan, and therefore on the effective quality factor Q of Saturn. Iapetus is generally ejected from the system when the migration is too slow, typically for Q higher than 1500. Lower values allow Iapetus to survive with an eccentricity of a few percent, consistent with today's value. The resonance also acts on the inclination and can bring the tilt up to several degrees, even reaching 7° and more on rare occasions. In general, the current value of the eccentricity can easily be explained by this resonance. On the other hand, the tilt is more difficult to obtain for fast tidal migration (Q lower than 20), but high values are possible for medium migration rates (typically Q between 200 and 1500).

  16. New Kohn-Sham density functional based on microscopic nuclear and neutron matter equations of state

    NASA Astrophysics Data System (ADS)

    Baldo, M.; Robledo, L. M.; Schuck, P.; Viñas, X.

    2013-06-01

A new version of the Barcelona-Catania-Paris energy functional is applied to a study of nuclear masses and other properties. The functional is largely based on calculated ab initio nuclear and neutron matter equations of state. Compared to typical Skyrme functionals having 10-12 parameters apart from spin-orbit and pairing terms, the new functional has only 2 or 3 adjusted parameters, fine tuning the nuclear matter binding energy and fixing the surface energy of finite nuclei. An energy rms value of 1.58 MeV is obtained from a fit of these three parameters to the 579 measured masses reported in the Audi and Wapstra compilation [Nucl. Phys. A 729, 337 (2003)]. This rms value compares favorably with those obtained using other successful mean field theories, which range from 1.5 to 3.0 MeV for optimized Skyrme functionals and from 0.7 to 3.0 MeV for the Gogny functionals. The other properties that have been calculated and compared to experiment are nuclear radii, the giant monopole resonance, and spontaneous fission lifetimes.

  17. Intrinsic physical conditions and structure of relativistic jets in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.; Beskin, V. S.; Kovalev, Y. Y.; Zheltoukhov, A. A.

    2015-03-01

    The analysis of the frequency dependence of the observed shift of the cores of relativistic jets in active galactic nuclei (AGNs) allows us to evaluate the number density of the outflowing plasma ne and, hence, the multiplicity parameter λ = ne/nGJ, where nGJ is the Goldreich-Julian number density. We have obtained the median value for λmed = 3 × 1013 and the median value for the Michel magnetization parameter σM, med = 8 from an analysis of 97 sources. Since the magnetization parameter can be interpreted as the maximum possible Lorentz factor Γ of the bulk motion which can be obtained for relativistic magnetohydrodynamic (MHD) flow, this estimate is in agreement with the observed superluminal motion of bright features in AGN jets. Moreover, knowing these key parameters, one can determine the transverse structure of the flow. We show that the poloidal magnetic field and particle number density are much larger in the centre of the jet than near the jet boundary. The MHD model can also explain the typical observed level of jet acceleration. Finally, casual connectivity of strongly collimated jets is discussed.

  18. A Novel, Real-Valued Genetic Algorithm for Optimizing Radar Absorbing Materials

    NASA Technical Reports Server (NTRS)

    Hall, John Michael

    2004-01-01

    A novel, real-valued Genetic Algorithm (GA) was designed and implemented to minimize the reflectivity and/or transmissivity of an arbitrary number of homogeneous, lossy dielectric or magnetic layers of arbitrary thickness positioned at either the center of an infinitely long rectangular waveguide, or adjacent to the perfectly conducting backplate of a semi-infinite, shorted-out rectangular waveguide. Evolutionary processes extract the optimal physioelectric constants falling within specified constraints which minimize reflection and/or transmission over the frequency band of interest. This GA extracted the unphysical dielectric and magnetic constants of three layers of fictitious material placed adjacent to the conducting backplate of a shorted-out waveguide such that the reflectivity of the configuration was 55 dB or less over the entire X-band. Examples of the optimization of realistic multi-layer absorbers are also presented. Although typical Genetic Algorithms require populations of many thousands in order to function properly and obtain correct results, verified correct results were obtained for all test cases using this GA with a population of only four.

  19. Performance and Characteristics of a Cyclone Gasifier for Gasification of Sawdust

    NASA Astrophysics Data System (ADS)

    Azman Miskam, Muhamad; Zainal, Z. A.; Idroas, M. Y.

The performance and characteristics of a cyclone gasifier for the gasification of sawdust have been studied and evaluated. The system gasifies sawdust through cyclonic motion driven by air injected at atmospheric pressure. This study covers the results obtained for the gasification of ground sawdust from local furniture industries with a size distribution ranging from 0.25 to 1 mm. It was found that the typical wall temperature for initiating a stable gasification process was about 400°C. The heating value of the producer gas was about 3.9 MJ m-3, which is sufficient for stable combustion in a dual-fuel engine generator. The highest thermal output from the cyclone gasifier was 57.35 kWT. The highest values of mass conversion efficiency and enthalpy balance were 60 and 98.7%, respectively. The highest efficiency of the cyclone gasifier obtained was 73.4%, which compares well with the results of other researchers. The study identified the optimum operational conditions for gasifying sawdust in a cyclone gasifier and drew conclusions as to how a steady gasification process can be achieved.

  20. Performance characterization and transient investigation of multipropellant resistojets

    NASA Technical Reports Server (NTRS)

    Braunscheidel, Edward P.

    1989-01-01

The multipropellant resistojet thruster design was initially characterized for performance in a vacuum tank using argon, carbon dioxide, nitrogen, and hydrogen, with gas inlet pressures ranging from 13.7 to 310 kPa (2 to 45 psia) over a heat exchanger temperature range of ambient to 1200 C (2200 F). Specific impulse, the measure of performance, had values ranging from 120 to 600 seconds for argon and hydrogen respectively, at a constant heat exchanger temperature of 1200 C (2200 F). When operated under ambient conditions, typical specific impulse values obtained for argon and hydrogen ranged from 55 to 290 seconds, respectively. Performance measured with several mixtures of argon and nitrogen showed no significant deviation from predictions obtained by directly weighting the individual argon and nitrogen performance results. Another aspect of the program, an investigation of transient behavior, showed that responses depended heavily on the start-up scenario used. Steady state heater temperatures were achieved in 20 to 75 minutes for argon, and in 10 to 90 minutes for hydrogen. Steady state specific impulses were achieved in 25 to 60 and 20 to 60 minutes, respectively.

  1. An investigation of the normal momentum transfer for gases on tungsten

    NASA Technical Reports Server (NTRS)

    Moskal, E. J.

    1971-01-01

    The near monoenergetic beam of neutral helium and argon atoms impinged on a single crystal tungsten target, with the (100) face exposed to the beam. The target was mounted on a torsion balance. The rotation of this torsion balance was monitored by an optical lever, and this reading was converted to a measurement of the momentum exchange between the beam and the target. The tungsten target was flashed to a temperature in excess of 2000 C before every clean run, and the vacuum levels in the final chamber were typically between 0.5 and 1 ntorr. The momentum exchange for the helium-tungsten surface and the argon-tungsten surface combination was obtained over approximately a decade of incoming energy (for the argon gas) at angles of incidence of 0, 30, and 41 deg on both clean and dirty (gas covered) surfaces. The results exhibited a significant variation in momentum transfer between the data obtained for the clean and dirty surfaces. The values of normal momentum accommodation coefficient for the clean surface were found to be lower than the values previously reported.

  2. A 5 x 40 cm rectangular-beam multipole ion source

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.; Haynes, C. M.

    1981-01-01

    A rectangular ion source particularly suited for the continuous sputter processing of materials over a wide area is discussed. A multipole magnetic field configuration was used to design an ion source with a 5 x 40 cm beam area, while a three-grid ion optics system was used to maximize ion current density at the design ion energy of 500 eV. An average extracted current density of about 4 mA/sq cm could be obtained from 500 eV Ar ions. The difference between the experimental performance and the design value of 6 mA/sq cm is attributed to grid misalignment due to thermal expansion. The discharge losses at typical operating conditions ranged from about 600 to 1000 eV/ion, in reasonable agreement with the design value of 800 eV/ion. The use of multiple rectangular-beam ion sources to process wider areas than would be possible with a single source was also studied, and the most uniform coverage was found to be obtainable with a 0 to 2 cm overlap.

  3. Correlational study on atmospheric concentrations of fine particulate matter and children cough variant asthma.

    PubMed

    Zhang, Y-X; Liu, Y; Xue, Y; Yang, L-Y; Song, G-D; Zhao, L

    2016-06-01

We explored the relationship between atmospheric concentrations of fine particulate matter and cough variant asthma in children. 48 children diagnosed with cough variant asthma were placed in the cough asthma group, while 50 children suffering from typical asthma were placed in the typical asthma group. We also included 50 cases of chronic pneumonia (the pneumonia group) and 50 healthy children (the control group). We calculated the average PM2.5 and temperature values during spring, summer, autumn and winter and monitored serum lymphocyte ratio, CD4+/CD8+ T-cell ratio, immunoglobulin IgE, ventilatory indices and high-sensitivity C-reactive protein (hs-CRP) levels. Our results showed that PM2.5 values in spring and winter were remarkably higher than in the other seasons. Correlation analysis demonstrated that the onset of asthma in the cough asthma group occurred mainly in spring, whereas the onset in the typical asthma group occurred mostly in winter, followed by spring. We established a positive correlation between the onset of asthma in the cough asthma group and the PM2.5 value (r = 0.623, p = 0.017), and there was also a positive correlation between the onset of asthma in the typical asthma group and the PM2.5 value (r = 0.714, p = 0.015). Lymphocyte ratio and IgE level in the cough asthma group and the typical asthma group were significantly higher, while the CD4+/CD8+ T-cell ratio was significantly lower in these two groups. The hs-CRP levels in the cough asthma, typical asthma and pneumonia groups were significantly higher than in the control group. The FEV1/predicted value, FEV1/FVC and MMEF/predicted value in the cough asthma group and the typical asthma group were significantly lower than in the other groups; however, the difference between these two groups was not statistically significant. Our findings show that PM2.5 is related to the onset of cough variant asthma in children; PM2.5 reduces immune regulation and ventilatory function.

  4. Towards quantitative [18F]FDG-PET/MRI of the brain: Automated MR-driven calculation of an image-derived input function for the non-invasive determination of cerebral glucose metabolic rates.

    PubMed

    Sundar, Lalith Ks; Muzik, Otto; Rischka, Lucas; Hahn, Andreas; Rausch, Ivo; Lanzenberger, Rupert; Hienert, Marius; Klebermass, Eva-Maria; Füchsel, Frank-Günther; Hacker, Marcus; Pilz, Magdalena; Pataraia, Ekaterina; Traub-Weidinger, Tatjana; Beyer, Thomas

    2018-01-01

    Absolute quantification of PET brain imaging requires the measurement of an arterial input function (AIF), typically obtained invasively via an arterial cannulation. We present an approach to automatically calculate an image-derived input function (IDIF) and cerebral metabolic rates of glucose (CMRGlc) from the [18F]FDG PET data using an integrated PET/MRI system. Ten healthy controls underwent test-retest dynamic [18F]FDG-PET/MRI examinations. The imaging protocol consisted of a 60-min PET list-mode acquisition together with a time-of-flight MR angiography scan for segmenting the carotid arteries and intermittent MR navigators to monitor subject movement. AIFs were collected as the reference standard. Attenuation correction was performed using a separate low-dose CT scan. Assessment of the percentage difference between area-under-the-curve of IDIF and AIF yielded values within ±5%. Similar test-retest variability was seen between AIFs (9 ± 8) % and the IDIFs (9 ± 7) %. Absolute percentage difference between CMRGlc values obtained from AIF and IDIF across all examinations and selected brain regions was 3.2% (interquartile range: (2.4-4.3) %, maximum < 10%). High test-retest intravariability was observed between CMRGlc values obtained from AIF (14%) and IDIF (17%). The proposed approach provides an IDIF, which can be effectively used in lieu of AIF.
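
    For orientation, the area-under-the-curve comparison quoted above can be reproduced in spirit with a few lines of numerical integration; the time grid and curve values below are hypothetical and are not the study's data.

    ```python
    import numpy as np

    def auc_trapezoid(y, t):
        """Area under the curve by the trapezoidal rule."""
        return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))

    # Hypothetical arterial (AIF) and image-derived (IDIF) input functions, kBq/mL
    t = np.linspace(0.0, 60.0, 121)                  # minutes
    aif = 50.0 * t * np.exp(-t / 4.0) + 5.0
    idif = 0.97 * aif                                # assume a ~3% scale difference

    pct_diff = 100.0 * (auc_trapezoid(idif, t) - auc_trapezoid(aif, t)) / auc_trapezoid(aif, t)
    print(f"AUC percentage difference: {pct_diff:.1f} %")   # ~ -3.0 %
    ```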

  5. Methods and Applications of Time Series Analysis. Part I. Regression, Trends, Smoothing, and Differencing.

    DTIC Science & Technology

    1980-07-01

    [Garbled OCR fragment; recoverable content: Figure 3.4 shows a rectangular-form periodic function with period n, and Table 4.2 lists the average bi-monthly expenses of a family in Kabiria and their Fourier representation (Fourier coefficients). The worked example concerns the average bi-monthly expenses of a typical family in Kabiria (a city in Northern Algeria) over the period Jan.-Feb. 1975 through Nov.-Dec. 1977.]

  6. Determination of spin polarization using an unconventional iron superconductor

    DOE PAGES

    Gifford, J. A.; Chen, B. B.; Zhang, J.; ...

    2016-11-21

    Here, an unconventional iron superconductor, SmO0.7F0.3FeAs, has been utilized to determine the spin polarization and its temperature dependence for a highly spin-polarized material, La0.67Sr0.33MnO3, using Andreev reflection spectroscopy. The polarization value obtained is the same as that determined using the conventional superconductor Pb, but the temperature dependence of the spin polarization can be measured up to 52 K, a temperature range several times wider than that accessible with a typical conventional superconductor. The result excludes spin-parallel triplet pairing in the iron superconductor.

  7. Automation of the Image Analysis for Thermographic Inspection

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1998-01-01

    Several data processing procedures for pulse thermal inspection require a preliminary determination of an unflawed region. Typically, an initial analysis of the thermal images is performed by an operator to determine the locations of the unflawed and defective areas. In the present work an algorithm is developed for automatically determining a reference point corresponding to an unflawed region. Results are obtained for defects that are arbitrarily located in the inspection region. A comparison is presented of the distributions of derived values obtained with correct and incorrect localization of the reference point. Different algorithms for automatic determination of the reference point are compared.

  8. Cyclic voltammetric study of Co-Ni-Fe alloys electrodeposition in sulfate medium

    NASA Astrophysics Data System (ADS)

    Hanafi, I.; Daud, A. R.; Radiman, S.

    2013-11-01

    Electrochemical techniques have been used to study the electrodeposition of cobalt, nickel, iron and Co-Ni-Fe alloy on indium tin oxide (ITO) coated glass substrates. Cyclic voltammetry was used to characterize the Co-Ni-Fe system and to identify the nucleation mechanism, and the effect of scan rate on the deposition process was investigated. Deposition of the single metals occurs at potential values more positive than the estimated stability potentials. The cyclic voltammetry results clearly show that the electrodeposition of cobalt, nickel, iron and the Co-Ni-Fe alloy is diffusion controlled and follows a typical nucleation mechanism.

  9. Q branches of the nu7 fundamental of ethane (C2H6) Integrated intensity measurements for atmospheric measurement applications

    NASA Technical Reports Server (NTRS)

    Rinsland, C. P.; Harvey, G. A.; Levine, J. S.; Smith, M. A. H.; Malathy Devi, V.; Thakur, K. B.

    1986-01-01

    Laboratory spectra covering the nu7 band of ethane (C2H6) have been recorded, and measurements of integrated intensities of selected Q branches from these spectra are reported. The method by which the spectra were obtained is described, and a typical spectrum covering the PQ3 branch at 2976.8/cm is shown along with a plot of equivalent width vs. optical density for this branch. The values of the integrated intensities reported for each branch are the means of five different optical densities.

  10. Experimental Investigation of Droplet Evaporation of Water with Ground Admixtures while Motion in a Flame of Liquid Fuel

    NASA Astrophysics Data System (ADS)

    Dmitriyenko, Margarita A.; Nyashina, Galina S.; Zhdanova, Alena O.; Vysokomornaya, Olga V.

    2016-02-01

    The evaporation characteristics of an atomized flow of water-based suspensions containing ground admixtures moving through the high-temperature combustion products of a liquid flammable substance (acetone) were investigated experimentally using optical methods of gas-flow diagnostics and high-speed video recording. The extent to which the clay and silt concentrations in the droplets of the atomized flow influence the intensity of evaporation was determined. Approximation dependences describing the decrease in the typical droplet size of the suspension at various concentrations of ground admixtures were obtained.

  11. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part to calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
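
    For context, the Tikhonov-regularized (ridge-type) estimator that underlies both RR and PCTR-style approaches has the closed form sketched below; this generic example uses simulated spectra and a plain identity regularizer and does not reproduce the pure-component constraint that distinguishes PCTR.

    ```python
    import numpy as np

    def tikhonov_regression(X, y, lam):
        """Closed-form Tikhonov/ridge solution b = (X^T X + lam*I)^(-1) X^T y."""
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    # Simulated calibration: 40 spectra of 200 wavelengths, one analyte band
    rng = np.random.default_rng(3)
    pure_component = np.exp(-0.5 * ((np.arange(200) - 80) / 12.0) ** 2)
    concentrations = rng.uniform(0, 1, 40)
    X = np.outer(concentrations, pure_component) + 0.01 * rng.normal(size=(40, 200))

    b = tikhonov_regression(X, concentrations, lam=0.1)
    print("predicted vs true (first sample):", X[0] @ b, concentrations[0])
    ```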

  12. Dancing the aerobics ''hearing loss'' choreography

    NASA Astrophysics Data System (ADS)

    Pinto, Beatriz M.; Carvalho, Antonio P. O.; Gallagher, Sergio

    2002-11-01

    This paper presents an overview of gymnasiums' acoustic problems when used for aerobics exercises classes (and similar) with loud noise levels of amplified music. This type of gymnasium is usually a highly reverberant space, which is a consequence of a large volume surrounded by hard surfaces. A sample of five schools in Portugal was chosen for this survey. Noise levels in each room were measured using a precision sound level meter, and analyzed to calculate the standardized daily personal noise exposure levels (LEP,d). LEP,d values from 79 to 91 dB(A) were found to be typical values in this type of room, inducing a health risk for its occupants. The reverberation time (RT) values were also measured and compared with some European legal requirements (Portugal, France, and Belgium) for nearly similar situations. RT values (1 kHz) from 0.9 s to 2.8 s were found. These reverberation time values clearly differentiate between good and acoustically inadequate rooms. Some noise level and RT limits for this type of environment are given and suggestions for the improvement of the acoustical environment are shown. Significant reductions in reverberation time values and noise levels can be obtained by simple measures.
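
    For reference, the standardized daily personal noise exposure level mentioned above is conventionally defined (e.g., in ISO 1999/ISO 9612-type formulations; the exact variant used in the paper is not stated) as

        L_{EP,d} = L_{Aeq,T_e} + 10 \log_{10}\!\left(\frac{T_e}{T_0}\right), \qquad T_0 = 8\ \mathrm{h},

    where L_{Aeq,T_e} is the A-weighted equivalent continuous sound level over the effective daily exposure duration T_e (here, the daily time spent in the aerobics room).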

  13. Laser-induced fluorescence detection of lead atoms in a laser-induced plasma: An experimental analytical optimization study

    NASA Astrophysics Data System (ADS)

    Laville, Stéphane; Goueguel, Christian; Loudyi, Hakim; Vidal, François; Chaker, Mohamed; Sabsabi, Mohamad

    2009-04-01

    The combination of the laser-induced breakdown spectroscopy (LIBS) and laser-induced fluorescence (LIF) techniques was investigated to improve the limit of detection (LoD) of trace elements in solid matrices. The influence of the main experimental parameters on the LIF signal, namely the ablation fluence, the excitation energy, and the inter-pulse delay, was studied experimentally and the results are discussed. For illustrative purposes we considered the detection of lead in brass samples. The plasma was produced by a Q-switched Nd:YAG laser and then re-excited by a nanosecond optical parametric oscillator (OPO) laser. The experiments were performed in air at atmospheric pressure. We found that the optimal conditions for our experimental set-up were a relatively weak ablation fluence of 2-3 J/cm2 and an inter-pulse delay of about 5-10 μs. Also, a few tens of microjoules of excitation energy were typically required to maximize the LIF signal. Using the LIBS-LIF technique, a single-shot LoD for lead of about 1.5 parts per million (ppm) was obtained, while a value of 0.2 ppm was obtained after accumulating over 100 shots. These values represent an improvement of about two orders of magnitude with respect to LIBS alone.

  14. Wildfire risk assessment in a typical Mediterranean wildland-urban interface of Greece.

    PubMed

    Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita

    2015-04-01

    The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistically significant differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.

  15. Wildfire Risk Assessment in a Typical Mediterranean Wildland-Urban Interface of Greece

    NASA Astrophysics Data System (ADS)

    Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita

    2015-04-01

    The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistically significant differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.

  16. A multiplex competitive ELISA for the detection and characterization of gluten in fermented-hydrolyzed foods.

    PubMed

    Panda, Rakhi; Boyer, Marc; Garber, Eric A E

    2017-12-01

    A novel competitive ELISA was developed utilizing the G12, R5, 2D4, MIoBS, and Skerritt antibody-HRP conjugates employed in nine commercial ELISA test kits that are routinely used for gluten detection. This novel multiplex competitive ELISA simultaneously measures gliadin-, deamidated gliadin-, and glutenin-specific epitopes. The assay was used to evaluate 20 wheat beers, 20 barley beers, 6 barley beers processed to reduce gluten, 15 soy sauces, 6 teriyaki sauces, 6 Worcestershire sauces, 6 vinegars, and 8 sourdough breads. For wheat beers, the apparent gluten concentration values obtained by the G12 and Skerritt antibodies were typically higher than those obtained using the R5 antibodies. The sourdough bread samples resulted in higher apparent gluten concentration values with the Skerritt antibody, while the values generated by the G12 and R5 antibodies were comparable. Although the soy-based sauces showed non-specific inhibition with the multiple R5 and G12 antibodies, their overall profile was distinguishable from the other categories of fermented foods. Cluster analysis of the apparent gluten concentration values obtained by the multiplex competitive ELISA, as well as the relative response of the nine gluten-specific antibodies used in the assay to different gluten proteins/peptides, distinguishes among the different categories of fermented-hydrolyzed foods by recognizing the differences in the protein/peptide profiles characteristic of each product. This novel gluten-based multiplex competitive ELISA provides insight into the extent of proteolysis resulting from various fermentation processes, which is essential for accurate gluten quantification in fermented-hydrolyzed foods. Graphical abstract A novel multiplex competitive ELISA for the detection and characterization of gluten in fermented-hydrolyzed foods.

  17. Alternative metrics for real-ear-to-coupler difference average values in children.

    PubMed

    Blumsack, Judith T; Clark-Lewis, Sandra; Watts, Kelli M; Wilson, Martha W; Ross, Margaret E; Soles, Lindsey; Ennis, Cydney

    2014-10-01

    Ideally, individual real-ear-to-coupler difference (RECD) measurements are obtained for pediatric hearing instrument-fitting purposes. When RECD measurements cannot be obtained, age-related average RECDs based on typically developing North American children are used. Evidence suggests that these values may not be appropriate for populations of children with retarded growth patterns. The purpose of this study was to determine if another metric, such as head circumference, height, or weight, can be used for prediction of RECDs in children. Design was a correlational study. For all participants, RECD values in both ears, head circumference, height, and weight were measured. The sample consisted of 68 North American children (ages 3-11 yr). Height, weight, head circumference, and RECDs were measured and were analyzed for both ears at 500, 750, 1000, 1500, 2000, 3000, 4000, and 6000 Hz. A backward elimination multiple-regression analysis was used to determine if age, height, weight, and/or head circumference are significant predictors of RECDs. For the left ear, head circumference was retained as the only statistically significant variable in the final model. For the right ear, head circumference was retained as the only statistically significant independent variable at all frequencies except at 2000 and 4000 Hz. At these latter frequencies, weight was retained as the only statistically significant independent variable after all other variables were eliminated. Head circumference can be considered as a metric for RECD prediction in children when individual measurements cannot be obtained. In developing countries where equipment is often unavailable and stunted growth can reduce the value of using age as a metric, head circumference can be considered as an alternative metric in the prediction of RECDs. American Academy of Audiology.
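
    The backward-elimination step described above can be sketched as follows; the synthetic predictors (age, height, weight, head circumference), the data, and the p-value threshold are illustrative stand-ins, not the study's measurements or exact settings.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def backward_elimination(X, y, alpha=0.05):
        """Iteratively drop the predictor with the largest p-value until all
        remaining predictors are significant at level alpha."""
        cols = list(X.columns)
        while cols:
            model = sm.OLS(y, sm.add_constant(X[cols])).fit()
            pvals = model.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] <= alpha:
                return model, cols
            cols.remove(worst)
        return None, []

    # Hypothetical data: RECD at one frequency predicted from four candidate metrics
    rng = np.random.default_rng(4)
    n = 68
    X = pd.DataFrame({"age": rng.uniform(3, 11, n),
                      "height": rng.normal(120, 15, n),
                      "weight": rng.normal(25, 6, n),
                      "head_circumference": rng.normal(52, 2, n)})
    y = 4.0 - 0.15 * X["head_circumference"] + rng.normal(0, 0.5, n)

    model, kept = backward_elimination(X, y)
    print("retained predictors:", kept)
    ```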

  18. Measurement of multiple scattering of 13 and 20 MeV electrons by thin foils

    PubMed Central

    Ross, C. K.; McEwen, M. R.; McDonald, A. F.; Cojocaru, C. D.; Faddegon, B. A.

    2008-01-01

    Modeling the transport of electrons through a material requires knowledge of how the electrons lose energy and scatter. Theoretical models are used to describe electron energy loss and scatter, and these models are supported by a limited amount of measured data. The purpose of this work was to obtain additional data that can be used to test models of electron scattering. Measurements were carried out using 13 and 20 MeV pencil beams of electrons produced by the National Research Council of Canada research accelerator. The electron fluence was measured at several angular positions from 0° to 9° for scattering foils of different thicknesses and with atomic numbers ranging from 4 to 79. The angle, θ1∕e, at which the fluence has decreased to 1∕e of its value on the central axis was used to characterize the distributions. Measured values of θ1∕e ranged from 1.5° to 8° with a typical uncertainty of about 1%. Distributions calculated using the EGSnrc Monte Carlo code were compared to the measured distributions. In general, the calculated distributions are narrower than the measured ones. Typically, the difference between the measured and calculated values of θ1∕e is about 1.5%, with the maximum difference being 4%. The measured and calculated distributions are related through a simple scaling of the angle, indicating that they have the same shape. No significant trends with atomic number were observed. PMID:18841865
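
    The characteristic angle used above can be extracted from a measured angular fluence profile by simple interpolation, as in the hedged sketch below; the profile is a synthetic Gaussian, not the NRC data.

    ```python
    import numpy as np

    # Synthetic relative fluence versus scattering angle (degrees)
    angles = np.linspace(0.0, 9.0, 37)
    fluence = np.exp(-(angles / 4.2) ** 2)          # Gaussian-like profile, peak at 0 deg

    # theta_1/e: angle where the fluence falls to 1/e of its central-axis value
    target = fluence[0] / np.e
    theta_1e = np.interp(target, fluence[::-1], angles[::-1])  # interp needs increasing x
    print(f"theta_1/e = {theta_1e:.2f} deg")        # ~4.2 deg for this profile
    ```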

  19. Use of a liquid-crystal, heater-element composite for quantitative, high-resolution heat transfer coefficients on a turbine airfoil, including turbulence and surface roughness effects

    NASA Astrophysics Data System (ADS)

    Hippensteele, Steven A.; Russell, Louis M.; Torres, Felix J.

    1987-05-01

    Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 × 10^5 to 5.55 × 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.

  20. Use of a liquid-crystal and heater-element composite for quantitative, high-resolution heat-transfer coefficients on a turbine airfoil including turbulence and surface-roughness effects

    NASA Astrophysics Data System (ADS)

    Hippensteele, S. A.; Russell, L. M.; Torres, F. J.

    Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 × 10^5 to 5.55 × 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.

  1. Use of a liquid-crystal, heater-element composite for quantitative, high-resolution heat transfer coefficients on a turbine airfoil, including turbulence and surface roughness effects

    NASA Technical Reports Server (NTRS)

    Hippensteele, Steven A.; Russell, Louis M.; Torres, Felix J.

    1987-01-01

    Local heat transfer coefficients were measured along the midchord of a three-times-size turbine vane airfoil in a static cascade operated at room temperature over a range of Reynolds numbers. The test surface consisted of a composite of commercially available materials: a Mylar sheet with a layer of cholesteric liquid crystals, which change color with temperature, and a heater made of a polyester sheet coated with vapor-deposited gold, which produces uniform heat flux. After the initial selection and calibration of the composite sheet, accurate, quantitative, and continuous heat transfer coefficients were mapped over the airfoil surface. Tests were conducted at two free-stream turbulence intensities: 0.6 percent, which is typical of wind tunnels, and 10 percent, which is typical of real engine conditions. In addition to a smooth airfoil, the effects of local leading-edge sand roughness were also examined for a value greater than the critical roughness. The local heat transfer coefficients are presented for both free-stream turbulence intensities for inlet Reynolds numbers from 1.20 × 10^5 to 5.55 × 10^5. Comparisons are also made with analytical values of heat transfer coefficients obtained from the STAN5 boundary layer code.

  2. Electron beam physical vapor deposition of YSZ electrolyte coatings for SOFCs

    NASA Astrophysics Data System (ADS)

    He, Xiaodong; Meng, Bin; Sun, Yue; Liu, Bochao; Li, Mingwei

    2008-09-01

    YSZ electrolyte coatings were prepared by electron beam physical vapor deposition (EB-PVD) at a high deposition rate of up to 1 μm/min. The YSZ coating consisted of a single cubic phase, and no phase transformation occurred after annealing treatment at 1000 °C. A typical columnar structure was observed in this coating by SEM, and feather-like characteristics appeared in every columnar grain. In the columnar grain boundaries there were many micron-sized gaps and pores. In TEM images, many white lines were found, originating from the alignment of nanopores existing within the feather-like columnar grains. The element distribution along the cross-section of the coating was homogeneous, except for Zr, which showed a slight gradient. The coating exhibited characteristic anisotropic behavior in electrical conductivity: in the direction perpendicular to the coating surface the electrical conductivity was remarkably higher than in the direction parallel to the coating surface. This is mainly attributed to the typical columnar structure of the EB-PVD coating and the existence of many grain boundaries along the direction parallel to the coating surface. For the as-deposited coating, a gas permeability coefficient of 9.78 × 10^-5 cm^4 N^-1 s^-1 was obtained, a value close to the critical value of a YSZ electrolyte layer required for solid oxide fuel cell (SOFC) operation.

  3. [Raising children with mental disabilities: mothers' narratives].

    PubMed

    Bastos, Olga Maria; Deslandes, Suely Ferreira

    2008-09-01

    Technical advances in neonatology have increased the life expectancy of children with serious health problems. Many of these children experience developmental delay (mental disability) and require special care. The family must adapt to better provide for the child's needs. This study aimed to identify mothers' reactions and the obstacles they face to obtain what they consider the best treatment for their children. The study methodology was based on analysis of the mothers' narratives, drawing on medical anthropology and linguistics. The most typical plots in the narratives showed the impact of the diagnosis and the search for means to adapt to the child's care, as well as the difficulties encountered in the public health system to obtain what the mothers considered adequate care. The value ascribed to characters in the support network showed the importance of such support in these situations.

  4. Molecular weight between entanglements for κ- and ι-carrageenans in an ionic liquid.

    PubMed

    Horinaka, Jun-ichi; Urabayashi, Yuhei; Wang, Xiaochen; Takigawa, Toshikazu

    2014-08-01

    The molecular weight between entanglements (Me) for κ- and ι-carrageenans, sulfated galactans, was examined in concentrated solutions using the ionic liquid 1-butyl-3-methylimidazolium acetate as a solvent. The dynamic viscoelasticity data for the solutions measured at different temperatures were superposed according to the time-temperature superposition principle, and the resulting master curves exhibited the flow and rubbery plateau zones typical of concentrated polymer solutions with entanglement coupling. The values of Me for κ- and ι-carrageenans in the solutions were determined from the plateau moduli. The values of Me in the molten state (Me,melt), estimated as a material constant, were 6.6 × 10^3 and 7.2 × 10^3, respectively. The close values of Me,melt for κ- and ι-carrageenans indicate that the 4-sulfate group of ι-carrageenan has little influence on the entanglement network. Compared with agarose, a non-sulfated galactan, the carrageenans have larger values of the average spacing between entanglements. Copyright © 2014 Elsevier B.V. All rights reserved.
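
    For reference, the plateau modulus of an entangled polymer solution is commonly converted to an entanglement molecular weight through the rubber-elasticity relation below (a standard textbook expression given only for orientation; the prefactor convention actually used by the authors is not stated in the abstract):

        G_N^0 = \frac{c R T}{M_e}

    where G_N^0 is the plateau modulus, c the polymer concentration, R the gas constant, and T the absolute temperature; the melt value M_{e,melt} is then estimated by extrapolating to melt conditions.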

  5. Bayesian model for fate and transport of polychlorinated biphenyl in upper Hudson River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, L.J.; Reckhow, K.H.; Wolpert, R.L.

    1996-05-01

    Modelers of contaminant fate and transport in surface waters typically rely on literature values when selecting parameter values for mechanistic models. While the expert judgment with which these selections are made is valuable, the information contained in contaminant concentration measurements should not be ignored. In this full-scale Bayesian analysis of polychlorinated biphenyl (PCB) contamination in the upper Hudson River, these two sources of information are combined using Bayes' theorem. A simulation model for the fate and transport of the PCBs in the upper Hudson River forms the basis of the likelihood function, while the prior density is developed from literature values. The method provides estimates for the anaerobic biodegradation half-life, aerobic biodegradation plus volatilization half-life, contaminated sediment depth, and resuspension velocity of 4,400 d, 3.2 d, 0.32 m, and 0.02 m/yr, respectively. These are significantly different from values obtained with more traditional methods, and are shown to produce better predictions than those methods when used in a cross-validation study.
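
    As a minimal illustration of the approach described above (not the authors' actual model), the sketch below combines a literature-based prior with a likelihood built from a toy simulation model on a one-dimensional parameter grid; the parameter, prior, and data are hypothetical.

    ```python
    import numpy as np

    # Hypothetical 1-D example: posterior for a biodegradation half-life (days),
    # combining a literature-based lognormal prior with a Gaussian likelihood whose
    # mean comes from a toy "simulation model" of an observed concentration.
    half_life = np.linspace(100.0, 10000.0, 2000)            # parameter grid
    dx = half_life[1] - half_life[0]

    prior = (1.0 / half_life) * np.exp(-(np.log(half_life) - np.log(2000.0))**2
                                       / (2 * 0.8**2))       # unnormalized lognormal prior

    def simulate_concentration(hl, t=365.0, c0=10.0):
        """Toy first-order decay model standing in for the fate-and-transport code."""
        return c0 * np.exp(-np.log(2.0) * t / hl)

    observed, sigma = 9.4, 0.5                                # hypothetical measurement
    likelihood = np.exp(-(simulate_concentration(half_life) - observed)**2 / (2 * sigma**2))

    posterior = prior * likelihood                            # Bayes' theorem (unnormalized)
    posterior /= posterior.sum() * dx                         # normalize on the grid
    print("posterior mean half-life (d):", (half_life * posterior).sum() * dx)
    ```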

  6. Measurement of pressure-broadening and lineshift coefficients at 77 and 296 K of methane lines in the 727 nm band using intracavity laser spectroscopy

    NASA Technical Reports Server (NTRS)

    Singh, Kuldip; O'Brien, James J.

    1994-01-01

    Pressure-broadening coefficients and pressure-induced lineshifts of several rotational-vibrational lines have been measured in the 727 nm absorption band of methane at temperatures of 77 and 296 K, using nitrogen, hydrogen, and helium as the foreign-gas collision partners. A technique involving intracavity laser spectroscopy is used to record the methane spectra. Average values of the broadening coefficients (cm^-1 atm^-1) at 77 K are: 0.199, 0.139, 0.055, and 0.29 for collision partners N2, H2, He, and CH4, respectively. Typical average values of the pressure-induced lineshifts (cm^-1 atm^-1) at 77 K and for the range of foreign gas pressures between 10 and 200 torr are -0.052 for N2, -0.063 for H2, and +0.031 for He. All the values obtained at 296 K are considerably different from the corresponding values at 77 K. This represents the first report of pressure-broadening and shifting coefficients for the methane transitions in the region where the Δν(C-H) = 5 band occurs.

  7. Empirical phylogenies and species abundance distributions are consistent with preequilibrium dynamics of neutral community models with gene flow.

    PubMed

    Bonnet-Lebrun, Anne-Sophie; Manica, Andrea; Eriksson, Anders; Rodrigues, Ana S L

    2017-05-01

    Community characteristics reflect past ecological and evolutionary dynamics. Here, we investigate whether it is possible to obtain realistically shaped modeled communities, that is, with phylogenetic trees and species abundance distributions shaped similarly to typical empirical bird and mammal communities, from neutral community models. To test the effect of gene flow, we contrasted two spatially explicit individual-based neutral models: one with protracted speciation, delayed by gene flow, and one with point mutation speciation, unaffected by gene flow. The former produced more realistic communities (in the shape of the phylogenetic tree and the species-abundance distribution), consistent with gene flow being a key process in macro-evolutionary dynamics. Earlier models struggled to capture the empirically observed branching tempo in phylogenetic trees, as measured by the gamma statistic. We show that the low gamma values typical of empirical trees can be obtained in models with protracted speciation, in preequilibrium communities developing from an initially abundant and widespread species. This was even more so in communities sampled incompletely, particularly if the unknown species are the youngest. Overall, our results demonstrate that the characteristics of the empirical communities we have studied can, to a large extent, be explained through a purely neutral model under preequilibrium conditions. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  8. Genotypic and technological diversity of Brevibacterium linens strains for use as adjunct starter cultures in 'Pecorino di Filiano' cheese ripened in two different environments.

    PubMed

    Bonomo, Maria Grazia; Cafaro, Caterina; Salzano, Giovanni

    2015-01-01

    Twenty-two Brevibacterium linens strains isolated from 'Pecorino di Filiano' cheese ripened in two different environments (natural cave and storeroom) were characterized and differentiated with respect to features of technological interest and by genotypic methods, in order to select strains with specific features to be used as surface starter cultures. The results showed significant differences among strains in physiological and technological features, indicating heterogeneity within the species. A low-to-moderate level of proteolytic activity was observed in 27.3 % of the strains, while a small group (9.1 %) showed high activity. Lipolytic activity was assessed at three different temperatures; the highest value was detected at 20 °C in 13.6 % of the strains, while an increase in temperature produced slightly lower lipolysis in all strains. The evaluation of diacetyl production revealed that only 22.8 % of the strains showed this ability, and most of them were isolated from the product ripened in the natural cave. All strains exhibited only leu-aminopeptidase activity, with more elevated values in strains from the natural-cave product. Combining the genotypic results with the technological data established that the random amplified polymorphic DNA (RAPD) clusters comprised not only different genotypes but also different technological profiles. This study demonstrates the importance of the ripening environment, which affects the typical features of the artisanal product by leading to the selection of a specific surface microflora. The characterized strains could be combined into surface starters to standardize the cheese production process while preserving its typical organoleptic and sensory characteristics and improving the quality of the final product.

  9. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    NASA Astrophysics Data System (ADS)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic deviations of opposite sign in the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology that significantly minimizes the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
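
    To make the role of δH concrete, the sketch below fits a standard in-plane Kittel dispersion with an explicit field-offset parameter to synthetic frequency-field data; this is an illustrative fit under common assumptions (negligible anisotropy, CGS units, Permalloy-like parameters), not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def kittel_in_plane(H, g, Ms, dH):
        """In-plane Kittel relation f = (gamma/2pi)*sqrt((H+dH)*(H+dH+4*pi*Ms)),
        with gamma/2pi = g * 1.3996 MHz/Oe and an explicit field offset dH (Oe)."""
        gamma_over_2pi = g * 1.3996e6            # Hz/Oe
        Heff = H + dH
        return gamma_over_2pi * np.sqrt(Heff * (Heff + 4 * np.pi * Ms))

    # Synthetic resonance data for a Permalloy-like film (hypothetical numbers)
    H = np.linspace(200.0, 2500.0, 15)           # applied field (Oe)
    f_true = kittel_in_plane(H, 2.11, 800.0, 5.0)
    f_meas = f_true + np.random.normal(0, 2e7, H.size)   # ~20 MHz scatter

    popt, _ = curve_fit(kittel_in_plane, H, f_meas, p0=[2.0, 700.0, 0.0])
    print("fitted g, Ms (emu/cm^3), dH (Oe):", popt)
    ```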

  10. Rapid Spectral Variability of the Symbiotic Star CH Cyg During One Night

    NASA Astrophysics Data System (ADS)

    Mikayilov, Kh. M.; Rustamov, B. N.; Alakbarov, I. A.; Rustamova, A. B.

    2017-06-01

    During one night (15.07.2015), 14 echelle spectrograms of the star were obtained within 6 hours. The profiles of the Hα and Hβ lines showed a two-component emission structure with a central absorption, whose parameters varied from spectrum to spectrum during the night. The intensity of the blue emission component (V) changed strongly during the night: the ratio of the intensities of the violet and red components (V/R) of the Hα line decreased from 0.93 to 0.49 and then increased to 0.97. The V/R values for the Hα and Hβ lines varied synchronously. The parameters of the blue emission components of Hα and of the He I λ5876 Å line are correlated. We propose that the rapid spectral changes revealed in the spectrum of CH Cyg could be connected with flickering in the optical brightness of the star, which is typical for the active phase of this system.

  11. Cellulose Nanocrystal Membranes as Excipients for Drug Delivery Systems

    PubMed Central

    Barbosa, Ananda M.; Robles, Eduardo; Ribeiro, Juliana S.; Lund, Rafael G.; Carreño, Neftali L. V.; Labidi, Jalel

    2016-01-01

    In this work, cellulose nanocrystals (CNCs) were obtained from flax fibers by acid hydrolysis assisted by sonochemistry in order to reduce reaction times. The cavitation induced during hydrolysis resulted in CNCs with uniform shapes, so no further pretreatment of the cellulose was required. The obtained CNCs exhibited a homogeneous morphology and high crystallinity, as well as typical values of surface charge. Additionally, CNC membranes were prepared from the CNC solution for evaluation as a drug delivery system by incorporation of a model drug. The drug delivery studies were carried out using chlorhexidine (CHX) as the model drug, and the antimicrobial efficiency of the CNC membrane loaded with CHX was examined against the Gram-positive bacterium Staphylococcus aureus (S. aureus). The release of CHX from the CNC membranes was determined by UV-Vis spectroscopy. The method used to obtain the membranes proved to be simple, and these early studies showed potential for use in antibiotic drug delivery systems owing to the release kinetics and the satisfactory antimicrobial activity. PMID:28774122

  12. NO(y) Correlation with N2O and CH4 in the Midlatitude Stratosphere

    NASA Technical Reports Server (NTRS)

    Kondo, Y.; Schmidt, U.; Sugita, T.; Engel, A.; Koike, M.; Aimedieu, P.; Gunson, M. R.; Rodriguez, J.

    1996-01-01

    Total reactive nitrogen (NO(y)), nitrous oxide (N2O), methane (CH4), and ozone (O3) were measured on board a balloon launched from Aire sur l'Adour (44 deg N, 0 deg W), France on October 12, 1994. Generally, NO(y) was highly anti-correlated with N2O and CH4 at altitudes between 15 and 32 km. The linear NO(y) - N2O and NO(y) - CH4 relationships obtained from the present observations are very similar to those obtained on board ER-2 and DC-8 aircraft previously at altitudes below 20 km in the northern hemisphere. They also agree well with the data obtained by the Atmospheric Trace Molecule Spectroscopy (ATMOS) instrument at 41 deg N in November 1994. Slight departures from the linear correlations occurred around 29 km, where the N2O and CH4 mixing ratios were larger than typical midlatitude values, suggesting horizontal transport of tropical air masses to northern midlatitudes in a confined altitude region.

  13. Phytochemical profile, antioxidant and antimicrobial activity of extracts obtained from erva-mate (Ilex paraguariensis) fruit using compressed propane and supercritical CO2.

    PubMed

    Fernandes, Ciro E F; Scapinello, Jaqueline; Bohn, Aline; Boligon, Aline A; Athayde, Margareth L; Magro, Jacir Dall; Palliga, Marshall; Oliveira, J Vladimir; Tres, Marcus V

    2017-01-01

    Traditionally, Ilex paraguariensis leaves are consumed as tea or in typical drinks like mate and terere, while the fruits are discarded during processing and have no commercial value. The aim of this work was to evaluate the phytochemical properties, total phenolic compounds, and antioxidant and antimicrobial activity of extracts of Ilex paraguariensis fruits obtained by supercritical CO2 and compressed propane extraction. The extraction with compressed propane yielded 2.72 wt%, whereas 1.51 wt% was obtained with supercritical CO2. The compound extracted in the largest amount by both extraction solvents was caffeine, at 163.28 and 54.17 mg/g for supercritical CO2 and pressurized propane, respectively. The antioxidant activity was more pronounced for the supercritical CO2 extract; no difference was found in the minimum inhibitory concentration for Staphylococcus aureus between the two extracts, and better results were observed for Escherichia coli when using supercritical CO2.

  14. Analysis of Multi-Scale Radiometric Data Collected during the Cold Land Processes Experiment-1 (CLPX-1)

    NASA Technical Reports Server (NTRS)

    Tedesco, M.; Kim, E. J.; Gasiewski, A.; Stankov, B.

    2005-01-01

    Brightness temperature maps at 18.7 and 37 GHz collected at the Fraser and North Park Meso-Scale Areas (MSAs) during the Cold Land Processes Experiment by the NOAA Polarimetric Scanning Radiometer (PSWA) airborne sensor are analyzed. The Fraser site is mostly covered by forest with a typical snowpack depth of 1 m, while North Park has no forest cover and is characterized by patches of shallow snow. We examine histograms of the brightness temperatures at 500 m resolution for both the Fraser and North Park areas. The histograms can be modelled by a log-normal distribution in the case of the Fraser MSA and by a bi-modal distribution in the case of the North Park MSA. Histograms of the brightness temperatures at coarser resolutions are also plotted to study the effects of sensor resolution on the shape of the distribution and on the values of the average brightness temperatures and standard deviations. Finally, the brightness temperatures obtained by re-sampling (aggregating) the data at 25 km resolution are compared with the brightness temperatures collected by the Advanced Microwave Scanning Radiometer (AMSR-E) and Special Sensor Microwave/Imager (SSM/I) satellite radiometers. The results show that in both areas, for sensor footprints larger than 5000 m, the brightness temperatures show a flat distribution and the memory of the initial distribution is lost. The brightness temperatures measured by the satellite radiometers are in good agreement with the values obtained by averaging the airborne data, even if some discrepancies occur.
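
    As a simple illustration of the histogram-modelling step, the sketch below fits a log-normal distribution to a synthetic set of brightness temperatures; the data values are hypothetical and are not tied to the CLPX measurements.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 18.7 GHz brightness temperatures (K) for a forested scene
    rng = np.random.default_rng(0)
    tb = 230.0 + rng.lognormal(mean=2.5, sigma=0.4, size=2000)

    # Fit a three-parameter log-normal (shape, location, scale) to the sample
    shape, loc, scale = stats.lognorm.fit(tb)
    print("fitted shape/loc/scale:", shape, loc, scale)

    # Compare the location of the histogram peak with the fitted distribution
    counts, edges = np.histogram(tb, bins=40)
    print("histogram peak near:", edges[np.argmax(counts)], "K")
    ```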

  15. FAIR exempting separate T (1) measurement (FAIREST): a novel technique for online quantitative perfusion imaging and multi-contrast fMRI.

    PubMed

    Lai, S; Wang, J; Jahng, G H

    2001-01-01

    A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST) in which a slice-selective saturation recovery acquisition is added in addition to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows for clean separation between and thus simultaneous assessment of BOLD and perfusion effects, whereas quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique and the measured CBF values were consistent with the off-line CBF maps obtained from using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger tapping activation studies were carried out to demonstrate the applicability of the FAIREST technique in a typical fMRI setting for multi-contrast fMRI. The relative CBF and BOLD changes induced by finger-tapping were 75.1 +/- 18.3 and 1.8 +/- 0.4%, respectively, and the relative oxygen consumption rate change was 2.5 +/- 7.7%. The results from correlation of the T(1) maps with the activation images on a pixel-by-pixel basis show that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.

  16. Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling

    PubMed Central

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model provides coefficients that make it possible to obtain the actual load values from the AASHTO design values. Thus, the design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can represent the load values data more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based, nonparametric models of pavement performance. PMID:25397962

  17. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    PubMed

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model provides coefficients that make it possible to obtain the actual load values from the AASHTO design values. Thus, the design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can represent the load values data more accurately than the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based, nonparametric models of pavement performance.
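
    For readers who want to see what such a model looks like in practice, the sketch below trains a small feed-forward network to map pavement design inputs to equivalent single-axle load values; the feature names, data, and network size are hypothetical stand-ins, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: [structural number, subgrade resilient modulus (psi),
    # delta PSI, reliability factor] -> log10(ESALs)
    rng = np.random.default_rng(1)
    X = rng.uniform([2.0, 3000, 1.0, 0.8], [6.0, 20000, 2.5, 1.2], size=(500, 4))
    y = 2.0 + 0.9 * X[:, 0] + 1e-4 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8),
                                       max_iter=5000, random_state=0))
    model.fit(X, y)
    print("predicted log10(ESAL) for one section:", model.predict(X[:1]))
    ```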

  18. Supervised neural network classification of pre-sliced cooked pork ham images using quaternionic singular values.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (the representation of a colour image) into quaternion singular vector and singular value component matrices, exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values as robust features for the classification of sliced pork ham images using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test sets was 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.
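
    A minimal sketch of the feature-extraction idea is shown below, using an ordinary per-channel singular value decomposition of a colour image as a simplified stand-in for the quaternionic SVD used in the paper (the quaternionic decomposition treats the three channels jointly and is not reproduced here); the image and the number of retained singular values are hypothetical.

    ```python
    import numpy as np

    def singular_value_features(rgb_image, k=6):
        """Return the k leading singular values of each colour channel,
        normalized by the largest one, as a compact texture descriptor.
        Simplified per-channel stand-in for a quaternionic SVD."""
        feats = []
        for c in range(3):
            s = np.linalg.svd(rgb_image[:, :, c].astype(float), compute_uv=False)
            feats.append(s[:k] / s[0])
        return np.concatenate(feats)

    # Hypothetical 128x128 RGB ham-slice image
    img = np.random.default_rng(2).uniform(0, 255, size=(128, 128, 3))
    print(singular_value_features(img).shape)   # (18,) -> input vector for an MLP classifier
    ```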

  19. Assessment of inhibitory effects on major human cytochrome P450 enzymes by spasmolytics used in the treatment of overactive bladder syndrome.

    PubMed

    Dahlinger, Dominik; Aslan, Sevinc; Pietsch, Markus; Frechen, Sebastian; Fuhr, Uwe

    2017-07-01

    The objective of this study was to examine the inhibitory potential of darifenacin, fesoterodine, oxybutynin, propiverine, solifenacin, tolterodine and trospium chloride on the seven major human cytochrome P450 enzymes (CYP) by using a standardized and validated seven-in-one cytochrome P450 cocktail inhibition assay. An in vitro cocktail of seven highly selective probe substrates was incubated with human liver microsomes and varying concentrations of the seven test compounds. The major metabolites of the probe substrates were simultaneously analysed using a validated liquid chromatography tandem mass spectrometry (LC-MS/MS) method. Enzyme kinetics were estimated by determining IC50 and Ki values via nonlinear regression. The obtained Ki values were used to predict the potential clinical impact of the inhibition using a static mechanistic prediction model. In this study, 49 IC50 experiments were conducted. In six cases, IC50 values lower than the calculated threshold for drug-drug interactions (DDIs) in the gut wall were observed. In these cases, no increase in inhibition was determined after a 30 min preincubation. Considering a typical dosing regimen and applying the obtained Ki values of 0.72 µM (darifenacin, 15 mg daily) and 7.2 µM [propiverine, 30 mg daily, immediate release (IR)] for the inhibition of CYP2D6 yielded a predicted 1.9-fold and 1.4-fold increase in the area under the curve (AUC) of debrisoquine (CYP2D6 substrate), respectively. Due to inhibition of intestinal CYP3A4 in particular, the obtained Ki value of 14 µM for propiverine (30 mg daily, IR) resulted in a predicted doubling of the AUC for midazolam (CYP3A4 substrate). In vitro/in vivo extrapolation based on pharmacokinetic data and the conducted screening experiments yielded effects of darifenacin on CYP2D6 and propiverine on CYP3A4 similar to those obtained in separately conducted in vivo DDI studies. As a novel finding, propiverine was identified as potentially inhibiting CYP2D6 at clinically occurring concentrations.
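
    For background only (the abstract does not give the exact equation of the static mechanistic model used), a common static prediction for reversible competitive inhibition relates the victim drug's AUC ratio to the inhibitor concentration [I], the inhibition constant K_i, and the fraction f_m of the victim's clearance mediated by the inhibited enzyme:

        \mathrm{AUC\ ratio} = \frac{1}{\dfrac{f_m}{1 + [I]/K_i} + \left(1 - f_m\right)}

    with an additional multiplicative gut term when intestinal first-pass metabolism (e.g., by intestinal CYP3A4) is also inhibited.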

  20. Assessment of inhibitory effects on major human cytochrome P450 enzymes by spasmolytics used in the treatment of overactive bladder syndrome

    PubMed Central

    Dahlinger, Dominik; Aslan, Sevinc; Pietsch, Markus; Frechen, Sebastian; Fuhr, Uwe

    2017-01-01

    Background: The objective of this study was to examine the inhibitory potential of darifenacin, fesoterodine, oxybutynin, propiverine, solifenacin, tolterodine and trospium chloride on the seven major human cytochrome P450 enzymes (CYP) by using a standardized and validated seven-in-one cytochrome P450 cocktail inhibition assay. Methods: An in vitro cocktail of seven highly selective probe substrates was incubated with human liver microsomes and varying concentrations of the seven test compounds. The major metabolites of the probe substrates were simultaneously analysed using a validated liquid chromatography tandem mass spectrometry (LC-MS/MS) method. Enzyme kinetics were estimated by determining IC50 and Ki values via nonlinear regression. Obtained Ki values were used for predictions of potential clinical impact of the inhibition using a static mechanistic prediction model. Results: In this study, 49 IC50 experiments were conducted. In six cases, IC50 values lower than the calculated threshold for drug–drug interactions (DDIs) in the gut wall were observed. In these cases, no increase in inhibition was determined after a 30 min preincubation. Considering a typical dosing regimen and applying the obtained Ki values of 0.72 µM (darifenacin, 15 mg daily) and 7.2 µM [propiverine, 30 mg daily, immediate release (IR)] for the inhibition of CYP2D6 yielded a predicted 1.9-fold and 1.4-fold increase in the area under the curve (AUC) of debrisoquine (CYP2D6 substrate), respectively. Due to the inhibition of the particular intestinal CYP3A4, the obtained Ki values of 14 µM of propiverine (30 mg daily, IR) resulted in a predicted doubling of the AUC for midazolam (CYP3A4 substrate). Conclusions: In vitro/in vivo extrapolation based on pharmacokinetic data and the conducted screening experiments yielded similar effects of darifenacin on CYP2D6 and propiverine on CYP3A4 as obtained in separately conducted in vivo DDI studies. As a novel finding, propiverine was identified to potentially inhibit CYP2D6 at clinically occurring concentrations. PMID:28747995

  1. Rock Fracture Toughness Study Under Mixed Mode I/III Loading

    NASA Astrophysics Data System (ADS)

    Aliha, M. R. M.; Bahmani, A.

    2017-07-01

    Fracture growth in underground rock structures occurs under complex stress states, which typically include in-plane and out-of-plane sliding deformation of jointed rock masses before catastrophic failure. However, the lack of a comprehensive theoretical and experimental fracture toughness study for rocks under contributions of out-of-plane deformation (i.e., mode III) is one of the shortcomings of this field. Therefore, in this research the mixed mode I/III fracture toughness of a typical rock material is investigated experimentally by means of a novel cracked disc specimen subjected to bend loading. It was shown that the specimen can provide full combinations of modes I and III, and consequently a complete set of mixed mode I/III fracture toughness data was determined for the tested marble rock. Moving from pure mode I towards pure mode III, the fracture load increased; however, the corresponding fracture toughness value became smaller. The obtained experimental fracture toughness results were finally predicted using theoretical and empirical fracture models.

  2. Enhancement of structural stiffness in MEMS structures

    NASA Astrophysics Data System (ADS)

    Ilias, Samir; Picard, Francis; Topart, Patrice; Larouche, Carl; Jerominek, Hubert

    2006-01-01

    Many optical applications require smooth micromirror reflective surfaces with a large radius of curvature. When using surface micromachining technology, residual stress and stress gradients in thin films make control of the residual curvature a difficult task. In this work, two engineering approaches were developed to enhance the structural stiffness of micromirrors: (1) integrating stiffening structures and thermal annealing, where the stiffening structures consist of U-shaped profiles integrated with the mirror (dimensions 200 × 300 μm2); and (2) combining selective electroplating and flip-chip based technologies, where nickel was used as the electroplated material with optimal stress values of about ±10 MPa for layer thicknesses of about 10 μm. With the former approach, typical curvature radii of about 1.5 cm and 0.6 cm along the mirror width and length were obtained, respectively. With the latter approach, an important improvement in micromirror planarity and flatness was achieved, with curvature radii up to 23 cm and roughness lower than 5 nm rms for typical 1000 × 1000 μm2 micromirrors.

  3. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    NASA Astrophysics Data System (ADS)

    Boyd, John P.

    2013-08-01

    A trigonometric polynomial is a truncated Fourier series of the form f_N(t) = Σ_{j=0}^{N} a_j cos(jt) + Σ_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c, s), where c = cos(t) and s = sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations. In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
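
    As a concrete illustration of the complex-valued companion-matrix idea, the sketch below substitutes z = exp(it), builds the equivalent degree-2N algebraic polynomial, and lets numpy's roots routine (which itself diagonalizes a companion matrix) return the zeros; real roots t correspond to roots z on the unit circle. This is a simplified stand-in for the CCM construction, not Boyd's exact matrix.

    ```python
    import numpy as np

    def trig_roots(a, b, tol=1e-6):
        """Real roots in [0, 2*pi) of f(t) = sum_j a[j]*cos(j*t) + sum_j b[j]*sin(j*t).

        a has length N+1 (a[0] is the constant term); b has length N+1 with b[0] unused.
        Substituting z = exp(i*t) turns f into a degree-2N polynomial in z whose roots
        are found via the companion matrix built internally by numpy.roots."""
        N = len(a) - 1
        c = np.zeros(2 * N + 1, dtype=complex)   # c[k] multiplies z**k
        c[N] = a[0]
        for j in range(1, N + 1):
            c[N + j] = (a[j] - 1j * b[j]) / 2
            c[N - j] = (a[j] + 1j * b[j]) / 2
        z = np.roots(c[::-1])                    # numpy.roots wants highest degree first
        z = z[np.abs(np.abs(z) - 1.0) < tol]     # real t  <=>  |z| = 1
        return np.sort(np.mod(np.angle(z), 2 * np.pi))

    # Example: f(t) = cos(2t) - 0.5 has roots where cos(2t) = 0.5
    a = [-0.5, 0.0, 1.0]
    b = [0.0, 0.0, 0.0]
    print(trig_roots(a, b))          # ~ [pi/6, 5pi/6, 7pi/6, 11pi/6]
    ```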

  4. Bias effects of short- and long-term color memory for unique objects.

    PubMed

    Bloj, Marina; Weiß, David; Gegenfurtner, Karl R

    2016-04-01

    Are objects remembered with a more saturated color? Some of the evidence supporting this statement comes from research using "memory colors"-the typical colors of particular objects, for example, the green of grass. The problematic aspect of these findings is that many different exemplars exist, some of which might exhibit a higher saturation than the one measured by the experimenter. Here we avoid this problem by using unique personal items and comparing long- and short-term color memory matches (in hue, value, and chroma) with those obtained with the object present. Our results, on average, confirm that objects are remembered as more saturated than they are.

  5. Cathode fall measurement in a dielectric barrier discharge in helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Yanpeng; Zheng, Bin; Liu, Yaoge

    2013-11-15

    A method based on the “zero-length voltage” extrapolation is proposed to measure cathode fall in a dielectric barrier discharge. Starting, stable, and discharge-maintaining voltages were measured to obtain the extrapolation zero-length voltage. Under our experimental conditions, the “zero-length voltage” gave a cathode fall of about 185 V. Based on the known thickness of the cathode fall region, the spatial distribution of the electric field strength in dielectric barrier discharge in atmospheric helium is determined. The strong cathode fall with a maximum field value of approximately 9.25 kV/cm was typical for the glow mode of the discharge.
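
    The extrapolation step can be pictured as a simple linear fit: total gap voltages measured at several gap lengths are extrapolated back to zero length, and the intercept is read as the cathode fall. The gap lengths and voltages below are invented for illustration (chosen to land near the paper's reported ≈185 V); they are not measured data, and the actual procedure using starting, stable, and maintaining voltages may differ.

    ```python
    import numpy as np

    # Hypothetical gap lengths (mm) and gap voltages (V); illustrative only.
    gap_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    voltage_v = np.array([230.0, 272.0, 318.0, 361.0, 405.0])

    # Linear fit V(d) = slope*d + intercept; the zero-length intercept is taken
    # as the cathode fall, the slope as the bulk (positive-column) field.
    slope, intercept = np.polyfit(gap_mm, voltage_v, 1)
    print(f"zero-length (cathode fall) voltage = {intercept:.0f} V, "
          f"bulk field = {slope:.0f} V/mm")
    ```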

  6. Theoretical colours and isochrones for some Hubble Space Telescope colour systems

    NASA Technical Reports Server (NTRS)

    Edvardsson, B.; Bell, R. A.

    1989-01-01

    Synthetic spectra for effective temperatures of 4000-7250 K, logarithmic surface gravities typical of dwarfs and subgiants, and metallicities from solar values to 0.001 of the solar metallicity were used to derive a grid of synthetic surface brightness magnitudes for 21 of the Hubble Space Telescope Wide Field Camera (WFC) band passes. The absolute magnitudes of these 21 band passes are also obtained for a set of globular cluster isochrones with different helium abundances, metallicities, oxygen abundances, and ages. The usefulness and efficiency of different sets of broad and intermediate bandwidth WFC colors for determining ages and metallicities for globular clusters are evaluated.

  7. Declustering of clustered preferential sampling for histogram and semivariogram inference

    USGS Publications Warehouse

    Olea, R.A.

    2007-01-01

    Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. ?? International Association for Mathematical Geology 2007.
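
    One way to picture the nearest-neighbour idea is a thinning pass that only retains samples whose nearest already-retained neighbour lies at least a minimum spacing away. The function below is a hedged sketch of that idea, not Olea's published procedure, and the data in the usage lines are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def decluster_by_nn_distance(xy, values, min_spacing, rng=None):
        """Thin a clustered, preferential sampling: visit samples in random order
        and keep a sample only if its nearest retained neighbour is at least
        `min_spacing` away. Returns the retained coordinates and values."""
        rng = np.random.default_rng(rng)
        order = rng.permutation(len(xy))
        kept = []
        for i in order:
            if not kept or cKDTree(xy[kept]).query(xy[i])[0] >= min_spacing:
                kept.append(i)
        return xy[kept], values[kept]

    # Synthetic demonstration with 140 samples (mirroring the sample size above).
    xy = np.random.default_rng(0).uniform(0, 10, size=(140, 2))
    vals = xy[:, 0] + np.random.default_rng(1).normal(0, 0.1, 140)
    xy_d, vals_d = decluster_by_nn_distance(xy, vals, min_spacing=1.0, rng=42)
    print(len(xy), "->", len(xy_d), "samples after declustering")
    ```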

  8. Physical data measurements and mathematical modelling of simple gas bubble experiments in glass melts

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.

    1986-01-01

    In this work consideration is given to the problem of the extraction of physical data information from gas bubble dissolution and growth measurements. The discussion is limited to the analysis of the simplest experimental systems consisting of a single, one component gas bubble in a glassmelt. It is observed that if the glassmelt is highly under- (super-) saturated, then surface tension effects may be ignored, simplifying the task of extracting gas diffusivity values from the measurements. If, in addition, the bubble rise velocity is very small (or very large) the ease of obtaining physical property data is enhanced. Illustrations are given for typical cases.

  9. Assessment of urban soundscapes with the focus on an architectural installation with musical features.

    PubMed

    Jambrošić, Kristian; Horvat, Marko; Domitrović, Hrvoje

    2013-07-01

    Urban soundscapes at five locations in the city of Zadar were perceptually assessed by on-site surveys and objectively evaluated based on monaural and binaural recordings. All locations were chosen so that they would display auditory and visual diversity as much as possible. The unique sound installation known as the Sea Organ was included as an atypical music-like environment. Typical objective parameters were calculated from the recordings related to the amount of acoustic energy, spectral properties of sound, the amount of fluctuations, and tonal properties. The subjective assessment was done on-site using a common survey for evaluating the properties of sound and visual environment. The results revealed the importance of introducing the context into soundscape research because objective parameters did not show significant correlation with responses obtained from interviewees. Excessive values of certain objective parameters could indicate that a sound environment will be perceived as unpleasant or annoying, but its overall perception depends on how well it agrees with people's expectations. This was clearly seen for the case of Sea Organ for which the highest values of objective parameters were obtained, but, at the same time, it was evaluated as the most positive sound environment in every aspect.

  10. Value addition of vegetable wastes by solid-state fermentation using Aspergillus niger for use in aquafeed industry.

    PubMed

    Rajesh, N; Imelda-Joseph; Raj, R Paul

    2010-11-01

    Vegetable waste typically has high moisture content and high levels of protein, vitamins and minerals. Its value as an agricultural feed can be enhanced through solid-state fermentation (SSF). Two experiments were conducted to evaluate the nutritional status of the products derived by SSF of a mixture of dried vegetable waste powder and an oil cake mixture (soybean flour, wheat flour, groundnut oil cake and sesame oil cake in a 4:3:2:1 ratio) using the fungi Aspergillus niger S(1)4, a mangrove isolate, and A. niger NCIM 616. Fermentation was carried out for 9 days at 35% moisture level and neutral pH. Significant (p<0.05) increases in crude protein and amino acids were obtained in both trials. The crude fat and crude fibre content showed significant reduction at the end of fermentation. Nitrogen free extract (NFE) showed a gradual decrease during the fermentation process. The results of the study suggest that the fermented products obtained on days 6 and 9 in the case of A. niger S(1)4 and A. niger NCIM 616, respectively, contained the highest levels of crude protein. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Impact of haze-fog days to radon progeny equilibrium factor and discussion of related factors.

    PubMed

    Hou, Changsong; Shang, Bing; Zhang, Qingzhao; Cui, Hongxing; Wu, Yunyun; Deng, Jun

    2015-11-01

    The equilibrium factor F between radon and its short-lived progeny is an important parameter for estimating human exposure to radon. Therefore, indoor and outdoor concentrations of radon and its short-lived progeny were measured in the Beijing area using a continuously measuring device, in an effort to obtain information on the F value. The results showed that the mean values of F were 0.58 ± 0.13 (0.25-0.95, n = 305) and 0.52 ± 0.12 (0.31-0.91, n = 64) for indoor and outdoor, respectively. The indoor F value during haze-fog days was higher than the typical value of 0.4 recommended by the United Nations Scientific Committee on the Effects of Atomic Radiation, and it was also higher than the values of 0.47 and 0.49 reported in the literature. A positive correlation was observed between indoor F values and PM2.5 concentrations (R² = 0.71). Since 2013, owing to frequent heavy haze-fog events in Beijing and surrounding areas, the number of days with severe pollution has remained at a high level. Future studies on the impact of ambient fine particulate matter on the indoor radon progeny equilibrium factor F could be important.
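
    For reference, the equilibrium factor is the ratio of the equilibrium-equivalent progeny concentration (EEC) to the radon gas concentration. The sketch below uses commonly quoted progeny weighting factors, which are assumed here rather than taken from the paper, and the concentrations are invented.

    ```python
    # Equilibrium factor F = EEC / C_Rn, with the EEC built from assumed
    # UNSCEAR-style weights for Po-218, Pb-214 and Bi-214 activities.
    def equilibrium_factor(c_rn, c_po218, c_pb214, c_bi214):
        eec = 0.105 * c_po218 + 0.515 * c_pb214 + 0.380 * c_bi214  # Bq/m^3
        return eec / c_rn

    # Illustrative indoor concentrations in Bq/m^3 (not measured values)
    print(round(equilibrium_factor(40.0, 30.0, 22.0, 18.0), 2))
    ```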

  12. Reduced-order model based active disturbance rejection control of hydraulic servo system with singular value perturbation theory.

    PubMed

    Wang, Chengwen; Quan, Long; Zhang, Shijie; Meng, Hongjun; Lan, Yuan

    2017-03-01

    The hydraulic servomechanism is a typical mechanical/hydraulic double-dynamics coupling system with high-stiffness control and mismatched-uncertainty input problems, which hinder the direct application of many advanced control approaches in the hydraulic servo field. In this paper, by introducing singular value perturbation theory, the original double-dynamics coupling model of the hydraulic servomechanism is reduced to an integral-chain system, so that the popular ADRC (active disturbance rejection control) technology can be applied directly to the reduced system; in addition, the high-stiffness control and mismatched-uncertainty input problems are avoided. The validity of the simplified model is analyzed and proven theoretically. The standard linear ADRC algorithm is then developed based on the obtained reduced-order model. Extensive comparative co-simulations and experiments are carried out to illustrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
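
    A minimal sketch of linear ADRC on a generic second-order integral-chain plant standing in for the reduced-order model described above: an extended state observer estimates the "total disturbance" and the control law cancels it. The bandwidths, gains, disturbance, and plant below are illustrative assumptions, not the paper's hydraulic model.

    ```python
    import numpy as np

    dt, T = 1e-3, 2.0
    wo, wc, b0 = 60.0, 15.0, 8.0                      # observer/controller bandwidths, nominal input gain
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3             # ESO gains (bandwidth parameterisation)
    kp, kd = wc**2, 2 * wc                            # state-feedback gains

    x = np.zeros(2)                                   # true plant state [y, y_dot]
    z = np.zeros(3)                                   # ESO state [y_hat, y_dot_hat, total_disturbance_hat]
    r = 1.0                                           # step reference
    for k in range(int(T / dt)):
        y = x[0]
        u = (kp * (r - z[0]) - kd * z[1] - z[2]) / b0 # ADRC law: cancel the estimated total disturbance
        e = y - z[0]                                  # extended state observer, forward Euler
        z += dt * np.array([z[1] + l1 * e, z[2] + b0 * u + l2 * e, l3 * e])
        d = 2.0 * np.sin(3.0 * k * dt)                # unmodelled disturbance acting on the plant
        x += dt * np.array([x[1], 10.0 * u + d])      # "true" plant: y_ddot = b*u + d, with b != b0
    print(f"output after {T} s: {x[0]:.3f} (reference {r})")
    ```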

  13. An info-gap application to robust design of a prestressed space structure under epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Hot, Aurélien; Weisser, Thomas; Cogan, Scott

    2017-07-01

    Uncertainty quantification is an integral part of the model validation process and is important to take into account during the design of mechanical systems. Sources of uncertainty are diverse but generally fall into two categories: aleatory, due to random processes, and epistemic, resulting from a lack of knowledge. This work focuses on the behavior of solar arrays in their stowed configuration. To avoid impacts during launch, snubbers are used to prestress the panels. Since the mechanical properties of the snubbers and the associated preload configurations are difficult to characterize precisely, an info-gap approach is proposed to investigate the influence of such uncertainties on design configurations obtained for different values of the safety factor. This eventually allows the typical values of these factors to be revised and re-evaluated with respect to a targeted robustness level. The proposed methodology is illustrated using a simplified finite element model of a solar array.
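
    The info-gap idea can be sketched as a robustness curve: for a fixed design, robustness is the largest uncertainty horizon h for which the worst-case response still satisfies the performance requirement, and demanding a larger safety factor shrinks that horizon. The toy response model, preload, and numbers below are all hypothetical, not the finite element model of the study.

    ```python
    import numpy as np

    def peak_stress(preload):                 # toy response model (hypothetical)
        return 40.0 + 60.0 * preload

    def robustness(p0, allowable, h_grid=np.linspace(0, 1, 1001)):
        """Largest horizon h such that the worst case over preloads in
        [p0*(1-h), p0*(1+h)] still keeps the peak stress below `allowable`."""
        ok = [h for h in h_grid
              if max(peak_stress(p0 * (1 - h)), peak_stress(p0 * (1 + h))) <= allowable]
        return max(ok) if ok else 0.0

    for sf in (1.1, 1.25, 1.5):               # candidate safety factors
        allowable = 300.0 / sf                # stricter requirement as the safety factor grows
        print(f"safety factor {sf}: robustness h = {robustness(2.0, allowable):.3f}")
    ```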

  14. Piezo-optic and elasto-optic properties of monoclinic triglycine sulfate crystals.

    PubMed

    Mytsyk, Bogdan; Demyanyshyn, Natalya; Erba, Alessandro; Shut, Viktor; Mozzharov, Sergey; Kost, Yaroslav; Mys, Oksana; Vlokh, Rostyslav

    2017-12-01

    For the first time, to the best of our knowledge, we have experimentally determined all of the components of the piezo-optic tensor for monoclinic crystals. This has been implemented on a specific example of triglycine sulfate crystals. Based on the results obtained, the complete elasto-optic tensor has been calculated. Acousto-optic figures of merit (AOFMs) have been estimated for the case of acousto-optic interaction occurring in the principal planes of the optical indicatrix ellipsoid and for geometries in which the highest elasto-optic coefficients are involved as effective parameters. It has been found that the highest AOFM value is equal to 6.8×10⁻¹⁵ s³/kg for the case of isotropic acousto-optic interaction with quasi-longitudinal acoustic waves in the principal planes. This AOFM is higher than the corresponding values typical for canonic acousto-optic materials, which are transparent in the deep ultraviolet spectral range.

  15. A NON-LTE STUDY OF SILICON ABUNDANCES IN GIANT STARS FROM THE Si i INFRARED LINES IN THE zJ -BAND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Kefeng; Shi, Jianrong; Zhao, Gang

    We investigate the feasibility of Si i infrared (IR) lines as Si abundance indicators for giant stars. We find that Si abundances obtained from the Si i IR lines based on the local thermodynamic equilibrium (LTE) analysis show large line-to-line scatter (mean value of 0.13 dex), and are higher than those from the optical lines. However, when non-LTE effects are taken into account, the line-to-line scatter reduces significantly (mean value of 0.06 dex), and the Si abundances are consistent with those from the optical lines. The typical average non-LTE correction of [Si/Fe] for our sample stars is about −0.35 dex. Our results demonstrate that the Si i IR lines could be reliable abundance indicators, provided that the non-LTE effects are properly taken into account.

  16. Emission efficiency limit of Si nanocrystals

    PubMed Central

    Limpens, Rens; Luxembourg, Stefan L.; Weeber, Arthur W.; Gregorkiewicz, Tom

    2016-01-01

    One of the important obstacles on the way to application of Si nanocrystals for development of practical devices is their typically low emissivity. In this study we explore the limits of external quantum yield of photoluminescence of solid-state dispersions of Si nanocrystals in SiO2. By making use of a low-temperature hydrogen passivation treatment we demonstrate a maximum emission quantum efficiency of approximately 35%. This is the highest value ever reported for this type of material. By cross-correlating PL lifetime with EQE values, we obtain a comprehensive understanding of the efficiency limiting processes induced by Pb-defects. We establish that the observed record efficiency corresponds to an interface density of Pb-centers of 1.3 × 10¹² cm⁻², which is 2 orders of magnitude higher than for the best Si/SiO2 interface. This result implies that Si nanocrystals with up to 100% emission efficiency are feasible. PMID:26786062

  17. Hierarchical Volume Representation with ³√2 Subdivision and Trivariate B-Spline Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linsen, L; Gray, JT; Pascucci, V

    2002-01-11

    Multiresolution methods provide a means for representing data at multiple levels of detail. They are typically based on a hierarchical data organization scheme and update rules needed for data value computation. We use a data organization that is based on what we call ⁿ√2 subdivision. The main advantage of ⁿ√2 subdivision, compared to quadtree (n = 2) or octree (n = 3) organizations, is that the number of vertices is only doubled in each subdivision step instead of multiplied by a factor of four or eight, respectively. To update data values we use n-variate B-spline wavelets, which yields better approximations for each level of detail. We develop a lifting scheme for n = 2 and n = 3 based on the ⁿ√2-subdivision scheme. We obtain narrow masks that could also provide a basis for view-dependent visualization and adaptive refinement.

  18. [Prediction of soil nutrients spatial distribution based on neural network model combined with geostatistics].

    PubMed

    Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan

    2013-02-01

    In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, as compared with those obtained by the multiple linear regression model, the correlation coefficients between the measured values and the predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. The error analyses of the prediction values of 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter), and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter), and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK, respectively (P<0.05).

  19. Nanomechanical characterization of alumina coatings grown on FeCrAl alloy by thermal oxidation.

    PubMed

    Frutos, E; González-Carrasco, J L; Polcar, T

    2016-04-01

    This work studies the feasibility of using repetitive nano-impact tests with a cube-corner tip and low loads to obtain quantitative fracture toughness values for thin, brittle coatings. For this purpose, it is assumed that the impacts produce cracking similar to the pattern developed in classical fracture toughness tests on bulk materials; therefore, from the cracks developed during the repetitive impacts it is possible to evaluate the suitability of the classical indentation models (Anstis and Laugier) for measuring fracture toughness. However, the crack length has to be below 10% of the total coating thickness to avoid substrate contributions. For this reason, and in order to ensure a small plastic region localized at the crack tip, low load values (or a small distance between the indenter tip and the surface) have to be used. To demonstrate the validity of this technique, repetitive nano-impact tests were performed on a fine, dense oxide layer (α-Al2O3) grown on top of an oxide dispersion strengthened (ODS) FeCrAl alloy (PM 2000) by thermal oxidation at elevated temperature. Moreover, it is shown how the evolution of the crack geometry from Palmqvist crack to half-penny crack can be followed with each new impact, allowing the corresponding evolution of the fracture toughness values to be studied in terms of both indentation models and as a function of decreasing strain rate, ε̇. Thereby, fracture toughness values for the α-Al2O3 layer decrease from ~4.40 MPa·m^(1/2), for a high ε̇ value (10³ s⁻¹), to ~3.21 MPa·m^(1/2), for a quasi-static ε̇ value (10⁻³ s⁻¹). In addition, a new process to obtain fracture toughness values is analysed for cases where the assumptions of the classical indentation models are not met. These values are typical of those found in the literature for bulk α-Al2O3, demonstrating that repetitive nano-impact tests not only provide qualitative information about the fracture resistance of materials but can also be used to obtain quantitative information, such as fracture toughness values, for brittle materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
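
    For orientation, the Anstis-type relation referred to above estimates toughness from the elastic modulus, hardness, load, and radial crack length as K_c = ξ (E/H)^(1/2) P / c^(3/2), with ξ ≈ 0.016 for half-penny cracks. The input numbers in this sketch are illustrative, not the paper's measurements.

    ```python
    import math

    def anstis_toughness(E_gpa, H_gpa, load_mN, crack_m, xi=0.016):
        """Fracture toughness in Pa*m^0.5 from the Anstis relation."""
        E, H = E_gpa * 1e9, H_gpa * 1e9      # modulus and hardness in Pa
        P = load_mN * 1e-3                   # load in N
        return xi * math.sqrt(E / H) * P / crack_m**1.5

    # Illustrative alumina-like inputs (assumed values, not the paper's data)
    kc = anstis_toughness(E_gpa=380.0, H_gpa=28.0, load_mN=20.0, crack_m=0.45e-6)
    print(f"K_c = {kc / 1e6:.2f} MPa*m^0.5")
    ```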

  20. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel; Fita, Ignacio

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  1. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    NASA Astrophysics Data System (ADS)

    Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel

    2016-03-01

    As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
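
    The thermal smoothing effect can be reproduced numerically by averaging a Lennard-Jones pair energy over a Gaussian spread of inter-atomic distances and comparing the result with the potential evaluated at the mean distance. The well depth, size parameter, mean distance, and fluctuation amplitude below are generic assumptions, not fitted protein values.

    ```python
    import numpy as np

    eps, sigma = 0.15, 3.5                        # kcal/mol, Angstrom (illustrative)
    def lj(r):
        return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

    rng = np.random.default_rng(0)
    r_mean, r_rms = 4.2, 0.35                     # mean separation and thermal spread (Angstrom)
    r = rng.normal(r_mean, r_rms, 200_000)
    r = r[r > 3.0]                                # keep physically sensible separations

    # The time-averaged energy <V(r)> differs from V(<r>) by tens of cal/mol,
    # which is the "thermal smoothing" discrepancy discussed above.
    print(f"<V(r)> = {lj(r).mean() * 1000:.1f} cal/mol")
    print(f"V(<r>) = {lj(r_mean) * 1000:.1f} cal/mol")
    ```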

  2. Essential features of residual stress determination in thin-walled plane structures in a base of whole field interferometric measurements

    NASA Astrophysics Data System (ADS)

    Pisarev, Vladimir S.; Odintsev, I.; Balalov, V.; Apalkov, A.

    2003-05-01

    A sophisticated technique for reliably deriving quantitative residual stress values from the initial experimental data inherent in combining the hole-drilling method with both holographic and speckle interferometry is described in detail. The approach developed covers both possible ways of obtaining the initial experimental information. The first consists of recording the required set of interference fringe patterns, which result from the release of residual stress energy after through-hole drilling, in two orthogonal directions that coincide with the principal strain directions. The second consists of obtaining a series of interrelated fringe patterns when the direction of either observation (in reflection hologram interferometry) or dual-beam illumination (in speckle interferometry) lies arbitrarily with respect to a given principal strain direction. A set of the most typical actual and analogous reference fringe patterns, related to both reflection hologram and dual-beam speckle interferometry, is presented.

  3. The structure and phase of cloud tops as observed by polarization lidar

    NASA Technical Reports Server (NTRS)

    Spinhirne, J. D.; Hansen, M. Z.; Simpson, J.

    1983-01-01

    High-resolution observations of the structure of cloud tops have been obtained with polarization lidar operated from a high altitude aircraft. Case studies of measurements acquired from cumuliform cloud systems are presented, two from September 1979 observations in the area of Florida and adjacent waters and a third during the May 1981 CCOPE experiment in southeast Montana. Accurate cloud top height structure and relative density of hydrometeors are obtained from the lidar return signal intensity. Correlation between the signal return intensity and active updrafts was noted. Thin cirrus overlying developing turrets was observed in some cases. Typical values of the observed backscatter cross section were 0.1-5 (km sr)⁻¹ for cumulonimbus tops. The depolarization ratio of the lidar signals was a function of the thermodynamic phase of cloud top areas. An increase of the cloud top depolarization with decreasing temperature was found for temperatures above and below -40 C.

  4. Turbulent Superstructures in Rayleigh-Bénard convection at different Prandtl number

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörg; Pandey, Ambrish; Ender, Martin; Westermann, Rüdiger; Scheel, Janet D.

    2017-11-01

    Large-scale patterns of the temperature and velocity fields in horizontally extended cells can be considered as turbulent superstructures in Rayleigh-Bénard convection (RBC). These structures are obtained once the turbulent fluctuations are removed by a finite-time average. Their existence has been reported, for example, by Bailon-Cuba et al. This large-scale order bears a strong similarity to the well-studied patterns from the weakly nonlinear regime at lower Rayleigh number in RBC. In the present work we analyze the superstructures of RBC at different Prandtl numbers, for values between Pr = 0.005 (liquid sodium) and Pr = 7 (water). The characteristic evolution time scales, the typical spatial extension of the rolls, and the properties of the defects of the resulting superstructure patterns are analyzed. Data are obtained from well-resolved spectral element direct numerical simulations. The work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.

  5. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on the uncertain variables is non-monotonic over the interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here, considering the monotonicity of the function. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in a practical application; the only input data required are the function value at the central point of the interval, and the sensitivity and deviation of the function. The results of the above examples show that the interval of the function obtained by the method developed in this paper is more accurate than the one obtained by the classic method.
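
    A sketch of the contrast described above: the classic first-order Taylor extension built from the central value and the central sensitivity degenerates when the function is non-monotonic on the interval, whereas splitting the interval into monotonic pieces (one reading of "considering the monotonicity of the function", not necessarily the authors' exact formulation) recovers a tight range. Function and interval are chosen for illustration only.

    ```python
    import numpy as np

    def f(x):  return x * np.exp(-x)                 # example: non-monotonic around x = 1
    def df(x): return (1.0 - x) * np.exp(-x)

    def classic_taylor(a, b):
        """Classic first-order Taylor extension: f(c) +/- |f'(c)| * radius."""
        c, r = 0.5 * (a + b), 0.5 * (b - a)
        return f(c) - abs(df(c)) * r, f(c) + abs(df(c)) * r

    def improved_extension(a, b, n=200):
        """Split [a, b] where the sign of f' changes, then use endpoint values
        on each monotonic piece."""
        xs = np.linspace(a, b, n)
        breakpoints = [a] + [x for x0, x in zip(xs, xs[1:]) if df(x0) * df(x) < 0] + [b]
        vals = [f(x) for x in breakpoints]
        return min(vals), max(vals)

    print("classic :", classic_taylor(0.5, 1.5))      # degenerate: f'(1) = 0
    print("improved:", improved_extension(0.5, 1.5))  # close to the exact range
    ```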

  6. Interaction between a circular inclusion and an arbitrarily oriented crack

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Gupta, G. D.; Ratwani, M.

    1975-01-01

    The plane interaction problem for a circular elastic inclusion embedded in an elastic matrix which contains an arbitrarily oriented crack is considered. Using the existing solutions for the edge dislocations as Green's functions, first the general problem of a through crack in the form of an arbitrary smooth arc located in the matrix in the vicinity of the inclusion is formulated. The integral equations for the line crack are then obtained as a system of singular integral equations with simple Cauchy kernels. The singular behavior of the stresses around the crack tips is examined and the expressions for the stress-intensity factors representing the strength of the stress singularities are obtained in terms of the asymptotic values of the density functions of the integral equations. The problem is solved for various typical crack orientations and the corresponding stress-intensity factors are given.

  7. Some Classes of Imperfect Information Finite State-Space Stochastic Games with Finite-Dimensional Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEneaney, William M.

    2004-08-15

    Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.

  8. Extending, and repositioning, a thermochemical ladder: high-level quantum chemical calculations on the sodium cation affinity scale.

    PubMed

    Bloomfield, Jolyon; Davies, Erin; Gatt, Phillip; Petrie, Simon

    2006-01-26

    High-level ab initio quantum chemical calculations, at the CP-dG2thaw level of theory, are reported for coordination of Na+ to a wide assortment of small organic and inorganic ligands. The ligands range in size from H to C6H6, and include 22 of the ligands for which precise relative sodium ion binding free energies have been determined by recent Fourier transform ion cyclotron resonance and guided ion beam studies. Agreement with the relative experimental values is excellent (±1.1 kJ mol⁻¹), and agreement with the absolute scale (obtained when these relative values are pegged to the CH3NH2 "anchor" value measured in a high-pressure mass spectrometric study) is only marginally poorer, with CP-dG2thaw values exceeding the absolute experimental ΔG(298) values by an average of 2.1 kJ mol⁻¹. The excellent agreement between experiment and the CP-dG2thaw technique also suggests that the additional 97 ligands surveyed here (which, in many cases, are not readily susceptible to laboratory investigation) can also be reliably fitted to the existing experimental scale. However, while CP-dG2thaw and the experimental ladder are in close accord, a small set of higher level ab initio calculations on sodium ion/ligand complexes (including several values obtained here using the W1 protocol) suggests that the CP-dG2thaw values are themselves too low by approximately 2.5 kJ mol⁻¹, thereby implying that the accepted laboratory values are typically 4.6 kJ mol⁻¹ too low. The present work also highlights the importance of Na+/ligand binding energy determinations (whether by experimental or theoretical approaches) on a case-by-case basis: trends in increasing binding energy along homologous series of compounds are not reliably predictable, nor are binding site preferences or chelating tendencies in polyfunctional compounds.

  9. Light scattering by dust and anthropogenic aerosol at a remote site in the Negev desert, Israel

    NASA Astrophysics Data System (ADS)

    Andreae, Tracey W.; Andreae, Meinrat O.; Ichoku, Charles; Maenhaut, Willy; Cafmeyer, Jan; Karnieli, Arnon; Orlovsky, Leah

    2002-01-01

    We investigated aerosol optical properties, mass concentration, and chemical composition over a 2 year period at a remote site in the Negev desert, Israel (Sde Boker, 30° 51'N, 34° 47'E, 470 m above sea level). Light-scattering measurements were made at three wavelengths (450, 550, and 700 nm), using an integrating nephelometer, and included the separate determination of the backscatter fraction. Aerosol coarse and fine fractions were collected with stacked filter units; mass concentrations were determined by weighing, and the chemical composition by proton-induced X-ray emission and instrumental neutron activation analysis. The total scattering coefficient at 550 nm showed a median of 66.7 Mm⁻¹ (mean value 75.2 Mm⁻¹, standard deviation 41.7 Mm⁻¹), typical of moderately polluted continental air masses. Values of 1000 Mm⁻¹ and higher were encountered during severe dust storm events. During the study period, 31 such dust events were detected. In addition to high scattering levels, they were characterized by a sharp drop in the Ångström coefficient (i.e., the spectral dispersion of the light scattering) to values near zero. Mass-scattering efficiencies were obtained by a multivariate regression of the scattering coefficients on dust, sulfate, and residual components. An analysis of the contributions of these components to the total scattering observed showed that anthropogenic aerosol accounted for about 70% of scattering. The rest was dominated by the effect of the large dust events mentioned above and of small dust episodes typically occurring during midafternoon.

  10. Diagenetic overprinting of the sphaerosiderite palaeoclimate proxy: are records of pedogenic groundwater δ18O values preserved?

    USGS Publications Warehouse

    Ufnar, David F.; Gonzalez, Luis A.; Ludvigson, Greg A.; Brenner, Richard L.; Witzkes, Brian J.

    2004-01-01

    Meteoric sphaerosiderite lines (MSLs), defined by invariant δ18O and variable δ13C values, are obtained from ancient wetland palaeosol sphaerosiderites (millimetre-scale FeCO3 nodules), and are a stable isotope proxy record of terrestrial meteoric isotopic compositions. The palaeoclimatic utility of sphaerosiderite has been well tested; however, diagenetically altered horizons that do not yield simple MSLs have been encountered. Well-preserved sphaerosiderites typically exhibit smooth exteriors, spherulitic crystalline microstructures and relatively pure (> 95 mol% FeCO3) compositions. Diagenetically altered sphaerosiderites typically exhibit corroded margins, replacement textures and increased crystal lattice substitution of Ca2+, Mg2+ and Mn2+ for Fe2+. Examples of diagenetically altered Cretaceous sphaerosiderite-bearing palaeosols from the Dakota Formation (Kansas), the Swan River Formation (Saskatchewan) and the Success S2 Formation (Saskatchewan) were examined in this study to determine the extent to which original, early diagenetic δ18O and δ13C values are preserved. All three units contain poikilotopic calcite cements with significantly different δ18O and δ13C values from the co-occurring sphaerosiderites. The complete isolation of all carbonate phases is necessary to ensure that inadvertent physical mixing does not affect the isotopic analyses. The Dakota and Swan River samples ultimately yield distinct MSLs for the sphaerosiderites, and MCLs (meteoric calcite lines) for the calcite cements. The Success S2 sample yields a covariant δ18O vs. δ13C trend resulting from precipitation in pore fluids that were mixtures between meteoric and modified marine phreatic waters. The calcite cements in the Success S2 Formation yield meteoric δ18O and δ13C values. A stable isotope mass balance model was used to produce hyperbolic fluid mixing trends between meteoric and modified marine end-member compositions. Modelled hyperbolic fluid mixing curves for the Success S2 Formation suggest precipitation from fluids that were < 25% sea water. © 2004 International Association of Sedimentologists.

  11. Classification of autism spectrum disorder using supervised learning of brain connectivity measures extracted from synchrostates

    NASA Astrophysics Data System (ADS)

    Jamal, Wasifa; Das, Saptarshi; Oprescu, Ioana-Anastasia; Maharatna, Koushik; Apicella, Fabio; Sicca, Federico

    2014-08-01

    Objective. The paper investigates the presence of autism using the functional brain connectivity measures derived from electro-encephalogram (EEG) of children during face perception tasks. Approach. Phase synchronized patterns from 128-channel EEG signals are obtained for typical children and children with autism spectrum disorder (ASD). The phase synchronized states or synchrostates temporally switch amongst themselves as an underlying process for the completion of a particular cognitive task. We used 12 subjects in each group (ASD and typical) for analyzing their EEG while processing fearful, happy and neutral faces. The minimally and maximally occurring synchrostates for each subject are chosen for extraction of brain connectivity features, which are used for classification between these two groups of subjects. Among different supervised learning techniques, we here explored discriminant analysis and support vector machines, both with polynomial kernels, for the classification task. Main results. Leave-one-out cross-validation of the classification algorithm gives 94.7% accuracy as the best performance, with corresponding sensitivity and specificity values of 85.7% and 100%, respectively. Significance. The proposed method gives high classification accuracies and outperforms other contemporary research results. The effectiveness of the proposed method for classification of autistic and typical children suggests the possibility of using it on a larger population to validate it for clinical practice.
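
    The classification step can be sketched with standard tooling: a polynomial-kernel SVM evaluated by leave-one-out cross-validation. The feature matrix below is a random stand-in for the synchrostate connectivity features, so the printed accuracy is meaningless except as a demonstration of the workflow.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-in data: 24 subjects (12 ASD, 12 typical), each described
    # by a 10-dimensional connectivity feature vector.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(24, 10))
    y = np.array([0] * 12 + [1] * 12)

    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.2f}")
    ```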

  12. 'Nuisance Dust' - a Case for Recalibration?

    NASA Astrophysics Data System (ADS)

    Datson, Hugh; Marker, Brian

    2013-04-01

    This paper considers the case for a review and recalibration of limit values and acceptability criteria for 'nuisance dust', a widely encountered but poorly defined and regulated aspect of particulate matter pollution. Specific dust fractions such as PM10 and asbestiforms are well characterised and have limit values enshrined in legislation. National, and international, limit values for acceptable concentrations of PM10 and other fractions of particulate matter have been defined and agreed. In the United Kingdom (UK), these apply to both public and workplace exposures. By contrast, there is no standard definition or universal criteria against which acceptable levels for 'nuisance dust' can be assessed. This has implications for land-use planning and resource utilisation. Without meaningful limit values, inappropriate development might take place too near to residential dwellings or land containing economically important mineral resources may be effectively sterilised. Furthermore, the expression 'nuisance dust' is unhelpful in that 'nuisance' has a specific meaning in environmental law whilst 'nuisance dust' is often taken to mean 'generally visible particulate matter'. As such, it is associated with the social and broader environmental impacts of particulate matter. PM10 concentrations are usually expressed as a mass concentration over time. These can be determined using a range of techniques. While results from different instruments are generally comparable, data obtained from alternative methods for measuring 'nuisance dust' are rarely interchangeable. In the UK, many of the methods typically used are derived from approaches developed under the HMIP (Her Majesty's Inspectorate of Pollution) regime in the 1960s onwards. Typical methods for 'nuisance dust' sampling focus on measurement of dust mass (from the weight of dust collected in an open container over time) or dust soiling (from loss of reflectance and or obscuration of a surface discoloured by dust over time). 'Custom and practice' acceptance criteria for dust samples obtained by mass or soiling techniques have been developed and are widely applied even though they were not necessarily calibrated thoroughly and have not been reviewed recently. Furthermore, as sampling techniques have evolved, criteria developed for one method have been adapted for another. Criteria and limit values have sometimes been based on an insufficient knowledge of sampler characteristics. Ideally, limit values should be calibrated for the locality to take differences in dust density and visibility into account. Work is needed on the definition of criteria and limit values, and sampling practices for coarse dust fractions, followed by discussion of good practices for securing effective monitoring that is proportionate and fit for purpose. With social changes and the evolution of environmental controls since the 1960s, the public perception of 'nuisance dust' has changed and needs to be addressed by reviewing existing thresholds in relation to the range of monitoring devices currently in use.

  13. Relaxation and turbulence effects on sonic boom signatures

    NASA Technical Reports Server (NTRS)

    Pierce, Allan D.; Sparrow, Victor W.

    1992-01-01

    The rudimentary theory of sonic booms predicts that the pressure signatures received at the ground begin with an abrupt shock, such that the rise in overpressure is nearly instantaneous. This discontinuity actually has some structure, and a finite time is required for the waveform to reach its peak value. This portion of the waveform is here termed the rise phase, and it is with this portion that this presentation is primarily concerned. Any time characterizing the duration of the rise phase is loosely called the 'rise time.' Various definitions are used in the literature for this rise time. In the present discussion the rise time can be taken as the time for the waveform to rise from 10 percent of its peak value to 90 percent of its peak value. The available data on sonic booms that appear in the open literature suggest that typical values of shock overpressure lie in the range of 30 Pa to 200 Pa, typical values of shock duration lie in the range of 150 ms to 250 ms, and typical values of the rise time lie in the range of 1 ms to 5 ms. The understanding of the rise phase of sonic booms is important because the perceived loudness of a shock depends primarily on the structure of the rise phase. A longer rise time typically implies a less loud shock. A primary question is just what physical mechanisms are most important for the determination of the detailed structure of the rise phase.
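
    The 10%-to-90% rise-time definition used above is easy to make concrete. The waveform in this sketch is a smooth toy ramp to a 100 Pa peak, not a measured boom signature.

    ```python
    import numpy as np

    def rise_time(t, p, lo=0.10, hi=0.90):
        """Time for the rise phase to go from lo*peak to hi*peak overpressure."""
        peak = p.max()
        i_peak = p.argmax()
        t10 = t[np.argmax(p[:i_peak + 1] >= lo * peak)]   # first crossing of 10% of peak
        t90 = t[np.argmax(p[:i_peak + 1] >= hi * peak)]   # first crossing of 90% of peak
        return t90 - t10

    # Toy rise phase: exponential approach to a 100 Pa peak over a few milliseconds
    t = np.linspace(0, 0.01, 2001)
    p = 100.0 * (1 - np.exp(-t / 0.001))
    print(f"rise time = {rise_time(t, p) * 1e3:.2f} ms")
    ```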

  14. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    As explosive blasts continue to cause severe damage and casualties in both civil and military environments, there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads, a topic of great current concern. Due to the complexity of the typical blast pressure profile model, and in order to reduce the modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze the structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical model used to simulate blast loads. The closed-form solution of the equation of motion under a blast load, modelled with either the typical or the simplified profile as the forcing term, has been derived. The two approaches considered herein have been compared using results obtained from a simulation response analysis of a building structure under an applied blast load. The error introduced by the simplified model with respect to the typical one has been computed. In general, both the simplified and the typical model can reproduce the dynamic blast-load-induced response of building structures. However, the simplified model shows remarkably different response behavior compared to the typical one, despite its simplicity and its use of only the positive phase to simulate the explosive load. The prediction of the dynamic system response using the simplified model is not satisfactory due to the larger errors obtained as compared to the response obtained using the typical model.
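
    A hedged sketch of the comparison: a damped single-degree-of-freedom oscillator driven by a Friedlander-type profile (which carries the suction phase) versus the simplified triangle that keeps only the positive phase. The structural and blast parameters are invented, so the printed difference only illustrates the kind of discrepancy discussed, not the paper's error values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    P0, td, bdecay = 5.0e3, 0.02, 1.8          # peak pressure (Pa), positive-phase duration (s), decay
    area = 2.0                                  # loaded area (m^2), hypothetical
    m, k, zeta = 500.0, 2.0e6, 0.05            # mass (kg), stiffness (N/m), damping ratio
    c = 2 * zeta * np.sqrt(k * m)

    def friedlander(t):                         # "typical" profile, goes negative after td
        return P0 * (1 - t / td) * np.exp(-bdecay * t / td)

    def triangle(t):                            # simplified: linear decay, positive phase only
        return P0 * (1 - t / td) if t < td else 0.0

    def peak_displacement(load):
        rhs = lambda t, y: [y[1], (load(t) * area - c * y[1] - k * y[0]) / m]
        sol = solve_ivp(rhs, (0.0, 0.3), [0.0, 0.0], max_step=1e-4)
        return np.abs(sol.y[0]).max()

    u_typ, u_simp = peak_displacement(friedlander), peak_displacement(triangle)
    print(f"peak displacement: typical {u_typ*1e3:.2f} mm, simplified {u_simp*1e3:.2f} mm, "
          f"relative difference {abs(u_simp - u_typ) / u_typ:.1%}")
    ```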

  15. Mapping apparent stress and energy radiation over fault zones of major earthquakes

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.

    2002-01-01

    Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.

  16. Nitrogen oxides and ozone in the tropopause region of the Northern Hemisphere: Measurements from commercial aircraft in 1995/1996 and 1997

    NASA Astrophysics Data System (ADS)

    Brunner, Dominik; Staehelin, Johannes; Jeker, Dominique; Wernli, Heini; Schumann, Ulrich

    2001-11-01

    Measurements of nitrogen oxides (NO and NO2) and ozone (O3) were performed from a Swissair B-747 passenger aircraft in two extended time periods (May 1995 to May 1996, August to November 1997) in the framework of the Swiss NOXAR and the European POLINAT 2 project. The measurements were obtained on a total of 623 flights between Europe and destinations in the United States and the Far East. NO2 measurements were obtained only after December 1995 and were less precise than the NO measurements. Therefore daytime NO2 values were derived from measured NO and O3 concentrations assuming photostationary equilibrium. The completed NOx data set (measured NO, measured NO2 during night, and calculated NO2 during day) includes a complete annual cycle and is the most extensive and representative data set currently available for the upper troposphere (UT) and the lower stratosphere (LS) covering a significant proportion of the northern hemisphere between 15°N and 65°N. NOx concentrations in midlatitudes (30°-60°N) showed a marked seasonal variation both in the UT and the LS with a maximum in summer (median/mean values of 159/264 pptv in UT, 199/237 pptv in LS) and a minimum in winter (51/99 pptv in UT, 67/91 pptv in LS). Mean NOx concentrations were generally much higher than the respective median values, in particular in the UT, which reflects the important contribution from comparatively few very high concentrations observed in large-scale convection/lightning and small-scale aircraft plumes. Seasonal mean NOx concentrations in the UT were up to 3-4 times higher over continental regions than over the North Atlantic during summer. Lightning production of NO and convective vertical transport from the polluted boundary layer thus appear to have dominated the upper tropospheric NOx budget over these continental regions, particularly during summer. Ozone concentrations at aircraft cruising levels typically varied by an order of magnitude due to the strong vertical gradient in the LS. Seasonal mean values were dominated by large-scale dynamical processes controlling the altitude of the tropopause and the O3 abundance in the LS. O3 in the UT in midlatitudes showed a broad maximum between June and August, typical of observations in the free troposphere.
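
    The photostationary-state step mentioned above amounts to J_NO2 [NO2] = k [NO][O3], so [NO2] = k [NO][O3] / J_NO2. The sketch below applies it with an assumed rate constant, photolysis frequency, and surface-level air density; all three would need to be adapted to upper-tropospheric conditions and are not the values used in the NOXAR/POLINAT analysis.

    ```python
    def no2_pss(no_ppb, o3_ppb, k=1.8e-14, j_no2=8.0e-3, m=2.5e19):
        """Photostationary-state NO2 (ppb) from NO and O3 mixing ratios (ppb).

        k: NO + O3 rate constant (cm^3 molec^-1 s^-1), j_no2: photolysis
        frequency (s^-1), m: air number density (molec cm^-3); assumed values.
        """
        no, o3 = no_ppb * 1e-9 * m, o3_ppb * 1e-9 * m      # molec cm^-3
        no2 = k * no * o3 / j_no2                          # molec cm^-3
        return no2 / (1e-9 * m)                            # back to ppb

    print(round(no2_pss(0.1, 60.0), 3), "ppb NO2")
    ```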

  17. A multi-state fragment charge difference approach for diabatic states in electron transfer: Extension and automation

    NASA Astrophysics Data System (ADS)

    Yang, Chou-Hsun; Hsu, Chao-Ping

    2013-10-01

    The electron transfer (ET) rate prediction requires the electronic coupling values. The Generalized Mulliken-Hush (GMH) and Fragment Charge Difference (FCD) schemes have been useful approaches to calculate the ET coupling from an excited state calculation. In their typical form, both methods use two eigenstates in forming the target charge-localized diabatic states. For problems involving three or four states, a direct generalization is possible, but it is necessary to pick and assign the locally excited or charge-transfer states involved. In this work, we generalize the 3-state scheme to a multi-state FCD without the need to manually pick or assign the states. In this scheme, the diabatic states are obtained separately in the charge-transfer or neutral excited subspaces, defined by their eigenvalues in the fragment charge-difference matrix. In each subspace, the Hamiltonians are diagonalized, and there exist off-diagonal Hamiltonian matrix elements between different subspaces, particularly the charge-transfer and neutral excited diabatic states. The ET coupling values are obtained as the corresponding off-diagonal Hamiltonian matrix elements. A similar multi-state GMH scheme can also be developed. We test the new multi-state schemes for their performance in systems that have been studied using more than two states with FCD or GMH. We found that the multi-state approach yields much better charge-localized states in these systems. We further test the dependence on the number of states included in the calculation of the ET couplings. The final coupling values converge as the number of states included is increased. In one system where an experimental value is available, the multi-state FCD coupling value agrees better with the previous experimental result. We found that the multi-state GMH and FCD are useful when the original two-state approach fails.
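
    In the two-state limit the FCD construction reduces to diagonalising the 2×2 fragment charge-difference matrix and reading the coupling off the rotated Hamiltonian, which matches the familiar closed form. The adiabatic energies and charge differences below are illustrative numbers, not results from the paper.

    ```python
    import numpy as np

    E = np.diag([0.00, 0.25])                 # adiabatic excitation energies (eV), illustrative
    dq = np.array([[ 0.95, 0.30],             # fragment charge-difference matrix (e);
                   [ 0.30, -0.80]])           # diagonal near +/-1 in the charge-localised limit

    w, U = np.linalg.eigh(dq)                 # rotation that localises the charge
    H_diab = U.T @ E @ U                      # Hamiltonian in the diabatic (charge-localised) basis
    print(f"electronic coupling |H_DA| = {abs(H_diab[0, 1]):.4f} eV")

    # Equivalent closed form for the 2-state FCD case
    dE = E[1, 1] - E[0, 0]
    v = dE * abs(dq[0, 1]) / np.sqrt((dq[0, 0] - dq[1, 1])**2 + 4 * dq[0, 1]**2)
    print(f"closed-form 2-state FCD     {v:.4f} eV")
    ```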

  18. Soil moisture data as a constraint for groundwater recharge estimation

    NASA Astrophysics Data System (ADS)

    Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.

    2017-09-01

    Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration. It is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested to record soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC data is measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, which are used as an estimate of the total water stored within a given soil profile. This article seeks to investigate the value of such aggregated SMC data for conditioning groundwater recharge models in this respect. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function. The only unknown parameters are soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model to the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
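
    The role of the aggregated SMC data can be pictured with a bare-bones daily water balance in which drainage below the profile (recharge) is the residual of rainfall, actual evapotranspiration, and the change in stored water. Runoff is ignored and all numbers are invented, so this is only a sketch of the accounting, not the Richards-equation emulator used in the study.

    ```python
    import numpy as np

    def recharge_series(rain, aet, S):
        """rain, aet: daily totals (mm); S: depth-aggregated SMC storage (mm).
        Recharge on day i is rain - aet - dS, clipped at zero (no negative recharge)."""
        dS = np.diff(S)
        r = rain[1:] - aet[1:] - dS
        return np.clip(r, 0.0, None)

    rain = np.array([0.0, 12.0, 5.0, 0.0, 20.0])
    aet  = np.array([1.0, 1.5, 2.0, 2.0, 1.0])
    S    = np.array([150.0, 158.0, 159.0, 157.0, 166.0])   # aggregated SMC (mm), invented
    print(recharge_series(rain, aet, S))                   # daily recharge estimates (mm)
    ```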

  19. Re-evaluation of traditional Mediterranean foods. The local landraces of 'Cipolla di Giarratana' (Allium cepa L.) and long-storage tomato(Lycopersicon esculentum L.): quality traits and polyphenol content.

    PubMed

    Siracusa, Laura; Avola, Giovanni; Patanè, Cristina; Riggi, Ezio; Ruberto, Giuseppe

    2013-11-01

    The heightened consumer awareness for food safety is reflected in the demand for products with well-defined individual characteristics due to specific production methods, composition and origin. In this context, of pivotal importance is the re-evaluation of folk/traditional foods by properly characterizing them in terms of peculiarity and nutritional value. The subjects of this study are two typical Mediterranean edible products. The main morphological, biometrical and productive traits and polyphenol contents of three onion genotypes ('Cipolla di Giarratana', 'Iblea' and 'Tonda Musona') and three long-storage tomato landraces ('Montallegro', 'Filicudi' and 'Principe Borghese') were investigated. Sicilian onion landraces were characterized by large bulbs, with 'Cipolla di Giarratana' showing the highest bulb weight (605 g), yield (151 t ha⁻¹) and total polyphenol content (123.5 mg kg⁻¹). Landraces of long-storage tomato were characterized by low productivity (up to 20 t ha⁻¹), but more than 70% of the total production was obtained with the first harvest, allowing harvest costs to be reduced. High contents of polyphenols were found, probably related to the typical small fruit size and thick skin characterizing these landraces. The present study overviews some of the most important traits that could support traditional landrace characterization and their nutritional value assessment. © 2013 Society of Chemical Industry.

  20. Chemistry by Way of Density Functional Theory

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Partridge, Harry; Langohff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    In this work we demonstrate that density functional theory (DFT) methods make an important contribution to understanding chemical systems and are an important additional method for the computational chemist. We report calibration calculations obtained with different functionals for the 55 G2 molecules to justify our selection of the B3LYP functional. We show that accurate geometries and vibrational frequencies obtained at the B3LYP level can be combined with traditional methods to simplify the calculation of accurate heats of formation. We illustrate the application of the B3LYP approach to a variety of chemical problems from the vibrational frequencies of polycyclic aromatic hydrocarbons to transition metal systems. We show that the B3LYP method typically performs better than the MP2 method at a significantly lower computational cost. Thus the B3LYP method allows us to extend our studies to much larger systems while maintaining a high degree of accuracy. We show that for transition metal systems, the B3LYP bond energies are typically of sufficient accuracy that they can be used to explain experimental trends and even differentiate between different experimental values. We show that for boron clusters the B3LYP energetics are not as good as for many of the other systems presented, but even in this case the B3LYP approach is able to help understand the experimental trends.

  1. Use of the 'real-ear to dial difference' to derive real-ear SPL from hearing level obtained with insert earphones.

    PubMed

    Munro, K J; Lazenby, A

    2001-10-01

    The electroacoustic characteristics of a hearing instrument are normally selected for individuals using data obtained during audiological assessment. The precise inter-relationship between the electroacoustic and audiometric variables is most readily appreciated when they have been measured at the same reference point, such as the tympanic membrane. However, it is not always possible to obtain the real-ear sound pressure level (SPL) directly if this is below the noise floor of the probe-tube microphone system or if the subject is unco-operative. The real-ear SPL may be derived by adding the subject's real-ear to dial difference (REDD) acoustic transform to the audiometer dial setting. The aim of the present study was to confirm the validity of the Audioscan RM500 to measure the REDD with the ER-3A insert earphone. A probe-tube microphone was used to measure the real-ear SPL and REDD from the right ears of 16 adult subjects ranging in age from 22 to 41 years (mean age 27 years). Measurements were made from 0.25 kHz to 6 kHz at a dial setting of 70 dB with an ER-3A insert earphone and two earmould configurations: the EAR-LINK foam ear-tip and the subjects' customized skeleton earmoulds. Mean REDD varied as a function of frequency but was typically approximately 12 dB with a standard deviation (SD) of +/- 1.7 dB and +/- 2.7 dB for the foam ear-tip and customized earmould, respectively. The mean test-retest difference of the REDD varied with frequency but was typically 0.5 dB (SD 1 dB). Over the frequency range 0.5-4 kHz, the derived values were found to be within 5 dB of the measured values in 95% of subjects when using the EAR-LINK foam ear-tip and within 4 dB when using the skeleton earmould. The individually measured REDD transform can be used in clinical practice to derive a valid estimate of real-ear SPL when it has not been possible to measure this directly.
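
    As a purely arithmetic illustration of the derivation described above (the dial level and per-frequency REDD values below are invented, not study data), the real-ear SPL is simply the audiometer dial setting plus the measured REDD at each frequency:

    ```python
    # Sketch of deriving real-ear SPL from an audiometer dial setting and a measured
    # REDD transform; the dial level and per-frequency REDD values are illustrative.
    dial_setting_db_hl = 70.0
    redd_db = {250: 9.5, 500: 11.0, 1000: 12.0, 2000: 13.5, 4000: 12.5}  # hypothetical REDD (dB)

    real_ear_spl = {freq: dial_setting_db_hl + redd for freq, redd in redd_db.items()}
    # e.g. at 1 kHz: 70 dB (dial) + 12 dB (REDD) = 82 dB SPL at the tympanic membrane
    print(real_ear_spl)
    ```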

  2. Using in situ pore water concentrations to estimate the phytotoxicity of nicosulfuron in soils to corn (Zea mays L.).

    PubMed

    Liu, Kailin; Cao, Zhengya; Pan, Xiong; Yu, Yunlong

    2012-08-01

    The phytotoxicity of an herbicide in soil is typically dependent on the soil characteristics. To obtain a comparable value of the concentration that inhibits growth by 50% (IC50), 0.01 M CaCl2, excess pore water (EPW) and in situ pore water (IPW) were used to extract the bioavailable fraction of nicosulfuron from five different soils to estimate the nicosulfuron phytotoxicity to corn (Zea mays L.). The results indicated that the phytotoxicity of nicosulfuron in soils to corn depended on the soil type, and the IC50 values calculated based on the amended concentration of nicosulfuron ranged from 0.77 to 9.77 mg/kg among the five tested soils. The range of variation in IC50 values for nicosulfuron was smaller when the concentrations of nicosulfuron extracted with 0.01 M CaCl2 and EPW were used instead of the amended concentration. No significant difference was observed among the IC50 values calculated from the IPW concentrations of nicosulfuron in the five tested soils, suggesting that the concentration of nicosulfuron in IPW could be used to estimate the phytotoxicity of residual nicosulfuron in soils. Copyright © 2012 SETAC.
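
    A minimal sketch of how an IC50 can be extracted from growth-inhibition data, assuming a two-parameter log-logistic dose-response model; the concentrations and responses below are invented for illustration, and the study's own dose-response procedure may differ:

    ```python
    # Illustrative extraction of an IC50 by fitting a two-parameter log-logistic
    # dose-response curve to growth-inhibition data; data values are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def log_logistic(c, ic50, slope):
        """Relative growth (1 = untreated control) as a function of concentration c."""
        return 1.0 / (1.0 + (c / ic50) ** slope)

    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])         # exposure concentrations
    growth = np.array([0.95, 0.85, 0.55, 0.25, 0.08])   # fraction of control growth

    (ic50, slope), _ = curve_fit(log_logistic, conc, growth, p0=[1.0, 1.0])
    print(f"IC50 = {ic50:.2f} (same units as conc), slope = {slope:.2f}")
    ```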

  3. Different binarization processes validated against manual counts of fluorescent bacterial cells.

    PubMed

    Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W

    2016-09-01

    State-of-the-art software methods (such as fixed-value or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process the threshold and background subtraction values were incremented until the number of particles smaller than a typical bacterial cell was less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
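
    The iterative criterion described above (incrementing the threshold until sub-cell-sized particles are outnumbered by particles of plausible bacterial area) can be sketched as follows; particle_areas() and the area limits are hypothetical placeholders, not the published implementation:

    ```python
    # Sketch of the iterative threshold search described above: raise the threshold
    # until particles smaller than a typical cell ("debris") are outnumbered by
    # particles with a plausible bacterial area. particle_areas() is a hypothetical
    # helper that binarizes the greyscale image at the given threshold and returns
    # the area (in pixels) of every connected particle.
    MIN_CELL_AREA, MAX_CELL_AREA = 20, 400  # pixels; depends on optics and magnification

    def find_threshold(image, particle_areas, start=0, stop=255, step=1):
        for threshold in range(start, stop + 1, step):
            areas = particle_areas(image, threshold)
            debris = sum(a < MIN_CELL_AREA for a in areas)
            cells = sum(MIN_CELL_AREA <= a <= MAX_CELL_AREA for a in areas)
            if debris < cells:
                return threshold, cells  # threshold that satisfies the criterion
        return stop, 0  # criterion never met
    ```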

  4. Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load.

    PubMed

    Holper, L; Van Brussel, L D; Schmidt, L; Schulthess, S; Burke, C J; Louie, K; Seifritz, E; Tobler, P N

    2017-01-01

    Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain's capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations.
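
    For reference, a generic form of a divisive normalization model of value coding is shown below; the notation follows the standard textbook formulation and is not necessarily the exact parameterization fitted in the study:

    \[
    R \;=\; R_{\max}\,\frac{V_{\mathrm{safe}}}{\sigma + V_{\mathrm{safe}} + w\,V_{\mathrm{context}}}
    \]

    Here V_safe is the value of the safe alternative, V_context the contextual (risky) value, w a context weight, and σ a semisaturation term associated with background activity; in such a model, weaker suppression of the background term flattens the value response function, consistent with the memory-load effect reported above.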

  5. Sensor-triggered sampling to determine instantaneous airborne vapor exposure concentrations.

    PubMed

    Smith, Philip A; Simmons, Michael K; Toone, Phillip

    2018-06-01

    It is difficult to measure transient airborne exposure peaks by means of integrated sampling for organic chemical vapors, even with very short-duration sampling. Selection of an appropriate time to measure an exposure peak through integrated sampling is problematic, and short-duration time-weighted average (TWA) values obtained with integrated sampling are not likely to accurately determine the actual peak concentrations attained when concentrations fluctuate rapidly. Laboratory analysis of integrated exposure samples is preferred from a certainty standpoint over results derived in the field from a sensor, as a sensor user typically must overcome specificity issues and a number of potential interfering factors to obtain similarly reliable data. However, sensors are currently needed to measure intra-exposure-period concentration variations (i.e., exposure peaks). In this article, the digitized signal from a photoionization detector (PID) sensor triggered the collection of whole-air samples when toluene or trichloroethylene vapors attained pre-determined levels in a laboratory atmosphere generation system. Analysis by gas chromatography-mass spectrometry of whole-air samples (at both 37 and 80% relative humidity) collected using the triggering mechanism under rapidly increasing vapor concentrations showed good agreement with the triggering set point values. Whole-air samples (80% relative humidity) in canisters demonstrated acceptable 17-day storage recoveries, and acceptable precision and bias were obtained. The ability to determine exceedance of a ceiling or peak exposure standard by laboratory analysis of an instantaneously collected sample, and to simultaneously provide a calibration point to verify the correct operation of a sensor, was demonstrated. This latter capability may increase confidence in the reliability of sensor data obtained across an entire exposure period.
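
    The triggering logic itself is a simple threshold test on the digitized sensor signal; a minimal sketch is given below, in which read_pid_ppm(), open_canister_valve() and the set point are hypothetical placeholders rather than the hardware interface used in the study:

    ```python
    # Minimal sketch of sensor-triggered sampling: open the whole-air canister valve
    # the moment the digitized PID signal reaches a pre-set level. read_pid_ppm() and
    # open_canister_valve() are hypothetical I/O callbacks; TRIGGER_PPM is arbitrary.
    import time

    TRIGGER_PPM = 50.0

    def monitor(read_pid_ppm, open_canister_valve, poll_s=0.1):
        while True:
            if read_pid_ppm() >= TRIGGER_PPM:
                open_canister_valve()   # grab the instantaneous whole-air sample
                return time.time()      # trigger time, for later comparison with lab results
            time.sleep(poll_s)
    ```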

  6. Thermodynamics of enzyme-catalyzed esterifications: II. Levulinic acid esterification with short-chain alcohols.

    PubMed

    Altuntepe, Emrah; Emel'yanenko, Vladimir N; Forster-Rotgers, Maximilian; Sadowski, Gabriele; Verevkin, Sergey P; Held, Christoph

    2017-10-01

    Levulinic acid was esterified with methanol, ethanol, and 1-butanol with the final goal to predict the maximum yield of these equilibrium-limited reactions as a function of medium composition. In a first step, standard reaction data (standard Gibbs energy of reaction ΔRg⁰) were determined from experimental formation properties. Unexpectedly, these ΔRg⁰ values strongly deviated from data obtained with classical group contribution methods that are typically used if experimental standard data are not available. In a second step, reaction equilibrium concentrations obtained from esterification catalyzed by Novozym 435 at 323.15 K were measured, and the corresponding activity coefficients of the reacting agents were predicted with perturbed-chain statistical associating fluid theory (PC-SAFT). The so-obtained thermodynamic activities were used to determine ΔRg⁰ at 323.15 K. These results could be used to cross-validate ΔRg⁰ from experimental formation data. In a third step, reaction-equilibrium experiments showed that the equilibrium position of the reactions under consideration depends strongly on the concentration of water and on the ratio of levulinic acid to alcohol in the initial reaction mixtures. The maximum yield of the esters was calculated using ΔRg⁰ data from this work and activity coefficients of the reacting agents predicted with PC-SAFT for varying feed compositions of the reaction mixtures. The use of the new ΔRg⁰ data combined with PC-SAFT gave good agreement with the measured yields, while predictions based on ΔRg⁰ values obtained with group contribution methods showed large deviations from the experimental yields.
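
    The link between the measured equilibrium composition, the PC-SAFT activity coefficients and the standard reaction data is the thermodynamic equilibrium constant; in standard notation (written here for illustration),

    \[
    \Delta_R g^{0} = -RT \ln K_a, \qquad
    K_a = \prod_i (x_i\,\gamma_i)^{\nu_i}
        = \frac{x_{\mathrm{ester}}\,\gamma_{\mathrm{ester}}\,x_{\mathrm{water}}\,\gamma_{\mathrm{water}}}
               {x_{\mathrm{acid}}\,\gamma_{\mathrm{acid}}\,x_{\mathrm{alcohol}}\,\gamma_{\mathrm{alcohol}}}
    \]

    where x_i are the equilibrium mole fractions, γ_i the PC-SAFT-predicted activity coefficients and ν_i the stoichiometric coefficients of the esterification.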

  7. Nutritional values and metabolic profile with and without boiled treatment of 'Gallo Matese' beans (Phaseolus vulgaris L.), a landrace from Southern Italy.

    PubMed

    Landi, Nicola; Ragucci, Sara; Fiorentino, Michelina; Guida, Vincenzo; Di Maro, Antimo

    2017-01-01

    'Gallo Matese' beans are known as a typical legume of Southern Italy and continue to be consumed as a traditional food preserving the diversity of this region. Nonetheless, no information about the nutritional values of this legume is available. The objective of the present investigation was to determine the nutritional value and metabolic profile of 'Gallo Matese' beans. 'Gallo Matese' beans contain high levels of proteins (22.64 g/100 g) and essential amino acids (8.3 g/100 g). Furthermore, different unsaturated fatty acids contribute to the total amount of lipids (0.97 g/100 g); among them, the essential PUFA α-linolenic (0.48 g/100 g) and linoleic (0.39 g/100 g) acids are the most abundant. The total phenol content was determined, and the ABTS and ORAC-fluorescein methods were applied to determine the radical scavenging capabilities of the extract with and without boiling treatment. Finally, a decrease in trypsin and chymotrypsin inhibitory activities was estimated before and after the boiling procedure. The data obtained show that 'Gallo Matese' beans are a functional food with healthy qualities. Overall, these results are useful to promote their cultivation and consumption, thus preserving Italian bean biodiversity, given consumer interest in choosing a healthy diet such as the Mediterranean diet.

  8. Mechanical behaviour of pressed and sintered CP Ti and Ti-6Al-7Nb alloy obtained from master alloy addition powder.

    PubMed

    Bolzoni, L; Weissgaerber, T; Kieback, B; Ruiz-Navas, E M; Gordo, E

    2013-04-01

    The Ti-6Al-7Nb alloy was obtained using the blending elemental approach with a master alloy and elemental titanium powders. Both the elemental titanium and the Ti-6Al-7Nb powders were characterised using X-ray diffraction, differential thermal analysis and dilatometry. The powders were processed using the conventional powder metallurgy route that includes uniaxial pressing and sintering. The trend of the relative density with the sintering temperature and the microstructural evolution of the materials sintered at different temperatures were analysed using scanning electron microscopy and X-ray diffraction. A minimum sintering temperature of 1200°C has to be used to ensure the homogenisation of the alloying elements and to obtain a pore structure composed of spherical pores. The sintered samples achieve relative density values that are typical for powder metallurgy titanium and no intermetallic phases were detected. Mechanical properties comparable to those specified for wrought Ti-6Al-7Nb medical devices are normally obtained. Therefore, the produced materials are promising candidates for load bearing applications as implant materials. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Cloud-to-ground lightning activity in Colombia: A 14-year study using lightning location system data

    NASA Astrophysics Data System (ADS)

    Herrera, J.; Younes, C.; Porras, L.

    2018-05-01

    This paper presents the analysis of 14 years of cloud-to-ground lightning activity observation in Colombia using lightning location system (LLS) data. The first Colombian LLS operated from 1997 to 2001. After a few years, this system was upgraded and a new LLS has been operating since 2007. Data obtained from these two systems were analyzed in order to obtain lightning parameters used in designing lightning protection systems. The flash detection efficiency was estimated using average peak current maps and some theoretical results previously published. Lightning flash multiplicity was evaluated using a stroke grouping algorithm, resulting in average values of about 1.0 and 1.6 for positive and negative flashes respectively for both LLS. The time variation of this parameter changes slightly over the years considered in this study. The first stroke peak current for negative and positive flashes shows median values close to 29 kA and 17 kA respectively for both networks, showing a strong dependence on the flash detection efficiency. On average, negative and positive flashes account for 74.04% and 25.95% of occurrences, respectively. The daily variation shows a peak between 23 and 02 h. The monthly variation of this parameter exhibits a bimodal behavior typical of regions located near the Equator. The lightning flash density was obtained by dividing the study area into 3 × 3 km cells, resulting in maximum average values of 25 and 35 flashes km⁻² year⁻¹ for each network respectively. A comparison of these results with global lightning activity hotspots was performed, showing good correlation. In addition, the lightning flash density shows an inverse relation with altitude.

  10. Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner

    PubMed Central

    Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars

    2012-01-01

    Because the application of surface fitting algorithms exerts a considerable fuzzy influence on the mathematical features of the kinetic energy distribution, their relation under different external conditional parameters must be quantitatively analyzed. After determining the kinetic energy value at each selected representative position coordinate point by calculating kinetic energy parameters, several typical algorithms for complicated surface fitting are applied to construct micro kinetic energy distribution surface models of the objective turbulence runner from the obtained kinetic energy values. On the basis of the newly proposed mathematical features, a fuzzy evaluation data sequence is constructed and a new three-dimensional fuzzy quantitative evaluation method is presented; the change tendencies of the kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism linking the surface fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be analyzed quantitatively in detail. This leads to conclusions concerning the inherent turbulence kinetic energy distribution mechanism and its mathematical relations, and provides a basis for further quantitative studies of turbulence energy. PMID:23213287

  11. Simulating the production and dispersion of environmental pollutants in aerosol phase in an urban area of great historical and cultural value.

    PubMed

    Librando, Vito; Tringali, Giuseppe; Calastrini, Francesca; Gualtieri, Giovanni

    2009-11-01

    Mathematical models were developed to simulate the production and dispersion of aerosol-phase atmospheric pollutants, which are the main cause of the deterioration of monuments of great historical and cultural value. This work focuses on Particulate Matter (PM), considered the primary cause of monument darkening. Road traffic is the greatest contributor to PM in urban areas. Specific emission and dispersion models were used to study typical urban configurations. The area selected for this study was the city of Florence, a suitable test bench considering the magnitude of its architectural heritage together with the remarkable effect of PM pollution from road traffic. The COPERT model was used to calculate emissions, and a street canyon model coupled with the CALINE model was used to simulate pollutant dispersion. The PM concentrations estimated by the models were compared to actual PM concentration measurements, as well as related to the trend of some meteorological variables. The results obtained may be considered encouraging even though the correlation was only moderate: the estimated daily-average concentration trends moderately reproduce the trends of the measured values.

  12. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    PubMed

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared and the reportable assay value is the average value of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach which accounts for variability due to the analytical method and dosage form with a standard error of the potency assay criteria based on compendia and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Variation in biogeochemical parameters across intertidal seagrass meadows in the central Great Barrier Reef region.

    PubMed

    Mellors, Jane; Waycott, Michelle; Marsh, Helene

    2005-01-01

    This survey provides baseline information on sediment characteristics, porewater, adsorbed and plant tissue nutrients from intertidal coastal seagrass meadows in the central region of the Great Barrier Reef World Heritage Area. Data collected from 11 locations, representative of intertidal coastal seagrass beds across the region, indicated that the chemical environment was typical of other tropical intertidal areas. Results using two different extraction methods highlight the need for caution when choosing an adsorbed phosphate extraction technique, as sediment type affects the analytical outcome. Comparison with published values indicates that the range of nutrient parameters measured is equivalent to those measured across tropical systems globally. However, the nutrient values in seagrass leaves and their molar ratios for Halophila ovalis and Halodule uninervis were much higher than the values from the literature from this and other regions, obtained using the same techniques, suggesting that these species act as nutrient sponges, in contrast with Zostera capricorni. The limited historical data from this region suggest that the nitrogen and phosphorus content of seagrass leaves has increased since the 1970s concomitant with changing land use practice.

  14. Test Characteristics of Neck Fullness and Witnessed Neck Pulsations in the Diagnosis of Typical AV Nodal Reentrant Tachycardia

    PubMed Central

    Sakhuja, Rahul; Smith, Lisa M; Tseng, Zian H; Badhwar, Nitish; Lee, Byron K; Lee, Randall J; Scheinman, Melvin M; Olgin, Jeffrey E; Marcus, Gregory M

    2011-01-01

    Background: Claims in the medical literature suggest that neck fullness and witnessed neck pulsations are useful in the diagnosis of typical AV nodal reentrant tachycardia (AVNRT). Hypothesis: Neck fullness and witnessed neck pulsations have a high positive predictive value in the diagnosis of typical AVNRT. Methods: We performed a cross-sectional study of consecutive patients with palpitations presenting to a single electrophysiology (EP) laboratory over a 1-year period. Each patient underwent a standard questionnaire regarding neck fullness and/or witnessed neck pulsations during their palpitations. The reference standard for diagnosis was determined by electrocardiogram and invasive EP studies. Results: Comparing typical AVNRT to atrial fibrillation (AF) or atrial flutter (AFL) patients, the proportions with neck fullness and witnessed neck pulsations did not significantly differ: in the best-case scenario (using the upper end of the 95% confidence interval [CI]), none of the positive or negative predictive values exceeded 79%. After restricting the population to those with supraventricular tachycardia other than AF or AFL (SVT), neck fullness again exhibited poor test characteristics; however, witnessed neck pulsations exhibited a specificity of 97% (95% CI 90–100%) and a positive predictive value of 83% (95% CI 52–98%). After adjustment for potential confounders, SVT patients with witnessed neck pulsations had a 7-fold greater odds of having typical AVNRT, p=0.029. Conclusions: Although neither neck fullness nor witnessed neck pulsations are useful in distinguishing typical AVNRT from AF or AFL, witnessed neck pulsations are specific for the presence of typical AVNRT among those with SVT. PMID:19479968

  15. Cryogenic Insulation System for Soft Vacuum

    NASA Technical Reports Server (NTRS)

    Augustynowicz, S. D.; Fesmire, J. E.

    1999-01-01

    The development of a cryogenic insulation system for operation under soft vacuum is presented in this paper. Conventional insulation materials for cryogenic applications can be divided into three levels of thermal performance, in terms of apparent thermal conductivity [k-value in milliwatt per meter-kelvin (mW/m-K)]. System k-values below 0.1 can be achieved for multilayer insulation operating at a vacuum level below 1 × 10⁻⁴ torr. For fiberglass or powder operating below 1 × 10⁻³ torr, k-values of about 2 are obtained. For foam and other materials at ambient pressure, k-values around 30 are typical. New industry and aerospace applications require a versatile, robust, low-cost thermal insulation with performance in the intermediate range. The target for the new composite insulation system is a k-value below 4.8 mW/m-K (R-30) at a soft vacuum level (from 1 to 10 torr) and boundary temperatures of approximately 77 and 293 kelvin (K). Many combinations of radiation shields, spacers, and composite materials were tested from high vacuum to ambient pressure using cryostat boiloff methods. Significant improvement over conventional systems in the soft vacuum range was demonstrated. The new layered composite insulation system was also shown to provide key benefits for high vacuum applications as well.

  16. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  17. Thermal residual stress evaluation based on phase-shift lateral shearing interferometry

    NASA Astrophysics Data System (ADS)

    Dai, Xiangjun; Yun, Hai; Shao, Xinxing; Wang, Yanxia; Zhang, Donghuan; Yang, Fujun; He, Xiaoyuan

    2018-06-01

    A phase-shift lateral shearing interferometry system was proposed to evaluate the thermal residual stress distribution in a transparent specimen. The phase-shift interferograms were generated by moving a plane-parallel plate. By analyzing the fringes deflected by the deformation and the refractive index change, the stress distribution can be obtained. To verify the validity of the proposed method, an experiment was designed to determine the thermal residual stresses of a transparent PMMA plate subjected to the flame of a lighter. The distribution of the sum of the in-plane stresses was obtained. The experimental data were compared with values measured by the digital gradient sensing method. Comparison of the results reveals the effectiveness and feasibility of the proposed method.

  18. Unification of the family of Garrison-Wright's phases.

    PubMed

    Cui, Xiao-Dong; Zheng, Yujun

    2014-07-24

    Inspired by Garrison and Wright's seminal work on complex-valued geometric phases, we generalize the concept of Pancharatnam's "in-phase" in interferometry and further develop a theoretical framework for unification of the Abelian geometric phases for a biorthogonal quantum system modeled by a parameterized or time-dependent non-Hermitian Hamiltonian with a finite and nondegenerate instantaneous spectrum, that is, the family of Garrison-Wright's phases, which will no longer be confined to the adiabatic and nonadiabatic cyclic cases. Besides, we employ a typical example, the Bethe-Lamb model, to illustrate how to apply our theory to obtain an explicit result for the Garrison-Wright noncyclic geometric phase, and also to present its potential applications in quantum computation and information.

  19. Terahertz generation via laser coupling to anharmonic carbon nanotube array

    NASA Astrophysics Data System (ADS)

    Sharma, Soni; Vijay, A.

    2018-02-01

    A scheme of terahertz radiation generation employing a matrix of anharmonic carbon nanotubes (CNTs) embedded in silica is proposed. The matrix is irradiated by two collinear laser beams that induce large excursions of the CNT electrons and exert a nonlinear force at the beat frequency ω = ω1 − ω2. The force drives a nonlinear current that produces THz radiation. The THz field is resonantly enhanced at the plasmon resonance, ω = ωp(1 + β)/√2, where ωp is the plasma frequency and β is a characteristic parameter. Collisions are a limiting factor, suppressing the plasmon resonance. For typical values of the plasma parameters, we obtain a power conversion efficiency of the order of 10⁻⁶.

  20. Attenuation, dispersion and nonlinearity effects in graphene-based waveguides

    PubMed Central

    Mota, João Cesar Moura; Sombra, Antonio Sergio Bezerra

    2015-01-01

    We simulated and analyzed in detail the behavior of ultrashort optical pulses, which are typically used in telecommunications, propagating through graphene-based nanoribbon waveguides. In this work, we showed the changes that occur in the Gaussian and hyperbolic secant input pulses due to the attenuation, high-order dispersive effects and nonlinear effects. We concluded that it is possible to control the shape of the output pulses with the value of the input signal power and the chemical potential of the graphene nanoribbon. We believe that the obtained results will be highly relevant since they can be applied to other nanophotonic devices, for example, filters, modulators, antennas, switches and other devices. PMID:26171299

  1. Average M shell fluorescence yields for elements with 70≤Z≤92

    NASA Astrophysics Data System (ADS)

    Kahoul, A.; Deghfel, B.; Aylikci, V.; Aylikci, N. K.; Nekkab, M.

    2015-03-01

    The theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄M) of different elements are very important because of the large number of their applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated by using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic number range 70≤Z≤92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and other works.

  2. Investigation of frequency-response characteristics of engine speed for a typical turbine-propeller engine

    NASA Technical Reports Server (NTRS)

    Taylor, Burt L , III; Oppenheimer, Frank L

    1951-01-01

    Experimental frequency-response characteristics of engine speed for a typical turbine-propeller engine are presented. These data were obtained by subjecting the engine to sinusoidal variations of fuel flow and propeller-blade-angle inputs. Correlation is made between these experimental data and analytical frequency-response characteristics obtained from a linear differential equation derived from steady-state torque-speed relations.

  3. Allelic frequencies and statistical data obtained from 48 AIM INDEL loci in an admixed population from the Brazilian Amazon.

    PubMed

    Francez, Pablo Abdon da Costa; Ribeiro-Rodrigues, Elzemar Martins; dos Santos, Sidney Emanuel Batista

    2012-01-01

    Allelic frequencies of 48 ancestry-informative insertion-deletion (INDEL) loci were obtained from a sample set of 130 unrelated individuals living in Macapá, a city located in the northern Amazon region of Brazil. The values of heterozygosity (H), polymorphic information content (PIC), power of discrimination (PD), power of exclusion (PE), matching probability (MP) and typical paternity index (TPI) were calculated and showed the forensic efficiency of these genetic markers. Based on the allele frequencies obtained for the population of Macapá, we estimated an interethnic admixture for the three parental groups (European, Native American and African) of, respectively, 50%, 21% and 29%. Comparing these allele frequencies with those of other Brazilian populations and the parental populations, statistically significant distances were found. The interpopulation genetic distances (FST coefficients) relative to the present database ranged from FST = 0.0431 (p < 0.00001) between Macapá and Belém to FST = 0.266 (p < 0.00001) between Macapá and the Native American group. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. Axial interface optical phonon modes in a double-nanoshell system.

    PubMed

    Kanyinda-Malu, C; Clares, F J; de la Cruz, R M

    2008-07-16

    Within the framework of the dielectric continuum (DC) model, we analyze the axial interface optical phonon modes in a double system of nanoshells. This system is constituted by two identical equidistant nanoshells which are embedded in an insulating medium. To illustrate our results, typical II-VI semiconductors are used as constitutive polar materials of the nanoshells. Resolution of Laplace's equation in bispherical coordinates for the potentials derived from the interface vibration modes is made. By imposing the usual electrostatic boundary conditions at the surfaces of the two-nanoshell system, recursion relations for the coefficients appearing in the potentials are obtained, which entails infinite matrices. The problem of deriving the interface frequencies is reduced to the eigenvalue problem on infinite matrices. A truncating method for these matrices is used to obtain the interface phonon branches. Dependences of the interface frequencies on the ratio of inter-nanoshell separation to core size are obtained for different systems with several values of nanoshell interdistance. Effects due to the change of shell and embedding materials are also investigated in interface phonon modes.

  5. Analytically optimal parameters of dynamic vibration absorber with negative stiffness

    NASA Astrophysics Data System (ADS)

    Shen, Yongjun; Peng, Haibo; Li, Xianghong; Yang, Shaopu

    2017-02-01

    In this paper the optimal parameters of a dynamic vibration absorber (DVA) with negative stiffness are analytically studied. The analytical solution is obtained by the Laplace transform method when the primary system is subjected to harmonic excitation. The research shows that there are still two fixed points independent of the absorber damping in the amplitude-frequency curve of the primary system when the system contains negative stiffness. The optimum frequency ratio and optimum damping ratio are then respectively obtained based on the fixed-point theory. A new strategy is proposed to obtain the optimum negative stiffness ratio while keeping the system stable. At last the control performance of the presented DVA is compared with those of three existing typical DVAs, presented by Den Hartog, Ren and Sims respectively. The comparison results under harmonic and random excitation show that the presented DVA could not only reduce the peak value of the amplitude-frequency curve of the primary system significantly, but also broaden the efficient frequency range of vibration mitigation.
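
    For context, the classical fixed-point (Den Hartog) optimum for a conventional DVA without negative stiffness is

    \[
    \nu_{\mathrm{opt}} = \frac{1}{1+\mu}, \qquad
    \zeta_{\mathrm{opt}} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}
    \]

    where μ is the absorber-to-primary mass ratio. This is quoted only as the baseline against which the negative-stiffness optimum is compared; the optimum derived in the paper additionally depends on the negative stiffness ratio.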

  6. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.

  7. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling approach based on obtaining an equivalent circuit that not only fits the experimental spectra correctly, but whose elements also have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
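
    As a generic illustration of equivalent-circuit modelling (not the specific circuit selected in this work), the impedance of a simple Randles-type circuit with a constant-phase element can be computed as follows; all parameter values are arbitrary:

    ```python
    # Generic Randles-type equivalent circuit: ohmic resistance in series with a
    # charge-transfer resistance in parallel with a constant-phase element (CPE).
    # Illustrative only; not the circuit selected in the study.
    import numpy as np

    def z_cpe(omega, q, n):
        """Impedance of a constant-phase element."""
        return 1.0 / (q * (1j * omega) ** n)

    def z_randles(omega, r_ohm, r_ct, q, n):
        z_parallel = 1.0 / (1.0 / r_ct + 1.0 / z_cpe(omega, q, n))
        return r_ohm + z_parallel

    freq = np.logspace(4, -1, 60)                    # 10 kHz down to 0.1 Hz
    z = z_randles(2 * np.pi * freq, r_ohm=0.01, r_ct=0.05, q=0.5, n=0.9)
    nyquist = np.column_stack([z.real, -z.imag])     # data for a Nyquist plot / fitting
    ```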

  8. Near-field entrainment in black smoker plumes

    NASA Astrophysics Data System (ADS)

    Smith, J. E.; Germanovich, L. N.; Lowell, R. P.

    2013-12-01

    In this work, we study the entrainment rate of ambient fluid into a plume under the extreme conditions of hydrothermal venting at ocean floor depths, conditions that would be difficult to reproduce in the laboratory. Specifically, we investigate the flow regime in the lower parts of three black smoker plumes in the Main Endeavour Field on the Juan de Fuca Ridge discharging at temperatures of 249°C, 333°C, and 336°C and a pressure of 21 MPa. The centerline temperature was measured at several heights in the plume above the orifice. Using a previously developed turbine flow meter, we also measured the mean flow velocity at the orifice. Measurements were conducted during dives 4452 and 4518 of the submersible Alvin. Using these measurements, we obtained a range of 0.064 - 0.068 for values of the entrainment coefficient α, which is assumed constant near the orifice. This is half the value of α ≈ 0.12 - 0.13 that would be expected for plume flow regimes based on existing laboratory results and field measurements at lower temperatures and pressures. In fact, α = 0.064 - 0.068 is even smaller than the value of α ≈ 0.075 characteristic of jet flow regimes and appears to be the lowest reported in the literature. Assuming that the mean value α = 0.066 is typical for hydrothermal venting at ocean floor depths, we then characterized the flow regimes of 63 black smoker plumes located on the Endeavour Segment of the Juan de Fuca Ridge. Work with the obtained data is ongoing, but current results indicate that approximately half of these black smokers are lazy in the sense that their plumes exhibit momentum deficits compared to the pure plume flow that develops as the plume rises. The remaining half produce forced plumes that show a momentum excess compared to pure plumes. The lower value of the entrainment coefficient has important implications for measurements of mass and heat output at mid-oceanic ridges. For example, determining heat output based on the maximum height of plume rise has become a common method of measuring the heat flux produced by hydrothermal circulation at mid-oceanic ridges. The fundamental theory for the rise and spreading of turbulent buoyant plumes suggests that the heat output in this method is proportional to α² and is, therefore, sensitive to the value of α. The considerably different entrainment rates in lazy and forced black smoker plumes may also be important for understanding larval transport mechanisms in the life cycle of macrofauna near hydrothermal vents.
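
    The sensitivity to α mentioned above follows from classical Morton-Taylor-Turner plume scaling in a stratified ambient; in a standard (illustrative) form,

    \[
    z_{\max} \;\sim\; C\,\alpha^{-1/2}\,F_0^{1/4}\,N^{-3/4}
    \quad\Longrightarrow\quad
    F_0 \;\propto\; \alpha^{2}\, z_{\max}^{4}\, N^{3}
    \]

    where F_0 is the source buoyancy flux (proportional to heat output), N the ambient buoyancy frequency and C an order-one constant; halving α therefore reduces the heat output inferred from a given rise height by roughly a factor of four.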

  9. Range Performance of Bombers Powered by Turbine-Propeller Power Plants

    NASA Technical Reports Server (NTRS)

    Cline, Charles W.

    1950-01-01

    Calculations have been made to find the ranges attainable by bombers of gross weights from 140,000 to 300,000 pounds powered by turbine-propeller power plants. Only conventional configurations were considered and emphasis was placed upon using data for structural and aerodynamic characteristics which are typical of modern military airplanes. An effort was made to limit the various parameters involved in the airplane configuration to practical values. Therefore, extremely high wing loadings, large amounts of sweepback, and very high aspect ratios have not been considered. Power-plant performance was based upon the performance of a typical turbine-propeller engine equipped with propellers designed to maintain high efficiencies at high subsonic speeds. Results indicated, in general, that the greatest range, for a given gross weight, is obtained by airplanes of high wing loading, unless the higher cruising speeds associated with the high-wing-loading airplanes require the use of thinner wing sections. Further results showed the effect of cruising at high speeds, of operation at very high altitudes, and of carrying large bomb loads.

  10. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller.

    PubMed

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; Del Toro Matamoros, Raúl M

    2016-05-12

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value.

  11. Force analysis of magnetic bearings with power-saving controls

    NASA Technical Reports Server (NTRS)

    Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.

    1992-01-01

    Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.

  12. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller †

    PubMed Central

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; del Toro Matamoros, Raúl M.

    2016-01-01

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value. PMID:27187397

  13. An Inexpensive Biophysics Laboratory Apparatus for Acquiring Pulmonary Function Data with Clinical Applications

    NASA Astrophysics Data System (ADS)

    Harkay, Gregory

    2001-11-01

    Interest on the part of the Physics Department at KSC in developing a computer-interfaced lab with appeal to biology majors, together with the need to perform a clinical pulmonological study to fulfill a biology requirement, led to the author's undergraduate research project, in which a recording spirometer (typical cost: $15K) was constructed from readily available materials and a typical undergraduate lab computer interface. Simple components, including a basic photogate circuit, a CPU fan, and PVC couplings, were used to construct an instrument for measuring flow rates as a function of time. Pasco software was used to build an experiment in which data were collected and integrated such that one could obtain accurate values for FEV1 (forced expiratory volume in one second), FVC (forced vital capacity), and their ratio for a large sample of subjects. Results were compared to published norms and subjects with impaired respiratory mechanisms were identified. This laboratory exercise is one with which biology students can clearly identify and would be a robust addition to the repertoire of a HS or college physics or biology teaching laboratory.
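
    A minimal sketch of the integration step that converts a sampled flow-rate signal into FEV1, FVC and their ratio; the exponential "flow" below is a synthetic stand-in, not data from the apparatus described above:

    ```python
    # Numerical integration of a flow-rate trace into FEV1, FVC and FEV1/FVC.
    import numpy as np

    dt = 0.01                              # s, sampling interval of the interface
    t = np.arange(0.0, 6.0, dt)
    flow = 4.0 * np.exp(-t / 1.2)          # L/s, crude model of a forced exhalation

    volume = np.cumsum(flow) * dt          # running integral of flow -> exhaled volume (L)
    fev1 = volume[t <= 1.0][-1]            # volume expired in the first second
    fvc = volume[-1]                       # total expired volume
    print(f"FEV1 = {fev1:.2f} L, FVC = {fvc:.2f} L, FEV1/FVC = {fev1 / fvc:.2f}")
    ```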

  14. Romantic preferences in Brazilian undergraduate students: from the short term to the long term.

    PubMed

    Castro, Felipe Nalon; de Araújo Lopes, Fívia

    2011-09-01

    A number of studies have described different preference patterns typically found for men and women when choosing romantic mates. These vary according to the involvement level expected in the relationship. Despite the number of investigations on the topic, one must be careful not to generalize because most studies use samples composed of North American university undergraduates. This study sought to determine if the preference patterns typically found in other countries also occur among Brazilian undergraduates. The importance of characteristics and modifications in preference patterns under gradually restrictive conditions was also investigated. In general, the results obtained suggest that the preferences found in a number of countries also occur in Brazil. In short-term relationships, men prioritize physical attributes, whereas personal traits gain importance when involvement increases. Women in short-term relationships value physical and personal traits, whereas in the long term, they emphasize personal characteristics and their mate's desire to acquire resources. Resource-related traits were less important than the other traits, and were more important for women than for men.

  15. Determination of compound-specific Hg isotope ratios from transient signals using gas chromatography coupled to multicollector inductively coupled plasma mass spectrometry (MC-ICP/MS).

    PubMed

    Dzurko, Mark; Foucher, Delphine; Hintelmann, Holger

    2009-01-01

    MeHg and inorganic Hg compounds were measured in aqueous media for isotope ratio analysis using aqueous-phase derivatization, followed by purge-and-trap preconcentration. Compound-specific isotope ratio measurements were performed by gas chromatography interfaced to MC-ICP/MS. Several methods of calculating isotope ratios were evaluated for their precision and accuracy and compared with conventional continuous-flow cold vapor measurements. An apparent fractionation of Hg isotopes was observed during the GC elution process for all isotope pairs, which necessitated integration of signals prior to the isotope ratio calculation. A newly developed average peak ratio method yielded the most accurate isotope ratio in relation to values obtained by a continuous-flow technique and the best reproducibility. Compound-specific isotope ratios obtained after GC separation were statistically not different from ratios measured by continuous-flow cold vapor measurements. Typical external uncertainties were 0.16‰ RSD (n = 8) for the 202Hg/198Hg ratio of MeHg and 0.18‰ RSD for the same ratio in inorganic Hg using the optimized operating conditions. Using a newly developed reference standard addition method, the isotopic composition of inorganic Hg and of MeHg synthesized from this inorganic Hg was measured in the same run, obtaining a value of δ202Hg = -1.49 ± 0.47 (2SD; n = 10). For optimum performance a minimum mass of 2 ng per Hg species should be introduced onto the column.
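
    The integrate-then-ratio idea can be sketched as follows; this is a simplified illustration, and the published "average peak ratio" algorithm may differ in detail (e.g., in baseline handling and peak-window selection):

    ```python
    # Peak-integrated isotope ratio for a transient GC signal: integrate each
    # baseline-corrected isotope trace over the whole peak and ratio the areas,
    # rather than ratioing point-by-point (which is biased by the within-peak
    # fractionation noted above). The input arrays are placeholders.
    import numpy as np

    def peak_integrated_ratio(signal_202, signal_198, baseline_202=0.0, baseline_198=0.0):
        """Return the 202Hg/198Hg ratio from peak-integrated, baseline-corrected signals."""
        area_202 = float(np.sum(np.asarray(signal_202) - baseline_202))
        area_198 = float(np.sum(np.asarray(signal_198) - baseline_198))
        return area_202 / area_198
    ```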

  16. Particulate Matter Mass Concentration in Residential Prefabricated Buildings Related to Temperature and Moisture

    NASA Astrophysics Data System (ADS)

    Kraus, Michal; Juhásová Šenitková, Ingrid

    2017-10-01

    Building environmental audit and the assessment of indoor air quality (IAQ) in typical residential buildings are necessary to ensure users' health and well-being. The paper deals with the concentrations of indoor dust particles (PM10) in the context of the hygrothermal microclimate of the indoor environment. The indoor temperature, relative humidity and air movement are the basic significant factors determining the PM10 concentration [μg/m³]. The experimental measurements in this contribution show the impact of indoor physical parameters on the particulate matter mass concentration. The occurrence of dust particles is typical for almost two-thirds of building interiors. Other indoor environment parameters, such as the air change rate, the volume of the room, the roughness and porosity of the building material surfaces, static electricity, light ions and others, were kept constant and are not taken into account in this study. The mass concentration of PM10 was measured during the summer season in an apartment of a residential prefabricated building. The values of global temperature [°C] and relative humidity of the indoor air [%] were also monitored. The quantity of particulate matter was determined gravimetrically by weighing according to CSN EN 12 341 (2014). The obtained results show that differences in indoor temperature do not have a significant effect on the PM10 concentration, whereas differences in relative humidity do affect the dust particle concentration. Higher levels of indoor particulates are observed at low values of relative humidity: a decrease in relative air humidity of about 10% increased the PM10 concentration by about 10 μg/m³. The hygienic limit value for the PM10 concentration was not exceeded at any point of the experimental measurements.

  17. Experimental investigation of the strength and failure behavior of layered sandstone under uniaxial compression and Brazilian testing

    NASA Astrophysics Data System (ADS)

    Yin, Peng-Fei; Yang, Sheng-Qi

    2018-05-01

    As a typical inherently anisotropic rock, layered sandstones can differ from each other in several aspects, including grain size, type of material, type of cementation, and degree of compaction. An experimental study is essential to obtain convincing evidence characterizing the mechanical behavior of such rocks. In this paper, the mechanical behavior of a layered sandstone from Xuzhou, China, is investigated under uniaxial compression and Brazilian test conditions. The loading tests are conducted on 7 sets of bedding inclinations, defined as the angle between the bedding plane and the horizontal direction. The uniaxial compression strength (UCS) and elastic modulus values show an undulatory variation as the bedding inclination increases, with a decreasing overall trend. The BTS value decreases with the bedding inclination and its overall trend is approximately linear. The 3D digital high-speed camera images reveal that the failure and fracture of a specimen are related to the surface deformation. Layered sandstone tested under uniaxial compression does not show a typical failure mode, although shear slip along the bedding plane occurs at high bedding inclinations. Strain gauge readings during the Brazilian tests indicate that the normal stress on the bedding plane transforms from compression to tension as the bedding inclination increases. The stress parallel to the bedding plane in the rock material transforms from tension to compression and agrees well with the fracture patterns; "central fractures" occur at bedding inclinations of 0°-75°, "layer activation" occurs at high bedding inclinations of 75°-90°, and a combination of the two occurs at 75°.

  18. The effective temperature of Peptide ions dissociated by sustained off-resonance irradiation collisional activation in fourier transform mass spectrometry.

    PubMed

    Schnier, P D; Jurchen, J C; Williams, E R

    1999-01-28

    A method for determining the internal energy of biomolecule ions activated by collisions is demonstrated. The dissociation kinetics of protonated leucine enkephalin and doubly protonated bradykinin were measured using sustained off-resonance irradiation (SORI) collisionally activated dissociation (CAD) in a Fourier transform mass spectrometer. Dissociation rate constants are obtained from these kinetic data. In combination with Arrhenius parameters measured with blackbody infrared radiative dissociation, the "effective" temperatures of these ions are obtained. Effects of excitation voltage and frequency and the ion cell pressure were investigated. With typical SORI-CAD experimental conditions, the effective temperatures of these peptide ions range between 200 and 400 degrees C. Higher temperatures can be easily obtained for ions that require more internal energy to dissociate. The effective temperatures of both protonated leucine enkephalin and doubly protonated bradykinin measured with the same experimental conditions are similar. Effective temperatures for protonated leucine enkephalin can also be obtained from the branching ratio of the b(4) and (M + H - H(2)O)(+) pathways. Values obtained from this method are in good agreement with those obtained from the overall dissociation rate constants. Protonated leucine enkephalin is an excellent "thermometer" ion and should be well suited to establishing effective temperatures of ions activated by other dissociation techniques, such as infrared photodissociation, as well as ionization methods, such as matrix assisted laser desorption/ionization.

  19. The Effective Temperature of Peptide Ions Dissociated by Sustained Off-Resonance Irradiation Collisional Activation in Fourier Transform Mass Spectrometry

    PubMed Central

    Schnier, Paul D.; Jurchen, John C.; Williams, Evan R.

    2005-01-01

    A method for determining the internal energy of biomolecule ions activated by collisions is demonstrated. The dissociation kinetics of protonated leucine enkephalin and doubly protonated bradykinin were measured using sustained off-resonance irradiation (SORI) collisionally activated dissociation (CAD) in a Fourier transform mass spectrometer. Dissociation rate constants are obtained from these kinetic data. In combination with Arrhenius parameters measured with blackbody infrared radiative dissociation, the “effective” temperatures of these ions are obtained. Effects of excitation voltage and frequency and the ion cell pressure were investigated. With typical SORI–CAD experimental conditions, the effective temperatures of these peptide ions range between 200 and 400 °C. Higher temperatures can be easily obtained for ions that require more internal energy to dissociate. The effective temperatures of both protonated leucine enkephalin and doubly protonated bradykinin measured with the same experimental conditions are similar. Effective temperatures for protonated leucine enkephalin can also be obtained from the branching ratio of the b4 and (M + H − H2O)+ pathways. Values obtained from this method are in good agreement with those obtained from the overall dissociation rate constants. Protonated leucine enkephalin is an excellent “thermometer” ion and should be well suited to establishing effective temperatures of ions activated by other dissociation techniques, such as infrared photodissociation, as well as ionization methods, such as matrix assisted laser desorption/ionization. PMID:16614752

  20. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    PubMed

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. MRI scans of the lower limbs were obtained from 21 individuals with cerebral palsy (14.7 ± 3 years, 17 male) and 23 typically developing individuals (16.8 ± 3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated from six muscles of the left lower limb. Analysis of Covariance demonstrated that the relationship between the length*cross-sectional area and volume was not significantly different depending on the subject group. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R2 values between 0.955 and 0.988) with low standard errors of the estimate of 4.8 to 8.9%. This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy by a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
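
    The estimation idea described here amounts to regressing measured volume on the product of anatomical cross-sectional area (ACSA) and length. A minimal sketch is given below; the numbers are made-up illustrative values, not the study's data.

        import numpy as np

        # Minimal sketch: regress measured muscle volume on ACSA * length.
        # Arrays are illustrative placeholders, not the study's measurements.
        acsa = np.array([12.0, 15.5, 9.8, 20.1, 17.3])          # cm^2
        length = np.array([30.0, 28.5, 25.0, 33.2, 31.0])       # cm
        volume = np.array([215.0, 265.0, 150.0, 400.0, 330.0])  # cm^3 (e.g. from MRI)

        x = acsa * length
        slope, intercept = np.polyfit(x, volume, 1)             # linear fit V ~ a*(ACSA*L) + b
        pred = slope * x + intercept
        ss_res = np.sum((volume - pred) ** 2)
        ss_tot = np.sum((volume - volume.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"V ~ {slope:.3f}*(ACSA*L) + {intercept:.1f},  R^2 = {r2:.3f}")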

  1. Integrating SANS and fluid-invasion methods to characterize pore structure of typical American shale oil reservoirs.

    PubMed

    Zhao, Jianhua; Jin, Zhijun; Hu, Qinhong; Jin, Zhenkui; Barber, Troy J; Zhang, Yuxiang; Bleuel, Markus

    2017-11-13

    An integration of small-angle neutron scattering (SANS), low-pressure N2 physisorption (LPNP), and mercury injection capillary pressure (MICP) methods was employed to study the pore structure of four oil shale samples from the leading Niobrara, Wolfcamp, Bakken, and Utica Formations in the USA. Porosity values obtained from SANS are higher than those from the two fluid-invasion methods, due to the ability of neutrons to probe pore spaces inaccessible to N2 and mercury. However, the SANS and LPNP methods exhibit a similar pore-size distribution, and both methods (which measure total pore volume) yield porosity and pore-size distributions that differ from those obtained by the MICP method (which quantifies pore throats). Multi-scale (five pore-diameter intervals) porosity inaccessible to N2 was determined using SANS and LPNP data. Overall, a large value of inaccessible porosity occurs at pore diameters <10 nm, which we attribute to the low connectivity of organic matter-hosted and clay-associated pores in these shales. While each method probes a unique aspect of the complex pore structure of shale, the discrepancy between pore-structure results from different methods is explained with respect to their differences in measurable ranges of pore diameter, pore space, pore type, sample size and associated pore connectivity, as well as theoretical basis and interpretation.

  2. Adsorption of TCDD molecule onto CNTs and BNNTs: Ab initio van der Waals density-functional study

    NASA Astrophysics Data System (ADS)

    Darvish Ganji, M.; Alinezhad, H.; Soleymani, E.; Tajbakhsh, M.

    2015-03-01

    2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is one of the most dangerous compounds that contaminate the environment, and hence its removal is crucial for human health. In this work, we have investigated the interaction of TCDD with boron nitride nanotubes (BNNTs) and carbon nanotubes (CNTs) by using density functional theory (DFT) calculations. Our first-principles results have been validated against experimental data and other theoretical values for similar systems. The adsorption energies for the TCDD molecule on the BNNTs and CNTs are calculated. It was found that the TCDD adsorption ability of BNNTs is slightly stronger than that of CNTs and that the TCDD molecule prefers to be adsorbed on BNNTs with its molecular axis parallel to the tube axis. The results obtained indicate that TCDD is weakly bound to the outer surface of all the considered nanotubes, and the obtained adsorption energy values and binding distances are typical of physisorption. We also evaluated the influence of curvature and introduced defects on the TCDD adsorption ability of BNNTs. Furthermore, we have analyzed the electronic structure and charge population for the energetically most favorable complexes, and the results indicate that no significant hybridization between the respective orbitals of the two entities occurs.

  3. Timber value—a matter of choice: a study of how end use assumptions affect timber values.

    Treesearch

    John H. Beuter

    1971-01-01

    The relationship between estimated timber values and actual timber prices is discussed. Timber values are related to how, where, and when the timber is used. An analysis demonstrates the relative values of a typical Douglas-fir stand under assumptions about timber use.

  4. Sci-Thur PM: YIS - 07: Monte Carlo simulations to obtain several parameters required for electron beam dosimetry.

    PubMed

    Muir, B; Rogers, D; McEwen, M

    2012-07-01

    When current dosimetry protocols were written, electron beam data were limited and had uncertainties that were unacceptable for reference dosimetry. Protocols for high-energy reference dosimetry are currently being updated leading to considerable interest in accurate electron beam data. To this end, Monte Carlo simulations using the EGSnrc user-code egs_chamber are performed to extract relevant data for reference beam dosimetry. Calculations of the absorbed dose to water and the absorbed dose to the gas in realistic ion chamber models are performed as a function of depth in water for cobalt-60 and high-energy electron beams between 4 and 22 MeV. These calculations are used to extract several of the parameters required for electron beam dosimetry - the beam quality specifier, R50, beam quality conversion factors, kQ and kR50, the electron quality conversion factor, k'R50, the photon-electron conversion factor, kecal, and ion chamber perturbation factors, PQ. The method used has the advantage that many important parameters can be extracted as a function of depth instead of determination at only the reference depth as has typically been done. Results obtained here are in good agreement with measured and other calculated results. The photon-electron conversion factors obtained for a Farmer-type NE2571 and plane-parallel PTW Roos, IBA NACP-02 and Exradin A11 chambers are 0.903, 0.896, 0.894 and 0.906, respectively. These typically differ by less than 0.7% from the contentious TG-51 values but have much smaller systematic uncertainties. These results are valuable for reference dosimetry of high-energy electron beams. © 2012 American Association of Physicists in Medicine.
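
    In Monte Carlo work of this kind, a factor such as kecal is typically formed as a ratio of calculated dose-to-water to dose-to-chamber-gas in the two beam qualities. The sketch below assumes that convention and uses placeholder dose values, not results from this paper.

        # Minimal sketch (assumed convention, not the authors' code): form a
        # photon-electron conversion factor as a ratio of (dose to water / dose to
        # chamber gas) in the electron reference quality and in cobalt-60.
        def conversion_factor(dw_q, dgas_q, dw_co, dgas_co):
            """k = [Dw/Dgas]_Q / [Dw/Dgas]_Co60 under the assumed definition."""
            return (dw_q / dgas_q) / (dw_co / dgas_co)

        # Placeholder Monte Carlo doses (arbitrary units), chosen so that k ~ 0.90,
        # the order of the NE2571 value quoted in the abstract.
        k_ecal = conversion_factor(dw_q=1.000, dgas_q=0.890,
                                   dw_co=1.000, dgas_co=0.804)
        print(f"k_ecal ~ {k_ecal:.3f}")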

  5. Chemical abundances of the PRGs UGC 7576 and UGC 9796. I. Testing the formation scenario

    NASA Astrophysics Data System (ADS)

    Spavone, M.; Iodice, E.; Arnaboldi, M.; Longo, G.; Gerhard, O.

    2011-07-01

    Context. The study of both the chemical abundances of HII regions in polar ring galaxies and their implications for the evolutionary scenario of these systems has been a step forward both in tracing the formation history of the galaxy and giving hints toward the mechanisms at work during the building of a disk by a cold accretion process. It is now important to establish whether such results are typical of the class of polar disk galaxies as a whole. Aims: The present work aims at checking the cold accretion of gas through a "cosmic filament" as a possible scenario for the formation of the polar structures in UGC 7576 and UGC 9796. If these form by cold accretion, we expect the HII region abundances and metallicities to be lower than those of same-luminosity spiral disks, with values of Z ~ 1/10 Z⊙, as predicted by cosmological simulations. Methods: We used deep long-slit spectra, obtained with DOLORES@TNG at optical wavelengths, of the brightest HII regions associated with the polar structures to derive their chemical abundances and star formation rate. We used empirical methods, based on the intensities of easily observable lines, to derive the oxygen abundance 12 + log (O/H) of both galaxies. Such values are compared with those typical of different morphological galaxy types of comparable luminosity. Results: The average metallicity values for UGC 7576 and UGC 9796 are Z = 0.4 Z⊙ and Z = 0.1 Z⊙, respectively. Both values are lower than those measured for ordinary spirals of similar luminosity, and UGC 7576 presents no metallicity gradient along the polar structure. These data, together with other observed features available for the two PRGs in previous works, are compared with the predictions of simulations of tidal accretion, cold accretion, and merging to disentangle these scenarios.

  6. Continuum Foreground Polarization and Na I Absorption in Type Ia SNe

    NASA Astrophysics Data System (ADS)

    Zelaya, P.; Clocchiatti, A.; Baade, D.; Höflich, P.; Maund, J.; Patat, F.; Quinn, J. R.; Reilly, E.; Wang, L.; Wheeler, J. C.; Förster, F.; González-Gaitán, S.

    2017-02-01

    We present a study of the continuum polarization over the 400-600 nm range of 19 SNe Ia obtained with FORS at the VLT. We separate them into those that show Na I D lines at the velocity of their hosts and those that do not. Continuum polarization of the sodium sample near maximum light displays a broad range of values, from extremely polarized cases like SN 2006X to almost unpolarized ones like SN 2011ae. The non-sodium sample shows, typically, smaller polarization values. The continuum polarization of the sodium sample in the 400-600 nm range is linear with wavelength and can be characterized by the mean polarization (Pmean). Its values span a wide range and show a linear correlation with color, color excess, and extinction in the visual band. Larger dispersion correlations were found with the equivalent width of the Na I D and Ca II H and K lines, and also a noisy relation between Pmean and RV, the ratio of total to selective extinction. Redder SNe show stronger continuum polarization, with larger color excesses and extinctions. We also confirm that high continuum polarization is associated with small values of RV. The correlation between extinction and polarization—and polarization angles—suggests that the dominant fraction of dust polarization is imprinted in interstellar regions of the host galaxies. We show that Na I D lines from foreground matter in the SN host are usually associated with non-galactic ISM, challenging the typical assumptions in foreground interstellar polarization models. Based on observations made with ESO Telescopes at the Paranal Observatory under programs 068.D-0571(A), 069.D-0438(A), 070.D-0111(A), 076.D-0178(A), 079.D-0090(A), 080.D-0108(A), 081.D-0558(A), 085.D-0731(A), and 086.D-0262(A). Also based on observations collected at the German-Spanish Astronomical Center, Calar Alto (Spain).

  7. A safety analysis of food waste-derived animal feeds from three typical conversion techniques in China.

    PubMed

    Chen, Ting; Jin, Yiying; Shen, Dongsheng

    2015-11-01

    This study was based on the food waste to animal feed demonstration projects in China. A safety analysis of animal feeds from three typical treatment processes (i.e., fermentation, heat treatment, and coupled hydrothermal treatment and fermentation) was presented. The following factors are considered in this study: nutritive values characterized by organoleptic properties and general nutritional indices; the presence of bovine- and sheep-derived materials; microbiological indices for Salmonella, total coliform (TC), total aerobic plate counts (TAC), molds and yeast (MY), Staphylococcus aureus (SA), and Listeria; chemical contaminant indices for hazardous trace elements such as Cr, Cd, and As; and nitrite and organic contaminants such as aflatoxin B1 (AFB1) and hexachlorocyclohexane (HCH). The present study reveals that the feeds from all three conversion processes showed balanced nutritional content and retained a certain feed value. The microbiological indices and the chemical contaminant indices for HCH, dichlorodiphenyltrichloroethane (DDT), nitrite, and mercury all met pertinent feed standards; however, the presence of bovine- and sheep-derived materials and a few chemical contaminants such as Pb were close to or might exceed the legally permitted values for animal feeding. From the viewpoint of treatment techniques, all feeds retained part of the nutritional value of the food waste after the conversion processes. Controlled heat treatment can guarantee the inactivation of bacterial pathogens, but none of the three techniques can guarantee the absence of cattle- and sheep-derived materials and acceptable levels of certain contaminants. The results obtained in this research and the feedstuffs legislation related to animal feed indicated that food waste-derived feed could be considered an adequate alternative for use in animal diets, although feeding practice should be adapted to the quality of the products, for example by restricting use for ruminants and by recycling the material into formula feeds. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features

    PubMed Central

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A.

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting. PMID:26274332

  9. An Eye-Tracking Study of Multiple Feature Value Category Structure Learning: The Role of Unique Features.

    PubMed

    Liu, Zhiya; Song, Xiaohong; Seger, Carol A

    2015-01-01

    We examined whether the degree to which a feature is uniquely characteristic of a category can affect categorization above and beyond the typicality of the feature. We developed a multiple feature value category structure with different dimensions within which feature uniqueness and typicality could be manipulated independently. Using eye tracking, we found that the highest attentional weighting (operationalized as number of fixations, mean fixation time, and the first fixation of the trial) was given to a dimension that included a feature that was both unique and highly typical of the category. Dimensions that included features that were highly typical but not unique, or were unique but not highly typical, received less attention. A dimension with neither a unique nor a highly typical feature received least attention. On the basis of these results we hypothesized that subjects categorized via a rule learning procedure in which they performed an ordered evaluation of dimensions, beginning with unique and strongly typical dimensions, and in which earlier dimensions received higher weighting in the decision. This hypothesis accounted for performance on transfer stimuli better than simple implementations of two other common theories of category learning, exemplar models and prototype models, in which all dimensions were evaluated in parallel and received equal weighting.

  10. Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load

    PubMed Central

    Burke, C. J.; Seifritz, E.; Tobler, P. N.

    2017-01-01

    Abstract Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain’s capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations. PMID:28462394
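
    A divisive normalization model of the kind compared here is commonly written in the generic form below; the symbols (v for value inputs, w for context weights, σ for a semisaturation constant, β scaling contextual suppression) are mine and do not reproduce the study's fitted parameterization.

        \[
          \hat{v}_i \;=\; \frac{v_i}{\sigma + \beta \sum_{j} w_j\, v_j}
        \]

    In such a form, weaker suppression from the contextual (background) terms in the denominator flattens the dependence of the normalized response on context, which is qualitatively the effect the abstract attributes to increased memory maintenance demand.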

  11. Dynamical Typicality Approach to Eigenstate Thermalization

    NASA Astrophysics Data System (ADS)

    Reimann, Peter

    2018-06-01

    We consider the set of all initial states within a microcanonical energy shell of an isolated many-body quantum system, which exhibit an arbitrary but fixed nonequilibrium expectation value for some given observable A . On the condition that this set is not too small, it is shown by means of a dynamical typicality approach that most such initial states exhibit thermalization if and only if A satisfies the so-called weak eigenstate thermalization hypothesis (wETH). Here, thermalization means that the expectation value of A spends most of its time close to the microcanonical value after initial transients have died out. The wETH means that, within the energy shell, most eigenstates of the pertinent system Hamiltonian exhibit very similar expectation values of A .

  12. Modelling dental implant extraction by pullout and torque procedures.

    PubMed

    Rittel, D; Dorogoy, A; Shemtov-Yona, K

    2017-07-01

    Dental implant extraction, achieved either by applying torque or pullout force, is used to estimate the bone-implant interfacial strength. A detailed description of the mechanical and physical aspects of the extraction process in the literature is still missing. This paper presents 3D nonlinear dynamic finite element simulations of a commercial implant extraction process from the mandible bone. Emphasis is put on the typical load-displacement and torque-angle relationships for various types of cortical and trabecular bone strengths. The simulations also study the influence of the osseointegration level on those relationships. This is done by simulating implant extraction right after insertion when interfacial frictional contact exists between the implant and bone, and long after insertion, assuming that the implant is fully bonded to the bone. The model does not include a separate representation and model of the interfacial layer for which available data is limited. The obtained relationships show that the higher the strength of the trabecular bone the higher the peak extraction force, while for application of torque, it is the cortical bone which might dictate the peak torque value. Information on the relative strength contrast of the cortical and trabecular components, as well as the progressive nature of the damage evolution, can be revealed from the obtained relations. It is shown that full osseointegration might multiply the peak and average load values by a factor of 3-12 although the calculated work of extraction varies only by a factor of 1.5. From a quantitative point of view, it is suggested that, as an alternative to reporting peak load or torque values, an average value derived from the extraction work be used to better characterize the bone-implant interfacial strength. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Investigation of in-house superconducting radio-frequency 9-cell cavity made of large grain niobium at KEK

    NASA Astrophysics Data System (ADS)

    Dohmae, Takeshi; Umemori, Kensei; Yamanaka, Masashi; Watanabe, Yuichi; Inoue, Hitoshi

    2017-12-01

    The first in-house, 9-cell, superconducting radio-frequency cavity made of large grain Nb was fabricated at KEK. Some characteristic techniques were employed for the fabrication that were not used for fine grain (FG) Nb. Even though a penetrated hole was created during electron beam welding, it was successfully repaired and did not affect the cavity performance. The completed cavity then underwent vertical tests (VTs) via several surface treatment processes. A defect that caused quenches was found after a VT at 25 mm from the equator where the typical local grinding machine developed at KEK could not be utilized. A new local grinding machine using a 3D printer was thus developed for the first time, and it completely removed this defect. Finally, the cavity achieved a maximum Q0 value of 3.8 × 10^10 and accelerating gradient of 38 MV/m. The obtained Q0 value is about 1.5 times higher than that for the KEK in-house FG cavity.

  14. Measurement of radiation exposure of astronauts by radiochemical techniques

    NASA Technical Reports Server (NTRS)

    Brodzinski, R. L.

    1972-01-01

    Only two of the fecal specimens collected inflight during the Apollo 15 mission were returned for analysis. Difficulty in obtaining reasonably accurate radiation dose estimates based on the cosmogenic radionuclide content of the specimens was encountered due to the limited sampling. The concentrations of Na-22, K-40, Cr-51, Fe-59, and Cs-137 are reported. The concentrations of 24 major, minor, and trace elements in these two specimens were determined. Most concentrations are typical of those observed previously. Major exceptions are extremely low values for selenium and extraordinarily high values for rare earth elements. The net Po-210 activities in the Apollo 11 and 12 Solar Wind Composition foils and in the Apollo 8 and 12 spacecraft reflective coatings due to lunar exposure have been determined. Equilibrium concentrations of 0.082 ± 0.012 disintegrations/(cm2 s) of Rn-222 in the lunar atmosphere and 0.0238 ± 0.0035 disintegrations/(cm2 s) of Po-210 on the lunar surface have been calculated for Oceanus Procellarum.

  15. Fabrication and thermoelectric properties of Ca-Co-O ceramics with negative Seebeck coefficient

    NASA Astrophysics Data System (ADS)

    Gong, Chunlin; Shi, Zongmo; Zhang, Yi; Chen, Yongsheng; Hu, Jiaxin; Gou, Jianjun; Qin, Mengjie; Gao, Feng

    2018-06-01

    Ca-Co-O ceramics are typically p-type thermoelectric materials and possess a positive Seebeck coefficient. In this work, n-type Ca-Co-O ceramics with negative Seebeck coefficients were fabricated by sintering and annealing in a reducing atmosphere. The microstructures and thermoelectric properties of the ceramics were investigated. The results show that the carrier concentration and the carrier mobility dramatically increase after the samples are annealed in the reducing atmosphere. The electrical resistivity increases from 0.0663 mΩ·cm to 0.2974 mΩ·cm, while the negative Seebeck coefficient varies from -24.9 μV/K to -56.3 μV/K as the temperature increases from 323 K to 823 K, and the maximum power factor (PF, 1.536 mW/m·K2) is obtained at 623 K. The samples show n-type thermoelectric behavior with large PF values and a ZT value of 0.39 at 823 K. These unusual results pave a new way for studying Ca-Co-O thermoelectric ceramics.
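
    The quoted power factor and ZT follow from the standard definitions PF = S²/ρ and ZT = S²T/(ρκ). The short check below uses the 823 K endpoint values from the abstract; the thermal conductivity is not reported there, so κ is left as an input.

        # Power factor from PF = S^2 / rho; ZT = PF * T / kappa.
        # S and rho are the 823 K endpoint values quoted in the abstract; kappa is
        # NOT given there and is only a placeholder argument here.
        def power_factor(seebeck_V_per_K, resistivity_ohm_m):
            return seebeck_V_per_K ** 2 / resistivity_ohm_m          # W/(m K^2)

        def zt(seebeck_V_per_K, resistivity_ohm_m, kappa_W_per_mK, temperature_K):
            return power_factor(seebeck_V_per_K, resistivity_ohm_m) * temperature_K / kappa_W_per_mK

        S = -56.3e-6             # V/K at 823 K (from the abstract)
        rho = 0.2974e-3 * 1e-2   # 0.2974 mOhm*cm converted to Ohm*m
        pf = power_factor(S, rho)
        print(f"PF(823 K) ~ {pf*1e3:.2f} mW/(m K^2)")  # ~1.1, same order as the reported maximum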

  16. Precise calibration of spatial phase response nonuniformity arising in liquid crystal on silicon.

    PubMed

    Xu, Jingquan; Qin, SiYi; Liu, Chen; Fu, Songnian; Liu, Deming

    2018-06-15

    In order to calibrate the spatial phase response nonuniformity of liquid crystal on silicon (LCoS), we propose to use a Twyman-Green interferometer to characterize the wavefront distortion caused by the inherent curvature of the device. During the characterization, both the residual carrier frequency introduced by the Fourier transform evaluation method and the lens aberration are error sources. For the tilt introduced by the residual carrier frequency, a least-mean-square fit is used to obtain the tilted phase error. Meanwhile, we use Zernike polynomial fitting based on plane mirror calibration to mitigate the lens aberration. For a typical LCoS with 1×12,288 pixels after calibration, the peak-to-valley value of the inherent wavefront distortion is approximately 0.25λ at 1550 nm, leading to a half-suppression of wavefront distortion. All efforts can suppress the root-mean-square value of the inherent wavefront distortion to approximately λ/34.
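
    The tilt term introduced by the residual carrier frequency can be removed with an ordinary least-squares fit of a linear phase ramp, in the spirit described here. Below is a minimal 1-D sketch; the pixel count and phase values are synthetic, not measured LCoS data.

        import numpy as np

        # Minimal sketch of tilt removal by least-squares fitting: fit a linear ramp
        # a*x + b to the unwrapped phase and subtract it, leaving the curvature term.
        x = np.arange(1024)                                  # pixel index (assumed)
        curvature = 2e-6 * (x - x.mean()) ** 2               # inherent wavefront term (rad)
        tilt = 0.01 * x + 0.3                                # residual-carrier tilt (rad)
        phase = curvature + tilt + 0.01 * np.random.randn(x.size)

        A = np.vstack([x, np.ones_like(x)]).T                # design matrix for a*x + b
        (a, b), *_ = np.linalg.lstsq(A, phase, rcond=None)   # least-mean-square fit
        detilted = phase - (a * x + b)
        print(f"fitted tilt: a = {a:.4f} rad/pixel, b = {b:.3f} rad")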

  17. Experiments in dilution jet mixing

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.; Berenfeld, A.

    1983-01-01

    Experimental results are given on the mixing of a single row of jets with an isothermal mainstream in a straight duct, to include flow and geometric variations typical of combustion chambers in gas turbine engines. The principal conclusions reached from these experiments were: at constant momentum ratio, variations in density ratio have only a second-order effect on the profiles; a first-order approximation to the mixing of jets with a variable temperature mainstream can be obtained by superimposing the jets-in-an isothermal-crossflow and mainstream profiles; flow area convergence, especially injection-wall convergence, significantly improves the mixing; for opposed rows of jets, with the orifice centerlines in-line, the optimum ratio of orifice spacing to duct height is one half of the optimum value for single side injection at the same momentum ratio; and for opposed rows of jets, with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is twice the optimum value for single side injection at the same momentum ratio.

  18. Development of Porosity Measurement Method in Shale Gas Reservoir Rock

    NASA Astrophysics Data System (ADS)

    Siswandani, Alita; Nurhandoko, BagusEndar B.

    2016-08-01

    Pore scales have an impact on transport mechanisms in shale gas reservoirs. In this research, a digital helium porosity meter is used for porosity measurement under realistic conditions. Accordingly, it is necessary to obtain a good approximation for the gas-filled porosity. Shale typically has an effective porosity that changes as a function of time. Effective porosity values for three different shale rocks are analyzed by the proposed measurement. We develop a new measurement method for characterizing porosity phenomena in shale gas as a function of time by measuring porosity over a range of minutes using the digital helium porosity meter. The porosities of shale rock measured in this experiment are the free-gas and adsorbed-gas porosities. The pressure change over time shows that shale porosity comprises at least two types: macro-scale porosity (fracture porosity) and fine-scale porosity (nano-scale porosity). We present the estimation of effective porosity values by considering the Boyle-Gay-Lussac approximation and the van der Waals approximation.
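
    The Boyle's-law step behind a helium porosity meter can be written down compactly. The sketch below assumes the usual two-chamber gas-expansion arrangement, with made-up volumes and pressures rather than the values measured in this work.

        # Minimal sketch of the Boyle's-law step used in helium porosimetry (assumed
        # two-chamber expansion; all numbers are illustrative, not measured values).
        def grain_volume(v_ref, v_cell, p1, p2):
            """Isothermal expansion: p1*V_ref = p2*(V_ref + V_cell - V_grain)."""
            return v_cell + v_ref - p1 * v_ref / p2

        v_ref, v_cell = 50.0, 60.0     # cm^3, reference and sample-cell volumes (assumed)
        p1, p2 = 200.0, 173.2          # kPa before/after expansion (assumed)
        bulk_volume = 55.0             # cm^3 of the shale plug (assumed)
        v_grain = grain_volume(v_ref, v_cell, p1, p2)
        porosity = (bulk_volume - v_grain) / bulk_volume
        print(f"grain volume ~ {v_grain:.1f} cm^3, helium porosity ~ {porosity*100:.1f} %")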

  19. A New Global Core Plasma Model of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Comfort, R. H.; Craven, P. D.

    2014-01-01

    The Global Core Plasma Model (GCPM) is the first empirical model for thermal inner magnetospheric plasma designed to integrate previous models and observations into a continuous in value and gradient representation of typical total densities. New information about the plasmasphere, in particular, make possible significant improvement. The IMAGE Mission Radio Plasma Imager (RPI) has obtained the first observations of total plasma densities along magnetic field lines in the plasmasphere and polar cap. Dynamics Explorer 1 Retarding Ion Mass Spectrometer (RIMS) has provided densities in temperatures in the plasmasphere for 5 ion species. These and other works enable a new more detailed empirical model of thermal in the inner magnetosphere that will be presented. Specifically shown here are the inner-plasmasphere RIMS measurements, radial fits to densities and temperatures for H(+), He(+), He(++), O(+), and O(+) and the error associated with these initial simple fits. Also shown are more subtle dependencies on the f10.7 P-value (see Richards et al. [1994]).

  20. A 3D joint interpretation of magnetotelluric and seismic tomographic models: The case of the volcanic island of Tenerife

    NASA Astrophysics Data System (ADS)

    García-Yeguas, Araceli; Ledo, Juanjo; Piña-Varas, Perla; Prudencio, Janire; Queralt, Pilar; Marcuello, Alex; Ibañez, Jesús M.; Benjumea, Beatriz; Sánchez-Alzola, Alberto; Pérez, Nemesio

    2017-12-01

    In this work we have carried out a 3D joint interpretation of magnetotelluric and seismic tomography models. Previously, we have described different techniques used to infer the inner structure of the Earth. We have focused on volcanic regions, specifically on the Tenerife Island volcano (Canary Islands, Spain). In this area, magnetotelluric and seismic tomography studies have been done separately. The novelty of the present work is the combination of both techniques on Tenerife Island. For this aim we have applied the fuzzy clusters method at different depths, obtaining several clusters or classes. From the results, a geothermal system has been inferred below Teide volcano, in the center of Tenerife Island. A hydrothermally altered, fluid-filled edifice is situated below Teide, ending at 600 m below sea level. From this depth the resistivity and VP values increase downwards. We also observe a clay-cap structure, a typical feature of geothermal systems associated with low resistivity and low VP values.
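
    The clustering step amounts to classifying co-located (resistivity, VP) samples. Below is a minimal fuzzy c-means sketch on synthetic (log-resistivity, VP) pairs; the data, cluster count and fuzziness exponent are assumptions for illustration, not the Tenerife models.

        import numpy as np

        # Minimal fuzzy c-means sketch on co-located (log10 resistivity, Vp) samples
        # from one depth slice.  Synthetic data; c and m are assumed.
        rng = np.random.default_rng(0)
        low_res = np.column_stack([rng.normal(0.5, 0.2, 100), rng.normal(3.5, 0.2, 100)])
        high_vp = np.column_stack([rng.normal(2.5, 0.2, 100), rng.normal(5.5, 0.2, 100)])
        X = np.vstack([low_res, high_vp])          # columns: log10(ohm*m), Vp (km/s)

        def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
            n = X.shape[0]
            U = rng.random((n, c))
            U /= U.sum(axis=1, keepdims=True)      # membership matrix, rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
            return centers, U

        centers, U = fuzzy_cmeans(X)
        print("cluster centres (log10 rho, Vp):\n", np.round(centers, 2))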

  1. Determination of meteor parameters using laboratory simulation techniques

    NASA Technical Reports Server (NTRS)

    Friichtenicht, J. F.; Becker, D. G.

    1973-01-01

    Atmospheric entry of meteoritic bodies is conveniently and accurately simulated in the laboratory by techniques which employ the charging and electrostatic acceleration of macroscopic solid particles. Velocities from below 10 to above 50 km/s are achieved for particle materials which are elemental meteoroid constituents or mineral compounds with characteristics similar to those of meteoritic stone. The velocity, mass, and kinetic energy of each particle are measured nondestructively, after which the particle enters a target gas region. Because of the small particle size, free molecule flow is obtained. At typical operating pressures (0.1 to 0.5 torr), complete particle ablation occurs over distances of 25 to 50 cm; the spatial extent of the atmospheric interaction phenomena is correspondingly small. Procedures have been developed for measuring the spectrum of light from luminous trails and the values of fundamental quantities defined in meteor theory. It is shown that laboratory values for iron are in excellent agreement with those for 9 to 11 km/s artificial meteors produced by rocket injection of iron bodies into the atmosphere.

  2. Diffusion Weighted MRI and MRS to Differentiate Radiation Necrosis and Recurrent Disease in Gliomas

    NASA Astrophysics Data System (ADS)

    Ewell, Lars

    2006-03-01

    A difficulty encountered in the diagnosis of patients with gliomas is the differentiation between recurrent disease and Radiation Induced Necrosis (RIN). Both can appear as ‘enhancing lesions’ on a typical T2 weighted MRI scan. Magnetic Resonance Spectroscopy (MRS) and Diffusion Weighted MRI (DWMRI) have the potential to be helpful regarding this differentiation. MRS has the ability to measure the concentration of brain metabolites, such as Choline, Creatine and N-Acetyl Aspartate, the ratios of which have been shown to discriminate between RIN and recurrent disease. DWMRI has been linked via a rise in the Apparent Diffusion Coefficient (ADC) to successful treatment of disease. Using both of these complementary non-invasive imaging modalities, we intend to initiate an imaging protocol whereby we will study how best to combine metabolite ratios and ADC values to obtain the most useful information in the least amount of scan time. We will look for correlations over time between ADC values and MRS among different-sized voxels.

  3. A study on thermal properties of biodegradable polymers using photothermal methods

    NASA Astrophysics Data System (ADS)

    Siqueira, A. P. L.; Poley, L. H.; Sanchez, R.; da Silva, M. G.; Vargas, H.

    2005-06-01

    In this work, the use of photothermal techniques for the thermal characterization of biodegradable polymers of the polyhydroxyalkanoate (PHA) family is reported. This is a family of polymers produced by bacteria from renewable resources. It exhibits thermoplastic properties and can therefore be an alternative to engineering plastics, also being applied in packaging for the food and fruit industry. Thermal diffusivities were determined using the open photoacoustic cell (OPC) configuration. Specific heat capacity measurements were performed by monitoring the temperature of the samples under white-light illumination as a function of time. Typical values obtained for the thermal properties are in good agreement with those found in the literature for other polymers. Due to the incorporation of hydroxyvalerate in the monomer structure, the thermal diffusivity and thermal conductivity increase until reaching a saturation value, whereas the specific heat capacity decreases as the concentration of hydroxyvalerate (HV) increases. These results can be explained by the polymers' internal structure and allow new applications of these materials.
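
    The measured quantities are linked by the standard relation k = α·ρ·c_p between thermal conductivity, diffusivity, density and specific heat. A one-line check with placeholder polymer-like values (not the PHA data) is shown below.

        # k = alpha * rho * c_p ; placeholder polymer-like values, not the measured PHA data.
        alpha = 1.2e-7      # thermal diffusivity, m^2/s (assumed)
        rho = 1.2e3         # density, kg/m^3 (assumed)
        c_p = 1.4e3         # specific heat, J/(kg K) (assumed)
        k = alpha * rho * c_p
        print(f"thermal conductivity ~ {k:.3f} W/(m K)")   # ~0.2, typical of polymers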

  4. Soil-to-plant halogens transfer studies 2. Root uptake of radiochlorine by plants.

    PubMed

    Kashparov, V; Colle, C; Zvarich, S; Yoschenko, V; Levchuk, S; Lundin, S

    2005-01-01

    Long-term field experiments have been carried out in the Chernobyl exclusion zone in order to determine the parameters governing radiochlorine (36Cl) transfer to plants from four types of soil, namely, podzoluvisol, greyzem, and typical and meadow chernozem. Radiochlorine concentration ratios (CR) in radish roots (15±10), lettuce leaves (30±15), bean pods (15±11) and wheat seed (23±11) and straw (210±110) for fresh weight of plants were obtained. These values correlate well with stable chlorine values for the same plants. One year after injection, 36Cl reached a quasi-equilibrium with stable chlorine in the agricultural soils and its behavior in the soil-plant system mimicked the behavior of stable chlorine (this behavior was determined by soil moisture transport in the investigated soils). In the absence of intensive vertical migration, more than half of the 36Cl activity in the arable layer of soil passes into the radish, lettuce and the aboveground parts of wheat during a single vegetation period.

  5. Optimization of complex slater-type functions with analytic derivative methods for describing photoionization differential cross sections.

    PubMed

    Matsuzaki, Rei; Yabushita, Satoshi

    2017-05-05

    The complex basis function (CBF) method applied to various atomic and molecular photoionization problems can be interpreted as an L2 method to solve the driven-type (inhomogeneous) Schrödinger equation, whose driven term is the dipole operator times the initial-state wave function. However, efficient basis functions for representing the solution have not fully been studied. Moreover, the relation between their solution and that of the ordinary Schrödinger equation has been unclear. For these reasons, most previous applications have been limited to total cross sections. To examine the applicability of the CBF method to differential cross sections and asymmetry parameters, we show that the complex valued solution to the driven-type Schrödinger equation can be variationally obtained by optimizing the complex trial functions for the frequency dependent polarizability. In the test calculations made for the hydrogen photoionization problem with five or six complex Slater-type orbitals (cSTOs), their complex valued expansion coefficients and the orbital exponents have been optimized with the analytic derivative method. Both the real and imaginary parts of the solution have been obtained accurately in a wide region covering typical molecular regions. Their phase shifts and asymmetry parameters are successfully obtained by extrapolating the CBF solution from the inner matching region to the asymptotic region using the WKB method. The distribution of the optimized orbital exponents in the complex plane is explained based on the close connection between the CBF method and the driven-type equation method. The obtained information is essential for constructing appropriate basis sets in future molecular applications. © 2017 Wiley Periodicals, Inc.
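
    The driven-type (inhomogeneous) equation referred to here is commonly written in a form like the one below, with the first-order response then yielding the frequency-dependent polarizability; the notation is generic, and the sign and normalization conventions may differ from those used in the paper.

        \[
          \bigl(E_0 + \omega - \hat{H}\bigr)\,\Psi_1(\omega) = \hat{\mu}\,\Psi_0 ,
          \qquad
          \alpha(\omega) \propto \langle \Psi_0 \,|\, \hat{\mu} \,|\, \Psi_1(\omega) \rangle ,
        \]

    where the CBF expansion approximates Ψ1 with complex Slater-type orbitals whose coefficients and exponents are optimized variationally.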

  6. Near-integrable behaviour in a family of discretized rotations

    NASA Astrophysics Data System (ADS)

    Reeve-Black, Heather; Vivaldi, Franco

    2013-05-01

    We consider a one-parameter family of invertible maps of a two-dimensional lattice, obtained by discretizing the space of planar rotations. We let the angle of rotation approach π/2, and show that the limit of vanishing discretization is described by an integrable piecewise-smooth Hamiltonian flow, whereby the plane foliates into families of invariant polygons with an increasing number of sides. Considered as perturbations of the flow, the lattice maps assume a different character, described in terms of strip maps, a variant of those found in outer billiards of polygons. The perturbation introduces phenomena reminiscent of the Kolmogorov-Arnold-Moser scenario: a positive fraction of the unperturbed curves survives. We prove this for symmetric orbits, under a condition that allows us to obtain explicit values for their density, the latter being a rational number typically less than 1. This result allows us to conclude that the infimum of the density of all surviving curves—symmetric or not—is bounded away from zero.

  7. Long-term survey of lion-roar emissions inside the terrestrial magnetosheath obtained from the STAFF-SA measurements onboard the Cluster spacecraft

    NASA Astrophysics Data System (ADS)

    Pisa, D.; Krupar, V.; Kruparova, O.; Santolik, O.

    2017-12-01

    Intense whistler-mode emissions known as 'lion-roars' are often observed inside the terrestrial magnetosheath, where the solar wind plasma flow slows down and the local magnetic field increases ahead of a planetary magnetosphere. Plasma conditions in this transient region lead to electron temperature anisotropy, which can generate whistler-mode waves. The lion-roars are narrow-band emissions with typical frequencies between 0.1 and 0.5 Fce, where Fce is the electron cyclotron frequency. We present results of a long-term survey obtained by the Spatio Temporal Analysis Field Fluctuations - Spectral Analyzer (STAFF-SA) instruments on board the four Cluster spacecraft between 2001 and 2010. We have visually identified the time-frequency intervals with the intense lion-roar signature. Using the Singular Value Decomposition (SVD) method, we analyzed the wave propagation properties. We show the spatial, frequency and wave power distributions. Finally, the wave properties as a function of upstream solar wind conditions are discussed.

  8. Scene-based nonuniformity correction technique for infrared focal-plane arrays.

    PubMed

    Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong

    2009-04-20

    A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors; the algorithm consists of three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both the scene values and the detector parameters are unknown. Second, the estimated scene, along with its corresponding observed data obtained by the detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
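
    The line-fitting update described here can be sketched per detector as an ordinary least-squares fit of the raw readout against the estimated true scene, followed by inversion of the fitted line. The frame data below are synthetic, and the interframe scene-prediction step is not reproduced.

        import numpy as np

        # Minimal per-detector sketch of the line-fitting step: fit readout ~ gain*scene + bias
        # over several frames, then invert the line to compensate.  Synthetic data only.
        rng = np.random.default_rng(1)
        true_scene = rng.uniform(100, 400, size=200)          # estimated scene radiance
        gain_true, bias_true = 1.07, -12.0                    # unknown detector parameters
        readout = gain_true * true_scene + bias_true + rng.normal(0, 1.5, true_scene.size)

        gain_est, bias_est = np.polyfit(true_scene, readout, 1)   # line fit
        corrected = (readout - bias_est) / gain_est               # compensated output
        print(f"estimated gain {gain_est:.3f}, bias {bias_est:.2f}")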

  9. Whole Lyophilized Olives as Sources of Unexpectedly High Amounts of Secoiridoids: The Case of Three Tuscan Cultivars.

    PubMed

    Cecchi, Lorenzo; Migliorini, Marzia; Cherubini, Chiara; Innocenti, Marzia; Mulinacci, Nadia

    2015-02-04

    The phenolic profiles of three typical Tuscan olive cultivars, Frantoio, Moraiolo, and Leccino, stored in different conditions (fresh, frozen, and whole lyophilized fruits), have been compared during the ripening period. Our main goals were to evaluate the phenolic content of whole freeze-dried fruits and to test the stability of the corresponding cake in oxidative-stress conditions. The comparison of fresh and whole freeze-dried fruits from the 2012 season gave unexpected results; e.g., oleuropein in lyophilized fruits was up to 20 times higher than in fresh olives with values up to 80.3 g/kg. Over time we noted that the olive pastes obtained from lyophilized olives contained highly stable phenolic compounds, even under strong oxidative stress conditions. Finally, it was also observed that the cake/powder obtained from unripe freeze-dried olives was very poor in oil content and therefore quite suitable for use in nutritional supplements rich in phenolic compounds, such as secoiridoids, which are not widely present in the human diet.

  10. High heat flux burnout in subcooled flow boiling

    NASA Astrophysics Data System (ADS)

    Celata, G. P.; Cumo, M.; Mariani, A.

    1995-09-01

    The paper reports the results of experimental research carried out at the Heat Transfer Division of the Energy Department, C.R. Casaccia, on the thermal-hydraulic characterization of the subcooled flow boiling critical heat flux (CHF) under conditions typical of thermonuclear fusion reactors, i.e. high liquid velocity and subcooling. The experiment was carried out exploring the following parameters: channel diameter (from 2.5 to 8.0 mm), heated length (10 and 15 cm), liquid velocity (from 2 to 40 m/s), exit pressure (from atmospheric to 5.0 MPa), inlet temperature (from 30 to 80 °C), and channel orientation (vertical and horizontal). A maximum CHF value of 60.6 MW/m2 was obtained under the following conditions: Tin = 30 °C, p = 2.5 MPa, u = 40 m/s, D = 2.5 mm (smooth channel). Turbulence promoters (helically coiled wires) have been employed to further enhance the CHF attainable with subcooled flow boiling. Helically coiled wires allow an increase of 50% in the maximum CHF obtained with smooth channels.

  11. A physics-based earthquake simulator and its application to seismic hazard assessment in Calabria (Southern Italy) region

    USGS Publications Warehouse

    Console, Rodolfo; Nardi, Anna; Carluccio, Roberto; Murru, Maura; Falcone, Giuseppe; Parsons, Thomas E.

    2017-01-01

    The use of a newly developed earthquake simulator has allowed the production of catalogs lasting 100 kyr and containing more than 100,000 events of magnitudes ≥4.5. The model of the fault system upon which we applied the simulator code was obtained from the DISS 3.2.0 database, selecting all the faults that are recognized in the Calabria region, for a total of 22 fault segments. The application of our simulation algorithm provides typical features in time, space and magnitude behavior of the seismicity, which can be compared with those of the real observations. The results of the physics-based simulator algorithm were compared with those obtained by an alternative method using a slip-rate balanced technique. Finally, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog for the production of maps showing the exceedance probability of given values of peak ground acceleration (PGA) on the territory under investigation.
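
    Turning a long synthetic catalog into an exceedance-probability map amounts to applying an attenuation relation to every event and counting exceedances of a PGA threshold at each grid node. The sketch below uses a generic attenuation form and a toy catalog; the coefficients and threshold are placeholders, not the relation used in the paper.

        import numpy as np

        # Minimal sketch: exceedance probability of a PGA threshold at grid nodes from a
        # synthetic catalog, using a generic relation log10(PGA) = a + b*M - c*log10(R + d).
        # Coefficients, threshold and the toy catalog are placeholders.
        rng = np.random.default_rng(2)
        a, b, c, d = -1.5, 0.5, 1.0, 10.0                      # assumed coefficients
        n_events, duration_yr = 100_000, 100_000               # catalog size / length
        ev_xy = rng.uniform(0, 200, size=(n_events, 2))        # epicentres (km)
        mag = 4.5 + rng.exponential(0.5, n_events)             # magnitudes >= 4.5

        nodes = np.stack(np.meshgrid(np.linspace(0, 200, 21),
                                     np.linspace(0, 200, 21)), axis=-1).reshape(-1, 2)
        threshold_g = 0.1                                      # PGA threshold (g), assumed

        rates = np.zeros(len(nodes))
        for i, node in enumerate(nodes):
            R = np.hypot(*(ev_xy - node).T)                    # epicentral distances (km)
            pga = 10 ** (a + b * mag - c * np.log10(R + d))
            rates[i] = np.count_nonzero(pga > threshold_g) / duration_yr
        prob_50yr = 1.0 - np.exp(-rates * 50.0)                # Poisson exceedance in 50 yr
        print(f"max 50-yr exceedance probability on grid: {prob_50yr.max():.2f}")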

  12. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known methods for extracting contours, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape and then determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total value of the elastic energy and the image energy becomes minimum. Employing 1/30 s time-series frontal facial images changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and find that it enables high-accuracy automatic contour extraction of facial features.
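
    The energy-minimizing search over control-point positions described here is a classic dynamic-programming problem. A minimal sketch follows, with a single 1-D offset per control point, a quadratic elastic term and a random image-energy table standing in for the brightness differences; none of the values are from the paper.

        import numpy as np

        # Minimal DP sketch: each control point picks one of K candidate offsets; we
        # minimise sum(image_energy) + alpha * sum((o[i] - o[i-1])^2).
        rng = np.random.default_rng(3)
        n_points, K, alpha = 20, 7, 0.5
        offsets = np.arange(K) - K // 2                 # candidate offsets per control point
        E_img = rng.random((n_points, K))               # image energy (synthetic)

        cost = E_img[0].copy()
        back = np.zeros((n_points, K), dtype=int)
        for i in range(1, n_points):
            # elastic penalty between consecutive control-point offsets
            pairwise = alpha * (offsets[None, :] - offsets[:, None]) ** 2   # (prev K, cur K)
            total = cost[:, None] + pairwise + E_img[i][None, :]
            back[i] = np.argmin(total, axis=0)
            cost = np.min(total, axis=0)

        best = [int(np.argmin(cost))]
        for i in range(n_points - 1, 0, -1):            # backtrack the optimal path
            best.append(int(back[i, best[-1]]))
        path = [int(offsets[k]) for k in reversed(best)]
        print("minimum total energy:", float(np.min(cost)))
        print("chosen offsets:", path)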

  13. A combined electrochemical-irradiation treatment of highly colored and polluted industrial wastewater

    NASA Astrophysics Data System (ADS)

    Barrera-Díaz, C.; Ureña-Nuñez, F.; Campos, E.; Palomar-Pardavé, M.; Romero-Romo, M.

    2003-07-01

    This study reports on the attainment of optimal conditions for two electrolytic methods to treat wastewater: electrocoagulation and particle destabilization of a highly polluted industrial wastewater, and electrochemical oxidation induced by in situ generation of Fenton's reagent. Additionally, a combined method consisting of electrochemical treatment plus γ-irradiation was carried out. A typical composition of the industrial effluent treated was COD 3400 mg/l, color 3750 Pt/Co units, and fecal coliforms 21000 MPN/ml. The best removal efficiency was obtained with electrochemical oxidation induced in situ, which resulted in removals of 78% of the COD, 86% of the color and 99.9% of the fecal coliforms. A treatment sequence was designed and carried out such that, after both electrochemical processes, a γ-irradiation technique was used to complete the procedure. The samples were irradiated with various doses in an ALC γ-cell unit provided with a Co-60 source. The removal efficiencies obtained were 95% for COD, 90% for color and 99.9% for fecal coliforms.

  14. Blood platelet counts, morphology and morphometry in lions, Panthera leo.

    PubMed

    Du Plessis, L

    2009-09-01

    Due to logistical problems in obtaining sufficient blood samples from apparently healthy animals in the wild in order to establish normal haematological reference values, only limited information regarding the blood platelet count and morphology of free-living lions (Panthera leo) is available. This study provides information on platelet counts and describes their morphology with particular reference to size in two normal, healthy and free-ranging lion populations. Blood samples were collected from a total of 16 lions. Platelet counts, determined manually, ranged between 218 and 358 × 10^9/l. Light microscopy showed mostly activated platelets of various sizes with prominent granules. At the ultrastructural level the platelets revealed typical mammalian platelet morphology. However, morphometric analysis revealed a significant difference (P < 0.001) in platelet size between the two groups of animals. Basic haematological information obtained in this study may be helpful in future comparative studies between animals of the same species as well as in other felids.

  15. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method for developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
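
    The rapidly converging series mentioned here are sums of inverse powers of the natural frequencies. One standard way such sums bracket the fundamental frequency is shown below; the notation is mine, and the paper's precise construction may differ.

        \[
          S_k = \sum_{n \ge 1} \omega_n^{-2k},
          \qquad
          S_k^{-1/(2k)} \;\le\; \omega_1 \;\le\; \sqrt{S_k / S_{k+1}} ,
        \]

    with both bounds converging to the fundamental frequency ω1 as k increases, so truncation at modest k already yields an analytical approximation of controllable accuracy.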

  16. High critical currents in heavily doped (Gd,Y)Ba 2Cu 3O x superconductor tapes

    DOE PAGES

    Selvamanickam, V.; Gharahcheshmeh, M. Heydari; Xu, A.; ...

    2015-01-20

    REBa2Cu3Ox superconductor tapes with moderate levels of dopants have been optimized for high critical current density in low magnetic fields at 77 K, but they do not exhibit exemplary performance in conditions of interest for practical applications, i.e., temperatures less than 50 K and fields of 2–30 T. Heavy doping of REBCO tapes has been avoided by researchers thus far due to deterioration in properties. Here, we report achievement of critical current densities (Jc) above 20 MA/cm2 at 30 K, 3 T in heavily doped (25 mol. % Zr-added) (Gd,Y)Ba2Cu3Ox superconductor tapes, which is more than three times higher than the Jc typically obtained in moderately doped tapes. Pinning force levels above 1000 GN/m3 have also been attained at 20 K. A composition map of the lift factor in Jc (ratio of Jc at 30 K, 3 T to the Jc at 77 K, 0 T) has been developed which reveals the optimum film composition to obtain lift factors above six, which is thrice the typical value. A highly c-axis aligned BaZrO3 (BZO) nanocolumn defect density of nearly 7 × 10^11 cm^-2 as well as 2–3 nm sized particles rich in Cu and Zr have been found in the high Jc films.

  17. Self-assembled spongy-like MnO2 electrode materials for supercapacitors

    NASA Astrophysics Data System (ADS)

    Dong, Meng; Zhang, Yu Xin; Song, Hong Fang; Qiu, Xin; Hao, Xiao Dong; Liu, Chuan Pu; Yuan, Yuan; Li, Xin Lu; Huang, Jia Mu

    2012-08-01

    Mesoporous spongy-like MnO2 has been synthesized via a facile, biphasic wet method using tetraoctylammonium bromide (TOAB) as a soft template under ambient conditions. A well-defined spongy morphology of MnO2 with uniform filament diameters of 10-20 nm was observed by FESEM, TEM, HRTEM, XRD, FT-IR and TGA-DSC studies. Further physical characterization revealed that the MnO2 sponges possess a large surface area of 155 m2 g-1 with a typical mesoporous appearance. A specific capacitance value as high as 336 F g-1 was obtained. This improved capacitive behavior was attributed to the large surface area, the nanostructured morphology of the MnO2, and its broad pore size distribution.
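    For reference, a gravimetric specific capacitance such as the 336 F g-1 quoted above is commonly extracted from a galvanostatic discharge curve via C = I·Δt/(m·ΔV). The short sketch below only illustrates that arithmetic; the current, mass, discharge time and voltage window are hypothetical placeholders, not values from the study.

      # Specific capacitance from a galvanostatic discharge: C = I * dt / (m * dV)
      def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
          """Return the gravimetric specific capacitance in F/g."""
          return current_a * discharge_time_s / (mass_g * voltage_window_v)

      I_discharge = 1.0e-3   # discharge current, A (hypothetical)
      dt = 268.8             # discharge time, s (hypothetical)
      m = 1.0e-3             # active-material mass, g (hypothetical)
      dV = 0.8               # potential window, V (hypothetical)
      print(f"C = {specific_capacitance(I_discharge, dt, m, dV):.0f} F/g")   # -> 336 F/g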

  18. Multiscale modeling of sickle anemia blood flow by Dissipative Particle Dynamics

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Caswell, Bruce; Karniadakis, George

    2011-11-01

    A multi-scale model of the sickle red blood cell is developed based on Dissipative Particle Dynamics (DPD). Different cell morphologies (sickle, granular, and elongated shapes) typically observed in vitro and in vivo are constructed, and the deviations from the biconcave shape are quantified by the asphericity and elliptical shape factors. The rheology of sickle blood is studied in both shear and pipe flow systems. The flow resistance obtained from both systems is larger than for healthy blood flow due to the abnormal cell properties. However, the vaso-occlusion phenomenon reported in a recent microfluidic experiment is not observed in the pipe flow system unless the adhesive interactions between sickle blood cells and the endothelium are properly introduced into the model.
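    The asphericity used above to quantify departure from the biconcave shape can be computed from the eigenvalues of the gyration tensor of the cell's membrane particles; a sphere gives zero and larger values mean stronger deviation. The snippet below is a generic, formula-level illustration (not the authors' DPD code), checked on points sampled on a sphere.

      import numpy as np

      def asphericity(coords):
          """Asphericity from the gyration-tensor eigenvalues of a point cloud."""
          r = coords - coords.mean(axis=0)       # centre the coordinates
          g = r.T @ r / len(coords)              # 3x3 gyration tensor
          l1, l2, l3 = np.sort(np.linalg.eigvalsh(g))
          num = (l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2
          return num / (2.0 * (l1 + l2 + l3)**2)

      # sanity check: points on a unit sphere give an asphericity close to 0
      rng = np.random.default_rng(0)
      p = rng.normal(size=(5000, 3))
      p /= np.linalg.norm(p, axis=1, keepdims=True)
      print(round(asphericity(p), 4))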

  19. Average M shell fluorescence yields for elements with 70≤Z≤92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahoul, A., E-mail: ka-abdelhalim@yahoo.fr; LPMRN laboratory, Department of Materials Science, Faculty of Sciences and Technology, Mohamed El Bachir El Ibrahimi University, Bordj-Bou-Arreridj 34030; Deghfel, B.

    2015-03-30

    The theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄M) of different elements are very important because of the large number of their applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated by using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic range 70≤Z≤92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and other works.
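    A minimal sketch of such an empirical interpolation is given below, assuming one commonly used analytical form in which (ω̄M/(1-ω̄M))^(1/4) is fitted as a low-order polynomial in Z; the (Z, ω̄M) pairs are hypothetical placeholders, not the data compiled in the paper.

      import numpy as np

      # Hypothetical (Z, average M-shell fluorescence yield) pairs -- placeholders.
      Z  = np.array([70, 74, 78, 80, 82, 86, 90, 92], dtype=float)
      wM = np.array([0.010, 0.015, 0.021, 0.025, 0.029, 0.037, 0.046, 0.052])

      # Fit (wM / (1 - wM))**0.25 as a quadratic in Z, then invert the transform.
      y = (wM / (1.0 - wM))**0.25
      coeffs = np.polyfit(Z, y, deg=2)

      def empirical_yield(z):
          t = np.polyval(coeffs, z)**4
          return t / (1.0 + t)

      print(f"interpolated average yield at Z = 85: {empirical_yield(85):.4f}")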

  20. Computation of transonic flow past projectiles at angle of attack

    NASA Technical Reports Server (NTRS)

    Reklis, R. P.; Sturek, W. B.; Bailey, F. R.

    1978-01-01

    Aerodynamic properties of artillery shell such as normal force and pitching moment reach peak values in a narrow transonic Mach number range. In order to compute these quantities, numerical techniques have been developed to obtain solutions to the three-dimensional transonic small disturbance equation about slender bodies at angle of attack. The computation is based on a plane relaxation technique involving Fourier transforms to partially decouple the three-dimensional difference equations. Particular care is taken to assure accurate solutions near corners found in shell designs. Computed surface pressures are compared to experimental measurements for circular arc and cone cylinder bodies which have been selected as test cases. Computed pitching moments are compared to range measurements for a typical projectile shape.

  1. Preliminary Study of a Hybrid Helicon-ECR Plasma Source

    NASA Astrophysics Data System (ADS)

    M. Hala, A.; Oksuz, L.; Ximing, Zhu

    2016-08-01

    A new type of hybrid discharge is experimentally investigated in this work. A helicon source and an electron cyclotron resonance (ECR) source were combined to produce plasma. As a preliminary study of this type of plasma, the optical emission spectroscopy (OES) method was used to obtain values of electron temperature and density under a series of typical conditions. Generally, it was observed that the electron temperature decreases and the electron density increases as the pressure increases. When increasing the applied power at a given pressure, the average electron density at certain positions in the discharge does not increase significantly, possibly due to a high degree of neutral depletion. The electron temperature increased with power in the hybrid mode. Possible mechanisms of these preliminary observations are discussed.

  2. Cumulus cloud venting of mixed layer ozone

    NASA Technical Reports Server (NTRS)

    Ching, J. K. S.; Shipley, S. T.; Browell, E. V.; Brewer, D. A.

    1985-01-01

    Observations are presented which substantiate the hypothesis that significant vertical exchange of ozone and aerosols occurs between the mixed layer and the free troposphere during cumulus cloud convective activity. The experiments utilized the airborne Ultra-Violet Differential Absorption Lidar (UV-DIAL) system. This system provides simultaneous range resolved ozone concentration and aerosol backscatter profiles with high spatial resolution. Evening transects were obtained in the downwind area where the air mass had been advected. Space-height analyses for the evening flight show the cloud debris as patterns of ozone typically in excess of the ambient free tropospheric background. This ozone excess was approximately the value of the concentration difference between the mixed layer and free troposphere determined from independent vertical soundings made by another aircraft in the afternoon.

  3. Impedance spectroscopy studies on lead free Ba1-xMgx(Ti0.9Zr0.1)O3 ceramics

    NASA Astrophysics Data System (ADS)

    Ben Moumen, S.; Neqali, A.; Asbani, B.; Mezzane, D.; Amjoud, M.; Choukri, E.; Gagou, Y.; El Marssi, M.; Luk'yanchuk, Igor A.

    2018-06-01

    Ba1-xMgx(Ti0.9Zr0.1)O3 (x = 0.01 and 0.02) ceramics were prepared using the conventional solid state reaction. Rietveld refinement performed on X-ray diffraction patterns indicates that the samples have a tetragonal crystal structure with the P4mm space group. By increasing the Mg content from 1 to 2%, the unit cell volume decreased. Likewise, the grain size is greatly reduced from 10 μm to 4 μm. The temperature dependence of the dielectric constants at different frequencies exhibited typical relaxor ferroelectric characteristics, with the ac conductivity showing a sensitive dependence on frequency and temperature. The obtained activation energy values were correlated with the proposed conduction mechanisms.
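    Activation energies of this kind are typically extracted from an Arrhenius plot of the ac conductivity, σ = σ0 exp(-Ea/kBT). The snippet below shows the standard linear fit of ln σ against 1/T on synthetic data (not values from the paper), assuming this simple thermally activated form.

      import numpy as np

      k_B = 8.617e-5   # Boltzmann constant, eV/K

      # Synthetic (temperature K, ac conductivity S/m) data generated with Ea = 0.45 eV
      T = np.array([400.0, 425.0, 450.0, 475.0, 500.0])
      sigma = 1e-2 * np.exp(-0.45 / (k_B * T))

      # Arrhenius fit: ln(sigma) = ln(sigma0) - Ea / (k_B * T)
      slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
      print(f"activation energy ~ {-slope * k_B:.2f} eV")   # recovers ~0.45 eV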

  4. Time dependent semiclassical tunneling through one dimensional barriers using only real valued trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, Michael F.

    2015-10-28

    The time independent semiclassical treatment of barrier tunneling has been understood for a very long time. Several semiclassical approaches to time dependent tunneling through barriers have also been presented. These typically involve trajectories for which the position variable is a complex function of time. In this paper, a method is presented that uses only real valued trajectories, thus avoiding the complications that can arise when complex trajectories are employed. This is accomplished by expressing the time dependent wave packet as an integration over momentum. The action function in the exponent in this expression is expanded to second order in the momentum. The expansion is around the momentum, p0*, at which the derivative of the real part of the action is zero. The resulting Gaussian integral is then taken. The stationary phase approximation requires that the derivative of the full action is zero at the expansion point, and this leads to a complex initial momentum and complex tunneling trajectories. The “pseudo-stationary phase” approximation employed in this work results in real values for the initial momentum and real valued trajectories. The transmission probabilities obtained are found to be in good agreement with exact quantum results.

  5. Comparative sensitizing potencies of fragrances, preservatives, and hair dyes.

    PubMed

    Lidén, Carola; Yazar, Kerem; Johansen, Jeanne D; Karlberg, Ann-Therese; Uter, Wolfgang; White, Ian R

    2016-11-01

    The local lymph node assay (LLNA) is used for assessing sensitizing potential in hazard identification and risk assessment for regulatory purposes. Sensitizing potency on the basis of the LLNA is categorized into extreme (EC3 value of ≤0.2%), strong (>0.2% to ≤2%), and moderate (>2%). To compare the sensitizing potencies of fragrance substances, preservatives, and hair dye substances, which are skin sensitizers that frequently come into contact with the skin of consumers and workers, LLNA results and EC3 values for 72 fragrance substances, 25 preservatives and 107 hair dye substances were obtained from two published compilations of LLNA data and opinions by the Scientific Committee on Consumer Safety and its predecessors. The median EC3 values of fragrances (n = 61), preservatives (n = 19) and hair dyes (n = 59) were 5.9%, 0.9%, and 1.3%, respectively. The majority of sensitizing preservatives and hair dyes are thus strong or extreme sensitizers (EC3 value of ≤2%), and fragrances are mostly moderate sensitizers. Although fragrances are typically moderate sensitizers, they are among the most frequent causes of contact allergy. This indicates that factors other than potency need to be addressed more rigorously in risk assessment and risk management. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
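    A minimal sketch of the potency classification used above (extreme: EC3 ≤ 0.2%; strong: >0.2% to ≤2%; moderate: >2%) and of the group medians, with hypothetical EC3 values standing in for the compiled data:

      from statistics import median

      def potency_class(ec3_percent):
          """LLNA potency category from an EC3 value (% concentration)."""
          if ec3_percent <= 0.2:
              return "extreme"
          if ec3_percent <= 2.0:
              return "strong"
          return "moderate"

      # Hypothetical EC3 values (%) for three substance groups -- placeholders.
      groups = {
          "fragrances":    [5.9, 8.1, 2.5, 12.0],
          "preservatives": [0.9, 0.05, 1.4],
          "hair dyes":     [1.3, 0.4, 2.6],
      }
      for name, values in groups.items():
          print(name, "median EC3 =", median(values),
                [potency_class(v) for v in values])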

  6. Electrochemical stability and corrosion resistance of Ti-Mo alloys for biomedical applications.

    PubMed

    Oliveira, N T C; Guastaldi, A C

    2009-01-01

    The electrochemical behavior of pure Ti and Ti-Mo alloys (6-20wt.% Mo) was investigated as a function of immersion time in an electrolyte simulating physiological media. Open-circuit potential values indicated that all the Ti-Mo alloys studied and pure Ti undergo spontaneous passivation in the chloride-containing solution, due to a spontaneously formed oxide film passivating the metallic surface. They also indicated that the addition of Mo to pure Ti up to 15wt.% seems to improve the protective characteristics of its spontaneous oxides. Electrochemical impedance spectroscopy (EIS) studies showed high impedance values for all samples, increasing with immersion time, indicating an improvement in the corrosion resistance of the spontaneous oxide film. The fit obtained suggests a single passive film present on the metals' surface, improving their resistance with immersion time, with the highest values for the Ti-15Mo alloy. Potentiodynamic polarization showed a typical valve-metal behavior, with anodic formation of barrier-type oxide films and without pitting corrosion, even in the chloride-containing solution. In all cases, the passive current values were quite small and decreased after 360 h of immersion. All these electrochemical results suggest that the Ti-15Mo alloy is a promising material for orthopedic devices, since electrochemical stability is directly associated with biocompatibility and is a necessary condition for applying a material as a biomaterial.

  7. Effect of spin-polarized D-3He fuel on dense plasma focus for space propulsion

    NASA Astrophysics Data System (ADS)

    Mei-Yu Wang, Choi, Chan K.; Mead, Franklin B.

    1992-01-01

    Spin-polarized D-3He fusion fuel is analyzed to study its effect on the dense plasma focus (DPF) device for space propulsion. The Mather-type plasma focus device is adopted because of the "axial" acceleration of the current-carrying plasma sheath, like a coaxial plasma gun. The D-3He fuel is chosen based on its neutron-lean fusion reactions and high yield of charged-particle fusion products. An impulsive mode of operation is used with multiple thrusters in order to achieve a higher thrust (F)-to-weight (W) ratio with a relatively high value of specific impulse (Isp). Current (I) scalings with both I^2 and I^(8/3) are considered for the plasma pinch temperature and capacitor mass. For a 30-day Mars mission with four thrusters, for example, typical F/W values range from 0.5-0.6 (I^2 scaling) to 0.1-0.2 (I^(8/3) scaling), and Isp values above 1600 s are obtained. Parametric studies indicate that spin-polarized D-3He provides increased values of F/W and Isp over conventional D-3He fuel, owing to the increased fusion power and decreased radiation losses for the spin-polarized case.

  8. A Critical Review on the Use of Support Values in Tree Viewers and Bioinformatics Toolkits.

    PubMed

    Czech, Lucas; Huerta-Cepas, Jaime; Stamatakis, Alexandros

    2017-06-01

    Phylogenetic trees are routinely visualized to present and interpret the evolutionary relationships of species. Most empirical evolutionary data studies contain a visualization of the inferred tree with branch support values. Ambiguous semantics in tree file formats can lead to erroneous tree visualizations and therefore to incorrect interpretations of phylogenetic analyses. Here, we discuss problems that arise when displaying branch values on trees after rerooting. Branch values are typically stored as node labels in the widely-used Newick tree format. However, such values are attributes of branches. Storing them as node labels can therefore yield errors when rerooting trees. This depends on the mostly implicit semantics that tools deploy to interpret node labels. We reviewed ten tree viewers and ten bioinformatics toolkits that can display and reroot trees. We found that 14 out of 20 of these tools do not permit users to select the semantics of node labels. Thus, unaware users might obtain incorrect results when rooting trees. We illustrate such incorrect mappings for several test cases and real examples taken from the literature. This review has already led to improvements in eight tools. We suggest tools should provide options that explicitly force users to define the semantics of node labels. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
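    As a concrete illustration of the ambiguity discussed above, the sketch below loads a small Newick tree whose internal node labels carry bootstrap supports and then reroots it. It assumes the ete3 toolkit (one of several tools that interpret internal node labels as branch supports); the exact API calls are an assumption about that library's interface rather than a prescription from the review.

      # Assumes the ete3 toolkit (pip install ete3); API details may vary by version.
      from ete3 import Tree

      newick = "((A:1,B:1)95:1,((C:1,D:1)80:1,E:1)70:1;"[:-1] + ");"
      t = Tree("((A:1,B:1)95:1,((C:1,D:1)80:1,E:1)70:1);")   # labels parsed as supports
      print(t.get_ascii(attributes=["name", "support"]))

      t.set_outgroup(t & "C")                # reroot on leaf C
      print(t.get_ascii(attributes=["name", "support"]))
      # After rerooting, verify that each support value still refers to the same
      # bipartition of leaves as before: this is exactly where tools that treat
      # supports as node labels can silently attach them to the wrong branch.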

  9. Probabilistic analysis of preload in the abutment screw of a dental implant complex.

    PubMed

    Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R

    2008-09-01

    Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
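    The Monte Carlo part of such an analysis can be sketched at formula level by replacing the finite element model with the elementary torque-preload relation F = T/(K·d), where K is a friction-dependent nut factor and d the nominal screw diameter. All distributions and the target range below are hypothetical placeholders chosen only to show how a preload distribution and a target-range probability are obtained.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      torque = rng.normal(0.32, 0.03, n)       # applied torque, N*m (hypothetical)
      nut_factor = rng.normal(0.26, 0.04, n)   # friction-dependent nut factor K (hypothetical)
      d = 2.0e-3                               # nominal screw diameter, m (hypothetical)

      preload = torque / (nut_factor * d)      # N, simple stand-in for the FE result

      target_lo, target_hi = 550.0, 700.0      # hypothetical optimal preload range, N
      p_in_range = np.mean((preload >= target_lo) & (preload <= target_hi))
      print(f"mean preload = {preload.mean():.0f} N, "
            f"P(in target range) = {p_in_range:.1%}")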

  10. Petrographic and Isotopic Evidence for Siderite Precursors to Iron Oxide Cements

    NASA Astrophysics Data System (ADS)

    Loope, D.

    2015-12-01

    The origin of iron oxide mineralization in the Navajo Sandstone on the Colorado Plateau is important because of the different forms of distinct self-organization exhibited by these systems, the potential importance of the cements as geochronometers, and their use as analogs for similar mineralization on other planets. We consider this mineralization to be the product of microbially mediated oxidation of siderite in evolving groundwater systems. Iron oxide grain coatings were dissolved and the iron precipitated as siderite during a reducing phase of diagenesis. Upon invasion by oxidizing waters, iron-oxidizing bacteria colonized the redox interface between siderite-cemented and porous sandstone. Precipitation of iron oxide at this interface generated acid that facilitated further siderite dissolution. One difficulty in testing this hypothesis is that siderite is destroyed by the cm-scale transport of iron during oxidation. There are two lines of evidence that support the presence of a siderite precursor in these systems. 1)Rhombic grains that we interpret to be iron oxide pseudomorphs after siderite occur where in-situ oxidation rather than dissolution of the siderite precursor has occurred. 2) The δ56Fe values of these iron oxide cements are typically negative. We have measured the δ56Fe value of Navajo Sandstone to be 0.2‰; a value in good agreement with previous workers (Chan et al., 2006; Busigny and Dauphas, 2007). Bleaching of the sandstones apparently results in near complete removal of Fe with little change in the δ56Fe values of the bulk sandstone. The δ56Fe values of iron oxide cements have a median value of -0.8‰; similar to the value we obtained from ferroan carbonate (-0.86‰). Iron oxide from samples that comprise largely rhombic grains has similar δ56Fe values (-0.5‰) to those obtained from cements produced by siderite dissolution and subsequent oxidation (-0.4‰). Our interpretation is that siderite precipitated from an aqueous solution in which the δ56Fe value was <0.2‰ yielding siderite with δ56Fe values that ranged upward from -1.4‰. Invasion of the Navajo by oxidizing waters resulted in microbially mediated oxidation of the siderite concretions. The strongly negative values of the Fe oxides result from the near-quantitative oxidation of the siderite in a closed system.

  11. Thermal insulation and clothing area factors of typical Arabian Gulf clothing ensembles for males and females: measurements using thermal manikins.

    PubMed

    Al-ajmi, F F; Loveday, D L; Bedwell, K H; Havenith, G

    2008-05-01

    The thermal insulation of clothing is one of the most important parameters used in the thermal comfort model adopted by the International Standards Organisation (ISO) [BS EN ISO 7730, 2005. Ergonomics of the thermal environment. Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria. International Standardisation Organisation, Geneva.] and by ASHRAE [ASHRAE Handbook, 2005. Fundamentals. Chapter 8. American Society of Heating Refrigeration and Air-conditioning Engineers, Inc., 1791 Tullie Circle N.E., Atlanta, GA.]. To date, thermal insulation values of mainly Western clothing have been published with only minimal data being available for non-Western clothing. Thus, the objective of the present study is to measure and present the thermal insulation (clo) values of a number of Arabian Gulf garments as worn by males and females. The clothing ensembles and garments of Arabian Gulf males and females presented in this study are representative of those typically worn in the region during both summer and winter seasons. Measurements of total thermal insulation values (clo) were obtained using a male and a female shape thermal manikin in accordance with the definition of insulation as given in ISO 9920. In addition, the clothing area factors (f cl) determined in two different ways were compared. The first method used a photographic technique and the second a regression equation as proposed in ISO 9920, based on the insulation values of Arabian Gulf male and female garments and ensembles as they were determined in this study. In addition, fibre content, descriptions and weights of Arabian Gulf clothing have been recorded and tabulated in this study. The findings of this study are presented as additions to the existing knowledge base of clothing insulation, and provide for the first time data for Arabian Gulf clothing. The analysis showed that for these non-Western clothing designs, the most widely used regression calculation of f cl is not valid. However, despite the very large errors in f cl made with the regression method, the errors this causes in the intrinsic clothing insulation value, I cl, are limited.

  12. Simulation of runoff and nutrient export from a typical small watershed in China using the Hydrological Simulation Program-Fortran.

    PubMed

    Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin

    2015-05-01

    The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
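    The Nash-Sutcliffe efficiency used above is ENS = 1 - Σ(Oi - Si)² / Σ(Oi - Ō)², where Oi are observed and Si simulated values; 1 is a perfect fit and 0 means the model is no better than predicting the observed mean. A minimal implementation on placeholder data:

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """Nash-Sutcliffe model efficiency of a simulated vs. observed series."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          residual = np.sum((observed - simulated) ** 2)
          variance = np.sum((observed - observed.mean()) ** 2)
          return 1.0 - residual / variance

      # hypothetical yearly runoff depths (mm) -- placeholders, not the study's data
      obs = [412.0, 388.0, 530.0, 295.0, 467.0]
      sim = [398.0, 405.0, 501.0, 330.0, 451.0]
      print(f"E_NS = {nash_sutcliffe(obs, sim):.2f}")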

  13. SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas

    2016-01-10

    We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with the magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.
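    For reference, the empirical Band function is a smoothly broken power law in photon energy. A standard parameterization (photon flux in photons s-1 cm-2 keV-1, pivot at 100 keV) is sketched below with illustrative "typical" parameter values; the normalization and parameters are placeholders, not fits from the paper.

      import numpy as np

      def band(E, A=0.01, alpha=-1.0, beta=-2.3, E0=300.0):
          """Band function, photons / (s cm^2 keV); E and E0 in keV.
          The break sits at (alpha - beta) * E0 and Epeak = (2 + alpha) * E0."""
          E = np.asarray(E, dtype=float)
          Ebreak = (alpha - beta) * E0
          low = A * (E / 100.0) ** alpha * np.exp(-E / E0)
          high = (A * ((alpha - beta) * E0 / 100.0) ** (alpha - beta)
                    * np.exp(beta - alpha) * (E / 100.0) ** beta)
          return np.where(E < Ebreak, low, high)

      print(band(np.logspace(1, 4, 5)))   # 10 keV to 10 MeV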

  14. Alternative method of quantum state tomography toward a typical target via a weak-value measurement

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Dai, Hong-Yi; Yang, Le; Zhang, Ming

    2018-03-01

    The application of weak-value measurement is usually limited by the requirement of weak interaction. This limitation dominates the performance of quantum state tomography toward a typical target in a finite, high-dimensional complex-valued superposition of its basis states, especially when the compressive sensing technique is also employed. Here we propose an alternative method of quantum state tomography, presented as a general model, toward such a typical target via weak-value measurement to overcome this limitation. In this model the pointer for the weak-value measurement is a qubit, and the target-pointer coupling interaction is no longer required to stay within the weak-interaction limit; under compressive sensing, this interaction can be described with the Taylor series of the unitary evolution operator. The postselection state at the target is the equal superposition of all basis states, and the pointer readouts are gathered under multiple Pauli operator measurements. The reconstructed quantum state is generated by a total variation augmented Lagrangian alternating direction optimization algorithm. Furthermore, we demonstrate an example of this general model for quantum state tomography of a planar laser-energy distribution and discuss the relations among some parameters in both our general model and the original first-order approximate model for this tomography.

  15. Phase holograms in silver halide emulsions without a bleaching step

    NASA Astrophysics Data System (ADS)

    Belendez, Augusto; Madrigal, Roque F.; Pascual, Inmaculada V.; Fimia, Antonio

    2000-03-01

    Phase holograms in holographic emulsions are usually obtained by two-bath processes (developing and bleaching). In this work we present a one-step method for producing phase holograms with silver-halide emulsions, which is based on varying the conditions of the typical developing processes used for amplitude holograms. For this, we have used the well-known chemical developer AAC, which is composed of ascorbic acid as a developing agent and anhydrous sodium carbonate as an accelerator. Agfa 8E75 HD and BB-640 plates were used to obtain these phase gratings, whose colors range between yellow and brown. The resulting diffraction efficiency and optical density of the diffraction gratings were studied as a function of the parameters of this developing method. One of the parameters studied is the influence of the grain size. In the case of the Agfa plates, a diffraction efficiency of around 18% with density < 1 has been reached, whilst with the BB-640 emulsion, whose grain is smaller than that of the Agfa, a diffraction efficiency of nearly 30% has been obtained. The resulting gratings were analyzed by X-ray spectroscopy, showing the differences in the structure of the developed silver when amplitude and transmission gratings are obtained. The angular responses of both (transmission and amplitude) gratings were studied: minimal transmission is shown at the Bragg angle in phase holograms, whilst a maximal value is obtained in amplitude gratings.

  16. Olive Oil Based Emulsions in Frozen Puff Pastry Production

    NASA Astrophysics Data System (ADS)

    Gabriele, D.; Migliori, M.; Lupi, F. R.; de Cindio, B.

    2008-07-01

    Puff pastry is an interesting food product with different industrial applications. It is obtained by laminating layers of dough and fats, mainly shortenings or margarine, which have specific properties that provide the required spreading characteristics and the ability to retain moisture in the dough. To obtain these characteristics, pastry shortenings are usually saturated fats; however, the current trend in the food industry is mainly oriented towards unsaturated fats such as olive oil, which are thought to be safer for human health. In the present work, a new product based on olive oil was studied as a shortening replacer in puff pastry production. To ensure the desired consistency, for the rheological matching between fat and dough, a water-in-oil emulsion was produced based on olive oil, an emulsifier and a hydrophilic thickening agent able to increase the material structure. The obtained materials were characterized by dynamic rheological tests in linear viscoelastic conditions, aiming to set up the process and the material consistency, and the rheological data were analyzed using the weak gel model. Results obtained for the tested emulsions were compared to the rheological properties of a commercial margarine, adopted as a reference for texture and stability. The obtained emulsions are characterized by interesting rheological properties strongly dependent on the emulsifier characteristics and the water-phase composition. However, a change in process temperature during fat extrusion and dough lamination seems to be necessary to properly match the typical rheological properties of the dough.

  17. Energy response corrections for profile measurements using a combination of different detector types.

    PubMed

    Wegener, Sonja; Sauer, Otto A

    2018-02-01

    Different detector properties will heavily affect the results of off-axis measurements outside of radiation fields, where a different energy spectrum is encountered. While a diode detector would show a high spatial resolution, it contains high atomic number elements, which lead to perturbations and energy-dependent response. An ionization chamber, on the other hand, has a much smaller energy dependence, but shows dose averaging over its larger active volume. We suggest a way to obtain spatial energy response corrections of a detector independent of its volume effect for profiles of arbitrary fields by using a combination of two detectors. Measurements were performed at an Elekta Versa HD accelerator equipped with an Agility MLC. Dose profiles of fields between 10 × 4 cm² and 0.6 × 0.6 cm² were recorded several times, first with different small-field detectors (unshielded diode 60012 and stereotactic field detector SFD, microDiamond, EDGE, and PinPoint 31006) and then with a larger volume ionization chamber Semiflex 31010 for different photon beam qualities of 6, 10, and 18 MV. Correction factors for the small-field detectors were obtained from the readings of the respective detector and the ionization chamber using a convolution method. Selected profiles were also recorded on film to enable a comparison. After applying the correction factors to the profiles measured with different detectors, agreement between the detectors and with profiles measured on EBT3 film was improved considerably. Differences in the full width half maximum obtained with the detectors and the film typically decreased by a factor of two. Off-axis correction factors outside of a 10 × 1 cm² field ranged from about 1.3 for the EDGE diode about 10 mm from the field edge to 0.7 for the PinPoint 31006 25 mm from the field edge. The microDiamond required corrections comparable in size to the Si-diodes and even exceeded the values in the tail region of the field. The SFD was found to require the smallest correction. The corrections typically became larger for higher energies and for smaller field sizes. With a combination of two detectors, experimentally derived correction factors can be obtained. Application of those factors leads to improved agreement between the measured profiles and those recorded on EBT3 film. The results also complement so far only Monte Carlo-simulated values for the off-axis response of different detectors. © 2017 American Association of Physicists in Medicine.
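    A schematic reconstruction of the two-detector idea, under the assumption that the chamber's volume averaging can be represented by convolution with a lateral response kernel: the high-resolution diode profile is convolved with that kernel, and the ratio of the chamber reading to the convolved diode profile then isolates an energy-response correction that is free of the chamber's volume effect. The profiles and the Gaussian kernel below are synthetic stand-ins, not the authors' data or implementation.

      import numpy as np

      def gaussian_kernel(x, sigma):
          k = np.exp(-0.5 * (x / sigma) ** 2)
          return k / k.sum()

      x = np.arange(-40.0, 40.0, 0.5)                        # lateral position, mm
      true_dose = 1.0 / (1.0 + np.exp(np.abs(x) - 5.0))      # synthetic field edge
      diode = true_dose * (1.0 + 0.25 * (true_dose < 0.2))   # diode over-response in the tail
      kernel = gaussian_kernel(np.arange(-15.0, 15.5, 0.5), sigma=2.5)
      chamber = np.convolve(true_dose, kernel, mode="same")  # chamber = volume-averaged dose

      diode_smeared = np.convolve(diode, kernel, mode="same")
      correction = chamber / diode_smeared                   # energy-response correction factors
      corrected_diode = diode * correction
      print(f"correction range: {correction.min():.2f} .. {correction.max():.2f}")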

  18. A note on calculation of efficiency and emissions from wood and wood pellet stoves

    NASA Astrophysics Data System (ADS)

    Petrocelli, D.; Lezzi, A. M.

    2015-11-01

    In recent years, national laws and international regulations have introduced strict limits on efficiency and emissions from woody biomass appliances to promote the diffusion of models characterized by low emissions and high efficiency. The evaluation of efficiency and emissions is made during the certification process, which consists of standardized tests. Standards prescribe the procedures to be followed during the tests and the relations to be used to determine the mean values of efficiency and emissions. As a matter of fact, these values are calculated using flue gas temperature and composition averaged over the whole test period, lasting from 1 to 6 hours. Typically, in wood appliances the fuel burning rate is not constant, and this leads to a considerable variation in time of the composition and flow rate of the flue gas. In this paper we show that this may cause significant differences between emission values calculated according to the standards and those obtained by integrating the instantaneous mass and energy balances over the test period. In addition, we propose some approximate relations and a method for wood stoves which supply more accurate results than those calculated according to the standards. These relations can be easily implemented in computer-controlled data acquisition systems.
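    The point can be illustrated numerically: because the flue gas flow rate and pollutant concentration vary together over the burn cycle, the product of their test-period averages differs from the time average of the instantaneous mass flow. The example below uses hypothetical sinusoidal flow and CO traces (not measurements) and shows a difference of roughly 15%.

      import numpy as np

      t = np.linspace(0.0, 3600.0, 3601)                   # 1 h test, 1 s steps
      flow = 20.0 + 10.0 * np.sin(2 * np.pi * t / 1200.0)  # flue gas flow, m3/h (synthetic)
      co = 800.0 + 600.0 * np.sin(2 * np.pi * t / 1200.0)  # CO concentration, mg/m3 (synthetic)

      # standard-like estimate: product of the test-period averages
      avg_based = flow.mean() * co.mean()                  # mg emitted over the hour

      # average of the instantaneous mass flow (equivalent to integrating over the hour)
      integrated = (flow * co).mean()                      # mg emitted over the hour

      print(f"average-based: {avg_based:.0f} mg, integrated: {integrated:.0f} mg, "
            f"relative difference: {100.0 * (integrated - avg_based) / integrated:.1f}%")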

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and the stationary phase approximation is used to invert the associated Fourier transform. The resulting process typically requires inversions of nonlinear algebraic equations that are best performed numerically, which partially mitigates the value of the approximation as compared to more complete approaches, e.g. DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low wave number wave packet model also recovers meaningful auto-correlation and low frequency spectral behaviors.

  20. Quantitative Correlation of 7B04 Aluminum Alloys Pitting Corrosion Morphology Characteristics with Stress Concentration Factor

    NASA Astrophysics Data System (ADS)

    Liu, Zhiguo; Yan, Guangyao; Mu, Zhitao; Li, Xudong

    2018-01-01

    An accelerated pitting corrosion test of 7B04 aluminum alloy specimens was carried out according to a spectrum simulating the airport environment, and the corresponding pitting corrosion damage was obtained and defined through three parameters A, B and C, which denote the corrosion pit surface length, surface width and depth, respectively. The ratios between these three parameters determine the morphology characteristics of the corrosion pits. On this basis, the stress concentration factor of typical corrosion pit morphologies under certain load conditions was quantitatively analyzed. The research shows that the corrosion pits gradually tend to become elliptical in surface shape and moderate in depth, and most values of B/A and C/A lie between 1 and 4, with only a few maxima exceeding 4. The stress concentration factor Kf of a corrosion pit is strongly affected by its morphology: for a given pit surface geometry, Kf increases with pit depth, and for a given pit depth, Kf decreases with increasing surface width. These conclusions provide a theoretical basis for corrosion fatigue life analysis of aircraft aluminum alloy structures.

  1. Iterative direct inversion: An exact complementary solution for inverting fault-slip data to obtain palaeostresses

    NASA Astrophysics Data System (ADS)

    Mostafa, Mostafa E.

    2005-10-01

    The present study shows that reconstructing the reduced stress tensor (RST) from the measurable fault-slip data (FSD) and the immeasurable shear stress magnitudes (SSM) is a typical iteration problem. The result of the direct inversion of FSD presented by Angelier [1990. Geophysical Journal International 103, 363-376] is considered as a starting point (zero-step iteration) where all SSM are assigned a constant value (λ = √3/2). By iteration, the SSM and RST update each other until they converge to fixed values. Angelier [1990. Geophysical Journal International 103, 363-376] designed the function upsilon (υ) and the two estimators, relative upsilon (RUP) and ANG, to express the divergence between the measured and calculated shear stresses. Plotting individual faults' RUP at successive iteration steps shows that they tend to zero (simulated data) or to fixed values (real data) at a rate depending on the orientation and homogeneity of the data. FSD of related origin tend to aggregate in clusters. Plots of the estimator ANG versus RUP show that, by iteration, labeled data points become disposed in clusters about a straight line. These two new plots form the basis of a technique for separating FSD into homogeneous clusters.

  2. A novel chaos-based image encryption algorithm using DNA sequence operations

    NASA Astrophysics Data System (ADS)

    Chai, Xiuli; Chen, Yiran; Broyde, Lucie

    2017-01-01

    An image encryption algorithm based on a chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by a 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated from the SHA-256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at the DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; the initial values and system parameters of the chaotic system are also renewed by the Hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm not only achieves an excellent encryption result but also resists various typical attacks.
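    A much-reduced sketch of two ingredients described above follows: deriving a chaotic initial value from the SHA-256 hash of the plain image, and using a chaotic sequence to drive a row circular permutation. A 1D logistic map stands in for the paper's 2D map, and the DNA encoding/decoding and diffusion stages are omitted, so this is an illustrative fragment rather than the published algorithm.

      import hashlib
      import numpy as np

      def initial_value_from_image(img, base=0.3567):
          """Perturb a base initial value with the SHA-256 hash of the plain image."""
          digest = hashlib.sha256(img.tobytes()).digest()
          return (base + sum(digest) / (255.0 * len(digest))) % 1.0

      def logistic_sequence(x0, n, mu=3.99):
          """Chaotic sequence from the 1D logistic map x -> mu * x * (1 - x)."""
          seq, x = np.empty(n), x0
          for i in range(n):
              x = mu * x * (1.0 - x)
              seq[i] = x
          return seq

      def row_circular_permutation(img, chaos):
          """Circularly shift each row by an amount drawn from the chaotic sequence."""
          out = img.copy()
          for r in range(img.shape[0]):
              out[r] = np.roll(img[r], int(chaos[r] * img.shape[1]))
          return out

      img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # stand-in plain image
      x0 = initial_value_from_image(img)
      permuted = row_circular_permutation(img, logistic_sequence(x0, img.shape[0]))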

  3. A strategy to minimize the energy offset in carrier injection from excited dyes to inorganic semiconductors for efficient dye-sensitized solar energy conversion.

    PubMed

    Fujisawa, Jun-Ichi; Osawa, Ayumi; Hanaya, Minoru

    2016-08-10

    Photoinduced carrier injection from dyes to inorganic semiconductors is a crucial process in various dye-sensitized solar energy conversions such as photovoltaics and photocatalysis. It has been reported that an energy offset larger than 0.2-0.3 eV (threshold value) is required for efficient electron injection from excited dyes to metal-oxide semiconductors such as titanium dioxide (TiO2). Because the energy offset directly causes loss in the potential of injected electrons, it is a crucial issue to minimize the energy offset for efficient solar energy conversions. However, a fundamental understanding of the energy offset, especially the threshold value, has not been obtained yet. In this paper, we report the origin of the threshold value of the energy offset, solving the long-standing questions of why such a large energy offset is necessary for the electron injection and which factors govern the threshold value, and suggest a strategy to minimize the threshold value. The threshold value is determined by the sum of two reorganization energies in one-electron reduction of semiconductors and typically-used donor-acceptor (D-A) dyes. In fact, the estimated values (0.21-0.31 eV) for several D-A dyes are in good agreement with the threshold value, supporting our conclusion. In addition, our results reveal that the threshold value is possible to be reduced by enlarging the π-conjugated system of the acceptor moiety in dyes and enhancing its structural rigidity. Furthermore, we extend the analysis to hole injection from excited dyes to semiconductors. In this case, the threshold value is given by the sum of two reorganization energies in one-electron oxidation of semiconductors and D-A dyes.

  4. Comparison of outcomes obtained in murine local lymph node assays using CBA/J or CBA/Ca mice.

    PubMed

    Maeda, Yosuke; Hirosaki, Haruka; Yakata, Naoaki; Takeyoshi, Masahiro

    2016-08-01

    CBA/J and CBA/Ca mice are the recommended strains for local lymph node assays (LLNAs). Here, we report quantitative and qualitative comparisons between both mouse strains to provide useful information for strain selection in sensitization testing. LLNAs were conducted, in accordance with Organisation for Economic Co-operation and Development Test Guideline No. 429, with CBA/J and CBA/Ca mice using five chemicals including typical contact sensitizers and non-sensitizers: 2,4-dinitrochlorobenzene (DNCB), isoeugenol, α-hexylcinnamic aldehyde (HCA), propylene glycol (PG), and hexane; the outcomes were then compared based on the raw data (disintegrations per minute, DPM), stimulation index (SI) values, EC3 values and positive/negative decisions. Although a significant difference was noted between the DPM values derived from each strain of mice, the SI values exhibited no considerable difference. The EC3 values for DNCB in CBA/J and CBA/Ca mice were 0.04 and 0.03, those for isoeugenol were 1.4 and 0.9, and those for HCA were 7.7 and 6.0, respectively. All EC3 values derived from each test system were almost equivalent and were within the range of the acceptance criteria of the ICCVAM performance standard for the LLNA. Positive/negative outcomes for all test chemicals were consistent. In conclusion, no considerable differences were observed in the final outcomes derived from CBA/J and CBA/Ca mice in the LLNA. Copyright © 2015 John Wiley & Sons, Ltd.
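    An EC3 value of the kind compared above is conventionally obtained by linear interpolation between the two tested concentrations whose stimulation indices bracket SI = 3. A minimal helper, with hypothetical dose-response points:

      def ec3(conc_low, si_low, conc_high, si_high, threshold=3.0):
          """Concentration giving SI = 3, by linear interpolation between two doses."""
          return conc_low + (threshold - si_low) * (conc_high - conc_low) / (si_high - si_low)

      # hypothetical dose-response points (concentration %, SI) bracketing SI = 3
      print(round(ec3(0.025, 2.1, 0.05, 4.3), 3))   # EC3 between 0.025% and 0.05%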

  5. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single-camera mobile vision coordinate measurement system uses a single camera body and a notebook computer on the measurement site to obtain three-dimensional coordinates. Obtaining accurate approximate values of the exterior orientation elements is very important for the subsequent calculation in the measurement process. This is a typical space resection problem, and studies on the topic have been widely conducted. Single-image space resection methods mainly fall into two categories: methods based on the co-angular constraint, represented by the co-angular-constraint pose estimation algorithm and the cone angle method, and the direct linear transformation (DLT). One common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively demanding requirements are placed on the distribution and abundance of control points: the control points must not all lie in the same plane, and at least six non-coplanar control points are needed. Its usefulness is therefore limited. The initial value directly influences the convergence and the convergence speed of the subsequent adjustment. In this paper, the nonlinear collinearity equations containing distortion terms are linearized using a Taylor series expansion to calculate the initial values of the camera exterior orientation. Finally, the initial values are shown through experiments to be better.
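    For comparison with the discussion above, the classical DLT estimates eleven projection coefficients linearly from at least six non-coplanar 3D-2D correspondences, without modelling lens distortion. A compact least-squares version is sketched below on synthetic, distortion-free correspondences (the camera matrix and points are made-up illustrations).

      import numpy as np

      def dlt(points_3d, points_2d):
          """Estimate the 11 DLT coefficients from >= 6 non-coplanar correspondences."""
          rows, rhs = [], []
          for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
              rhs.append(u)
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
              rhs.append(v)
          L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
          return L   # L1..L11; an approximate exterior orientation follows from these

      # synthetic check: project known 3D points with a made-up camera, then recover L
      P = np.array([[800.0, 0.0, 320.0, 10.0],
                    [0.0, 800.0, 240.0, 20.0],
                    [0.0, 0.0, 1.0, 5.0]])               # hypothetical 3x4 camera
      pts3d = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
                        [2, 1, 2], [1, 2, 3], [2, 2, 4]], dtype=float)
      proj = (P @ np.c_[pts3d, np.ones(len(pts3d))].T).T
      pts2d = proj[:, :2] / proj[:, 2:3]
      print(np.round(dlt(pts3d, pts2d), 3))              # equals the first 11 entries of P / P[2, 3]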

  6. Theoretical and experimental studies of reentry plasmas

    NASA Technical Reports Server (NTRS)

    Dunn, M. G.; Kang, S.

    1973-01-01

    A viscous shock-layer analysis was developed and used to calculate nonequilibrium-flow species distributions in the plasma layer of the RAM vehicle. The theoretical electron-density results obtained are in good agreement with those measured in flight. A circular-aperture flush-mounted antenna was used to obtain a comparison between theoretical and experimental antenna admittance in the presence of ionized boundary layers of low collision frequency. The electron-temperature and electron-density distributions in the boundary layer were independently measured. The antenna admittance was measured using a four-probe microwave reflectometer and these measured values were found to be in good agreement with those predicted. Measurements were also performed with another type of circular-aperture antenna and good agreement was obtained between the calculations and the experimental results. A theoretical analysis has been completed which permits calculation of the nonequilibrium, viscous shock-layer flow field for a sphere-cone body. Results are presented for two different bodies at several different altitudes illustrating the influences of bluntness and chemical nonequilibrium on several gas dynamic parameters of interest. Plane-wave transmission coefficients were calculated for an approximate space-shuttle body using a typical trajectory.

  7. Rock property measurements and analysis of selected igneous, sedimentary, and metamorphic rocks from worldwide localities

    USGS Publications Warehouse

    Johnson, Gordon R.

    1983-01-01

    Dry bulk density and grain density measurements were made on 182 samples of igneous, sedimentary, and metamorphic rocks from various world-wide localities. Total porosity values and both water-accessible and helium-accessible porosities were calculated from the density data. Magnetic susceptibility measurements were made on the solid samples, and permeability and streaming potentials were concurrently measured on most samples. Dry bulk densities obtained using two methods of volume determination, namely direct measurement and Archimedes' principle, were nearly equivalent for most samples. Grain densities obtained on powdered samples were typically greater than grain densities obtained on solid samples, but differences were usually small. Sedimentary rocks had the highest percentage of occluded porosity per rock volume whereas metamorphic rocks had the highest percentage of occluded porosity per total porosity. There was no apparent direct relationship between permeability and streaming potential for most samples, although there were indications of such a relationship in the rock group consisting of granites, aplites, and syenites. Most rock types or groups of similar rock types of low permeability had, when averaged, comparable levels of streaming potential per unit of permeability. Three calcite samples had negative streaming potentials.
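    The porosity figures discussed above follow directly from the density measurements: total porosity (%) = (1 - ρbulk/ρgrain) × 100, and occluded (closed) porosity is the difference between the total and the connected (water- or helium-accessible) porosity. A short helper with placeholder values:

      def total_porosity(bulk_density, grain_density):
          """Total porosity in percent from dry bulk and grain densities (g/cm^3)."""
          return (1.0 - bulk_density / grain_density) * 100.0

      phi_total = total_porosity(2.45, 2.65)   # placeholder densities, g/cm^3
      phi_connected = 5.1                      # e.g. helium-accessible porosity, % (placeholder)
      print(f"total {phi_total:.1f}%, occluded {phi_total - phi_connected:.1f}%")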

  8. Closed cycle MHD power generation experiments using a helium-cesium working fluid in the NASA Lewis Facility

    NASA Technical Reports Server (NTRS)

    Sovie, R. J.

    1976-01-01

    The MHD channel in the NASA Lewis Research Center was redesigned and used in closed cycle power generation experiments with a helium-cesium working fluid. The cross sectional dimensions of the channel were reduced to 5 by 16.5 cm to allow operation over a variety of conditions. Experiments have been run at temperatures of 1900-2100 K and Mach numbers from 0.3 to 0.55 in argon and 0.2 in helium. Improvements in Hall voltage isolation and seed vaporization techniques have resulted in significant improvements in performance. Typical values obtained with helium are Faraday open circuit voltage 141 V (92% of uBh) at a magnetic field strength of 1.7 T, power outputs of 2.2 kw for tests with 28 electrodes and 2.1 kw for tests with 17 electrodes. Power densities of 0.6 MW/cu m and Hall fields of about 1100 V/m were obtained in the tests with 17 electrodes, representing a factor of 18 improvement over previously reported results. The V-I curves and current distribution data indicate that while near ideal equilibrium performance is obtained under some conditions, no nonequilibrium power has been generated to date.

  9. VizieR Online Data Catalog: Abundances in the local region. II. F, G, and K dwarfs (Luck+, 2017)

    NASA Astrophysics Data System (ADS)

    Luck, R. E.

    2017-06-01

    The McDonald Observatory 2.1m Telescope and Sandiford Cassegrain Echelle Spectrograph provided much of the observational data for this study. High-resolution spectra were obtained during numerous observing runs, from 1996 to 2010. The spectra cover a continuous wavelength range from about 484 to 700nm, with a resolving power of about 60000. The wavelength range used demands two separate observations--one centered at about 520nm, and the other at about 630nm. Typical S/N values per pixel for the spectra are more than 150. Spectra of 57 dwarfs were obtained using the Hobby-Eberly telescope and High-Resolution Spectrograph. The spectra have a resolution of 30000, spanning the wavelength range of 400 to 785nm. They also have very high signal-to-noise ratios, >300 per resolution element in numerous cases. The last set of spectra were obtained from the ELODIE Archive (Moultaka et al. 2004PASP..116..693M). These spectra are fully processed, including order co-addition, and have a continuous wavelength span of 400 to 680nm and a resolution of 42000. The ELODIE spectra utilized here all have S/N>75 per pixel. (6 data files).

  10. Maintenance Audit through Value Analysis Technique: A Case Study

    NASA Astrophysics Data System (ADS)

    Carnero, M. C.; Delgado, S.

    2008-11-01

    The increase in competitiveness, technological changes and the increase in the requirements of quality and service have forced a change in the design and application of maintenance, as well as the way in which it is considered within the managerial strategy. There are numerous maintenance activities that must be developed in a service company. As a result the maintenance functions as a whole have to be outsourced. Nevertheless, delegating this subject to specialized personnel does not exempt the company from responsibilities, but rather leads to the need for control of each maintenance activity. In order to achieve this control and to evaluate the efficiency and effectiveness of the company it is essential to carry out an audit that diagnoses the problems that could develop. In this paper a maintenance audit applied to a service company is developed. The methodology applied is based on the expert systems. The expert system by means of rules uses the weighting technique SMART and value analysis to obtain the weighting between the decision functions and between the alternatives. The expert system applies numerous rules and relations between different variables associated with the specific maintenance functions, to obtain the maintenance state by sections and the general maintenance state of the enterprise. The contributions of this paper are related to the development of a maintenance audit in a service enterprise, in which maintenance is not generally considered a strategic subject and to the integration of decision-making tools such as the weighting technique SMART with value analysis techniques, typical in the design of new products, in the area of the rule-based expert systems.

  11. Connecting the Cosmic Star Formation Rate with the Local Star Formation

    NASA Astrophysics Data System (ADS)

    Gribel, Carolina; Miranda, Oswaldo D.; Williams Vilas-Boas, José

    2017-11-01

    We present a model that unifies the cosmic star formation rate (CSFR), obtained through the hierarchical structure formation scenario, with the (Galactic) local star formation rate (SFR). It is possible to use the SFR to generate a CSFR mapping through the density probability distribution functions commonly used to study the role of turbulence in the star-forming regions of the Galaxy. We obtain a consistent mapping from redshift z ≈ 20 up to the present (z = 0). Our results show that the turbulence exhibits a dual character, providing high values for the star formation efficiency (⟨ε⟩ ≈ 0.32) in the redshift interval z ≈ 3.5-20 and reducing its value to ⟨ε⟩ = 0.021 at z = 0. The value of the critical Mach number (M_crit), from which ⟨ε⟩ rapidly decreases, is dependent on both the polytropic index (Γ) and the minimum density contrast of the gas. We also derive Larson's first law associated with the velocity dispersion (⟨V_rms⟩) in the local star formation regions. Our model shows good agreement with Larson's law in the ~10-50 pc range, providing typical temperatures T_0 ≈ 10-80 K for the gas associated with star formation. As a consequence, dark matter halos of great mass could contain a number of halos of much smaller mass, and be able to form structures similar to globular clusters. Thus, Larson's law emerges as a result of the very formation of large-scale structures, which in turn would allow the formation of galactic systems, including our Galaxy.

  12. Factors affecting the transformation of a pyritic tailing: scaled-up column tests.

    PubMed

    García, C; Ballester, A; González, F; Blázquez, M L

    2005-02-14

    Two different methods for predicting the quality of the water draining from a pyritic tailing are compared: a static test (ABA test) and a kinetic test in large columns. The different results obtained with the two experimental set-ups show the need for care in selecting both an adequate predictive method and the conclusions and extrapolations derived from it. The tailing chosen for the weathering tests (previously tested in shake flasks and in small weathering columns) was a pyritic residue produced in a flotation plant for complex polymetallic sulphides (Huelva, Spain). The ABA test was a modification of the conventional ABA test reported in the literature, the modification consisting of the milder conditions employed in the digestion phase. For the column tests, two identical methacrylate columns (150 cm high and 15 cm in diameter) were used to study the chemical and microbiological processes controlling the leaching of pyrite. The results obtained in the two tests were very different. The static test predicted a strong potential acidity for the tailing. In contrast, the pH of the effluents draining from the columns only reached values of about 5, with concentrations of metals (<600 mg/L) and sulphate ions (<17,000 mg/L) that were very small and far from the values of a typical acid mine drainage. Consequently, the static test may overestimate the potential acidity of the tailing, whereas large columns may become saturated with water, displacing the oxygen and inhibiting the microbial activity necessary to catalyse mineral oxidation.

  13. Formation of Anionic C, N-bearing Chains in the Interstellar Medium via Reactions of H- with HCxN for Odd-valued x from 1 to 7

    NASA Astrophysics Data System (ADS)

    Gianturco, F. A.; Satta, M.; Yurtsever, E.; Wester, R.

    2017-11-01

    We investigate the relative efficiencies of low-temperature chemical reactions in the interstellar medium with the H- anion reacting in the gas phase with cyanopolyyne neutral molecules, leading to the formation of anionic CxN- linear chains of different lengths and of H2. All the reactions turn out to be barrierless and highly exothermic, providing a chemical route to the formation of anionic chains of the same length. Some of these anions have been observed in dark molecular clouds and in diffuse interstellar envelopes. Quantum calculations are carried out for the corresponding reactive potential energy surfaces for all the odd-numbered members of the series (x = 1, 3, 5, 7). We employ the minimum energy paths to obtain the relevant transition state configurations and use the latter within the variational transition state model to obtain the chemical rates. The present results indicate that at typical temperatures around 100 K, a set of significantly larger rate values exists for x = 3 and x = 5, while the rate values are smaller for CN- and C7N-. At those temperatures, however, all the rates turn out to be larger than the estimates in the current literature for the radiative electron attachment (REA) rates, thus indicating the greater importance of the present chemical path with respect to REA processes at those temperatures. The physical reasons for our findings are discussed in detail and linked with the existing observational findings.
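
    The variational transition state treatment mentioned above can be summarized, in its generic canonical form (stated as background; the authors' working expressions may differ in detail), as:

```latex
% Canonical variational transition-state estimate of the rate coefficient: the
% dividing surface is placed at the position s along the minimum energy path
% that minimizes the conventional TST expression.
k(T) = \min_{s}\;\frac{k_{\mathrm{B}}T}{h}\,
       \frac{Q^{\ddagger}(T,s)}{Q_{\mathrm{R}}(T)}\,
       \exp\!\left[-\frac{V_{\mathrm{MEP}}(s)}{k_{\mathrm{B}}T}\right].
```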

  14. Monte Carlo study of out-of-field exposure in carbon-ion radiotherapy with a passive beam: Organ doses in prostate cancer treatment.

    PubMed

    Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi

    2018-04-23

    The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…

  16. Probing for the Multiplicative Term in Modern Expectancy-Value Theory: A Latent Interaction Modeling Study

    ERIC Educational Resources Information Center

    Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin

    2012-01-01

    In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lites, B.W.; Skumanich, A.

    OSO 8 observations of the profiles of the resonance lines of H I, Mg II, and Ca II obtained with the Laboratoire de Physique Stellaire et Planétaire du Centre National de la Recherche Scientifique (LPSP-CNRS) spectrometer (by A.S.) and of C IV obtained with the University of Colorado (CU) spectrometer (by B.W.L.) for a large quiet sunspot (1975 November 16-17) are analyzed along with near-simultaneous ground-based Stokes measurements obtained in a collaborative arrangement with L. L. House and T. Baur (HAO-NCAR) to yield an umbral chromosphere and transition region model. Features of this model include: (1) a chromosphere that is effectively thin in the important chromospheric resonance lines of H I and Mg II and saturated in Ca II; (2) an upper chromospheric structure similar to quiet-Sun models; (3) penetration of the sunspot photospheric "cooling wave" to higher altitudes in the sunspot chromosphere than in quiet-Sun models, i.e., a more extended temperature minimum region in the sunspot atmosphere; (4) a lower pressure corona above the sunspot umbra than above a typical quiet region; (5) very low nonthermal broadening in the umbral chromosphere; (6) a moderately strong downdraft; (7) chromospheric radiative loss rates not significantly different from their corresponding quiet-Sun values; (8) a temperature gradient in the transition region near 10^5 K approximately 0.1 times the corresponding quiet-Sun value. The Balmer continuum radiation from the photospheric areas outside the sunspot umbra controls the hydrogen ionization, and hence the electron density, in the chromosphere above the umbra.

  18. Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays

    NASA Astrophysics Data System (ADS)

    Seibert, George E.

    1987-10-01

    This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer-based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used. The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells" [1, 2]. Building on the insight offered by these papers, we developed our design tools around two derived parameters, the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of edges is almost totally absent on interior behavior. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability, or the ratio of residual to input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator spacing to characteristic length ratio by the following expression:

        RMS Residual Error / Initial Error Amplitude = k (b/l)^3.5    (1)
The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lend themselves to rapid evaluation of the effects of trading faceplate weight for increased actuator count.
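
    A minimal numerical sketch of how these normalizations can be used in a preliminary design estimate is given below. The specific form of the characteristic length is an assumption (one common shallow-shell definition in terms of radius of curvature R, thickness t and Poisson's ratio nu); the exponent 3.5 and the range of k are taken from Eq. (1) above, and the numbers in the example are purely illustrative.

```python
import numpy as np

# Assumed shallow-shell characteristic length (one common definition; the paper
# only states that l depends on R, t and Poisson's ratio).
def characteristic_length(R, t, nu=0.17):
    return (R**2 * t**2 / (12.0 * (1.0 - nu**2)))**0.25

# Correctability relation of Eq. (1): residual/input = k * (b/l)**3.5, with k an
# empirical factor between ~0.001 (low spatial frequencies) and ~0.05 (~5 cycles/diam).
def residual_over_input(b, l, k):
    return k * (b / l)**3.5

R, t = 10.0, 0.01                     # illustrative radius of curvature and thickness [m]
l = characteristic_length(R, t)
for k in (0.001, 0.05):
    ratio = residual_over_input(b=0.15, l=l, k=k)
    print(f"k = {k}: residual/input error ratio = {ratio:.3e}")
```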

  19. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
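
    As a minimal illustration of the Pareto-optimal bookkeeping that underlies PMOGO (a sketch only; the actual inversions wrap full global optimizers around geophysical forward models), the non-dominated filtering of candidate models under two objectives might look like this:

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of Pareto-optimal (non-dominated) candidate models.

    objectives : (n_models, n_objectives) array with all objectives to be
    minimized, e.g. column 0 = data misfit, column 1 = model roughness.
    """
    obj = np.asarray(objectives, dtype=float)
    keep = np.ones(obj.shape[0], dtype=bool)
    for i in range(obj.shape[0]):
        # model j dominates model i if it is no worse in every objective
        # and strictly better in at least one
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        keep[i] = not dominates_i.any()
    return keep

# Toy usage with five hypothetical candidate models (misfit, roughness):
candidates = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5], [4.0, 4.0]])
print(pareto_front(candidates))   # -> [ True  True  True False False]
```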

  20. Combination of radar and daily precipitation data to estimate meaningful sub-daily point precipitation extremes

    NASA Astrophysics Data System (ADS)

    Bárdossy, András; Pegram, Geoffrey

    2017-01-01

    The use of radar measurements for the space-time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations, which are important for design. The purpose of the paper is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimates of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short duration extremes. Additionally, a statistical procedure not based on a day-by-day matching correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth are neglected, leaving a small number L of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L daily maxima is first interpolated using a kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest-neighbour procedure. The daily sums are then disaggregated using the relative values of the L biggest radar-based days. Of course, the timings of radar and gauge maxima can be different, so the method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer not masked out by the radar footprint. The pluviometer data were also aggregated to daily totals, for the same purpose. The extremes obtained using the disaggregation methods were compared to the observed extremes in a cross-validation procedure. The unusual and novel goal was not to reproduce the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable.
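
    The final disaggregation step described above can be sketched as follows (a simplified illustration with hypothetical variable names; the full method also involves the kriging of the L-day sums, the nearest-neighbour step and the bias corrections discussed in the text):

```python
import numpy as np

def disaggregate_daily_total(daily_total_mm, radar_subdaily_mm):
    """Split a daily gauge total into 15-min amounts using the relative
    sub-daily pattern seen by the radar pixel on that day.

    daily_total_mm    : scalar daily precipitation estimate at the point
    radar_subdaily_mm : array of 96 radar-derived 15-min depths for that pixel/day
    """
    radar_subdaily_mm = np.asarray(radar_subdaily_mm, dtype=float)
    radar_day_sum = radar_subdaily_mm.sum()
    if radar_day_sum <= 0.0:
        # no usable radar signal: spread uniformly (a crude fallback, not from the paper)
        return np.full_like(radar_subdaily_mm, daily_total_mm / radar_subdaily_mm.size)
    weights = radar_subdaily_mm / radar_day_sum
    return daily_total_mm * weights

def running_maximum(series_15min, window_steps):
    """Maximum accumulation over a moving window of `window_steps` 15-min intervals."""
    return np.convolve(series_15min, np.ones(window_steps), mode="valid").max()

rng = np.random.default_rng(0)
radar_day = rng.gamma(0.3, 2.0, size=96)        # synthetic 15-min radar depths
p15 = disaggregate_daily_total(42.0, radar_day) # 42 mm daily total, hypothetical
print(running_maximum(p15, 4))                  # largest 1-hour depth on that day
```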

  1. Substituted amylose matrices for oral drug delivery

    NASA Astrophysics Data System (ADS)

    Moghadam, S. H.; Wang, H. W.; Saddar El-Leithy, E.; Chebli, C.; Cartilier, L.

    2007-03-01

    High amylose corn starch was used to obtain substituted amylose (SA) polymers by chemically modifying hydroxyl groups by an etherification process using 1,2-epoxypropanol. Tablets for drug-controlled release were prepared by direct compression and their release properties assessed by an in vitro dissolution test (USP XXIII no 2). The polymer swelling was characterized by measuring gravimetrically the water uptake ability of polymer tablets. SA hydrophilic matrix tablets present sequentially a burst effect, typical of hydrophilic matrices, and a near constant release, typical of reservoir systems. After the burst effect, surface pores disappear progressively by molecular association of amylose chains; this allows the creation of a polymer layer acting as a diffusion barrier and explains the peculiar behaviour of SA polymers. Several formulation parameters such as compression force, drug loading, tablet weight and insoluble diluent concentration were investigated. On the other hand, tablet thickness, scanning electron microscope analysis and mercury intrusion porosimetry showed that the high crushing strength values observed for SA tablets were due to an unusual melting process occurring during tabletting although the tablet external layer went only through densification, deformation and partial melting. In contrast, HPMC tablets did not show any traces of a melting process.

  2. An ISVD-based Euclidian structure from motion for smartphones

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.

    2014-06-01

    The development of mobile mapping systems over the last decades has made it possible to quickly collect georeferenced spatial measurements by means of sensors mounted on mobile vehicles. Despite the large number of applications that could potentially take advantage of such systems, their cost currently limits their use to certain specialized organizations, companies, and universities. However, the recent worldwide diffusion of powerful mobile devices embedding GPS, an Inertial Navigation System (INS), and imaging sensors is enabling the development of small and compact mobile mapping systems. More specifically, this paper considers the development of a 3D reconstruction system based on photogrammetric methods for smartphones (or other similar mobile devices). The limited computational resources available in such systems and the users' demand for real-time reconstruction impose very stringent requirements on the computational burden of the 3D reconstruction procedure. This work takes advantage of recently developed mathematical tools (incremental singular value decomposition) and of photogrammetry techniques (structure from motion, Tomasi-Kanade factorization) to achieve very computationally efficient Euclidean 3D reconstruction of the scene. Furthermore, thanks to the localization instrumentation embedded in the device, the obtained 3D reconstruction can be properly georeferenced.
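
    As background on the factorization step mentioned above, a simplified batch Tomasi-Kanade sketch under an affine camera model is given below (the authors use an incremental SVD variant and a subsequent metric/georeferencing upgrade, which are omitted here):

```python
import numpy as np

def tomasi_kanade(W):
    """Affine structure from motion by rank-3 factorization.

    W : (2F, P) measurement matrix stacking x- and y-image coordinates of P
        points tracked over F frames.
    Returns the (2F, 3) motion matrix and (3, P) shape matrix, up to an
    affine ambiguity (the metric upgrade is omitted here).
    """
    W = np.asarray(W, dtype=float)
    # 1. Register: subtract the per-row centroid (removes the translation part).
    W0 = W - W.mean(axis=1, keepdims=True)
    # 2. Rank-3 truncated SVD (an incremental variant would update U, S, Vt
    #    as new frames arrive instead of recomputing them from scratch).
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    U3, S3, Vt3 = U[:, :3], np.diag(S[:3]), Vt[:3, :]
    M = U3 @ np.sqrt(S3)      # camera/motion matrix
    X = np.sqrt(S3) @ Vt3     # 3D structure
    return M, X
```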

  3. The magnitude and colour of noise in genetic negative feedback systems.

    PubMed

    Voliotis, Margaritis; Bowsher, Clive G

    2012-08-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier-for transcriptional autorepression, it is frequently negligible.

  4. Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model

    PubMed Central

    Bitzer, Sebastian; Park, Hame; Blankenburg, Felix; Kiebel, Stefan J.

    2014-01-01

    Behavioral data obtained with perceptual decision making experiments are typically analyzed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence toward a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models but the precise relationship of the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations which relate parameter values between these models. In practice we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables which all predict equal responses and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses. PMID:24616689
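
    For concreteness, a minimal simulation of the drift-diffusion mechanism described above is shown below (illustrative parameter values only; this is not the authors' fitting code, and the mapping to the equivalent Bayesian model is not reproduced here):

```python
import numpy as np

def simulate_ddm(drift, noise_sd, bound, dt=0.001, t_max=3.0, rng=None):
    """Accumulate noisy evidence until one of two symmetric bounds is hit.

    Returns (choice, reaction_time): choice is +1/-1 for the upper/lower bound,
    or 0 if no bound is reached within t_max.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, t_max

rng = np.random.default_rng(1)
trials = [simulate_ddm(drift=1.0, noise_sd=1.0, bound=1.0, rng=rng) for _ in range(1000)]
accuracy = np.mean([c == +1 for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"accuracy ~ {accuracy:.2f}, mean RT ~ {mean_rt:.2f} s")
```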

  5. Variance Analysis of Unevenly Spaced Time Series Data

    DTIC Science & Technology

    1995-12-01

    Data were subsequently removed from each simulated data set using typical TWSTFT data patterns to create two unevenly spaced sets with average...and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets. INTRODUCTION Data points obtained from an...the possible data available. In TWSTFT, the task is less daunting: time transfers are typically measured on Monday, Wednesday, and Friday, so, in a

  6. Gentamicin coated iron oxide nanoparticles as novel antibacterial agents

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Proma; Neogi, Sudarsan

    2017-09-01

    Magnetic nanoparticles of various types have long been applied for biomedical purposes. Surface functionalization of iron oxide nanoparticles with antibiotics is a novel technique that paves the way for further applications of these nanoparticles by virtue of their superparamagnetism. In this paper, we have synthesized novel iron oxide nanoparticles surface-functionalized with gentamicin. The average particle sizes, determined from HR-TEM images, were around 14 nm and 10 nm for the unmodified and modified nanoparticles, respectively. The magnetization curves M(H) obtained for these nanoparticles are typical of superparamagnetic behaviour, with almost zero coercivity and remanence. The release properties of the drug-coated nanoparticles were studied, yielding an S-shaped profile indicating an initial burst effect followed by gradual sustained release. In vitro investigations against various gram-positive and gram-negative strains, viz. Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa and Bacillus subtilis, indicated significant antibacterial efficiency of the drug-nanoparticle conjugate. The MIC values indicated that an amount as small as 0.2 mg ml-1 of drug-capped particles induces about 98% bacterial death. The novelty of the work lies in the drug capping of the nanoparticles, which simultaneously retains the superparamagnetic nature of the iron oxide core and the medical properties of the drug, and which is found to be extremely blood compatible.

  7. Flammability limits of hydrated and anhydrous ethanol at reduced pressures in aeronautical applications.

    PubMed

    Coronado, Christian J R; Carvalho, João A; Andrade, José C; Mendiburu, Andrés Z; Cortez, Ely V; Carvalho, Felipe S; Gonçalves, Beatriz; Quintero, Juan C; Velásquez, Elkin I Gutiérrez; Silva, Marcos H; Santos, José C; Nascimento, Marco A R

    2014-09-15

    There is interest in finding the flammability limits of ethanol at reduced pressures for the future use of this biofuel in aeronautical applications, taking into account typical commercial aviation altitudes (<40,000 ft). The lower and upper flammability limits (LFL and UFL, respectively) for hydrated ethanol and anhydrous ethanol (92.6% and 99.5% p/p, respectively) were determined at a pressure of 101.3 kPa and temperatures between 0 and 200°C. A heating chamber with a spherical 20-l vessel was used. First, the LFL and UFL were determined as functions of temperature at atmospheric pressure to compare the results with data published in the scientific literature. Second, after checking the validity of the data obtained at standard atmospheric pressure, the work proceeded with reduced pressures in the same temperature range. In total, 295 experiments were carried out; the first 80 were used to calibrate the heating chamber and compare the results with those given in the published scientific literature, and the remaining 215 experiments were performed at both atmospheric and reduced pressures. The LFL results correlated well with published values, but the UFL values showed some differences. With respect to the water content of the ethanol, it was shown that the water vapor contained in the fuel can act as an inert substance, narrowing the flammability range. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Bias in estimating accuracy of a binary screening test with differential disease verification

    PubMed Central

    Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.

    2011-01-01

    Sensitivity, specificity, positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women for whom it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects not receiving the gold standard who receive the imperfect reference test, the prevalence of the disease, and the correlation between the results for the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059

  9. Combined Study of Snow Depth Determination and Winter Leaf Area Index Retrieval by Unmanned Aerial Vehicle Photogrammetry

    NASA Astrophysics Data System (ADS)

    Lendzioch, Theodora; Langhammer, Jakub; Jenicek, Michal

    2017-04-01

    A rapid and robust approach using Unmanned Aerial Vehicle (UAV) digital photogrammetry was performed for evaluating snow accumulation over different small localities (e.g. disturbed forest and open area) and for indirect field measurements of Leaf Area Index (LAI) of coniferous forest within the Šumava National Park, Czech Republic. The approach was used to reveal impacts related to changes in forest and snowpack and to determine winter effective LAI for monitoring the impact of forest canopy metrics on snow accumulation. Due to the advancement of the technique, snow depth and volumetric changes of snow depth over these selected study areas were estimated at high spatial resolution (1 cm) by subtracting a snow-free digital elevation model (DEM) from a snow-covered DEM. Both, downward-looking UAV images and upward-looking digital hemispherical photography (DHP), and additional widely used LAI-2200 canopy analyser measurements were applied to determine the winter LAI, controlling interception and transmitting radiation. For the performance of downward-looking UAV images the snow background instead of the sky fraction was used. The reliability of UAV-based LAI retrieval was tested by taking an independent data set during the snow cover mapping campaigns. The results showed the potential of digital photogrammetry for snow depth mapping and LAI determination by UAV techniques. The average difference obtained between ground-based and UAV-based measurements of snow depth was 7.1 cm with higher values obtained by UAV. The SD of 22 cm for the open area seemed competitive with the typical precision of point measurements. In contrast, the average difference in disturbed forest area was 25 cm with lower values obtained by UAV and a SD of 36 cm, which is in agreement with other studies. The UAV-based LAI measurements revealed the lowest effective LAI values and the plant canopy analyser LAI-2200 the highest effective LAI values. The biggest bias of effective LAI was observed between LAI-2200 and UAV-based analyses. Since the LAI parameter is important for snowpack modelling, this method presents the potential of simplifying LAI retrieval and mapping of snow dynamics while reducing running costs and time.

  10. Measurements of Turbulent Fluxes over Sea Ice Region in the Sea of Okhotsk.

    NASA Astrophysics Data System (ADS)

    Fujisaki, A.; Yamaguchi, H.; Toyota, T.; Futatsudera, A.; Miyanaga, M.

    2007-12-01

    Measurements of turbulent fluxes over the sea ice area were made in the southern part of the Sea of Okhotsk during the cruises of the ice-breaker P/V 'Soya' in 2000-2005. The air-ice drag coefficients CDN were 3.57×10-3 over small floes (diameter φ = 20-100 m), 3.38×10-3 over medium floes (φ = 100-500 m), and 2.12×10-3 over big floes (φ = 500 m-2 km), showing a decrease with increasing floe size. This is because smaller floes contribute more to the roughness of the sea-ice area through their edges than larger ones. The average CDN values showed a gradual increase with ice concentration, simply because the surface of sea ice is rougher than that of open water, with a slight decline at 100% ice concentration, possibly due to the absence of the freeboard effect of the lateral sides of floes. We also compared the relation between the roughness length zM and the friction velocity u* with the model developed in a previous study. The zM-u* relation corresponded well with the model results, while the range of zM we obtained was larger than those obtained at Ice Station Weddell and during the Surface Heat Budget of the Arctic Ocean project. The sensible heat transfer coefficients CHN were 1.35×10-3 at 80-90% ice concentration and 0.95×10-3 at 100% ice concentration, which are comparable with the results of past research. On the other hand, we obtained a maximum CHN value of 2.39×10-3 at 20-50% ice concentration, and 2.35×10-3 over open water, more than twice the typical open-water value of 1.0×10-3. These large CHN values are due to the significant upward sensible heat flux during the measurements.
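
    The link between the roughness lengths and the neutral transfer coefficients used in such analyses is the standard logarithmic-layer relation, quoted here as general background (a conventional von Kármán constant and a 10 m reference height are assumed, not taken from the abstract):

```latex
% Neutral transfer coefficients from the logarithmic wind/temperature profiles:
% kappa ~ 0.4 is the von Karman constant, z_r the reference height (10 m),
% z_M and z_H the momentum and scalar (heat) roughness lengths.
C_{DN} = \frac{\kappa^{2}}{\left[\ln\!\left(z_{r}/z_{M}\right)\right]^{2}},
\qquad
C_{HN} = \frac{\kappa^{2}}{\ln\!\left(z_{r}/z_{M}\right)\,\ln\!\left(z_{r}/z_{H}\right)},
\qquad
u_{*}^{2} = C_{DN}\,U_{z_{r}}^{2}.
```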

  11. Impact of El Niño and La Niña on SeaWiFS, MODIS-A and VIIRS Chlorophyll-a Measurements Along the Equator During 1997 to 2016

    NASA Astrophysics Data System (ADS)

    Halpern, D.; Franz, B. A.; Kuring, N. A.

    2016-12-01

    The Ocean Biology Processing Group at NASA's GSFC recently reprocessed satellite ocean color measurements (SeaWiFS, MODIS-A and VIIRS) to improve accuracy and enhance time-series interoperability and consistency between multi-mission datasets. We chose the 1°S-1°N region along the equator to examine the behavior of Chl-a in El Niño and La Niña events because this latitudinal width represented the scale of Ekman upwelling, which is hypothesized to be a primary mechanism of Chl-a variations along the equator. An El Niño (La Niña) event has five consecutive 3-month-average sea surface temperature anomalies (SSTAs) greater (less) than 0.5°C in the 5°S-5°N, 170°W-120°W region and a super El Niño event occurs when SSTA is greater than 2.0°C. The September 1997 (onset of SeaWiFS data) to July 2016 period contained two super El Niño events, four typical El Niño events and four La Niña events. In the equatorial Pacific Ocean from 135°E (longitude of the westernmost data) to 150°E, the average typical El Niño and La Niña values were approximately the same (0.13 mg m-3). From 150°E to 165°W, the approximate bowl-shaped longitudinal pattern of Chl-a data in the average typical El Niño reached minimum (0.08 mg m-3) at 170°E and then increased to a relatively uniform value of 0.20 mg m-3 from 160°W to the Galapagos, where Chl-a reached 0.45 mg m-3. Eastward from 150°E, Chl-a values in the average typical La Niña increased approximately linearly to 0.21 mg m-3 at 170°E, where Chl-a was 175% larger than that in the average typical El Niño. Chl-a values in the average typical La Niña were approximately 0.22 mg m-3 until the Galapagos, where values reached 0.55 mg m-3. Average Chl-a values in the super El Niño event in 2015-2016 were similar to those associated with the average typical El Niño, but the bottom of the bowl-shaped pattern was shallower and wider. However, the longitudinal pattern of Chl-a in the super El Niño of 1997-1998 differed significantly from the patterns of the average typical El Niño and super El Niño of 2015-2016. Also, Chl-a distributions in the Atlantic and Indian oceans will be described. Correlations between satellite surface wind vector measurements and Chl-a in El Niño and La Niña were not always consistent with the hypothesis of the important contribution of Ekman upwelling and will be discussed.

  12. Diagnostic and prognostic value of a careful symptom evaluation and high sensitive troponin in patients with suspected stable angina pectoris without prior cardiovascular disease.

    PubMed

    Madsen, Debbie M; Diederichsen, Axel C P; Hosbond, Susanne E; Gerke, Oke; Mickley, Hans

    2017-03-01

    Typical angina pectoris (AP) and high-sensitive troponin I (hs-TnI) are independently associated with coronary artery disease (CAD) and future cardiovascular events (CVE). This study aimed to assess the individual and combined diagnostic and prognostic impact of symptoms and hs-TnI in stable chest pain patients without prior cardiovascular disease. During a one-year period, 487 patients with suspected stable AP underwent invasive or CT-coronary angiography (significant stenosis ≥50%). At study inclusion, a careful symptom evaluation was obtained, and patients were classified as having typical AP, atypical AP, or non-cardiac chest pain. Hs-TnI was measured in all patients and divided into tertiles for analysis. Follow-up was a median of 4.9 years, with cardiovascular death, non-fatal myocardial infarction, unstable AP, ischemic stroke, coronary artery bypass grafting, percutaneous coronary intervention, and peripheral vascular surgery as the combined endpoint. Hs-TnI was detected in 486 patients (99.8%). By multivariate regression analysis, typical AP and hs-TnI elevation were associated with increased risk of having significant CAD (typical AP, OR: 3.46; 95% CI: 2.07-5.79; p < 0.0001, hs-TnI, OR: 1.50; 95% CI: 1.12-2.01; p = 0.007) and experiencing future CVE (typical AP, HR: 2.64; 95% CI: 1.74-3.99; p = 0.001, hs-TnI, HR: 1.26; 95% CI: 1.06-1.49; p = 0.008). Patients in the lowest hs-TnI tertile without typical AP (n = 107) had a 1.9% absolute risk of significant CAD and a 3.7% absolute risk of long-term CVE. In clinically stable patients without known cardiovascular disease, a thorough chest-pain history in combination with hs-TnI testing can identify a significant low-risk group. The prognostic need for coronary angiography in these patients seems limited. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Examining the reinforcing value of stimuli within social and non-social contexts in children with and without high-functioning autism.

    PubMed

    Goldberg, Melissa C; Allman, Melissa J; Hagopian, Louis P; Triggs, Mandy M; Frank-Crawford, Michelle A; Mostofsky, Stewart H; Denckla, Martha B; DeLeon, Iser G

    2017-10-01

    One of the key diagnostic criteria for autism spectrum disorder includes impairments in social interactions. This study compared the extent to which boys with high-functioning autism and typically developing boys "value" engaging in activities with a parent or alone. Two different assessments that can empirically determine the relative reinforcing value of social and non-social stimuli were employed: paired-choice preference assessments and progressive-ratio schedules. There were no significant differences between boys with high-functioning autism and typically developing boys on either measure. Moreover, there was a strong correspondence in performance across these two measures for participants in each group. These results suggest that the relative reinforcing value of engaging in activities with a primary caregiver is not diminished for children with autism spectrum disorder.

  14. Solubility properties of synthetic and natural meta-torbernite

    NASA Astrophysics Data System (ADS)

    Cretaz, Fanny; Szenknect, Stéphanie; Clavier, Nicolas; Vitorge, Pierre; Mesbah, Adel; Descostes, Michael; Poinssot, Christophe; Dacheux, Nicolas

    2013-11-01

    Meta-torbernite, Cu(UO2)2(PO4)2·8H2O, is one of the most common secondary minerals resulting from the alteration of pitchblende. The determination of the thermodynamic data associated with this phase appears to be a crucial step toward understanding the origin of uranium deposits and forecasting the fate and transport of uranium in natural media. A parallel approach based on the study of both synthetic and natural samples of meta-torbernite (H3O)0.4Cu0.8(UO2)2(PO4)2·7.6H2O was set up to evaluate its solubility constant. The two solids were first thoroughly characterized and compared by means of XRD, SEM, X-EDS analyses, Raman spectroscopy and BET measurements. The solubility constant was then determined under both under- and supersaturated conditions: the value obtained was close to log Ks,0°(298 K) = -52.9 ± 0.1 whatever the type of experiment and the sample considered. The joint determination of the Gibbs free energy (ΔRG°(298 K) = 300 ± 2 kJ mol-1) then allowed the calculation of ΔRH°(298 K) = 40 ± 3 kJ mol-1 and ΔRS°(298 K) = -879 ± 7 J mol-1 K-1. From these values, the thermodynamic data associated with the formation of meta-torbernite (H3O)0.4Cu0.8(UO2)2(PO4)2·7.6H2O were also evaluated and found to be consistent with those previously obtained by calorimetry, showing the reliability of the method developed in this work. Finally, the data obtained were implemented in a calculation code to determine the conditions of meta-torbernite formation under environmental conditions typical of a former mining site. The saturation index used is SI = log(Q/Ks), with Q = ∏_i a_i^{ν_i}, where ν_i is the stoichiometric coefficient (algebraic value) of species i and a_i the nonequilibrium activity of i.
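
    As a consistency check on the numbers quoted above (standard thermodynamic identities only, no additional data from the paper):

```latex
% Dissolution equilibrium: Delta_R G = -RT ln K_s and Delta_R G = Delta_R H - T Delta_R S.
\Delta_{\mathrm{R}}G^{\circ}(298\,\mathrm{K}) = -RT\ln K_{s,0}^{\circ}
 = -\left(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}\right)\left(298\,\mathrm{K}\right)
   \ln\!\left(10^{-52.9}\right) \approx 302\ \mathrm{kJ\,mol^{-1}},
\qquad
\Delta_{\mathrm{R}}S^{\circ}(298\,\mathrm{K})
 = \frac{\Delta_{\mathrm{R}}H^{\circ}-\Delta_{\mathrm{R}}G^{\circ}}{T}
 = \frac{\left(40-302\right)\times 10^{3}\ \mathrm{J\,mol^{-1}}}{298\ \mathrm{K}}
 \approx -879\ \mathrm{J\,mol^{-1}\,K^{-1}},
```

    in agreement with the quoted values within their stated uncertainties.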

  15. The value of pathogen information in treating clinical mastitis.

    PubMed

    Cha, Elva; Smith, Rebecca L; Kristensen, Anders R; Hertl, Julia A; Schukken, Ynte H; Tauer, Loren W; Welcome, Frank L; Gröhn, Yrjö T

    2016-11-01

    The objective of this study was to determine the economic value of obtaining timely and more accurate clinical mastitis (CM) test results for optimal treatment of cows. Typically CM is first identified when the farmer observes recognisable outward signs. Further information of whether the pathogen causing CM is Gram-positive, Gram-negative or other (including no growth) can be determined by using on-farm culture methods. The most detailed level of information for mastitis diagnostics is obtainable by sending milk samples for culture to an external laboratory. Knowing the exact pathogen permits the treatment method to be specifically targeted to the causation pathogen, resulting in less discarded milk. The disadvantages are the additional waiting time to receive test results, which delays treating cows, and the cost of the culture test. Net returns per year (NR) for various levels of information were estimated using a dynamic programming model. The Value of Information (VOI) was then calculated as the difference in NR using a specific level of information as compared to more detailed information on the CM causative agent. The highest VOI was observed where the farmer assumed the pathogen causing CM was the one with the highest incidence in the herd and no pathogen specific CM information was obtained. The VOI of pathogen specific information, compared with non-optimal treatment of Staphylococcus aureus where recurrence and spread occurred due to lack of treatment efficacy, was $20.43 when the same incorrect treatment was applied to recurrent cases, and $30.52 when recurrent cases were assumed to be the next highest incidence pathogen and treated accordingly. This indicates that negative consequences associated with choosing the wrong CM treatment can make additional information cost-effective if pathogen identification is assessed at the generic information level and if the pathogen can spread to other cows if not treated appropriately.

  16. Precise determination of δ88Sr in rocks, minerals, and waters by double-spike TIMS: A powerful tool in the study of chemical, geologic, hydrologic and biologic processes

    USGS Publications Warehouse

    Neymark, Leonid A.; Premo, Wayne R.; Mel'nikov, Nikolay N.; Emsbo, Poul

    2014-01-01

    We present strontium isotopic (88Sr/86Sr and 87Sr/86Sr) results obtained by 87Sr–84Sr double spike thermal ionization mass-spectrometry (DS-TIMS) for several standards as well as natural water samples and mineral samples of abiogenic and biogenic origin. The detailed data reduction algorithm and a user-friendly Sr-specific stand-alone computer program used for the spike calibration and the data reduction are also presented. Accuracy and precision of our δ88Sr measurements, calculated as permil (‰) deviations from the NIST SRM-987 standard, were evaluated by analyzing the NASS-6 seawater standard, which yielded δ88Sr = 0.378 ± 0.009‰. The first DS-TIMS data for the NIST SRM-607 potassium feldspar standard and for several US Geological Survey carbonate, phosphate, and silicate standards (EN-1, MAPS-4, MAPS-5, G-3, BCR-2, and BHVO-2) are also reported. Data obtained during this work for Sr-bearing solids and natural waters show a range of δ88Sr values of about 2.4‰, the widest observed so far in terrestrial materials. This range is easily resolvable analytically because the demonstrated external error (±SD, standard deviation) for measured δ88Sr values is typically ≤0.02‰. It is shown that the “true” 87Sr/86Sr value obtained by the DS-TIMS or any other external normalization method combines radiogenic and mass-dependent mass-fractionation effects, which cannot be separated. Therefore, the “true” 87Sr/86Sr and the δ87Sr parameter derived from it are not useful isotope tracers. Data presented in this paper for a wide range of naturally occurring sample types demonstrate the potential of the δ88Sr isotope tracer in combination with the traditional radiogenic 87Sr/86Sr tracer for studying a variety of biological, hydrological, and geological processes.
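
    The δ88Sr notation used throughout is the conventional permil deviation from the SRM-987 reference ratio (standard definition, stated here for completeness):

```latex
% Permil (‰) deviation of the 88Sr/86Sr ratio from the NIST SRM-987 reference:
\delta^{88}\mathrm{Sr} =
\left[\frac{\left({}^{88}\mathrm{Sr}/{}^{86}\mathrm{Sr}\right)_{\mathrm{sample}}}
           {\left({}^{88}\mathrm{Sr}/{}^{86}\mathrm{Sr}\right)_{\text{SRM-987}}}-1\right]\times 1000.
```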

  17. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    PubMed

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

    Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods: 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2) and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.

  18. Combining Radar and Daily Precipitation Data to Estimate Meaningful Sub-daily Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Pegram, G. G. S.; Bardossy, A.

    2016-12-01

    Short duration extreme rainfalls are important for design. The purpose of this presentation is not to improve the day-by-day estimation of precipitation, but to obtain reasonable statistics for the sub-daily extremes at gauge locations. We are interested specifically in daily and sub-daily extreme values of precipitation at gauge locations. We do not employ the common procedure of using time series from a control station to determine missing data values at a target station; we are interested in individual rare events, not sequences. The idea is to use radar to disaggregate daily totals to sub-daily amounts. In South Africa, an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording in the same 6-year period. Using this valuable set of data, and because we are only interested in rare extremes, small to medium values of rainfall depth were neglected, leaving 12 days of ranked daily maxima in each set per year, whose sum typically comprised about 50% of each annual rainfall total. The method presented here uses radar for disaggregating daily gauge totals into sub-daily intervals down to 15 minutes in order to extract the maxima of sub-hourly through to daily rainfall at each of 37 selected radar pixels [1 km square in plan] which contained one of the 45 pluviometers not masked out by the radar footprint. The pluviometer data were aggregated to daily totals, to act as if they were daily-read gauges; their only other task was to help in the cross-validation exercise. The extrema were obtained as quantiles by ordering the 12 daily maxima of each interval per year. The unusual and novel goal was not to reproduce the precipitation matching in space and time, but to obtain frequency distributions of the gauge and radar extremes, by matching their ranks, which we found to be stable and meaningful in cross-validation tests. We provide and compare a range of different methodologies to enable reasonable estimation of sub-daily extremes using radar and daily precipitation observations.

  19. Forestland social values and open space preservation.

    Treesearch

    Jeffrey D. Kline; Ralph J. Alig; Brian Garber-Yonts

    2004-01-01

    Concerns have grown about the loss of forestland to development, leading to both public and private efforts to preserve forestland as open space. These lands comprise social values-ecological, scenic, recreation, and resource protection values-not typically reflected in market prices for land. When these values are present, it is up to public and private agencies to...

  20. Higher Education and the Transmission of Educational Values in Today's Society.

    ERIC Educational Resources Information Center

    Escobar-Ortloff, Luz; Ortloff, Warren G.

    Education has traditionally been the primary method of passing on a society's culture and the values it considers to be important. Higher education institutions have not been immune to the crises in the transmission of values. Typically, in higher education basic intellectual values and virtues are mostly left for students to pick up through…

  1. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.
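
    As a scalar illustration of the kind of conversion such a program performs, the sketch below expands a tabulated (azimuthally averaged) phase function in ordinary Legendre polynomials by Gauss-Legendre quadrature. The full vector problem handled by the FORTRAN code uses generalized spherical functions and all scattering-matrix elements; the function and variable names here are hypothetical.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def legendre_expansion_coeffs(theta_deg, phase, n_terms, n_quad=200):
    """Expansion coefficients a_l of a tabulated phase function P(theta), where
    P(mu) = sum_l a_l P_l(mu) and a_l = (2l+1)/2 * int_{-1}^{1} P(mu) P_l(mu) dmu.
    """
    mu_tab = np.cos(np.deg2rad(theta_deg))
    order = np.argsort(mu_tab)                     # np.interp needs increasing abscissae
    mu_tab, phase = mu_tab[order], np.asarray(phase, float)[order]

    mu_q, w_q = leggauss(n_quad)                   # Gauss-Legendre nodes/weights on [-1, 1]
    p_q = np.interp(mu_q, mu_tab, phase)           # interpolate the tabulated values
    coeffs = np.empty(n_terms)
    for l in range(n_terms):
        basis = np.zeros(l + 1); basis[l] = 1.0    # selects P_l in legval
        coeffs[l] = (2 * l + 1) / 2.0 * np.sum(w_q * p_q * legval(mu_q, basis))
    return coeffs

# Quick self-check on an isotropic phase function P(mu) = 1: a_0 ~ 1, the rest ~ 0.
theta = np.linspace(0.0, 180.0, 181)
print(np.round(legendre_expansion_coeffs(theta, np.ones_like(theta), 4), 6))
```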

  2. Cortexin diffusion in human eye sclera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genina, Elina A; Bashkatov, A N; Tuchin, Valerii V

    2011-05-31

    Investigation of the diffusion of cytamines, a typical representative of which is cortexin, is important for evaluating the drug dose necessary to provide a sufficient concentration of the preparation in the inner tissues of the eye. In the present paper, the cortexin diffusion rate in the eye sclera is measured using optical coherence tomography (OCT) and reflectance spectroscopy. The technique for determining the diffusion coefficient is based on recording the temporal dependence of the scattering parameters of the eye sclera caused by partial replacement of the interstitial fluid with the aqueous cortexin solution, which reduces the level of the OCT signal and decreases the reflectance of the sclera. The values of the cortexin diffusion coefficient obtained using the two independent optical methods are in good agreement. (optical technologies in biophysics and medicine)

  3. TrackEtching - A Java based code for etched track profile calculations in SSNTDs

    NASA Astrophysics Data System (ADS)

    Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.

    2017-09-01

    A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles in ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts based on secondary wavelets is used to numerically calculate the etched track profile as a function of the etching time. Provision for both normal and oblique incidence on the detector surface has been incorporated. Results for typical cases are presented and compared with experimental data. Different expressions for the variation of the track etch rate as a function of ion energy have been utilized; the best set of parameter values in these expressions can be obtained by comparison with available experimental data. The critical angle for track development can also be calculated using the present code.
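
    For orientation, the simplest constant-etch-rate, normal-incidence limit of the geometry such a code computes is often summarized by the standard SSNTD relations below (quoted as general background, not taken from this particular code, which handles energy-dependent track etch rates and oblique incidence):

```latex
% Bulk etch rate V_b, track etch rate V_t, etching time t (constant rates, normal incidence):
\sin\theta_{c} = \frac{V_{b}}{V_{t}}, \qquad
L(t) = \left(V_{t}-V_{b}\right)t, \qquad
h(t) = V_{b}\,t, \qquad
D(t) = 2\,V_{b}\,t\,\sqrt{\frac{V_{t}-V_{b}}{V_{t}+V_{b}}},
```

    where θ_c is the critical (cone half-) angle for track development, L the etched track length below the post-etch surface, h the removed surface layer and D the track opening diameter.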

  4. Conformational study of 13C-enriched fibroin in the solid state, using the cross polarization nuclear magnetic resonance method.

    PubMed

    Fujiwara, T; Kobayashi, Y; Kyogoku, Y; Kataoka, K

    1986-01-05

    Silk fibroin with the alanyl carboxyl carbon enriched with 13C was obtained by giving a diet containing 13C-enriched alanine to the larvae of Bombyx mori and Antheraea pernyi at the fifth instar. Sericin-free fibroin fibers were prepared from cocoons, and gut was made from the liquid silk in the gland. The final 13C content was about 13%. Cross polarization/magic angle sample spinning spectra at 25 MHz and 75 MHz were measured for each sample at different orientations. Spectra were simulated using the principal values and orientations of the shielding tensor in the alanine crystal. The results indicate that the beta-structure of the fibroin may be a little more flattened than the typical pleated sheet beta-structure.

  5. Generation of three wide frequency bands within a single white-light cavity

    NASA Astrophysics Data System (ADS)

    Othman, Anas; Yevick, David; Al-Amri, M.

    2018-04-01

    We theoretically investigate the double-Λ scheme inside a Fabry-Pérot cavity employing a weak probe beam and two strong driving fields together with an incoherent pumping mechanism. By generating analytical expressions for the susceptibility and applying the white-light cavity conditions, we devise a procedure that reaches the white-light condition at a smaller gas density than the values typically cited in similar previous studies. Further, when the intensities of the two driving fields are equal, a single giant white band is obtained, while for unequal driving fields three white bands can be present in the cavity. Two additional techniques are then advanced for generating three white bands and a method is described for displacing the center frequency of the bands. Finally, some potential applications are suggested.

  6. High-Accuracy Readout Electronics for Piezoresistive Tactile Sensors

    PubMed Central

    Vidal-Verdú, Fernando

    2017-01-01

    The typical layout in a piezoresistive tactile sensor arranges individual sensors to form an array with M rows and N columns. While this layout reduces the wiring involved, it does not allow the values of the sensor resistors to be measured individually due to the appearance of crosstalk caused by the nonidealities of the array reading circuits. In this paper, two reading methods that minimize errors resulting from this phenomenon are assessed by designing an electronic system for array reading, and the results are compared to those obtained using the traditional method, obviating the nonidealities of the reading circuit. The different models were compared by testing the system with an array of discrete resistors. The system was later connected to a tactile sensor with 8 × 7 taxels. PMID:29104229

  7. Dynamical gluon mass in the instanton vacuum model

    NASA Astrophysics Data System (ADS)

    Musakhanov, M.; Egamberdiev, O.

    2018-04-01

    We consider the modifications of gluon properties in the instanton liquid model (ILM) for the QCD vacuum. Rescattering of gluons on instantons generates a dynamical momentum-dependent gluon mass Mg(q). First, we consider the case of a scalar gluon, for which no zero-mode problem occurs and the dynamical mass Ms(q) can be found. Using the typical phenomenological values of the average instanton size ρ = 1/3 fm and average inter-instanton distance R = 1 fm, we get Ms(0) = 256 MeV. We then extend this approach to the real vector gluon with the zero modes carefully considered. We obtain the expression Mg^2(q) = 2 Ms^2(q). This modification of the gluon in the instanton medium will shed light on nonperturbative aspects of heavy quarkonium physics.
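
    As a quick numerical aside (the relation is quoted from the record; the final number merely follows from it):

    \[ M_g^2(q) = 2\,M_s^2(q) \;\Rightarrow\; M_g(0) = \sqrt{2}\,M_s(0) \approx \sqrt{2}\times 256\ \text{MeV} \approx 362\ \text{MeV}. \]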

  8. Simplification of the laser absorption process in the particle simulation for the laser-induced shockwave processing

    NASA Astrophysics Data System (ADS)

    Shimamura, Kohei

    2016-09-01

    To reduce the computational cost of particle-method simulations of laser plasma, we examined a simplification of the laser absorption process. Because the laser frequency is much higher than the collision frequency between electrons and heavy particles, we assumed that the electrons obtain a constant value of energy from the laser irradiation. First, the simplified laser absorption process was verified by comparing the EEDF and the laser absorptivity with those of a PIC-FDTD method. Second, the laser plasma induced by a TEA CO2 laser in an argon atmosphere was modeled using the 1D3V DSMC method with the simplified laser absorption. As a result, the LSDW was observed with the typical electron and neutral density distributions.

  9. Photosynthetic characteristics of an amphibious plant, Eleocharis vivipara: Expression of C4 and C3 modes in contrasting environments

    PubMed Central

    Ueno, Osamu; Samejima, Muneaki; Muto, Shoshi; Miyachi, Shigetoh

    1988-01-01

    Eleocharis vivipara Link, a freshwater amphibious leafless plant belonging to the Cyperaceae, can grow in both terrestrial and submersed aquatic conditions. Two forms of E. vivipara obtained from these contrasting environments were examined for the characteristics associated with C4 and C3 photosynthesis. In the terrestrial form (δ 13C values = -13.5 to -15.4‰, where ‰ is parts per thousand), the culms, which are photosynthetic organs, possess a Kranz-type anatomy typical of C4 plants, and well-developed bundle-sheath cells contain numerous large chloroplasts. In the submersed form (δ 13C value = -25.9‰), the culms possess anatomical features characteristic of submersed aquatic plants, and the reduced bundle-sheath cells contain only a few small chloroplasts. 14C pulse-12C chase experiments showed that the terrestrial form and the submersed form fix carbon by way of the C4 pathway, with aspartate (40%) and malate (35%) as the main primary products, and by way of the C3 pathway, with 3-phosphoglyceric acid (53%) and sugar phosphates (14%) as the main primary products, respectively. The terrestrial form showed photosynthetic enzyme activities typical of the NAD-malic enzyme-C4 subtype, whereas the submersed form showed decreased activities of key C4 enzymes and an increased ribulose 1,5-bisphosphate carboxylase (EC 4.1.1.39) activity. These data suggest that this species can differentiate into the C4 mode under terrestrial conditions and into the C3 mode under submersed conditions. PMID:16593980

  10. Lunar Silicon Abundance determined by Kaguya Gamma-ray Spectrometer and Chandrayaan-1 Moon Mineralogy Mapper

    NASA Astrophysics Data System (ADS)

    Kim, Kyeong; Berezhnoy, Alexey; Wöhler, Christian; Grumpe, Arne; Rodriguez, Alexis; Hasebe, Nobuyuki; Van Gasselt, Stephan

    2016-07-01

    Using Kaguya GRS data, we investigated the Si distribution on the Moon, based on a study of the 4934 keV Si gamma-ray peak caused by the interaction between thermal neutrons and lunar Si-28 atoms. A Si peak analysis on a grid of 10 degrees in longitude and latitude was accomplished with the IRAP Aquarius program, followed by a correction for altitude and thermal neutron density. A spectral-parameter-based regression model of the Si distribution was built for latitudes between 60°S and 60°N, based on the continuum slopes, band depths, widths and minimum wavelengths of the absorption bands near 1 μm and 2 μm. Based on these regression models, a nearly global cpm (counts per minute) map of Si with a resolution of 20 pixels per degree was constructed. The construction of a nearly global map of lunar Si abundances has thus been achieved by combining regression-based analysis of KGRS cpm data with M^3 spectral reflectance data, calibrated with respect to returned-sample-based wt% values. The Si abundances estimated with our method systematically exceed those of the LP GRS Si data set but are consistent with typical Si abundances of lunar basalt samples (in the maria) and feldspathic mineral samples (in the highlands). Our Si map shows that the Si abundance values on the Moon are typically between 17 and 28 wt%. The obtained Si map will provide an important aspect in understanding both the distribution of minerals and the evolution of the lunar surface since its formation.
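
    A minimal sketch of the kind of spectral-parameter regression the record describes, using synthetic stand-ins for the feature vectors (continuum slope, band depths, widths, band-minimum wavelengths) and counts-per-minute targets; all names and numbers below are illustrative, not the authors' data or pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative stand-ins for M3-derived spectral parameters per grid cell.
    n_cells, n_features = 500, 7
    X = rng.normal(size=(n_cells, n_features))

    # Synthetic "true" linear relation to Si counts per minute plus noise
    # (placeholder for the calibration against KGRS data described in the record).
    true_coef = rng.normal(size=n_features)
    y_cpm = X @ true_coef + 0.1 * rng.normal(size=n_cells)

    # Ordinary least-squares regression with an intercept term.
    A = np.column_stack([np.ones(n_cells), X])
    coef, *_ = np.linalg.lstsq(A, y_cpm, rcond=None)

    # Predicted cpm map; a further linear calibration against returned-sample
    # wt% values would convert cpm to Si abundance.
    y_pred = A @ coef
    print("RMS residual (cpm units):", np.sqrt(np.mean((y_pred - y_cpm) ** 2)))
    ```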

  11. Precambrian fluvial deposits: Enigmatic palaeohydrological data from the c. 2-1.9 Ga Waterberg Group, South Africa

    NASA Astrophysics Data System (ADS)

    Eriksson, Patrick G.; Bumby, Adam J.; Brümer, Jacobus J.; van der Neut, Markus

    2006-08-01

    Precambrian fluvial systems, lacking the influence of rooted vegetation, probably were characterised by flashy surface runoff, low bank stability, broad channels with abundant bedload, and faster rates of channel migration; consequently, a braided fluvial style is generally accepted. Pre-vegetational braided river systems, active under highly variable palaeoclimatic conditions, may have been more widespread than are modern, ephemeral dry-land braided systems. Aeolian deflation of fine fluvial detritus does not appear to have been prevalent. With the onset of large cratons by the Neoarchaean-Palaeoproterozoic, very large, perennial braided river systems became typical. The c. 2.06-1.88 Ga Waterberg Group, preserved within a Main and a smaller Middelburg basin on the Kaapvaal craton, was deposited largely by alluvial/braided-fluvial and subordinate palaeo-desert environments, within fault-bounded, possibly pull-apart type depositories. Palaeohydrological data obtained from earlier work in the Middelburg basin (Wilgerivier Formation) are compared to such data derived from the correlated Blouberg Formation, situated along the NE margin of the Main basin. Within the preserved Blouberg depository, palaeohydrological parameters estimated from clast size and cross-bed set thickness data, exhibit rational changes in their values, either in a down-palaeocurrent direction, or from inferred basin margin to palaeo-basin centre. In both the Wilgerivier and Blouberg Formations, calculated palaeoslope values (derived from two separate formulae) plot within the gap separating typical alluvial fan gradients from those which characterise rivers (cf. [Blair, T.C., McPherson, J.G., 1994. Alluvial fans and their natural distinction from rivers based on morphology, hydraulic processes, sedimentary processes, and facies assemblages. J. Sediment. Res. A64, 450-489.]). Although it may be argued that such data support possibly unique fluvial styles within the Precambrian, perhaps related to a combination of major global-scale tectono-thermal and atmospheric-palaeoclimatic events, a simpler explanation of these apparently enigmatic palaeoslope values may be pertinent. Of the two possible palaeohydrological formulae for calculating palaeoslope, one provides results close to typical fluvial gradients; the other formula relies on preserved channel-width data. We suggest that the latter will not be reliable due to problematic preservation of original channel-widths within an active braided fluvial system. We thus find no unequivocal support for a unique fluvial style for the Precambrian, beyond that generally accepted for that period and discussed briefly in the first paragraph.

  12. Do We Value Caring?

    ERIC Educational Resources Information Center

    Weissbourd, Richard; Anderson, Trisha Ross

    2016-01-01

    When asked about their child-rearing priorities, parents in the United States are likely to say it's more important to raise children who are caring than to raise high achievers. Schools, too, typically trumpet values such as caring, honesty, and fairness. These values are posted on walls, reiterated in assemblies, and included in mission…

  13. Describing Typical Capstone Course Experiences from a National Random Sample

    ERIC Educational Resources Information Center

    Grahe, Jon E.; Hauhart, Robert C.

    2013-01-01

    The pedagogical value of capstones has been regularly discussed within psychology. This study presents results from an examination of a national random sample of department webpages and an online survey that characterized the typical capstone course in terms of classroom activities and course administration. The department webpages provide an…

  14. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was 4 cm longer than the true value, dose errors of up to 160% were found. The results address the important question of the level of accuracy to which each input parameter must be determined in order to obtain accurate organ dose results.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J.D.; Ott, E.; Yorke, J.A.

    Dimension is perhaps the most basic property of an attractor. In this paper we discuss a variety of different definitions of dimension, compute their values for a typical example, and review previous work on the dimension of chaotic attractors. The relevant definitions of dimension are of two general types, those that depend only on metric properties, and those that depend on probabilistic properties (that is, they depend on the frequency with which a typical trajectory visits different regions of the attractor). Both our example and the previous work that we review support the conclusion that all of the probabilistic dimensions take on the same value, which we call the dimension of the natural measure, and all of the metric dimensions take on a common value, which we call the fractal dimension. Furthermore, the dimension of the natural measure is typically equal to the Lyapunov dimension, which is defined in terms of Lyapunov numbers, and thus is usually far easier to calculate than any other definition. Because it is computable and more physically relevant, we feel that the dimension of the natural measure is more important than the fractal dimension.
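
    For reference, the Lyapunov dimension mentioned above is usually written in the Kaplan-Yorke form (standard definition, not quoted from the record), in terms of the ordered Lyapunov exponents λ_1 ≥ λ_2 ≥ … (the logarithms of the Lyapunov numbers):

    \[ D_L = j + \frac{\lambda_1 + \lambda_2 + \cdots + \lambda_j}{|\lambda_{j+1}|}, \]

    where j is the largest index for which the partial sum λ_1 + ⋯ + λ_j is still non-negative.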

  16. Low-temperature specific heat of uranium germanides

    NASA Astrophysics Data System (ADS)

    Pikul, A.; Troć, R.; Czopnik, A.; Noël, H.

    2014-06-01

    We report measurements of the specific heat down to a lowest temperature of 2 K for the paramagnetic binaries U5Ge4 (Ti5Ga4-type) and UGe (ThIn-type) as well as for the ferromagnetic binaries U3Ge5-x (x=0.2) and UGe2-x (x=0.3) (with TC=94 and 47 K), which have defect crystal structures of the AlB2- and ThSi2-type, respectively. The obtained data were compared to those of other uranium germanides studied earlier: UGe2 (ZrGa2) and UGe3 (Cu3Au). Among all these germanides, only UGe exhibits an enhanced electronic specific heat coefficient, γ(0), equal to 137 mJ/(mol-U K^2). This value can be compared to that derived for the best-known spin fluctuator, UAl2 (143 mJ/(mol-U K^2)). The other uranium germanides have less enhanced γ(0) values (27-65 mJ/(mol-U K^2)). The lowest value, of about 20 mJ/(mol-U K^2), was reported earlier for the typical temperature-independent paramagnet UGe3. For the new ferromagnetic phase UGe2-x, the inferred magnetic entropy, Sm, reaches a value of R ln 2 at the Curie temperature, TC, which corresponds to a doublet ground state of the uranium ion in this Ge-deficient digermanide.
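
    For context (standard low-temperature relations, not quoted from the record), the coefficient γ(0) and the quoted magnetic entropy come from

    \[ C(T) \simeq \gamma T + \beta T^{3}, \qquad S_m(T_C) = \int_0^{T_C} \frac{C_m(T)}{T}\,dT \approx R\ln 2, \]

    where the first expression is the usual electronic-plus-lattice form at low temperature and the second integrates the magnetic part of the specific heat of the ferromagnet up to its Curie temperature.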

  17. A multicenter study on PIVKA reference interval of healthy population and establishment of PIVKA cutoff value for hepatocellular carcinoma diagnosis in China.

    PubMed

    Qin, X; Tang, G; Gao, R; Guo, Z; Liu, Z; Yu, S; Chen, M; Tao, Z; Li, S; Liu, M; Wang, L; Hou, L; Xia, L; Cheng, X; Han, J; Qiu, L

    2017-08-01

    The aim of this study was to investigate the reference interval of protein induced by vitamin K absence or antagonist-II (PIVKA-II) in the Chinese population and to evaluate its medical decision level for hepatocellular carcinoma (HCC) diagnosis. To determine the reference range for Chinese individuals, a total of 855 healthy subjects in five typical regions of China were enrolled to obtain a 95% reference interval. In a case-control study that recruited subjects diagnosed with HCC, metastatic liver cancer, bile duct cancer, hepatitis, cirrhosis, or other benign liver diseases, as well as subjects administered anticoagulants, receiver operating characteristic analysis was used to determine the PIVKA-II cutoff value for a medical decision. The concentration of PIVKA-II had no relationship with age or gender, whereas region was a significant factor associated with the level of PIVKA-II. The 95% reference interval determined in this study for PIVKA-II in healthy Chinese individuals was 28 mAU/mL, and the cutoff value to distinguish patients with HCC from the disease control groups was 36.5 mAU/mL. In clinical applications, it is recommended that each laboratory choose its own reference interval based on a regional population study, or its own cutoff value for disease diagnosis. © 2017 John Wiley & Sons Ltd.

  18. Thio-arylglycosides with Various Aglycon Para-Substituents, a Probe for Studying Chemical Glycosylation Reactions

    PubMed Central

    Li, Xiaoning; Huang, Lijun; Hu, Xiche; Huang, Xuefei

    2009-01-01

    Three series of thioglycosyl donors, differing only in their respective aglycon substituents within each series, have been prepared as representatives of typical glycosyl donors. The relative anomeric reactivities of these donors were quantified under competitive glycosylation conditions with various reaction times, promoters, solvents and acceptors. A reactivity difference of over three orders of magnitude was generated by simple transformation of the para-substituent on the aglycon with methanol as the acceptor, while chemoselectivities became lower with carbohydrate acceptors. Excellent linear correlations were attained between the relative reactivity values of the donors and the σp values of the substituents in the Hammett plots. This indicates that the glycosylation mechanism remains the same over a wide range of reactivities and glycosylation conditions. The negative slopes of the Hammett plots suggest that electron-donating substituents expedite the reactions, and the magnitudes of the slopes can be rationalized by neighboring-group participation as well as the electronic properties of the glycon protective groups. Within the same series of donors, less nucleophilic acceptors gave smaller slopes in their Hammett plots. This is consistent with the notion that nucleophilic attack by the acceptor onto the reactive intermediate is part of the rate-limiting step of the glycosylation reaction. Excellent linear Hammett correlations were obtained between the relative reactivity values of three series of donors differing only in their aglycon substituents and the σp values of the substituents. PMID:19081954
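
    For context, the linear correlations described above follow the standard Hammett form (not quoted from the paper), with the relative reactivity value playing the role of the rate ratio:

    \[ \log\frac{k_X}{k_H} = \rho\,\sigma_p, \]

    so a negative slope ρ means that electron-donating para substituents (σ_p < 0) increase the relative reactivity and hence accelerate glycosylation.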

  19. Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.

    PubMed

    Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter

    2015-12-01

    Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring the prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that, under a normal sampling model and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.

  20. Macroscopic behavior and microscopic magnetic properties of nanocarbon

    NASA Astrophysics Data System (ADS)

    Lähderanta, E.; Ryzhov, V. A.; Lashkul, A. V.; Galimov, D. M.; Titkov, A. N.; Matveev, V. V.; Mokeev, M. V.; Kurbakov, A. I.; Lisunov, K. G.

    2015-06-01

    Here are presented investigations of powder and glass-like samples containing carbon nanoparticles, not intentionally doped and doped with Ag, Au and Co. The neutron diffraction study reveals an amorphous structure of the samples doped with Au and Co, as well as the magnetic scattering due to a long-range FM order in the Co-doped sample. The composition and molecular structure of the sample doped with Au is clarified with the NMR investigations. The temperature dependence of the magnetization, M (T), exhibits large irreversibility in low fields of B=1-7 mT. M (B) saturates already above 2 T at high temperatures, but deviates from the saturation behavior below 50 (150 K). Magnetic hysteresis is observed already at 300 K and exhibits a power-law temperature decay of the coercive field, Bc (T). The macroscopic behavior above is typical of an assembly of partially blocked magnetic nanoparticles. The values of the saturation magnetization, Ms, and the blocking temperature, Tb, are obtained as well. However, the hysteresis loop in the Co-doped sample differs from that in other samples, and the values of Bc and Ms are noticeably increased.

  1. Cardioinhibitory effect of atrial peptide in conscious rats.

    PubMed

    Allen, D E; Gellai, M

    1987-03-01

    The hemodynamic and renal excretory responses to 150-min atriopeptin II (AP II) infusion (330 ng X kg-1 X min-1) were assessed in five chronically instrumented rats with (FR protocol) and without (NR protocol) replaced urinary fluid losses. The observed changes were compared with those obtained by vehicle in the same rats. The hypotension seen with AP II infusion (120-min value: -27 +/- 2%, FR and NR responses combined) was due solely to a decreased cardiac output (CO; 120-min combined value: -34 +/- 3%). Total peripheral resistance remained unchanged or slightly elevated. A drop in stroke volume plus a later-developing (by 75-90 min) decrease in heart rate contributed to the CO decline. This latter bradycardic component, the opposite response to that typically produced reflexly by hypotension, was reversed by atropine sulfate treatment at 120 min and may thus be neural in origin. The finding of similar hemodynamic changes in the FR and NR rats and the lack of a significant effect of AP II on hematocrit suggest that volume depletion or a plasma extravasation were not contributors to the cardioinhibitory effect of the peptide.

  2. Experiments in dilution jet mixing

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.; Berenfeld, A.

    1983-01-01

    Experimental results are presented on the mixing of a single row of jets with an isothermal mainstream in a straight duct, with flow and geometric variations typical of combustion chambers in gas turbine engines included. It is found that at a constant momentum ratio, variations in the density ratio have only a second-order effect on the profiles. A first-order approximation to the mixing of jets with a variable temperature mainstream can, it is found, be obtained by superimposing the jets-in-an-isothermal-crossflow and mainstream profiles. Another finding is that the flow area convergence, especially injection-wall convergence, significantly improves the mixing. For opposed rows of jets with the orifice cone centerlines in-line, the optimum ratio of orifice spacing to duct height is determined to be 1/2 of the optimum value for single injection at the same momentum ratio. For opposed rows of jets with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is found to be twice the optimum value for single side injection at the same momentum ratio.

  3. Stabilization and analytical tuning rule of double-loop control scheme for unstable dead-time process

    NASA Astrophysics Data System (ADS)

    Ugon, B.; Nandong, J.; Zang, Z.

    2017-06-01

    The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance and robustness. In this paper, we conduct a stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the necessary and sufficient Routh-Hurwitz stability criteria, we establish several stability regions which enclose the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
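
    A minimal sketch of the Routh-Hurwitz check that underlies such stability regions, applied to a generic closed-loop characteristic polynomial; the example coefficients are illustrative and not taken from the paper, and the special cases (zero pivot, all-zero row) are not handled.

    ```python
    import numpy as np

    def routh_hurwitz_stable(coeffs, eps=1e-9):
        """Return True if all roots of the polynomial (highest power first)
        lie in the open left half-plane, judged from the Routh array first column."""
        a = np.asarray(coeffs, dtype=float)
        n = len(a)
        rows = [a[0::2].copy(), a[1::2].copy()]
        width = len(rows[0])
        rows[1] = np.pad(rows[1], (0, width - len(rows[1])))
        for _ in range(n - 2):                         # build the remaining rows
            prev2, prev1 = rows[-2], rows[-1]
            pivot = prev1[0] if abs(prev1[0]) > eps else eps
            new = np.zeros(width)
            for j in range(width - 1):
                new[j] = (prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1]) / pivot
            rows.append(new)
        return all(r[0] > 0 for r in rows)             # no sign change => stable

    # Example: s^3 + 4s^2 + 5s + 2 has roots -1, -1, -2 (stable); an inner/outer
    # P-PID loop would contribute its gains to these coefficients.
    print(routh_hurwitz_stable([1, 4, 5, 2]))     # True
    print(routh_hurwitz_stable([1, 1, -2, 1]))    # False (right-half-plane roots)
    ```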

  4. Earthquake slip weakening and asperities explained by thermal pressurization.

    PubMed

    Wibberley, Christopher A J; Shimamoto, Toshihiko

    2005-08-04

    An earthquake occurs when a fault weakens during the early portion of its slip at a faster rate than the release of tectonic stress driving the fault motion. This slip weakening occurs over a critical distance, D(c). Understanding the controls on D(c) in nature is severely limited, however, because the physical mechanism of weakening is unconstrained. Conventional friction experiments, typically conducted at slow slip rates and small displacements, have obtained D(c) values that are orders of magnitude lower than values estimated from modelling seismological data for natural earthquakes. Here we present data on fluid transport properties of slip zone rocks and on the slip zone width in the centre of the Median Tectonic Line fault zone, Japan. We show that the discrepancy between laboratory and seismological results can be resolved if thermal pressurization of the pore fluid is the slip-weakening mechanism. Our analysis indicates that a planar fault segment with an impermeable and narrow slip zone will become very unstable during slip and is likely to be the site of a seismic asperity.

  5. Electronic, magnetic properties and phase diagrams of system with Fe4N compound: An ab initio calculations and Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Masrour, R.; Jabar, A.; Hlil, E. K.

    2018-05-01

    Self-consistent ab initio calculations, based on the Density Functional Theory (DFT) approach and using the Full Potential Linear Augmented Plane Wave (FLAPW) method, are performed to investigate the electronic and magnetic properties of the Fe4N compound. Polarized spin and spin-orbit coupling are included in the calculations within the framework of the ferromagnetic state between Fe(I) and Fe(II) in the Fe4N compound. We have used the data obtained from the ab initio calculations as input for Monte Carlo simulations to calculate the magnetic properties of this compound, such as the ground-state phase diagrams, the total and partial magnetizations of Fe(I) and Fe(II), and the transition temperatures. The variation of the magnetization with the crystal field is also studied. The magnetic hysteresis cycles of the same Fe4N compound are determined for different temperatures and crystal field values. A two-step hysteresis loop is evidenced, which is typical for the Fe4N structure. The ferromagnetic and superparamagnetic phases are observed as well.

  6. A magnetic damper for first mode vibration reduction in multimass flexible rotors

    NASA Technical Reports Server (NTRS)

    Kasarda, M. E. F.; Allaire, P. E.; Humphris, R. R.; Barrett, L. E.

    1989-01-01

    Many rotating machines such as compressors, turbines and pumps have long thin shafts with resulting vibration problems, and would benefit from additional damping near the center of the shaft. Magnetic dampers have the potential to be employed in these machines because they can operate in the working fluid environment unlike conventional bearings. An experimental test rig is described which was set up with a long thin shaft and several masses to represent a flexible shaft machine. An active magnetic damper was placed in three locations: near the midspan, near one end disk, and close to the bearing. With typical control parameter settings, the midspan location reduced the first mode vibration 82 percent, the disk location reduced it 75 percent and the bearing location attained a 74 percent reduction. Magnetic damper stiffness and damping values used to obtain these reductions were only a few percent of the bearing stiffness and damping values. A theoretical model of both the rotor and the damper was developed and compared to the measured results. The agreement was good.

  7. Coherent nonlinear optical response of single-layer black phosphorus: third-harmonic generation

    NASA Astrophysics Data System (ADS)

    Margulis, Vladimir A.; Muryumin, Evgeny E.; Gaiduk, Evgeny A.

    2017-10-01

    We theoretically calculate the nonlinear optical (NLO) response of phosphorene (a black phosphorus monolayer) to a normally incident and linearly polarized coherent laser radiation of frequency ω, resulting in the generation of radiation at frequency 3ω. We derive explicit analytic expressions for four independent nonvanishing elements of the third-order NLO susceptibility tensor, describing the third-harmonic generation (THG) from phosphorene. The final formulas are numerically evaluated for typical values of the system's parameters to explore how the efficiency of the THG varies with both the frequency and the polarization direction of the incident radiation. The results obtained show a resonant enhancement of the THG efficiency when the pump photon energy ℏω approaches a value of one third of the bandgap energy Eg (≈1.5 eV) of phosphorene. It is also shown that the THG efficiency exhibits a specific polarization dependence, allowing the THG to be used for determining the orientation of phosphorene's crystallographic axes. Our findings highlight the material's potential for practical application in nanoscale photonic devices such as frequency convertors operating in the near-infrared spectral range.

  8. First Principles Model of Electric Cable Braid Penetration with Dielectrics

    DOE PAGES

    Campione, Salvatore; Warne, Larry Kevin; Langston, William L.; ...

    2018-01-01

    In this study, we report the formulation to account for dielectrics in a first principles multipole-based cable braid electromagnetic penetration model. To validate our first principles model, we consider a one-dimensional array of wires, which can be modeled analytically with a multipole-conformal mapping expansion for the wire charges; however, the first principles model can be readily applied to realistic cable geometries. We compare the elastance (i.e. the inverse of the capacitance) results from the first principles cable braid electromagnetic penetration model to those obtained using the analytical model. The results are found in good agreement up to a radius to half spacing ratio of 0.5-0.6, depending on the permittivity of the dielectric used, within the characteristics of many commercial cables. We observe that for typical relative permittivities encountered in braided cables, the transfer elastance values are essentially the same as those of free space; the self-elastance values are also approximated by the free space solution as long as the dielectric discontinuity is taken into account for the planar mode.

  9. Spatiotemporal Dynamics and Fitness Analysis of Global Oil Market: Based on Complex Network

    PubMed Central

    Wang, Minggang; Fang, Guochang; Shao, Shuai

    2016-01-01

    We study the overall topological properties of the global oil trade network, such as degree, strength, cumulative distribution, information entropy and weight clustering. The structural evolution of the network is investigated as well. We find that the global oil import and export networks do not show a typical scale-free distribution, but display a disassortative property. Furthermore, based on the monthly data of oil import values during 2005.01-2014.12, by applying random matrix theory, we investigate the complex spatiotemporal dynamics at the country level and the fitness evolution of the global oil market from a demand-side analysis. Abundant information about the global oil market can be obtained from the deviating eigenvalues. The results show that the oil market has experienced five different periods, which is consistent with the evolution of country clusters. Moreover, we find that the changing trend of the fitness function agrees with that of gross domestic product (GDP), and suggest that the fitness evolution of the oil market can be predicted by forecasting GDP values. To conclude, some suggestions are provided according to the results. PMID:27706147
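
    A minimal sketch of the random-matrix step described here, using synthetic monthly import series; the Marchenko-Pastur bounds used to flag "deviating" eigenvalues are the standard ones, and all data below are placeholders rather than the authors' dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder data: monthly oil-import values for N countries over T months
    # (the record covers 2005.01-2014.12, i.e. T = 120 months).
    N, T = 30, 120
    series = rng.lognormal(mean=10.0, sigma=0.3, size=(N, T))

    # Standardized log-changes and their correlation matrix.
    returns = np.diff(np.log(series), axis=1)
    returns = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
    C = returns @ returns.T / returns.shape[1]

    # Marchenko-Pastur bounds for a purely random correlation matrix.
    Q = returns.shape[1] / N
    lam_min, lam_max = (1 - 1 / np.sqrt(Q)) ** 2, (1 + 1 / np.sqrt(Q)) ** 2

    eigvals = np.linalg.eigvalsh(C)
    deviating = eigvals[(eigvals < lam_min) | (eigvals > lam_max)]
    print(f"MP band [{lam_min:.2f}, {lam_max:.2f}], deviating eigenvalues: {deviating.round(2)}")
    ```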

  10. Rarefaction windows in a high-power impulse magnetron sputtering plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmucci, Maria; Britun, Nikolay; Konstantinidis, Stephanos

    2013-09-21

    The velocity distribution function of the sputtered particles in the direction parallel to the planar magnetron cathode is studied by spatially and time-resolved laser-induced fluorescence spectroscopy in a short-duration (20 μs) high-power impulse magnetron sputtering discharge. Experimental evidence that the neutral and ionized sputtered particles have a constant (saturated) velocity at the end of the plasma on-time is demonstrated. The velocity component parallel to the target surface reaches values of about 5 km/s for Ti atoms and ions, which is higher than the values previously measured in direct current sputtering discharges. The results point out the presence of a strong gas rarefaction that significantly reduces the energy dissipation of the sputtered particles during a certain time interval at the end of the plasma pulse, referred to as the "rarefaction window" in this work. The obtained results agree with and essentially clarify the dynamics of the HiPIMS discharge studied previously during the plasma off-time in N. Britun, Appl. Phys. Lett. 99, 131504 (2011).

  11. The analysis of distribution of meteorological over China in astronomical site selection

    NASA Astrophysics Data System (ADS)

    Zhang, Cai-yun; Weng, Ning-quan

    2014-02-01

    The distributions of parameters such as sunshine hours, precipitation, and visibility were obtained by analyzing the meteorological data from 906 stations in China during 1981-2012, and the monthly and annual variations of the parameters at some typical stations were discussed. The results show that: (1) the distribution of clear days is similar to that of sunshine hours, the values of which decrease from north to south and from west to east, while the distributions of cloud cover, precipitation and vapor pressure show the opposite trend; (2) the northwest areas of China have characteristics such as low precipitation and vapor pressure, little cloud cover, and good visibility, which are the general requirements of astronomical site selection; (3) the parameters show an obvious monthly variation, with large precipitation, long sunshine hours and strong radiation in the middle months of the year and the opposite at its beginning and end; (4) at the selected stations, the vapor pressure decreases year by year, and the optical depth remains similar or invariable. All the above results provide a basis for astronomical site selection.

  12. Thermal Optimization of Growth and Quality in Protein Crystals

    NASA Technical Reports Server (NTRS)

    Wiencek, John M.

    1996-01-01

    Experimental evidence suggests that larger and higher quality crystals can be attained in the microgravity of space; however, the effect of growth rate on protein crystal quality is not well documented. This research is the first step towards providing strategies to grow crystals under constant rates of growth. Controlling growth rates at a constant value allows for direct one-to-one comparison of results obtained in microgravity and on earth. The overall goal of the project was to control supersaturation at a constant value during protein crystal growth by varying temperature in a predetermined manner. Applying appropriate theory requires knowledge of specific physicochemical properties of the protein solution including the effect of supersaturation on growth rates and the effect of temperature on protein solubility. Such measurements typically require gram quantities of protein and many months of data acquisition. A second goal of the project applied microcalorimetry for the rapid determination of these physicochemical properties using a minimum amount of protein. These two goals were successfully implemented on hen egg-white lysozyme. Results of these studies are described in the attached reprints.

  13. First Principles Model of Electric Cable Braid Penetration with Dielectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry Kevin; Langston, William L.

    In this study, we report the formulation to account for dielectrics in a first principles multipole-based cable braid electromagnetic penetration model. To validate our first principles model, we consider a one-dimensional array of wires, which can be modeled analytically with a multipole-conformal mapping expansion for the wire charges; however, the first principles model can be readily applied to realistic cable geometries. We compare the elastance (i.e. the inverse of the capacitance) results from the first principles cable braid electromagnetic penetration model to those obtained using the analytical model. The results are found in good agreement up to a radius to half spacing ratio of 0.5-0.6, depending on the permittivity of the dielectric used, within the characteristics of many commercial cables. We observe that for typical relative permittivities encountered in braided cables, the transfer elastance values are essentially the same as those of free space; the self-elastance values are also approximated by the free space solution as long as the dielectric discontinuity is taken into account for the planar mode.

  14. Investigating Teachers' Explanations for Aggressive Classroom Discipline Strategies in China and Australia

    ERIC Educational Resources Information Center

    Riley, Philip; Lewis, Ramon; Wang, Bingxin

    2012-01-01

    Student misbehaviour can provoke aggressive teacher management (e.g. yelling in anger), adversely affecting students' learning and attitudes toward school. To investigate this phenomenon, data were obtained from 75 Chinese (typically Eastern) and 192 Victorian (typically Western) secondary teachers who self-reported aggressive management. Results:…

  15. The price of your soul: neural evidence for the non-utilitarian representation of sacred values

    PubMed Central

    Berns, Gregory S.; Bell, Emily; Capra, C. Monica; Prietula, Michael J.; Moore, Sara; Anderson, Brittany; Ginges, Jeremy; Atran, Scott

    2012-01-01

    Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits. PMID:22271790

  16. The price of your soul: neural evidence for the non-utilitarian representation of sacred values.

    PubMed

    Berns, Gregory S; Bell, Emily; Capra, C Monica; Prietula, Michael J; Moore, Sara; Anderson, Brittany; Ginges, Jeremy; Atran, Scott

    2012-03-05

    Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

  17. Pragmatics in pre-schoolers with language impairments.

    PubMed

    Geurts, Hilde; Embrechts, Mariëtte

    2010-01-01

    Pragmatic assessment methods are very diverse and differ in informant type. Some rely on parents, others on teachers/professionals, and some directly test pragmatic abilities in the children themselves. A widely used pragmatic parent questionnaire is the Children's Communication Checklist--2 (CCC-2). However, it is not known how scores on the CCC-2 relate to direct measures of pragmatics. The aim of the current study is to determine whether children's pragmatic language patterns obtained with a parent questionnaire converge with the findings obtained when the children are directly tested with a pragmatic test. The CCC-2 and the Nijmegen Pragmatics Test (NPT) were applied to 24 pre-schoolers (aged 4-7 years) with various language impairments and 33 age-matched typically developing pre-schoolers. Both pragmatic language instruments clearly differentiated between pre-schoolers with language impairments and those without language impairments. However, the obtained correlations between the different measures were low to moderate. The specificity of each of the instruments was sufficient, but the sensitivity was generally poor. The instruments did not always converge, but when they did, the obtained results were valid. However, the obtained high specificity and relatively low sensitivity values for each of the instruments showed that better cut-off scores are needed. When only one of the instruments indicates the absence or presence of language impairments, one needs to be careful in concluding whether or not there are indeed language impairments.

  18. An analytical theory of a scattering of radio waves on meteoric ionization - II. Solution of the integro-differential equation in case of backscatter

    NASA Astrophysics Data System (ADS)

    Pecina, P.

    2016-12-01

    The integro-differential equation for the polarization vector P inside the meteor trail, representing the analytical solution of the set of Maxwell equations, is solved for the case of backscattering of radio waves on meteoric ionization. The transversal and longitudinal dimensions of a typical meteor trail are small in comparison to the distances to both transmitter and receiver and so the phase factor appearing in the kernel of the integral equation is large and rapidly changing. This allows us to use the method of stationary phase to obtain an approximate solution of the integral equation for the scattered field and for the corresponding generalized radar equation. The final solution is obtained by expanding it into the complete set of Bessel functions, which results in solving a system of linear algebraic equations for the coefficients of the expansion. The time behaviour of the meteor echoes is then obtained using the generalized radar equation. Examples are given for values of the electron density spanning a range from underdense meteor echoes to overdense meteor echoes. We show that the time behaviour of overdense meteor echoes using this method is very different from the one obtained using purely numerical solutions of the Maxwell equations. Our results are in much better agreement with the observations performed e.g. by the Ondřejov radar.
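
    For reference, the leading-order stationary-phase formula invoked above has the standard one-dimensional form (not quoted from the paper): for a large parameter k, a slowly varying amplitude g, and a phase f with a single interior stationary point x_0 where f'(x_0) = 0,

    \[ \int g(x)\,e^{\mathrm{i}k f(x)}\,dx \;\approx\; g(x_0)\,e^{\mathrm{i}k f(x_0)\pm\mathrm{i}\pi/4}\sqrt{\frac{2\pi}{k\,|f''(x_0)|}}, \]

    with the sign of the π/4 term set by the sign of f''(x_0).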

  19. Rossby wave activity in a two-dimensional model - Closure for wave driving and meridional eddy diffusivity

    NASA Technical Reports Server (NTRS)

    Hitchman, Matthew H.; Brasseur, Guy

    1988-01-01

    A parameterization of the effects of Rossby waves in the middle atmosphere is proposed for use in two-dimensional models. By adding an equation for conservation of Rossby wave activity, closure is obtained for the meridional eddy fluxes and body force due to Rossby waves. Rossby wave activity is produced in a climatological fashion at the tropopause, is advected by a group velocity which is determined solely by model zonal winds, and is absorbed where it converges. Absorption of Rossby wave activity causes both an easterly torque and an irreversible mixing of potential vorticity, represented by the meridional eddy diffusivity, K(yy). The distribution of Rossby wave driving determines the distribution of K(yy), which is applied to all of the chemical constituents. This provides a self-consistent coupling of the wave activity with the winds, tracer distributions and the radiative field. Typical winter stratospheric values for K(yy) of 2 million sq m/sec are obtained. Poleward tracer advection is enhanced and meridional tracer gradients are reduced where Rossby wave activity is absorbed in the model.

  20. Indium oxide co-doped with tin and zinc: A simple route to highly conducting high density targets for TCO thin-film fabrication

    NASA Astrophysics Data System (ADS)

    Saadeddin, I.; Hilal, H. S.; Decourt, R.; Campet, G.; Pecquenard, B.

    2012-07-01

    Indium oxide co-doped with tin and zinc (ITZO) ceramics have been successfully prepared by direct sintering of the powder mixture at 1300 °C. This allowed us to easily fabricate a large, highly dense target suitable for sputtering transparent conducting oxide (TCO) films, without using any cold or hot pressing techniques. The optimized ITZO ceramic reaches a high relative bulk density (~92% of the In2O3 theoretical density), higher than that of the well-known tin-doped indium oxide (ITO) prepared under similar conditions. All X-ray diagrams obtained for the ITZO ceramics confirm a bixbyite structure typical of In2O3 only. This indicates a higher solubility limit of Sn and Zn when they are co-doped into In2O3, forming a solid solution. A very low electrical resistivity is obtained for [In2O3:Sn0.10]:Zn0.10 (1.7 × 10-3 Ω cm, lower than its ITO counterpart), which could be fabricated into a highly dense ceramic target using pressure-less sintering.

  1. Chemical potentials and thermodynamic characteristics of ideal Bose- and Fermi-gases in the region of quantum degeneracy

    NASA Astrophysics Data System (ADS)

    Sotnikov, A. G.; Sereda, K. V.; Slyusarenko, Yu. V.

    2017-01-01

    Calculations of the chemical potentials of ideal monatomic gases with Bose-Einstein and Fermi-Dirac statistics as functions of temperature, across the temperature region typical of collective quantum degeneracy effects, are presented. Numerical calculations are performed without any additional approximations, and explicit dependences of the chemical potentials on temperature are constructed at a fixed density of gas particles. Approximate polynomial dependences of the chemical potentials on temperature are obtained that allow the results to be used in further studies without re-applying the involved numerical methods. The ease of using the obtained representations is demonstrated with examples of the deformation of the energy-state population distribution at low temperatures, and of the impact of quantum statistics (exchange interaction) on the equations of state of ideal gases and some of their thermodynamic properties. The results of this study essentially unify, in the intermediate region, the two opposite limiting cases used to describe the equilibrium states of ideal gases, which are well known from university courses on statistical physics, thus adding value from an educational point of view.
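
    A minimal numerical sketch, in illustrative reduced units (not those of the paper), of how the chemical potential at fixed density can be obtained by inverting the standard density integral; the Bose branch is only evaluated above the condensation temperature, where the chemical potential is negative, and the polynomial fits mentioned in the record are not reproduced here.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def reduced_density(mu_t, sign):
        """n * lambda_T^3 for an ideal monatomic gas.
        sign = -1 for Bose-Einstein, +1 for Fermi-Dirac; mu_t = mu / (k_B T).
        The Bose branch is only defined for mu_t < 0 (above condensation)."""
        integrand = lambda x: np.sqrt(x) * np.exp(mu_t - x) / (1.0 + sign * np.exp(mu_t - x))
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return 2.0 / np.sqrt(np.pi) * val

    def chemical_potential(D, sign):
        """Solve reduced_density(mu_t, sign) = D for mu_t at fixed degeneracy D."""
        hi = -1e-6 if sign < 0 else 60.0          # Bose chemical potential stays below 0
        return brentq(lambda m: reduced_density(m, sign) - D, -60.0, hi)

    # Fixed particle density expressed through the reduced temperature t = T/T0,
    # where T0 is the temperature at which n * lambda_T^3 = 1, so D = t**(-3/2).
    for t in (0.6, 1.0, 2.0, 4.0):
        D = t ** (-1.5)
        mu_bose = chemical_potential(D, -1) if D < 2.612 else 0.0   # mu ~ 0 below T_c
        mu_fermi = chemical_potential(D, +1)
        print(f"T/T0 = {t:3.1f}:  mu/kT (Bose) = {mu_bose:8.4f}   mu/kT (Fermi) = {mu_fermi:8.4f}")
    ```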

  2. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

    We use the cellular automaton model described in Lopez Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10-15 percent both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  3. Sludge conditioning using the composite of a bioflocculant and PAC for enhancement in dewaterability.

    PubMed

    Guo, Junyuan; Chen, Cheng

    2017-10-01

    This study investigated the production of a bioflocculant from rice stover and its potential in sludge dewatering. Production of the bioflocculant was positively associated with cell growth; a highest value of 2.37 g L^-1 was obtained, with a main backbone of polysaccharides. The bioflocculant showed good performance in sludge dewatering: after conditioning with this bioflocculant, the dry solids (DS) and specific resistance to filtration (SRF) of typical wastewater activated sludge reached 19.3% and 4.8 × 10^12 m kg^-1, respectively, which were much better than the values obtained with chemical flocculants. Sludge dewatering was further improved when the bioflocculant and polyaluminum chloride (PAC) were used simultaneously; the optimized conditioning process for the composite was a bioflocculant dose of 10.5 g kg^-1, a PAC dose of 19.4 g kg^-1, and a pH of 8.1. Under this optimal condition, the DS and SRF of the sludge were 24.1% and 3.0 × 10^12 m kg^-1, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Investigation of the wavelength dependence of laser stratigraphy on Cu and Ni coatings using LIBS compared to a pure thermal ablation model

    NASA Astrophysics Data System (ADS)

    Paulis, Evgeniya; Pacher, Ulrich; Weimerskirch, Morris J. J.; Nagy, Tristan O.; Kautek, Wolfgang

    2017-12-01

    In this study, galvanic coatings of Cu and Ni, typically applied in industrial standard routines, were investigated. Ablation experiments were carried out using the first two harmonic wavelengths of a pulsed Nd:YAG laser, and the resulting plasma spectra were analysed using a linear Pearson correlation method. For both wavelengths, the absorption/ablation behaviour as well as laser-induced breakdown spectroscopy (LIBS) depth profiles were studied, varying the laser fluence between 4.3-17.2 J/cm^2 at 532 nm and 2.9-11.7 J/cm^2 at 1064 nm. The LIBS-stratigrams were compared with energy-dispersive X-ray spectroscopy of cross-sections. The ablation rates were calculated and compared to theoretical values originating from a thermal ablation model. Generally, higher ablation rates were obtained with 532 nm light for both materials. The light-plasma interaction is suggested as a possible cause of the lower ablation rates in the infrared regime. Neither clear evidence of purely thermal ablation, nor a correlation with the optical properties of the investigated materials, was obtained.

  5. Pressurized pyrolysis of rice husk in an inert gas sweeping fixed-bed reactor with a focus on bio-oil deoxygenation.

    PubMed

    Qian, Yangyang; Zhang, Jie; Wang, Jie

    2014-12-01

    The pyrolysis of rice husk was conducted in a fixed-bed reactor with a sweeping nitrogen gas to investigate the effects of pressure on the pyrolytic behaviors. The release rates of the main gases during the pyrolysis, the distributions of the four products (char, bio-oil, water and gas), the elemental compositions of char, bio-oil and gas, and the typical compounds in bio-oil were determined. It was found that the elevation of pressure from 0.1 MPa to 5.0 MPa facilitated the dehydration and decarboxylation of bio-oil, and the bio-oils obtained under the elevated pressures had significantly less oxygen and a higher calorific value than those obtained under atmospheric pressure. The former bio-oils contained more acetic acid, phenols and guaiacols. The elevation of pressure increased the formation of CH4 partially via the gas-phase reactions. An attempt is made in this study to clarify "the pure pressure effect" and "the combined effect with residence time". Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Ultraviolet Imaging Telescope ultraviolet images - Large-scale structure, H II regions, and extinction in M81

    NASA Technical Reports Server (NTRS)

    Hill, Jesse K.; Bohlin, Ralph C.; Cheng, Kwang-Ping; Hintzen, Paul M. N.; Landsman, Wayne B.; Neff, Susan G.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Smith, Eric P.

    1992-01-01

    The study employs UV images of M81 obtained by the Ultraviolet Imaging Telescope (UIT) during the December 1990 Astro-1 spacelab mission to determine 2490- and 1520-A fluxes from 46 H II regions and global surface brightness profiles. Comparison photometry in the V band is obtained from a ground-based CCD image. UV radial profiles show bulge and exponential disk components, with a local decrease in disk surface brightness inside the inner Lindblad Resonance about 4 arcmin from the nucleus. The V profile shows typical bulge plus exponential disk structure, with no local maximum in the disk. There is little change of UV color across the disk, although there is a strong gradient in the bulge. Observed m152-V colors of the H II regions are consistent with model spectra for young clusters, after dereddening using Av determined from m249-V and the Galactic extinction curve. The value of Av, so determined, is 0.4 mag greater on the average than Av derived from radio continuum and H-alpha fluxes.

  7. Deep-tissue two-photon imaging in brain and peripheral nerve with a compact high-pulse energy ytterbium fiber laser

    NASA Astrophysics Data System (ADS)

    Fontaine, Arjun K.; Kirchner, Matthew S.; Caldwell, John H.; Weir, Richard F.; Gibson, Emily A.

    2018-02-01

    Two-photon microscopy is a powerful tool of current scientific research, allowing optical visualization of structures below the surface of tissues. This is of particular value in neuroscience, where optically accessing regions within the brain is critical for the continued advancement in understanding of neural circuits. However, two-photon imaging at significant depths has typically used Ti:Sapphire-based amplifiers that are prohibitively expensive and bulky. In this study, we demonstrate deep-tissue two-photon imaging using a compact, inexpensive, turnkey-operated ytterbium fiber laser (Y-Fi, KM Labs). The laser is based on an all-normal dispersion (ANDi) design that provides short pulse durations and high pulse energies. Depth measurements obtained in ex vivo mouse cortex exceed those obtainable with standard two-photon microscopes using Ti:Sapphire lasers. In addition to demonstrating the capability of deep-tissue imaging in the brain, we investigated imaging depth in highly scattering white matter, with measurements in sciatic nerve showing limited optical penetration of heavily myelinated nerve tissue relative to grey matter.

  8. Fermion-induced quantum critical points in two-dimensional Dirac semimetals

    NASA Astrophysics Data System (ADS)

    Jian, Shao-Kai; Yao, Hong

    2017-11-01

    In this paper we investigate the nature of quantum phase transitions between two-dimensional Dirac semimetals and Z3-ordered phases (e.g., Kekule valence-bond solid), where cubic terms of the order parameter are allowed in the quantum Landau-Ginzburg theory and the transitions are putatively first order. From large-N renormalization-group (RG) analysis, we find that fermion-induced quantum critical points (FIQCPs) [Z.-X. Li et al., Nat. Commun. 8, 314 (2017), 10.1038/s41467-017-00167-6] occur when N (the number of flavors of four-component Dirac fermions) is larger than a critical value Nc. Remarkably, from the knowledge of space-time supersymmetry, we obtain an exact lower bound for Nc, i.e., Nc > 1/2. (Here the "1/2" flavor of four-component Dirac fermions is equivalent to one flavor of four-component Majorana fermions). Moreover, we show that the emergence of two length scales is a typical phenomenon of FIQCPs and obtain two different critical exponents, i.e., ν ≠ ν', by large-N RG calculations. We further give a brief discussion of possible experimental realizations of FIQCPs.

  9. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuentes, Marcelo López; Klimchuk, James A., E-mail: lopezf@iafe.uba.ar

    2016-09-10

    We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  10. Noninvasive monitoring of blood pressure using optical Ballistocardiography and Photoplethysmograph approaches.

    PubMed

    Chen, Zhihao; Yang, Xiufeng; Teo, Ju Teng; Ng, Soon Huat

    2013-01-01

    A new all-optical method for long-term, continuous, cuffless blood pressure measurement and monitoring is proposed using ballistocardiography (BCG) and photoplethysmography (PPG). The time delay between the BCG and PPG signals is used to calculate both systolic and diastolic blood pressure via linear regression analysis. The fabricated noninvasive blood pressure monitoring device consists of a fiber sensor mat to measure the BCG signal and an SpO2 sensor to measure the PPG signal. A commercial digital oscillometric blood pressure meter is used to obtain reference values and for calibration. Compared with the reference device, the prototype has typical means and standard deviations of 9 ± 5.6 mmHg for systolic blood pressure, 1.8 ± 1.3 mmHg for diastolic blood pressure, and 0.6 ± 0.9 bpm for pulse rate. If a fiber-optic SpO2 probe is used, this all-fiber, cuffless, noninvasive device will be a truly MRI-safe blood pressure measurement and monitoring device.
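
    As an editorial illustration of the approach described above, the following Python sketch estimates the BCG-to-PPG time delay from the cross-correlation peak and maps it to blood pressure with a linear model; the calibration coefficients are hypothetical placeholders that would in practice come from regression against the reference oscillometric readings.

      import numpy as np

      def pulse_delay_seconds(bcg, ppg, fs):
          """Delay (s) between BCG and PPG estimated from the cross-correlation peak."""
          bcg = (bcg - bcg.mean()) / bcg.std()
          ppg = (ppg - ppg.mean()) / ppg.std()
          lag = np.argmax(np.correlate(ppg, bcg, mode="full")) - (len(bcg) - 1)
          return lag / fs

      def blood_pressure(delay_s, a_sys=-120.0, b_sys=200.0, a_dia=-40.0, b_dia=95.0):
          # Linear calibration BP = a*delay + b; coefficients are illustrative only.
          return a_sys * delay_s + b_sys, a_dia * delay_s + b_dia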

  11. Locally adaptive parallel temperature accelerated dynamics method

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2010-03-01

    The recently developed temperature-accelerated dynamics (TAD) method [M. Sørensen and A.F. Voter, J. Chem. Phys. 112, 9599 (2000)], along with the more recently developed parallel TAD (parTAD) method [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)], allows one to carry out non-equilibrium simulations over extended time and length scales. The basic idea behind TAD is to speed up transitions by carrying out a high-temperature MD simulation and then use the resulting information to obtain event times at the desired low temperature. In a typical implementation, a fixed high temperature T_high is used. However, in general one expects that for each configuration there exists an optimal value of T_high which depends on the particular transition pathways and activation energies for that configuration. Here we present a locally adaptive high-temperature TAD method in which, instead of using a fixed T_high, the high temperature is dynamically adjusted in order to maximize simulation efficiency. Preliminary results of the performance obtained from parTAD simulations of Cu/Cu(100) growth using the locally adaptive T_high method will also be presented.
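
    A minimal sketch of the low-temperature extrapolation at the heart of TAD, assuming harmonic transition-state theory and an activation barrier E_a already known for the observed event (in the full method the barrier is obtained from a saddle-point search for each attempted transition):

      import math

      K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

      def extrapolate_event_time(t_high, e_a, t_hi_K, t_lo_K):
          """Map an event time observed at T_high onto the low-temperature clock."""
          return t_high * math.exp((e_a / K_B_EV) * (1.0 / t_lo_K - 1.0 / t_hi_K))

      # Example: an event observed after 1 ns at 600 K with a 0.5 eV barrier
      t_low = extrapolate_event_time(1e-9, 0.5, t_hi_K=600.0, t_lo_K=300.0)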

  12. The effect of seasonal variation on biomethane production from seaweed and on application as a gaseous transport biofuel.

    PubMed

    Tabassum, Muhammad Rizwan; Xia, Ao; Murphy, Jerry D

    2016-06-01

    Biomethane produced from seaweed may be used as a transport biofuel. Seasonal variation will have an effect on this industry. Laminaria digitata, a typical Irish brown seaweed species, shows significant seasonal variation in proximate, ultimate, and biochemical composition. The characteristics in August were optimal, with the lowest level of ash (20% of volatile solids), a C:N ratio of 32, and the highest specific methane yield, measured at 327 L CH4 kg VS⁻¹, which was 72% of the theoretical yield. The highest yield per mass collected, 53 m³ CH4 t⁻¹, was achieved in August, which is 4.5 times higher than the lowest value, obtained in December. A seaweed cultivation area of 11,800 ha would be required to satisfy the 2020 target for advanced biofuels in Ireland of 1.25% renewable energy supply in transport (RES-T), based on the optimal gross energy yield obtained in August (200 GJ ha⁻¹ yr⁻¹). Copyright © 2016 Elsevier Ltd. All rights reserved.
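
    A hedged arithmetic check of the figures quoted above; the absolute energy corresponding to the 1.25% RES-T target is not stated in the abstract, so the ~2.36 PJ/yr used here is back-calculated and labelled an assumption.

      target_energy_GJ_per_yr = 2.36e6      # assumed 2020 advanced-biofuel target (~2.36 PJ/yr)
      gross_yield_GJ_per_ha_yr = 200.0      # optimal August gross energy yield
      area_ha = target_energy_GJ_per_yr / gross_yield_GJ_per_ha_yr   # 11,800 ha

      theoretical_yield = 327.0 / 0.72      # ~454 L CH4 kg VS-1 implied theoretical yield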

  13. Tropical upper troposphere and tropopause layer in situ measurement of H2O by the micro-SDLA balloon-borne diode laser spectrometer: modelling interpretation.

    NASA Astrophysics Data System (ADS)

    Durry, G.; Huret, N.; Freitas, S.; Hauchecorne, A.; Longo, K.

    2006-12-01

    During the HIBISCUS European campaign in Bauru (Brazil, 22°S) in 2004, the micro-SDLA diode laser sensor was flown twice, on 13 February (flight SF2) and 24 February (flight SF4), from small open stratospheric balloons operated by CNES. In situ measurements of H2O and CH4 at high spatial resolution (a few meters) were obtained in the UT and in the TTL. Both flights took place in convective conditions. Layering in the TTL water vapour content is observed, with values ranging from 3 ppmv (typical of the TTL) to high values of 6 ppmv. To investigate this layering we used a combination of 3D trajectory calculations (Freitas et al., JGR, 2000) based on outputs of the mesoscale model BRAMS and potential vorticity maps obtained from the high-resolution PV-advection model MIMOSA (Hauchecorne et al., JGR, 2001). The mesoscale model BRAMS allows us to study processes associated with convective systems, whereas isentropic transport at global scale is investigated with MIMOSA. Backward 3D trajectories were calculated every kilometer for the two flights. It appears that very strong uplifting from the ground to 16.5 km occurred 80 hours before the SF4 flight. This uplifting is associated with a 3 ppmv water vapor layer, whereas just above it twice as much water vapour is observed. The layer with high water vapour is associated with trajectories that skim over the top of the convective region. This leads us to discuss the ability of convective systems to inject water vapour into the TTL. For both flights we also investigate the impact of isentropic transport from extratropical regions on the TTL water vapour content. Using the PV maps from the MIMOSA model, we find filamentation in the TTL for the SF2 flight and in the UT for the SF4 flight. This filamentation is associated with the strong dehydration observed at 8-10 km during the SF4 flight and with water vapour content in the TTL typical of mid-latitude air during the SF2 flight.

  14. CT volumetry of the skeletal tissues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brindle, James M.; Alexandre Trindade, A.; Pichardo, Jose C.

    2006-10-15

    Computed tomography (CT) is an important and widely used modality in the diagnosis and treatment of various cancers. In the field of molecular radiotherapy, the use of spongiosa volume (combined tissues of the bone marrow and bone trabeculae) has been suggested as a means to improve the patient-specificity of bone marrow dose estimates. The noninvasive estimation of an organ volume comes with some degree of error or variation from the true organ volume. The present study explores the ability to obtain estimates of spongiosa volume or its surrogate via manual image segmentation. The variation among different segmentation raters was explored and found not to be statistically significant (p value >0.05). Accuracy was assessed by having several raters manually segment a polyvinyl chloride (PVC) pipe with known volumes. Segmentation of the outer region of the PVC pipe resulted in mean percent errors as great as 15%, while segmentation of the pipe's inner region resulted in mean percent errors within ~5%. Differences between volumes estimated with the high-resolution CT data set (typical of ex vivo skeletal scans) and the low-resolution CT data set (typical of in vivo skeletal scans) were also explored using both patient CT images and a PVC pipe phantom. While a statistically significant difference (p value <0.002) between the high-resolution and low-resolution data sets was observed with excised femoral heads obtained following total hip arthroplasty, the mean difference between high-resolution and low-resolution data sets was found to be only 1.24 and 2.18 cm³ for spongiosa and cortical bone, respectively. With respect to differences observed with the PVC pipe, the variation between the high-resolution and low-resolution mean percent errors was as high as ~20% for the outer region volume estimates and only as high as ~6% for the inner region volume estimates. The findings from this study suggest that manual segmentation is a reasonably accurate and reliable means for the in vivo estimation of spongiosa volume. This work also provides a foundation for future studies where spongiosa volumes are estimated by various raters in more comprehensive CT data sets.

  15. The Values Awareness Teaching Strategy; An Overview.

    ERIC Educational Resources Information Center

    Dalis, Gus T.; Strasser, Ben B.

    The transcript of a values awareness lesson in communicable diseases is presented to illustrate the two stages of a typical values class--introducing the lesson and implementing the lesson. In this case, an overhead slide-transparency is used (along with prefacing remarks) to lead the class into considering those whom a gonorrhea-infected youth…

  16. Economic incentives for oak woodland preservation and conservation

    Treesearch

    Rosi Dagit; Cy Carlberg; Christy Cuba; Thomas Scott

    2015-01-01

    Numerous ordinances and laws recognize the value of oak trees and woodlands, and dictate serious and expensive consequences for removing or harming them. Unfortunately, the methods used to calculate these values are equally numerous and often inconsistent. More important, these ordinances typically lack economic incentives to avoid impacts to oak woodland values...

  17. From Prototypes to Caricatures: Geometrical Models for Concept Typicality

    ERIC Educational Resources Information Center

    Ameel, Eef; Storms, Gert

    2006-01-01

    In three studies, we investigated to what extent a geometrical representation in a psychological space succeeds in predicting typicality in animal, natural food and artifact concepts and whether contrast categories contribute to the prediction. In Study 1, we compared the predictive value of a family resemblance-based prototype model with a…

  18. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  19. Computerized method for automatic evaluation of lean body mass from PET/CT: comparison with predictive equations.

    PubMed

    Chan, Tao

    2012-01-01

    CT has become an established method for calculating body composition, but it requires data from the whole body, which are not typically obtained in routine PET/CT examinations. A computerized scheme that evaluates whole-body lean body mass (LBM) based on CT data from limited-whole-body coverage was developed. The LBM so obtained was compared with results from conventional predictive equations. LBM can be obtained automatically from limited-whole-body CT data by 3 means: quantification of body composition from CT images in the limited-whole-body scan, based on thresholding of CT attenuation; determination of the range of coverage based on a characteristic trend of changing composition across different levels and pattern recognition of specific features at strategic positions; and estimation of the LBM of the whole body on the basis of a predetermined relationship between proportion of fat mass and extent of coverage. This scheme was validated using 18 whole-body PET/CT examinations truncated at different lengths to emulate limited-whole-body data. LBM was also calculated using predictive equations that had been reported for use in SUV normalization. LBM derived from limited-whole-body data using the proposed method correlated strongly with LBM derived from whole-body CT data, with correlation coefficients ranging from 0.991 (shorter coverage) to 0.998 (longer coverage) and SEMs of LBM ranging from 0.14 to 0.33 kg. These were more accurate than results from different predictive equations, which ranged in correlation coefficient from 0.635 to 0.970 and in SEM from 0.64 to 2.40 kg. LBM of the whole body could be automatically estimated from CT data of limited-whole-body coverage typically acquired in PET/CT examinations. This estimation allows more accurate and consistent quantification of metabolic activity of tumors based on LBM-normalized standardized uptake value.
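
    A minimal sketch of the thresholding step on which such a scheme rests, assuming Hounsfield-unit cutoffs of roughly -190 to -30 HU for fat and -29 to +150 HU for lean tissue (typical literature ranges, not necessarily those used by the authors) and an illustrative lean-tissue density; scaling the partial-coverage result to whole-body LBM would then use the predetermined fat-fraction-versus-coverage relationship described above.

      import numpy as np

      def lean_mass_in_coverage_kg(hu_volume, voxel_ml, lean_density_g_ml=1.06):
          """Lean tissue mass within the scanned coverage from a CT HU array."""
          lean_mask = (hu_volume >= -29) & (hu_volume <= 150)   # assumed lean-tissue HU window
          return lean_mask.sum() * voxel_ml * lean_density_g_ml / 1000.0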

  20. Role of Proinflammatory Cytokines in Thermal Activation of Lymphocyte Recruitment to Breast Tumor Microvessels

    DTIC Science & Technology

    2007-03-01

    Photomicrographs show typical images. Scale bar, 50 µm. Data are the mean ± SE and are representative of ≥ 3 independent experiments. P values represent the...not affect ICAM-1 expression in normal islets of RIP-Tag5 pancreas. Photomicrographs show typical images. Scale bar, 50 µm. 2 We have identified the...WBH-treated mice. Thermal upregulation of vascular ICAM-1 expression was abolished in IL-6 KO mice. Photomicrographs show typical images. Scale bar

  1. Possible Range of Viscosity Parameters to Trigger Black Hole Candidates to Exhibit Different States of Outbursts

    NASA Astrophysics Data System (ADS)

    Mondal, Santanu; Chakrabarti, Sandip K.; Nagarkoti, Shreeram; Arévalo, Patricia

    2017-11-01

    In a two component advective flow around a compact object, a high-viscosity Keplerian disk is flanked by a low angular momentum and low-viscosity flow that forms a centrifugal, pressure-supported shock wave close to the black hole. The post-shock region that behaves like a Compton cloud becomes progressively smaller during the outburst as the spectra change from the hard state (HS) to the soft state (SS), in order to satisfy the Rankine-Hugoniot relation in the presence of cooling. The resonance oscillation of the shock wave that causes low-frequency quasi-periodic oscillations (QPOs) also allows us to obtain the shock location from each observed QPO frequency. Applying the theory of transonic flow, along with Compton cooling and viscosity, we obtain the viscosity parameter α_SK required for the shock to form at those places in the low-Keplerian component. When we compare the evolution of α_SK for each outburst, we arrive at a major conclusion: in each source, the advective flow component typically requires a nearly identical value of α_SK when transiting from one spectral state to another (e.g., from HS to SS through intermediate states and the other way around in the declining phase). Most importantly, these α_SK values in the low angular momentum advective component are fully self-consistent in the sense that they remain below the critical value α_cr required to form a Keplerian disk. For a further consistency check, we compute the α_K of the Keplerian component, and find that in each of the objects, α_SK < α_cr < α_K.

  2. Measuring the apparent diffusion coefficient in primary rectal tumors: is there a benefit in performing histogram analyses?

    PubMed

    van Heeswijk, Miriam M; Lambregts, Doenja M J; Maas, Monique; Lahaye, Max J; Ayas, Z; Slenter, Jos M G M; Beets, Geerard L; Bakers, Frans C H; Beets-Tan, Regina G H

    2017-06-01

    The apparent diffusion coefficient (ADC) is a potential prognostic imaging marker in rectal cancer. Typically, mean ADC values are used, derived from precise manual whole-volume tumor delineations by experts. The aim was first to explore whether non-precise circular delineation combined with histogram analysis can be a less cumbersome alternative for acquiring similar ADC measurements, and second to explore whether histogram analyses provide additional prognostic information. Thirty-seven patients who underwent a primary staging MRI including diffusion-weighted imaging (DWI; b0, 25, 50, 100, 500, 1000; 1.5 T) were included. Volumes-of-interest (VOIs) were drawn on the b1000-DWI: (a) precise delineation, manually tracing tumor boundaries (2 expert readers), and (b) non-precise delineation, drawing circular VOIs with a wide margin around the tumor (2 non-experts). Mean ADC and histogram metrics (mean, min, max, median, SD, skewness, kurtosis, 5th-95th percentiles) were derived from the VOIs, and delineation time was recorded. Measurements were compared between the two methods and correlated with prognostic outcome parameters. Median delineation time was reduced from 47-165 s (precise) to 21-43 s (non-precise). The 45th percentile of the non-precise delineation showed the best correlation with the mean ADC from the precise delineation as the reference standard (ICC 0.71-0.75). None of the mean ADC or histogram parameters showed significant prognostic value; only the total tumor volume (VOI) was significantly larger in patients with a positive clinical N stage and mesorectal fascia involvement. When performing non-precise tumor delineation, histogram analysis (specifically the 45th ADC percentile) may be used as an alternative to obtain ADC values similar to those from precise whole-tumor delineation. Histogram analyses are not beneficial for obtaining additional prognostic information.
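
    A sketch of the per-VOI histogram metrics described above (the ADC map and VOI mask are assumed to be NumPy arrays; the 45th percentile is singled out because of the finding reported in the abstract):

      import numpy as np
      from scipy import stats

      def adc_histogram_metrics(adc_map, voi_mask):
          vals = adc_map[voi_mask > 0]
          return {
              "mean": vals.mean(), "median": np.median(vals),
              "min": vals.min(), "max": vals.max(), "sd": vals.std(ddof=1),
              "skewness": stats.skew(vals), "kurtosis": stats.kurtosis(vals),
              "p45": np.percentile(vals, 45),
              "p5_p95": np.percentile(vals, [5, 95]),
          }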

  3. Neuronal variability in orbitofrontal cortex during economic decisions.

    PubMed

    Conen, Katherine E; Padoa-Schioppa, Camillo

    2015-09-01

    Neuroeconomic models assume that economic decisions are based on the activity of offer value cells in the orbitofrontal cortex (OFC), but testing this assertion has proven difficult. In principle, the decision made on a given trial should correlate with the stochastic fluctuations of these cells. However, this correlation, measured as a choice probability (CP), is small. Importantly, a neuron's CP reflects not only its individual contribution to the decision (termed readout weight), but also the intensity and the structure of correlated variability across the neuronal population (termed noise correlation). A precise mathematical relation between CPs, noise correlations, and readout weights was recently derived by Haefner and colleagues (Haefner RM, Gerwinn S, Macke JH, Bethge M. Nat Neurosci 16: 235-242, 2013) for a linear decision model. In this framework, concurrent measurements of noise correlations and CPs can provide quantitative information on how a population of cells contributes to a decision. Here we examined neuronal variability in the OFC of rhesus monkeys during economic decisions. Noise correlations had similar structure but considerably lower strength compared with those typically measured in sensory areas during perceptual decisions. In contrast, variability in the activity of individual cells was high and comparable to that recorded in other cortical regions. Simulation analyses based on Haefner's equation showed that noise correlations measured in the OFC combined with a plausible readout of offer value cells reproduced the experimental measures of CPs. In other words, the results obtained for noise correlations and those obtained for CPs taken together support the hypothesis that economic decisions are primarily based on the activity of offer value cells. Copyright © 2015 the American Physiological Society.

  4. PM10 emission efficiency for agricultural soils: Comparing a wind tunnel, a dust generator, and the open-air plot

    NASA Astrophysics Data System (ADS)

    Avecilla, Fernando; Panebianco, Juan E.; Mendez, Mariano J.; Buschiazzo, Daniel E.

    2018-06-01

    The PM10 emission efficiency of soils has been determined through different methods. Although these methods imply important physical differences, their outputs have never been compared. In the present study the PM10 emission efficiency was determined for soils spanning a wide range of textures, using three typical methodologies: a rotary-chamber dust generator (EDG), a laboratory wind tunnel on a prepared soil bed, and field measurements on an experimental plot. A statistically significant linear correlation was found (p < 0.05) between the PM10 emission efficiencies obtained from the EDG and wind tunnel experiments. A significant linear correlation (p < 0.05) was also found between the PM10 emission efficiency determined with both the wind tunnel and the EDG, and a soil texture index (%sand + %silt)/(%clay + %organic matter) that reflects the effect of texture on the cohesion of the aggregates. Soils with higher sand content showed proportionally lower emission efficiency than fine-textured, aggregated soils. This indicated that both methodologies were able to detect similar trends regarding the correlation between soil texture and PM10 emission. The trends attributed to soil texture were also verified for two contrasting soils under field conditions. However, differing conditions during the laboratory-scale and the field-scale experiments produced significant differences in the magnitude of the emission efficiency values. The causes of these differences are discussed within the paper. Despite these differences, the results suggest that standardized laboratory and wind tunnel procedures are promising methods, which could be calibrated in the future to obtain results comparable to field values, essentially through adjusting the simulation time. However, more studies are needed to correctly extrapolate these values to field-scale conditions.
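
    A sketch of the texture index defined above and of the kind of linear correlation test reported; variable names are illustrative, and the paired arrays would hold the EDG or wind-tunnel emission efficiencies for the soil set.

      from scipy import stats

      def texture_index(sand_pct, silt_pct, clay_pct, om_pct):
          """(%sand + %silt) / (%clay + %organic matter), as defined in the abstract."""
          return (sand_pct + silt_pct) / (clay_pct + om_pct)

      # indices, emission_eff = paired per-soil values
      # slope, intercept, r, p, stderr = stats.linregress(indices, emission_eff)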

  5. Length dependence of electron transport through molecular wires--a first principles perspective.

    PubMed

    Khoo, Khoong Hong; Chen, Yifeng; Li, Suchun; Quek, Su Ying

    2015-01-07

    One-dimensional wires constitute a fundamental building block in nanoscale electronics. However, truly one-dimensional metallic wires do not exist due to Peierls distortion. Molecular wires come close to being stable one-dimensional wires, but are typically semiconductors, with charge transport occurring via tunneling or thermally-activated hopping. In this review, we discuss electron transport through molecular wires, from a theoretical, quantum mechanical perspective based on first principles. We focus specifically on the off-resonant tunneling regime, applicable to shorter molecular wires (<∼4-5 nm) where quantum mechanics dictates electron transport. Here, conductance decays exponentially with the wire length, with an exponential decay constant, beta, that is independent of temperature. Different levels of first principles theory are discussed, starting with the computational workhorse - density functional theory (DFT), and moving on to many-electron GW methods as well as GW-inspired DFT + Sigma calculations. These different levels of theory are applied in two major computational frameworks - complex band structure (CBS) calculations to estimate the tunneling decay constant, beta, and Landauer-Buttiker transport calculations that consider explicitly the effects of contact geometry, and compute the transmission spectra directly. In general, for the same level of theory, the Landauer-Buttiker calculations give more quantitative values of beta than the CBS calculations. However, the CBS calculations have a long history and are particularly useful for quick estimates of beta. Comparing different levels of theory, it is clear that GW and DFT + Sigma calculations give significantly improved agreement with experiment compared to DFT, especially for the conductance values. Quantitative agreement can also be obtained for the Seebeck coefficient - another independent probe of electron transport. This excellent agreement provides confirmative evidence of off-resonant tunneling in the systems under investigation. Calculations show that the tunneling decay constant beta is a robust quantity that does not depend on details of the contact geometry, provided that the same contact geometry is used for all molecular lengths considered. However, because conductance is sensitive to contact geometry, values of beta obtained by considering conductance values where the contact geometry is changing with the molecular junction length can be quite different. Experimentally measured values of beta in general compare well with beta obtained using DFT + Sigma and GW transport calculations, while discrepancies can be attributed to changes in the experimental contact geometries with molecular length. This review also summarizes experimental and theoretical efforts towards finding perfect molecular wires with high conductance and small beta values.
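
    A minimal sketch of how the tunneling decay constant beta is extracted in the off-resonant regime, assuming G = G_c * exp(-beta * L) and a contact geometry held fixed across lengths, as the review emphasizes:

      import numpy as np

      def tunneling_beta(lengths_nm, conductances_G0):
          """Fit ln(G) vs L; the slope gives -beta (nm^-1) and the intercept ln(G_c)."""
          slope, intercept = np.polyfit(lengths_nm, np.log(conductances_G0), 1)
          return -slope, np.exp(intercept)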

  6. Accuracy of the HumaSensplus point-of-care uric acid meter using capillary blood obtained by fingertip puncture.

    PubMed

    Fabre, Stéphanie; Clerson, Pierre; Launay, Jean-Marie; Gautier, Jean-François; Vidal-Trecan, Tiphaine; Riveline, Jean-Pierre; Platt, Adam; Abrahamsson, Anna; Miner, Jeffrey N; Hughes, Glen; Richette, Pascal; Bardin, Thomas

    2018-05-02

    The uric acid (UA) level in patients with gout is a key factor in disease management and is typically measured in the laboratory using plasma samples obtained after venous puncture. This study aimed to assess the reliability of immediate UA measurement with capillary blood samples obtained by fingertip puncture with the HumaSens plus point-of-care meter. UA levels were measured using both the HumaSens plus meter in the clinic and the routine plasma UA method in the biochemistry laboratory of 238 consenting diabetic patients. HumaSens plus capillary and routine plasma UA measurements were compared by linear regression, Bland-Altman plots, intraclass correlation coefficient (ICC), and Lin's concordance coefficient. Values outside the dynamic range of the meter, low (LO) or high (HI), were analyzed separately. The best capillary UA thresholds for detecting hyperuricemia were determined by receiver operating characteristic (ROC) curves. The impact of potential confounding factors (demographic and biological parameters/treatments) was assessed. Capillary and routine plasma UA levels were compared to reference plasma UA measurements by liquid chromatography-mass spectrometry (LC-MS) for a subgroup of 67 patients. In total, 205 patients had capillary and routine plasma UA measurements available. ICC was 0.90 (95% confidence interval (CI) 0.87-0.92), Lin's coefficient was 0.91 (0.88-0.93), and the Bland-Altman plot showed good agreement over all tested values. Overall, 17 patients showed values outside the dynamic range. LO values were concordant with plasma values, but HI values were considered uninterpretable. Capillary UA thresholds of 299 and 340 μmol/l gave the best results for detecting hyperuricemia (corresponding to routine plasma UA thresholds of 300 and 360 μmol/l, respectively). No significant confounding factor was found among those tested, except for hematocrit; however, this had a negligible influence on the assay reliability. When capillary and routine plasma results were discordant, comparison with LC-MS measurements showed that plasma measurements had better concordance: capillary UA, ICC 0.84 (95% CI 0.75-0.90), Lin's coefficient 0.84 (0.77-0.91); plasma UA, ICC 0.96 (0.94-0.98), Lin's coefficient 0.96 (0.94-0.98). UA measurements with the HumaSens plus meter were reasonably comparable with those of the laboratory assay. The meter is easy to use and may be useful in the clinic and in epidemiologic studies.
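
    A sketch of Lin's concordance correlation coefficient used above to compare capillary and plasma uric acid values (sample-moment estimators are used here; x and y are paired measurement arrays):

      import numpy as np

      def lins_ccc(x, y):
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)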

  7. Assessment of a 1H high-resolution magic angle spinning NMR spectroscopy procedure for free sugars quantification in intact plant tissue.

    PubMed

    Delgado-Goñi, Teresa; Campo, Sonia; Martín-Sitjar, Juana; Cabañas, Miquel E; San Segundo, Blanca; Arús, Carles

    2013-08-01

    In most plants, sucrose is the primary product of photosynthesis, the transport form of assimilated carbon, and also one of the main factors determining sweetness in fresh fruits. Traditional methods for sugar quantification (mainly sucrose, glucose and fructose) require obtaining crude plant extracts, which sometimes involve substantial sample manipulation, making the process time-consuming and increasing the risk of sample degradation. Here, we describe and validate a fast method to determine sugar content in intact plant tissue by using high-resolution magic angle spinning nuclear magnetic resonance spectroscopy (HR-MAS NMR). The HR-MAS NMR method was used for quantifying sucrose, glucose and fructose in mesocarp tissues from melon fruits (Cucumis melo var. reticulatus and Cucumis melo var. cantalupensis). The resulting sugar content varied among individual melons, ranging from 1.4 to 7.3 g of sucrose, 0.4-2.5 g of glucose; and 0.73-2.83 g of fructose (values per 100 g fw). These values were in agreement with those described in the literature for melon fruit tissue, and no significant differences were found when comparing them with those obtained using the traditional, enzymatic procedure, on melon tissue extracts. The HR-MAS NMR method offers a fast (usually <30 min) and sensitive method for sugar quantification in intact plant tissues, it requires a small amount of tissue (typically 50 mg fw) and avoids the interferences and risks associated with obtaining plant extracts. Furthermore, this method might also allow the quantification of additional metabolites detectable in the plant tissue NMR spectrum.

  8. Sci-Fri AM: MRI and Diagnostic Imaging - 05: Comparison of Input Function Measurements from DCE and MOLLI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majtenyi, Nicholas; Juma, Hanif; Klein, Ran

    Dynamic contrast-enhanced (DCE)-MRI is a technique for obtaining tissue hemodynamic information (e.g. in tumours). Despite widespread clinical application of DCE-MRI, the technique suffers from a lack of standardization and accuracy, especially with respect to the concentration-versus-time of gadolinium (Gd) in feeding arteries (the input function, IF). MR phase has a linear quantitative relationship with Gd concentration ([Gd]), making it ideal for measuring the first pass of the IF, but it is not considered accurate in the steady-state washout. Modified Look-Locker Inversion Recovery (MOLLI) is a fast and accurate method to measure T1 and has been validated to quantify the typical [Gd] ranges experienced in the washout of the IF. Two different methods to measure the IF for DCE-MRI were compared: 1) conventional phase-versus-time (“Phase-only”) and 2) phase-versus-time combined with pre- and post-DCE MOLLI T1 measurements (“Phase+MOLLI”). The IF obtained from Phase+MOLLI was calculated from MOLLI T1 values and the known relaxivity, then added to the Phase-only acquisition with the washout IF subtracted. A significant difference was observed between the [Gd] IF values from the Phase-only and Phase+MOLLI acquisitions (P = 0.03). To ensure the IFs derived from MOLLI T1s were accurate, they were compared to [Gd] obtained from “gold-standard” inversion recovery (IR). MOLLI showed excellent agreement with IR when imaged in static phantoms (r² = 0.997, P = 0.001). The Phase+MOLLI IF was more accurate than the Phase-only IF in measuring the washout. The Phase+MOLLI acquisition may therefore provide a DCE-MRI reference standard that could lead to better clinical diagnoses.
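
    A sketch of the relaxivity relation that turns pre- and post-contrast MOLLI T1 values into a gadolinium concentration; the r1 value used here is an assumed illustrative relaxivity, not one quoted in the record.

      def gd_concentration_mmol_per_L(t1_pre_s, t1_post_s, r1=4.0):
          """[Gd] from 1/T1_post = 1/T1_pre + r1*[Gd]; r1 in L mmol^-1 s^-1 (assumed)."""
          return (1.0 / t1_post_s - 1.0 / t1_pre_s) / r1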

  9. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    PubMed

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in high-purity germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-source detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point-source detection configuration. No pre-optimization of the detector geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect that ignorance of their real magnitude has on the quality of the transferred efficiency. The obtained and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The agreement obtained proves that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool to obtain accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating the HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of a point source emitting at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full-energy-peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
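
    A sketch of the efficiency-transfer ratio underlying the second and third procedures: a measured reference efficiency is scaled by the ratio of GEANT4-computed efficiencies for the target and reference configurations (function and variable names are illustrative).

      def transferred_efficiency(eff_ref_measured, eff_ref_mc, eff_target_mc):
          """Transfer a measured reference efficiency to the target configuration."""
          return eff_ref_measured * (eff_target_mc / eff_ref_mc)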

  10. Accurate Valence Ionization Energies from Kohn-Sham Eigenvalues with the Help of Potential Adjustors.

    PubMed

    Thierbach, Adrian; Neiss, Christian; Gallandi, Lukas; Marom, Noa; Körzdörfer, Thomas; Görling, Andreas

    2017-10-10

    An accurate yet computationally very efficient and formally well justified approach to calculate molecular ionization potentials is presented and tested. The first as well as higher ionization potentials are obtained as the negatives of the Kohn-Sham eigenvalues of the neutral molecule after adjusting the eigenvalues by a recently introduced potential adjustor [Görling, Phys. Rev. B 2015, 91, 245120] for exchange-correlation potentials. Technically the method is very simple. Besides a Kohn-Sham calculation of the neutral molecule, only a second Kohn-Sham calculation of the cation is required. The eigenvalue spectrum of the neutral molecule is shifted such that the negative of the eigenvalue of the highest occupied molecular orbital equals the energy difference of the total electronic energies of the cation minus the neutral molecule. For the first ionization potential this simply amounts to a ΔSCF calculation. Then, the higher ionization potentials are obtained as the negatives of the correspondingly shifted Kohn-Sham eigenvalues. Importantly, this shift of the Kohn-Sham eigenvalue spectrum is not just ad hoc. In fact, it is formally necessary for the physically correct energetic adjustment of the eigenvalue spectrum as it results from ensemble density-functional theory. An analogous approach for electron affinities is equally well obtained and justified. To illustrate the practical benefits of the approach, we calculate the valence ionization energies of test sets of small- and medium-sized molecules and photoelectron spectra of medium-sized electron acceptor molecules using a typical semilocal (PBE) and two typical global hybrid functionals (B3LYP and PBE0). The potential adjusted B3LYP and PBE0 eigenvalues yield valence ionization potentials that are in very good agreement with experimental values, reaching an accuracy that is as good as the best G0W0 methods, however, at much lower computational costs. The potential adjusted PBE eigenvalues result in somewhat less accurate ionization energies, which, however, are almost as accurate as those obtained from the most commonly used G0W0 variants.
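
    A sketch of the eigenvalue adjustment described above: the occupied Kohn-Sham spectrum of the neutral molecule is rigidly shifted so that the negative of the HOMO eigenvalue equals the ΔSCF first ionization potential, and higher ionization potentials are read off the shifted eigenvalues (all energies in eV; eps_neutral is assumed to hold only occupied eigenvalues).

      import numpy as np

      def adjusted_ionization_potentials(eps_neutral, e_neutral, e_cation):
          ip1 = e_cation - e_neutral                    # Delta-SCF first ionization potential
          shift = -ip1 - np.max(eps_neutral)            # align -eps_HOMO with ip1
          return -(np.sort(eps_neutral)[::-1] + shift)  # IPs ordered from the first upward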

  11. A method for the estimation of dual transmissivities from slug tests

    NASA Astrophysics Data System (ADS)

    Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz

    2018-03-01

    Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
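
    A sketch of fitting the bi-exponential rate-of-rise curve discussed above; each recovered time constant would then be interpreted piecewise with the pressure-induced permeability-test algorithm to yield one transmissivity per layer (starting guesses are illustrative).

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, tau1, a2, tau2):
          """Normalized residual drawdown as the sum of two exponential recoveries."""
          return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

      # t, drawdown = measured recovery data from the slug test
      # popt, _ = curve_fit(biexp, t, drawdown, p0=(0.5, 10.0, 0.5, 200.0))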

  12. Determination of 137Cs activity in soil from Qatar using high-resolution gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Al-Sulaiti, Huda; Nasir, Tabassum; Al Mugren, K. S.; Alkhomashi, N.; Al-Dahan, N.; Al-Dosari, M.; Bradley, D. A.; Bukhari, S.; Matthews, M.; Regan, P. H.; Santawamaitre, T.; Malain, D.; Habib, A.; Al-Dosari, Hanan; Al Sadig, Ibrahim; Daar, Eman

    2016-10-01

    With interest in establishing baseline concentrations of 137Cs in soil from the Qatar peninsula, we focus on determination of the activity concentrations in 129 soil samples collected across the State of Qatar prior to the 2011 Fukushima Dai-ichi nuclear power plant accident. As such, the data provide the basis of a reference map for the detection of releases of this fission product. The activity concentrations were measured via high-resolution gamma-ray spectrometry using a hyper-pure germanium detector enclosed in a copper-lined passive lead shield that was situated in a low-background environment. The activity concentrations ranged from 0.21 to 15.41 Bq/kg, with a median value of 1 Bq/kg, the greatest activity concentration being observed in a sample obtained from northern Qatar. Although it cannot be confirmed, it is expected that this contamination is mainly due to releases from the Chernobyl accident of 26 April 1986, there being a lack of data from Qatar before the accident. The values are typically within, but are sometimes lower than, the range indicated by data from other countries in the region. The values lower than those reported elsewhere are suggested to be due to variation in soil characteristics as well as meteorological factors at the time of deposition.
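
    A sketch of the standard activity-concentration relation behind such measurements (net peak counts, full-energy-peak efficiency, gamma emission probability, live time and sample mass); decay and coincidence-summing corrections are omitted for brevity.

      def activity_bq_per_kg(n_net, efficiency, p_gamma, t_live_s, mass_kg):
          """Activity concentration from a single gamma line."""
          return n_net / (efficiency * p_gamma * t_live_s * mass_kg)

      # e.g. for the 662 keV line of 137Cs, p_gamma is about 0.85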

  13. Biomonitoring of 33 Elements in Blood and Urine Samples from Coastal Populations in Sanmen County of Zhejiang Province.

    PubMed

    Zhang, Su-jing; Luo, Ru-xin; Ma, Dong; Zhuo, Xian-yi

    2016-04-01

    The aim was to determine the normal reference values of 33 elements (Ag, Al, As, Au, B, Ba, Be, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, Hg, Li, Mg, Mn, Mo, Ni, Pb, Rb, Sb, Se, Sr, Th, Ti, Tl, U, V, Zn and Zr) in blood and urine samples from the general population in Sanmen County of Zhejiang Province, a typical coastal area of eastern China. The 33 elements in 272 blood and 300 urine samples were determined by inductively coupled plasma-mass spectrometry (ICP-MS). The normality test of the data was conducted using SPSS 17.0 Statistics, and the data were compared with other reports. The normal reference values of the 33 elements in blood and urine from the general population in Sanmen County were obtained. The values for some elements, such as Co, Cu, Mn and Sr, were found to be similar to those in other reports, while As, Cd, Hg and Pb were generally higher than those previously reported. Blood Ba showed a wide variation among reports from different countries. The normal reference values of the 33 elements in blood and urine samples from the general population in Sanmen County were thus established and successfully applied to two poisoning cases.

  14. Estimation of principal deviatoric stresses imposed on an individual metachert: detailed application of the microboudin palaeopiezometer

    NASA Astrophysics Data System (ADS)

    Matsumura, T.; Masuda, T.

    2017-12-01

    The microboudinage structure of columnar mineral grains is a useful marker of the stress imposed on a metamorphic rock. In this presentation, we report a detailed application of the microboudin palaeopiezometer to an individual metachert specimen that includes microboudinaged tourmaline grains. The microboudin palaeostress analysis was applied to 3621 tourmaline grains, binned into 10° intervals of their long-axis orientation on the foliation surface. The analysis revealed that the grains within ±15° of the mean orientation and those within ±15° of the direction perpendicular to it give σ1 - σ3 = 10.2 MPa and σ1 - σ2 = 5.3 MPa, respectively. Using both values of σ1 - σ3 and σ1 - σ2, the magnitudes of the principal deviatoric stresses (σ'1, σ'2 and σ'3) are obtained as σ'1 = 5.3 MPa, σ'2 = -0.1 MPa and σ'3 = -5.1 MPa. In this stress state, the stress ratio (σ2 - σ3)/(σ1 - σ3) is 0.48, which indicates typical triaxial compression. As the microboudinage structure is considered to develop immediately before plastic flow of the matrix mineral ceased, these values correspond to conditions at ≥300 °C during the later stage of the metamorphism.
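
    A worked arithmetic check of the deviatoric values quoted above, taking σ3 = 0 as an arbitrary reference (only deviatoric quantities are meaningful here); the small differences from the reported numbers are presumably rounding.

      d13, d12 = 10.2, 5.3                     # sigma1 - sigma3, sigma1 - sigma2 (MPa)
      s1, s2, s3 = d13, d13 - d12, 0.0
      mean = (s1 + s2 + s3) / 3.0
      deviatoric = [s1 - mean, s2 - mean, s3 - mean]   # ~[5.2, -0.1, -5.0] MPa
      stress_ratio = (s2 - s3) / (s1 - s3)             # ~0.48, triaxial compression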

  15. Water vapor radiative effects on short-wave radiation in Spain

    NASA Astrophysics Data System (ADS)

    Vaquero-Martínez, Javier; Antón, Manuel; Ortiz de Galisteo, José Pablo; Román, Roberto; Cachorro, Victoria E.

    2018-06-01

    In this work, the water vapor radiative effect (WVRE) is studied by means of the Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART) model, fed with integrated water vapor (IWV) data from 20 ground-based GPS stations in Spain. Only IWV data recorded during cloud-free days (selected using daily insolation data) were used in this study. Typically, for SZA = 60.0 ± 0.5°, WVRE values are around -82 and -66 W m⁻² (first and third quartile), although they can reach -100 W m⁻² or weaken to -39 W m⁻². A power dependence of WVRE on IWV and on the cosine of the solar zenith angle (SZA) was found by an empirical fit. This relation is used to determine the water vapor radiative efficiency (WVEFF = ∂WVRE/∂IWV). Obtained WVEFF values range between -9 and 0 W m⁻² mm⁻¹ (-2.2 and 0% mm⁻¹ in relative terms). It is observed that WVEFF decreases as IWV increases, but also as SZA increases. On the other hand, when relative WVEFF is calculated from normalized WVRE, an increase of SZA results in an increase of relative WVEFF. Heating rates were also calculated, ranging from 0.2 K day⁻¹ to 1.7 K day⁻¹. WVRE was also calculated at the top of the atmosphere, where values ranged from 4 W m⁻² to 37 W m⁻².
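
    A sketch of the empirical power-law fit and of the radiative efficiency obtained as its partial derivative with respect to IWV; the functional form a*IWV^b*mu^c (mu = cos SZA) is an assumption consistent with the description above, and the starting guesses are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def wvre_model(X, a, b, c):
          iwv, mu = X
          return a * iwv**b * mu**c

      def wveff(iwv, mu, a, b, c):
          """dWVRE/dIWV in W m^-2 mm^-1 for the fitted power law."""
          return a * b * iwv**(b - 1) * mu**c

      # popt, _ = curve_fit(wvre_model, (iwv_data, mu_data), wvre_data, p0=(-20.0, 0.5, 1.0))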

  16. Modeling and Studying the Effect of Texture and Elastic Anisotropy of Copper Microstructure in Nanoscale Interconnects on Reliability in Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Basavalingappa, Adarsh

    Copper interconnects are typically polycrystalline and follow a lognormal grain size distribution. Polycrystalline copper interconnect microstructures with such a distribution were obtained with a Voronoi tessellation approach. The interconnect structures thus obtained were used to study grain growth mechanisms, grain boundary scattering, scattering-dependent resistance of interconnects, stress evolution, vacancy migration, reliability lifetimes, the impact of orientation-dependent anisotropy on various mechanisms, etc. In this work, the microstructures were used to study the impact of microstructure and elastic anisotropy of copper on thermal and electromigration-induced failure. A test structure with bulk copper moduli values was modeled to allow a comparative study with the test structures having textured microstructure and elastic anisotropy. By subjecting the modeled test structure to a thermal stress by ramping temperature down from 400 °C to 100 °C, a significant variation in normal stresses and pressure was observed at the grain boundaries. This variation in normal stresses and hydrostatic stresses at the grain boundaries was found to be dependent on the orientation, dimensions, surroundings, and location of the grains. This may introduce new weak points within the metal line where normal stresses can be very high depending on the orientation of the grains, leading to delamination and accumulation sites for vacancies. Further, the hydrostatic stress gradients act as a driving force for vacancy migration. The normal stresses can exceed certain grain-orientation-dependent critical threshold values and induce delamination at the copper and cap material interface, thereby leading to void nucleation and growth. Modeled test structures were subjected to a series of copper depositions at 250 °C followed by copper etch at 25 °C to obtain initial stress conditions. Then the modeled test structures were subjected to 100,000 hours (~11.4 years) of simulated thermal stress at an elevated temperature of 150 °C. Vacancy migration due to concentration gradients, thermal gradients, and mechanical stress gradients was considered under the applied thermal stress. As a result, relatively high concentrations of vacancies were observed in the test structure due to a driving force caused by the pressure gradients resulting from the elastic anisotropy of copper. The grain growth mechanism was not considered in these simulations. Two-grain analyses demonstrated that the stress gradients developed will be severe when (100) grains are adjacent to (111) grains, therefore making them the weak points for potential reliability failures. Ilan Blech discovered that electromigration occurs above a critical product of the current density and metal length, commonly referred to as the Blech condition. Electromigration stress simulations in this work were carried out by subjecting test structures to scaled current densities to overcome the (jL)crit Blech condition given the small test-structure dimensions and the low-temperature stress condition used. Vacancy migration under the electromigration stress conditions was considered along with the vacancy-migration-induced stress evolution. A simple void growth model was used which assumes voids start to form when vacancies reach a critical level. An increase of vacancies in a localized region increases the resistance of the metal line.
Considering a 10% increase in resistance as a failure criterion, the distributions of failure times were obtained for given electromigration stress conditions. Bimodal/multimodal failure distributions were obtained as a result. The sigma values were slightly lower than those commonly observed in experiments. The anisotropy of the elastic moduli of copper leads to the development of significantly different stress values which are dependent on the orientation of the grains. This results in some grains having higher normal stress than others. This grain-orientation-dependent normal stress can reach a critical stress necessary to induce delamination at the copper and cap interface. The time taken to reach the critical stress was taken as the time to failure, and distributions of failure times were obtained for structures with different grain orientations in the microstructure and for different critical stress values. The sigma values of the failure distributions thus obtained for different constant critical stress values had a strong dependence on the critical stress. It is therefore critical to use the appropriate critical stress value for delamination of the copper/cap interface. The critical stress necessary to overcome the local adhesion of the copper and cap material interface is dependent on the grain orientation of the copper. Simulations were carried out by considering grain-orientation-dependent critical normal stress values as failure criteria. The sigma values thus obtained with the selected critical stress values were comparable to those commonly observed in experiments.

  17. Comment on ``Annual variation of geomagnetic activity'' by Alicia L. Clúa de Gonzales et al.

    NASA Astrophysics Data System (ADS)

    Sonnemann, G. R.

    2002-10-01

    Clúa de Gonzales et al. (J. Atmos. Terr. Phys. 63 (2001) 367) analyzed the monthly means of the geomagnetic aa-index available since 1868 and found enhanced geomagnetic activity in July, outside of the known seasonal course of the semiannual variation. They pointed out that this behavior is mainly caused by the high values of the geomagnetic activity. Their analysis confirmed results obtained from an analysis of Ap-values nearly 30 years ago but widely unknown to the scientific community. At that time the entire year was analyzed using running means of the activity values averaged to the same date. Aside from the July period, the calculations revealed distinct deviations from the seasonal course, called geomagnetic singularities. The most marked singularity occurs from the middle of March to the end of March, characterized by a strong increase from, on average, relatively calm values to the strongest values of the entire year. Some typical time patterns around and after equinox are repeated half a year later. An analysis in 1998 on the basis of the available aa-values confirmed the findings derived from Ap-values and from the local activity index Ak from Niemegk, Germany, available since 1890. The new results will be presented and discussed. Special attention is paid to the statistical problem of the persistence of geomagnetic perturbations. The main problem under consideration is to show that the variation of the mean activity is not caused by an accidental accumulation of strong perturbations occurring within certain intervals of days. We assume that the most marked variations of the mean value are not accidental and result from internal processes within the earth's atmosphere, whereas other, particularly small-scale, features are most probably accidental.

  18. On Pulsating and Cellular Forms of Hydrodynamic Instability in Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1998-01-01

    An extended Landau-Levich model of liquid-propellant combustion, one that allows for a local dependence of the burning rate on the (gas) pressure at the liquid-gas interface, exhibits not only the classical hydrodynamic cellular instability attributed to Landau but also a pulsating hydrodynamic instability associated with sufficiently negative pressure sensitivities. Exploiting the realistic limit of small values of the gas-to-liquid density ratio p, analytical formulas for both neutral stability boundaries may be obtained by expanding all quantities in appropriate powers of p in each of three distinguished wave-number regimes. In particular, composite analytical expressions are derived for the neutral stability boundaries A_p(k), where A_p is the pressure sensitivity of the burning rate and k is the wave number of the disturbance. For the cellular boundary, the results demonstrate explicitly the stabilizing effect of gravity on long-wave disturbances, the stabilizing effect of viscosity (both liquid and gas) and surface tension on short-wave perturbations, and the instability associated with intermediate wave numbers for negative values of A_p, which is characteristic of many hydroxylammonium nitrate-based liquid propellants over certain pressure ranges. In contrast, the pulsating hydrodynamic stability boundary is insensitive to gravitational and surface-tension effects but is more sensitive to the effects of liquid viscosity because, for typical nonzero values of the latter, the pulsating boundary decreases to larger negative values of A_p as k increases through O(1) values. Thus, liquid-propellant combustion is predicted to be stable (that is, steady and planar) only for a range of negative pressure sensitivities that lie below the cellular boundary, which exists for sufficiently small negative values of A_p, and above the pulsating boundary, which exists for larger negative values of this parameter.

  19. Identification of sources and behavior of agricultural contaminants in groundwater using nitrogen and sulfur isotopes in Haean basin, Korea

    NASA Astrophysics Data System (ADS)

    Kaown, Dugin; Kim, Heejung; Mayer, Bernard; Hyun, Yunjung; Lee, Jin-Yong; Lee, Kang-Kun

    2013-04-01

    The Haean basin shows a bowl-shaped topography, and its drainage system shows a dendritic pattern. The study area consists of forests (58.0%), vegetable fields (27.6%), rice paddy fields (11.4%) and fruit fields (0.5%). Most residents of the study area practice agriculture, and paddy rice and vegetables (Chinese radish) are the typical crops grown. The concentration of nitrate in groundwater ranged from 0.8 to 67.3 mg/L in June 2012 and from 2.0 to 65.7 mg/L in September 2012. Hydrogeochemical values and stable isotope ratios of dissolved nitrate and sulfate in groundwater were used to identify contamination sources and transformation processes in shallow groundwater. The δ15N-NO3- values in the study area ranged between +5.2 and +16.9‰ in June and between +4.4 and +13.0‰ in September. The sulfate concentration in groundwater samples obtained from the study area varied from 0.8 to 16.5 mg/L in June and from 0 to 19.7 mg/L in September. δ34S-SO42- values ranged from +2.9 to +11.7‰ in June and from +1.6 to +8.2‰ in September. The δ15N-NO3- and δ34S-SO42- values in September were slightly lower than those in June. Groundwater in vegetable and fruit fields showed slightly lower δ34S-SO42- and δ15N-NO3- values, indicating that a mixture of synthetic and organic fertilizers is responsible for groundwater contamination with agro-chemicals. Most groundwater from forests and paddy fields showed slightly higher δ15N-NO3- values, suggesting that organic fertilizer is introduced into the subsurface.

  20. Reentry heat transfer analysis of the space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, W. L.; Quinn, R. D.; Gong, L.

    1982-01-01

    A structural performance and resizing finite element thermal analysis computer program was used in the reentry heat transfer analysis of the space shuttle. Two typical wing cross sections and a midfuselage cross section were selected for the analysis. The surface heat inputs to the thermal models were obtained from aerodynamic heating analyses, which assumed a purely turbulent boundary layer, a purely laminar boundary layer, separated flow, and transition from laminar to turbulent flow. The effect of internal radiation was found to be quite significant. With the effect of the internal radiation considered, the wing lower skin temperature became about 39 C (70 F) lower. The results were compared with flight data for Space Transportation System trajectory 1. The calculated and measured temperatures compared well for the wing if laminar flow was assumed for the lower surface and bay one upper surface and if separated flow was assumed for the upper surfaces of bays other than bay one. For the fuselage, good agreement between the calculated and measured data was obtained if laminar flow was assumed for the bottom surface. The structural temperatures were found to reach their peak values shortly before touchdown. In addition, the finite element solutions were compared with those obtained from conventional finite difference solutions.

  1. A Bayesian Approach for Population Pharmacokinetic Modeling of Alcohol in Japanese Individuals.

    PubMed

    Nemoto, Asuka; Masaaki, Matsuura; Yamaoka, Kazue

    2017-01-01

    Blood alcohol concentration data previously obtained from 34 healthy Japanese subjects at a limited number of sampling times were reanalyzed. A characteristic of the data was that the concentrations were obtained from only the early part of the time-concentration curve. The aims were to explore significant covariates for the population pharmacokinetic analysis of alcohol by incorporating external data using a Bayesian method, and to estimate the effects of those covariates. The data were analyzed using Markov chain Monte Carlo Bayesian estimation with NONMEM 7.3 (ICON Clinical Research LLC, North Wales, Pennsylvania). Informative priors were obtained from the external study. A 1-compartment model with Michaelis-Menten elimination was used. The typical value for the apparent volume of distribution was 49.3 L at the age of 29.4 years. The volume of distribution was estimated to be 20.4 L smaller in subjects with the ALDH2*1/*2 genotype than in subjects with the ALDH2*1/*1 genotype. A population pharmacokinetic (PPK) model for alcohol was updated. A Bayesian approach allowed interpretation of significant covariate relationships even when the current dataset was not informative about all parameters. This is the first study reporting an estimate of the effect of the ALDH2 genotype in a PPK model.
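
    As a companion to the model description above, the following is a minimal sketch of a one-compartment model with Michaelis-Menten elimination; the dose and all parameter values other than the quoted volume of distribution are hypothetical placeholders, not the estimates reported in the study.

      # Sketch of a one-compartment model with Michaelis-Menten elimination.
      # Only V reflects the typical value quoted above; Vmax, Km and the dose
      # are hypothetical placeholders for illustration.
      import numpy as np
      from scipy.integrate import solve_ivp

      V = 49.3       # apparent volume of distribution, L (typical value above)
      Vmax = 8.0     # maximum elimination rate, g/h (hypothetical)
      Km = 0.1       # Michaelis constant, g/L (hypothetical)
      dose = 30.0    # alcohol dose, g (hypothetical)

      def dcdt(t, c):
          # dC/dt = -Vmax * C / (V * (Km + C)), with concentration C in g/L
          return -Vmax * c / (V * (Km + c))

      t_eval = np.linspace(0.0, 6.0, 61)
      sol = solve_ivp(dcdt, (0.0, 6.0), [dose / V], t_eval=t_eval)
      print(sol.y[0][:5])  # early part of the concentration-time curve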

  2. Role of substrate in the surface diffusion and kinetic roughening of nanocrystallised nickel electrodeposits

    NASA Astrophysics Data System (ADS)

    Nzoghe-Mendome, L.; Aloufy, A.; Ebothé, J.; El Messiry, M.; Hui, D.

    2009-02-01

    The surface growth and roughening of nano-crystallised Ni electrodeposits prepared under the same conditions have been studied on Cu, Au and ITO substrates. The Ni films obtained are characterised by the same face-centred cubic structure, with a texture affected by the chemical nature of the substrate. On Cu, practically identical small grains of 83 nm mean height grow, exhibiting a mono-modal statistical distribution. A tri-modal distribution, corresponding to the biggest and most compact crystallites with mean heights of 335, 368 and 400 nm, is obtained on Au. Two modes, linked respectively to isolated big crystallites of 343 nm mean height and to large zones of small grains of 170 nm mean height, result from the ITO substrate. The surface transport properties of Ni ad-atoms on each substrate have been studied using a theoretical approach that includes the global film roughness measured by AFM. It is shown that the ad-atom diffusion coefficients (Ds), which lie in the range 10^-10 to 10^-9 cm^2 s^-1, are greatly affected by the non-equilibrium conditions of film formation. Cu and ITO lead to diffusion lengths Λs = 11.92 and 14.30 nm, respectively, while the highest Ds value and the longest diffusion length, Λs = 37.32 nm, are obtained on the Au substrate.

  3. A numerical approximation to the elastic properties of sphere-reinforced composites

    NASA Astrophysics Data System (ADS)

    Segurado, J.; Llorca, J.

    2002-10-01

    Three-dimensional cubic unit cells containing 30 non-overlapping identical spheres randomly distributed were generated using a new, modified random sequential adsorption algorithm suitable for particle volume fractions of up to 50%. The elastic constants of the ensemble of spheres embedded in a continuous and isotropic elastic matrix were computed through the finite element analysis of the three-dimensional periodic unit cells, whose size was chosen as a compromise between the minimum size required to obtain accurate results in the statistical sense and the maximum one imposed by the computational cost. Three types of materials were studied: rigid spheres and spherical voids in an elastic matrix, and a typical composite made up of glass spheres in an epoxy resin. The moduli obtained for different unit cells showed very little scatter, and the average values obtained from the analysis of four unit cells could be considered very close to the "exact" solution of the problem, in agreement with the results of Drugan and Willis (J. Mech. Phys. Solids 44 (1996) 497) regarding the size of the representative volume element for elastic composites. They were used to assess the accuracy of three classical analytical models: the Mori-Tanaka mean-field analysis, the generalized self-consistent method, and Torquato's third-order approximation.

  4. Evaluating reaction pathways of hydrothermal abiotic organic synthesis at elevated temperatures and pressures using carbon isotopes

    NASA Astrophysics Data System (ADS)

    Fu, Qi; Socki, Richard A.; Niles, Paul B.

    2015-04-01

    Experiments were performed to better understand the role of environmental factors on reaction pathways and corresponding carbon isotope fractionations during abiotic hydrothermal synthesis of organic compounds, using a piston cylinder apparatus at 750 °C and 5.5 kbar. Chemical compositions of experimental products and corresponding carbon isotopic values were obtained by a pyrolysis-GC-MS-IRMS system. Alkanes (methane and ethane), straight-chain saturated alcohols (ethanol and n-butanol) and monocarboxylic acids (formic and acetic acids) were generated, with ethanol being the only organic compound with higher δ13C than CO2. CO was not detected in the experimental products owing to the favorable water-gas shift reaction under high water pressure conditions. The pattern of δ13C values of CO2, carboxylic acids and alkanes is consistent with their equilibrium isotope relationships (CO2 > carboxylic acids > alkanes), but the magnitude of the fractionation among them is higher than the predicted isotope equilibrium values. In particular, the isotopic fractionation between CO2 and CH4 remained constant at ∼31‰, indicating a kinetic effect during CO2 reduction processes. No "isotope reversal" of δ13C values for alkanes or carboxylic acids was observed, which indicates a different reaction pathway than what is typically observed during Fischer-Tropsch synthesis under gas phase conditions. Under the constraints imposed in the experiments, the anomalous 13C enrichment in ethanol suggests that hydroxymethylene is the organic intermediate, and that the generation of the other organic compounds, enriched in 12C, was facilitated by subsequent Rayleigh fractionation of hydroxymethylene reacting with H2 and/or H2O. The carbon isotope fractionation data obtained in this study are instrumental in assessing the controlling factors on abiotic formation of organic compounds in hydrothermal systems. Knowledge of how environmental conditions affect reaction pathways of abiotic synthesis of organic compounds is critical for understanding deep subsurface ecosystems and the origin of organic compounds on Mars and other planets.
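
    For reference, the Rayleigh-fractionation argument above can be summarized by the standard closed-system relation, given here as general context rather than as the authors' specific model:

      \delta_{residual} \approx \delta_{0} + \varepsilon \ln f, \qquad \delta_{instantaneous\,product} \approx \delta_{residual} + \varepsilon,

    where f is the remaining fraction of the reacting pool (here the hydroxymethylene intermediate), δ0 is its initial composition and ε is the enrichment factor; the sign of ε sets whether later-formed products become progressively heavier or lighter in 13C.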

  5. Quantitative Magnetic Resonance Diffusion-Weighted Imaging Evaluation of the Supratentorial Brain Regions in Patients Diagnosed with Brainstem Variant of Posterior Reversible Encephalopathy Syndrome: A Preliminary Study.

    PubMed

    Chen, Tai-Yuan; Wu, Te-Chang; Ko, Ching-Chung; Feng, I-Jung; Tsui, Yu-Kun; Lin, Chien-Jen; Chen, Jeon-Hor; Lin, Ching-Po

    2017-07-01

    Posterior reversible encephalopathy syndrome (PRES) is a clinicoradiologic entity with several causes, characterized by rapid onset of symptoms and typical neuroimaging features, which usually resolve if promptly recognized and treated. The brainstem variant of PRES presents with vasogenic edema in brainstem regions on magnetic resonance (MR) images, with sparing of the supratentorial regions. Because PRES is usually caused by a hypertensive crisis, which would likely have a systemic effect and global manifestations in the brain tissue, we proposed that some microscopic abnormalities of the supratentorial regions could be detected with diffusion-weighted imaging (DWI) using apparent diffusion coefficient (ADC) analysis in the brainstem variant of PRES, and hypothesized that "normal-looking" supratentorial regions would show increased water diffusion. We retrospectively identified patients with PRES who underwent brain magnetic resonance imaging studies. We identified 11 patients with the brainstem variant of PRES, who formed the study cohort, and 11 patients with typical PRES and 20 normal control subjects as the comparison cohorts for this study. Nineteen regions of interest were drawn and systematically placed. The mean ADC values were measured and compared among these 3 groups. ADC values of the typical PRES group were consistently elevated compared with those in normal control subjects. ADC values of the brainstem variant group were also consistently elevated compared with those in normal control subjects. ADC values of the typical PRES group and the brainstem variant group did not differ significantly, except in the pons area. Quantitative MR DWI may aid in the evaluation of supratentorial microscopic abnormalities in patients with the brainstem variant of PRES. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  6. Gastrointestinal Problems in Children with Autism, Developmental Delays or Typical Development

    ERIC Educational Resources Information Center

    Chaidez, Virginia; Hansen, Robin L.; Hertz-Picciotto, Irva

    2014-01-01

    To compare gastrointestinal (GI) problems among children with: (1) autism spectrum disorder (ASD), (2) developmental delay (DD) and (3) typical development (TD), GI symptom frequencies were obtained for 960 children from the CHildhood Autism Risks from Genetics and Environment (CHARGE) study. We also examined scores on five Aberrant Behavior…

  7. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.

    PubMed

    Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults.

  8. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    PubMed Central

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults. PMID:27322371

  9. Laboratory R-value vs. in-situ NDT methods.

    DOT National Transportation Integrated Search

    2006-05-01

    The New Mexico Department of Transportation (NMDOT) uses the Resistance R-Value as a quantifying parameter in subgrade and base course design. The parameter represents soil strength and stiffness and ranges from 1 to 80, 80 being typical of the highe...

  10. Design values of resilient modulus of stabilized and non-stabilized base.

    DOT National Transportation Integrated Search

    2010-10-01

    The primary objective of this research study is to determine design value ranges for typical base materials, as allowed by LADOTD specifications, through laboratory tests with respect to resilient modulus and other parameters used by pavement design ...

  11. USE OF METHOD DETECTION LIMITS IN ENVIRONMENTAL MEASUREMENTS

    EPA Science Inventory

    Environmental measurements often produce values below the method detection limit (MDL). Because low or zero values may be used in determining compliance with regulatory limits, in determining emission factors (typical concentrations emitted by a given type of source), or in model...

  12. Influence of atypical retardation pattern on the peripapillary retinal nerve fibre distribution assessed by scanning laser polarimetry and optical coherence tomography.

    PubMed

    Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P

    2011-10-01

    To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.

  13. Uncinate fasciculus fractional anisotropy correlates with typical use of reappraisal in women but not men.

    PubMed

    Zuurbier, Lisette A; Nikolova, Yuliya S; Ahs, Fredrik; Hariri, Ahmad R

    2013-06-01

    Emotion regulation refers to strategies through which individuals influence their experience and expression of emotions. Two typical strategies are reappraisal, a cognitive strategy for reframing the context of an emotional experience, and suppression, a behavioral strategy for inhibiting emotional responses. Functional neuroimaging studies have revealed that regions of the prefrontal cortex modulate amygdala reactivity during both strategies, but relatively greater downregulation of the amygdala occurs during reappraisal. Moreover, these studies demonstrated that engagement of this modulatory circuitry varies as a function of gender. The uncinate fasciculus is a major structural pathway connecting regions of the anterior temporal lobe, including the amygdala to inferior frontal regions, especially the orbitofrontal cortex. The objective of the current study was to map variability in the structural integrity of the uncinate fasciculus onto individual differences in self-reported typical use of reappraisal and suppression. Diffusion tensor imaging was used in 194 young adults to derive regional fractional anisotropy values for the right and left uncinate fasciculus. All participants also completed the Emotion Regulation Questionnaire. In women but not men, self-reported typical reappraisal use was positively correlated with fractional anisotropy values in a region of the left uncinate fasciculus within the orbitofrontal cortex. In contrast, typical use of suppression was not significantly correlated with fractional anisotropy in any region of the uncinate fasciculus in either men or women. Our data suggest that in women typical reappraisal use is specifically related to the integrity of white matter pathways linking the amygdala and prefrontal cortex.

  14. First identification of pure rotation lines of NH in the infrared solar spectrum

    NASA Technical Reports Server (NTRS)

    Geller, M.; Farmer, C. B.; Norton, R. H.; Sauval, A. J.; Grevesse, N.

    1991-01-01

    Pure rotation lines of NH of the v = 0 level and v = 1 level are detected in high-resolution solar spectra obtained from the Atmospheric Trace Molecule Spectroscopy (ATMOS) experimental observations. It is pointed out that the identification of the lines is favored by the typical appearance of the triplet lines of nearly equal intensities. The observed equivalent widths of these triplet lines are compared with predicted intensities, and it is observed that these widths are systematically larger than the predicted values. It is noted that because these very faint lines are observed in a region where the signal is very low, a systematic error in the measurements of the equivalent widths cannot be ruled out; therefore, the disagreement between the observed and predicted intensities is not considered to be real.

  15. Mass and Charge Measurements on Heavy Ions

    PubMed Central

    Sugai, Toshiki

    2017-01-01

    The relationship between mass and charge has been a crucial topic in mass spectrometry (MS) because the mass itself is typically evaluated based on the m/z ratio. Despite the fact that this measurement is indirect, a precise mass can be obtained from the m/z value with a high m/z resolution up to 10^5 for samples in the low mass and low charge region under 10,000 Da and 20 e, respectively. However, the target of MS has recently been expanded to the very heavy region of Mega or Giga Da, which includes large particles and biocomplexes, with very large and widely distributed charge from kilo to Mega range. In this region, it is necessary to evaluate charge and mass simultaneously. Recent studies for simultaneous mass and charge observation and related phenomena are discussed in this review. PMID:29302406

  16. Rapid determination of particle velocity from space-time images using the Radon transform

    PubMed Central

    Drew, Patrick J.; Blinder, Pablo; Cauwenberghs, Gert; Shih, Andy Y.; Kleinfeld, David

    2016-01-01

    Laser-scanning methods are a means to observe streaming particles, such as the flow of red blood cells in a blood vessel. Typically, particle velocity is extracted from images formed from cyclically repeated line-scan data that is obtained along the center-line of the vessel; motion leads to streaks whose angle is a function of the velocity. Past methods made use of shearing or rotation of the images and a Singular Value Decomposition (SVD) to automatically estimate the average velocity in a temporal window of data. Here we present an alternative method that makes use of the Radon transform to calculate the velocity of streaming particles. We show that this method is over an order of magnitude faster than the SVD-based algorithm and is more robust to noise. PMID:19459038
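
    A minimal sketch of a Radon-based angle estimate is given below. It assumes a space-time image with time along rows and space along columns, and the conversion from streak angle to velocity (pixel size dx, line-scan period dt) is a hedged illustration of the general idea rather than the authors' exact implementation.

      # Sketch: estimate the streak angle of a space-time image with the Radon
      # transform; the variance of the projections peaks when the projection
      # direction lines up with the streaks. Conventions (rows = time,
      # columns = space, sign of the angle) are assumptions for illustration.
      import numpy as np
      from skimage.transform import radon

      def streak_velocity(img, dx=1.0, dt=1.0):
          img = img - img.mean()                    # remove the mean intensity
          angles = np.arange(0.0, 180.0, 0.25)      # candidate angles in degrees
          sinogram = radon(img, theta=angles, circle=False)
          best = angles[np.argmax(sinogram.var(axis=0))]
          # Map the streak angle to a speed; the exact sign/quadrant handling
          # depends on the scan direction and image orientation.
          return (dx / dt) * np.tan(np.deg2rad(best))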

  17. High resolution in-beam γ-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Kern, J.; Dousse, J.-Cl.; Gasser, M.; Perny, B.; Rhême, Ch.

    1985-01-01

    An in-beam curved crystal facility has been installed at the SIN variable energy cyclotron. Using the (110) planes of a 3.0 mm thick quartz lamina bent at 3.15 m, diffraction peaks typically 6 arcsec wide (FWHM) are obtained. The energy resolution is thus, for instance, 110 eV at 170 keV in 3rd order. Due to a sophisticated detector system and heavy shielding, the sensitivity of the instrument is quite good. The facility proves quite useful in (p,xnγ) reaction studies whenever the γ-ray spectrum is very complex, e.g. in the study of odd-odd deformed nuclei. Complicated multiplets appearing in the 176Yb(p,3nγ)174Lu spectrum could be successfully resolved. From the results we derive that the g-factors of the 142-d, Jπ = 6- isomer take anomalous values.
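
    For context, the quoted angular and energy resolutions are connected through Bragg's law; the following back-of-the-envelope relation is standard for curved-crystal spectrometers and is given as background rather than taken from the paper:

      n\lambda = 2d\sin\theta \;\;\Rightarrow\;\; \Delta E = E\,\cot\theta\,\Delta\theta \approx \frac{2d\,\Delta\theta}{n\,hc}\,E^{2} \quad (\theta \ll 1),

    so that, for a fixed angular width, the energy resolution degrades roughly as E^2 with increasing photon energy and improves in higher diffraction orders, consistent with the 110 eV figure quoted at 170 keV in 3rd order.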

  18. Calculation of K-shell fluorescence yields for low-Z elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekkab, M., E-mail: mohammed-nekkab@yahoo.com; LESIMS laboratory, Physics Department, Faculty of Sciences, University of Setif 1, 19000 Setif; Kahoul, A.

    The analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, and so accurate fluorescence yields (ω{sub K}) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ω{sub K}) of elements in the range 11≤Z≤30. The experimental data are interpolated by using the well-known analytical function (ω{sub k}/(1−ω{sub k})){sup 1/q} (where q=3, 3.5 and 4) vs Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and those of theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.

  19. Low NOx heavy fuel combustor concept program

    NASA Technical Reports Server (NTRS)

    White, D. J.; Kubasco, A. J.

    1982-01-01

    Three simulated coal gas fuels based on hydrogen and carbon monoxide were tested during an experimental evaluation of a rich-lean can combustor: a simulated Winkler gas, a Lurgi gas and a Blue Water gas. All three were simulated by mixing together the necessary pure component species to levels typical of fuel gases produced from coal. The Lurgi gas was also evaluated with ammonia addition. Fuel burning in a rich-lean mode was emphasized. Only the Blue Water gas, however, could be operated in such a fashion. This showed that the expected NOx signature form could be obtained, although the absolute values of NOx were above the 75 ppm goal for most operating conditions. Lean combustion produced very low NOx, well below 75 ppm, with the Winkler and Lurgi gases. In addition, these low levels were not significantly impacted by changes in operating conditions.

  20. Sharp Absorption Peaks in THz Spectra Valuable for Crystal Quality Evaluation of Middle Molecular Weight Pharmaceuticals

    NASA Astrophysics Data System (ADS)

    Sasaki, Tetsuo; Sakamoto, Tomoaki; Otsuka, Makoto

    2018-05-01

    Middle molecular weight (MMW) pharmaceuticals (MW 400-4000) are attracting attention for their possible use in new medications. Sharp absorption peaks were observed in MMW pharmaceuticals at low temperatures by measuring with a high-resolution terahertz (THz) spectrometer. As examples, high-resolution THz spectra of amoxicillin trihydrate, atorvastatin calcium trihydrate, probucol, and α,β,γ,δ-tetrakis(1-methylpyridinium-4-yl)porphyrin p-toluenesulfonate (TMPyP) were obtained at 10 K. Many sharp peaks of MMW pharmaceuticals could be observed, with full width at half-height (FWHM) values as low as 5.639 GHz at 0.96492 THz for amoxicillin trihydrate and 8.857 GHz at 1.07974 THz for probucol. Such narrow absorption peaks enable evaluation of the crystal quality of MMW pharmaceuticals and afford sensitive detection of impurities.

  1. Performance of single-stage compressor designed on basis of constant total enthalpy with symmetrical velocity diagram at all radii and velocity ratio of 0.7 at rotor hub

    NASA Technical Reports Server (NTRS)

    Burtt, Jack R; Jackson, Robert J

    1951-01-01

    A typical axial-flow compressor inlet stage, which was designed on the basis of constant total enthalpy with a symmetrical velocity diagram at all radii, was investigated. At a tip speed of 1126 feet per second, a peak pressure ratio of 1.28 was obtained at an efficiency of 0.76. At this tip speed, the highest practical flow was 28 pounds per second per square foot of frontal area, with an efficiency of 0.78. Data for a rotor relative inlet Mach number range from 0.5 to 0.875 indicate that the critical value for any stage radial element is approximately 0.80 for the stage investigated.

  2. Calculation of K-shell fluorescence yields for low-Z elements

    NASA Astrophysics Data System (ADS)

    Nekkab, M.; Kahoul, A.; Deghfel, B.; Aylikci, N. Küp; Aylikçi, V.

    2015-03-01

    The analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, and so accurate fluorescence yields (ωK) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ωK) of elements in the range 11≤Z≤30. The experimental data are interpolated by using the well-known analytical function (ωK/(1−ωK))^(1/q) (where q=3, 3.5 and 4) vs Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and those of theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
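
    As an illustration of the fitting scheme described above, a minimal sketch is given below; the polynomial degree, function names and any coefficients are hypothetical placeholders, not the values reported in the paper.

      # Sketch: fit S(Z) = (w/(1-w))**(1/q) with a low-order polynomial in Z,
      # then invert the fit to obtain empirical K-shell fluorescence yields.
      import numpy as np

      def fit_yield_parameters(Z, omega_K, q=3, deg=2):
          S = (omega_K / (1.0 - omega_K)) ** (1.0 / q)
          return np.polyfit(Z, S, deg)          # empirical coefficients a_i

      def empirical_omega_K(Z, coeffs, q=3):
          S = np.polyval(coeffs, Z)
          return S**q / (1.0 + S**q)            # inverts w/(1-w) = S**q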

  3. An upgraded interferometer-polarimeter system for broadband fluctuation measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parke, E., E-mail: eparke@ucla.edu; Ding, W. X.; Brower, D. L.

    2016-11-15

    Measuring high-frequency fluctuations (above tearing mode frequencies) is important for diagnosing instabilities and transport phenomena. The Madison Symmetric Torus interferometer-polarimeter system has been upgraded to utilize improved planar-diode mixer technology. The new mixers reduce phase noise and allow more sensitive measurements of fluctuations at high frequency. Typical polarimeter rms phase noise values of 0.05°–0.07° are obtained with 400 kHz bandwidth. The low phase noise enables the resolution of fluctuations up to 250 kHz for polarimetry and 600 kHz for interferometry. The importance of probe beam alignment for polarimetry is also verified; previously reported tolerances of ≤0.1 mm displacement for equilibrium and tearing mode measurements minimize contamination due to spatial misalignment to within acceptable levels for chords near the magnetic axis.

  4. Price Bubbles with Discounting: A Web-Based Classroom Experiment

    ERIC Educational Resources Information Center

    Bostian, AJ A.; Holt, Charles A.

    2009-01-01

    The authors describe a Web-based classroom experiment with two assets: cash and a stock that pays a random dividend. The interest rate on cash, coupled with a well-chosen final redemption value for the stock, induces a flat trajectory for the fundamental value of the stock. However, prices typically rise above this value during a session. The…

  5. The Physician Values in Practice Scale: Construction and Initial Validation

    ERIC Educational Resources Information Center

    Hartung, Paul J.; Taber, Brian J.; Richard, George V.

    2005-01-01

    Measures of values typically appraise the construct globally, across life domains or relative to a broad life domain such as work. We conducted two studies to construct and initially validate an occupation- and context-specific values measure. Study 1, based on a sample of 192 medical students, describes the initial construction and item analysis…

  6. Predicting Success in an Online Course Using Expectancies, Values, and Typical Mode of Instruction

    ERIC Educational Resources Information Center

    Zimmerman, Whitney Alicia

    2017-01-01

    Expectancies of success and values were used to predict success in an online undergraduate-level introductory statistics course. Students who identified as primarily face-to-face learners were compared to students who identified as primarily online learners. Expectancy value theory served as a model. Expectancies of success were operationalized as…

  7. Amount of Sleep, Daytime Sleepiness, Hazardous Driving, and Quality of Life of Second Year Medical Students.

    PubMed

    Johnson, Kay M; Simon, Nancy; Wicks, Mark; Barr, Karen; O'Connor, Kim; Schaad, Doug

    2017-10-01

    The authors describe the sleep habits of second year medical students and look for associations between reported sleep duration and depression, burnout, overall quality of life, self-reported academic success, and falling asleep while driving. The authors conducted a cross-sectional descriptive study of two consecutive cohorts of second year medical students at a large public university in the USA. Participants completed an anonymous survey about their sleep habits, daytime sleepiness (Epworth sleepiness scale), burnout (Maslach burnout inventory), depression (PRIME MD), and perceived stress (perceived stress scale). Categorical and continuous variables were compared using chi square tests and t tests, respectively. Sixty-eight percent of the students responded. Many (34.3%) reported fewer than 7 h of sleep on typical weeknights, including 6.5% who typically sleep less than 6 h. Twenty-five students (8.4%) reported nodding off while driving during the current academic year. Low typical weeknight sleep (fewer than 6 h vs 6-6.9 h vs 7 or more hours) was associated with (1) higher Epworth sleepiness scale scores, (2) nodding off while driving, (3) symptoms of burnout or depression, (4) decreased satisfaction with quality of life, and (5) lower perceived academic success (all p values ≤0.01). Students reporting under 6 h of sleep were four times more likely to nod off while driving than those reporting 7 h or more. Educational, behavioral, and curricular interventions should be explored to help pre-clinical medical students obtain at least 7 h of sleep on most weeknights.

  8. Stratospheric Ozone Climatology from Lidar Measurements at Table Mountain (34.0 deg N, 117.7 deg W) and Mauna Loa (19.5 deg N, 155.6 deg W)

    NASA Technical Reports Server (NTRS)

    Leblanc, T.; McDermid, I. S.

    2000-01-01

    A stratospheric ozone climatology derived from more than 1600 nighttime profiles obtained by the JPL differential absorption lidars (DIAL) located at Table Mountain Facility (TMF, 34.4 N) and Mauna Loa Observatory (MLO, 19.5 N) is presented in this paper. These two systems have been providing high-resolution vertical profiles of ozone number density between 15 and 50 km, several nights a week, since 1989 (TMF) and 1993 (MLO). The climatology presented here is typical of early-night ozone values with only a small influence of the Pinatubo aerosols and the 11-year solar cycle. The observed seasonal and vertical structure of the ozone concentration at TMF is consistent with that typical of mid- to subtropical latitudes. A clear annual cycle in opposite phase below and above the ozone concentration peak is observed. The observed winter maximum below the ozone peak is associated with a maximum in day-to-day variability, typical of a dynamically driven lower stratosphere. The maximum concentration observed in summer above the ozone peak emphasizes the more dominant role of photochemistry. Unlike at TMF, the ozone concentration observed at MLO tends to be higher during the summer months and lower during the winter months throughout the entire stratospheric ozone layer. Only a weak signature of the extra-tropical latitudes is observed near 19-20 km, with a secondary maximum in late winter. The only large variability observed at MLO is associated with the natural variability of the tropical tropopause.

  9. On Vieta's Formulas and the Determination of a Set of Positive Integers by Their Sum and Product

    ERIC Educational Resources Information Center

    Valahas, Theodoros; Boukas, Andreas

    2011-01-01

    In Years 9 and 10 of secondary schooling students are typically introduced to quadratic expressions and functions and related modelling, algebra, and graphing. This includes work on the expansion and factorisation of quadratic expressions (typically with integer values of coefficients), graphing quadratic functions, finding the roots of quadratic…

  10. A Comparative Evaluation of Condylar Guidance Value from Radiograph with Interocclusal Records made During Jaw Relation and Try-in: A Pilot Study.

    PubMed

    Shetty, Shilpa; Satish Babu, C L; Tambake, Deepti; Surendra Kumar, G P; Setpal, Abhishek T

    2013-09-01

    The purpose of this study was to evaluate the reliability of programming the articulator using radiographs and interocclusal records made during the jaw relation (arrow point tracing) and try-in stages. The study comprised 15 edentulous subjects with well-formed maxillary and mandibular ridges and no signs or symptoms of temporomandibular joint disorders or neuromuscular disorders. A digital orthopantomograph was taken for all subjects. The condylar guidance angles were traced on the orthopantomograph for the right and left sides and the values were recorded. Protrusive interocclusal records were made at the jaw relation stage and at the try-in stage using bite registration paste (Bitrex vinyl polysiloxane) for all subjects. These interocclusal records were used to programme a semi-adjustable articulator (Hanau Wide Vue), and the condylar guidance values on the right and left sides were recorded. The condylar guidance values so obtained were compared with the values obtained from the orthopantomograph. The condylar guidance values obtained by the various procedures were subjected to statistical analysis. The results showed a statistically significant difference between the condylar guidance values obtained from the orthopantomograph (radiograph) and those obtained at the jaw relation stage, and also between the orthopantomograph values and those obtained at the try-in stage. Condylar guidance values obtained from the radiographs were higher than those obtained at the jaw relation and try-in stages. However, we note that the mean condylar guidance values obtained at the try-in stage were closer to the mean values obtained from the radiographs.

  11. Combination of radar and daily precipitation data to estimate meaningful sub-daily point precipitation extremes

    NASA Astrophysics Data System (ADS)

    Pegram, Geoff; Bardossy, Andras; Sinclair, Scott

    2017-04-01

    The use of radar measurements for the space time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this presentation we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations which are important for design. The purpose of the presentation is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to un-sampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short duration extremes. In addition, a statistical procedure not based on a matching day by day correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected leaving 12 days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these 12 day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest 12 radar based days in each year. Of course, the timings of radar and gauge maxima can be different, so the new method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense [10 km spacing] set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer, not masked out by the radar foot-print. The pluviometer data were also aggregated to daily totals, for the same purpose. The extremes obtained using disaggregation methods were compared to the observed extremes in a cross validation procedure. The unusual and novel goal was not to obtain the reproduction of the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable. Published as: Bárdossy, A., and G. G. S. Pegram (2017) Journal of Hydrology, Volume 544, pp 397-406
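
    The core disaggregation step described above (scaling a daily gauge total by the radar's sub-daily temporal pattern) can be sketched as follows; variable names and the 15-minute interval are illustrative, and the bias corrections and ranking steps discussed above are omitted.

      # Sketch: disaggregate a daily gauge total into sub-daily amounts using
      # the radar-derived temporal pattern at the same pixel on the same day.
      import numpy as np

      def disaggregate_daily(gauge_daily_mm, radar_subdaily_mm):
          # radar_subdaily_mm: radar rainfall for the sub-daily intervals of
          # one day at the gauge pixel (e.g. 96 values of 15 min each)
          radar_subdaily_mm = np.asarray(radar_subdaily_mm, dtype=float)
          radar_total = radar_subdaily_mm.sum()
          if radar_total <= 0.0:
              # no radar rain recorded: spread the total uniformly (crude fallback)
              return np.full(radar_subdaily_mm.shape,
                             gauge_daily_mm / radar_subdaily_mm.size)
          weights = radar_subdaily_mm / radar_total
          return gauge_daily_mm * weights       # preserves the daily gauge total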

  12. Comparison of neptunium sorption results using batch and column techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Triay, I.R.; Furlano, A.C.; Weaver, S.C.

    1996-08-01

    We used crushed-rock columns to study the sorption retardation of neptunium by zeolitic, devitrified, and vitric tuffs typical of those at the site of the potential high-level nuclear waste repository at Yucca Mountain, Nevada. We used two sodium bicarbonate waters (groundwater from Well J-13 at the site and water prepared to simulate groundwater from Well UE-25p No. 1) under oxidizing conditions. It was found that values of the sorption distribution coefficient, Kd, obtained from these column experiments under flowing conditions, regardless of the water or the water velocity used, agreed well with those obtained earlier from batch sorption experiments under static conditions. The batch sorption distribution coefficient can be used to predict the arrival time for neptunium eluted through the columns. On the other hand, the elution curves showed dispersivity, which implies that neptunium sorption in these tuffs may be nonlinear, irreversible, or noninstantaneous. As a result, use of a batch sorption distribution coefficient to calculate neptunium transport through Yucca Mountain tuffs would yield conservative values for neptunium release from the site. We also noted that neptunium (present as the anionic neptunyl carbonate complex) never eluted prior to tritiated water, which implies that charge exclusion does not appear to exclude neptunium from the tuff pores. The column experiments corroborated the trends observed in batch sorption experiments: neptunium sorption onto devitrified and vitric tuffs is minimal and sorption onto zeolitic tuffs decreases as the amount of sodium and bicarbonate/carbonate in the water increases.
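
    For reference, the link between a batch-measured Kd and the arrival time of a sorbing solute in a column is usually expressed through the retardation factor; this is a standard relation quoted here as context rather than taken from the report:

      R = \frac{v_{water}}{v_{Np}} = 1 + \frac{\rho_{b}}{\theta}\,K_{d},

    where ρb is the bulk density of the crushed tuff, θ is the water-filled porosity and Kd is the sorption distribution coefficient; a sorbing solute is then expected to elute roughly R pore volumes after a conservative tracer such as tritiated water.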

  13. Effective Debye length in closed nanoscopic systems: a competition between two length scales.

    PubMed

    Tessier, Frédéric; Slater, Gary W

    2006-02-01

    The Poisson-Boltzmann equation (PBE) is widely employed in fields where the thermal motion of free ions is relevant, in particular in situations involving electrolytes in the vicinity of charged surfaces. The applications of this non-linear differential equation usually concern open systems (in osmotic equilibrium with an electrolyte reservoir, a semi-grand canonical ensemble), while solutions for closed systems (where the number of ions is fixed, a canonical ensemble) are either not appropriately distinguished from the former or are dismissed as a numerical calculation exercise. We consider herein the PBE for a confined, symmetric, univalent electrolyte and quantify how, in addition to the Debye length, its solution also depends on a second length scale, which embodies the contribution of ions by the surface (which may be significant in high surface-to-volume ratio micro- or nanofluidic capillaries). We thus establish that there are four distinct regimes for such systems, corresponding to the limits of the two parameters. We also show how the PBE in this case can be formulated in a familiar way by simply replacing the traditional Debye length by an effective Debye length, the value of which is obtained numerically from conservation conditions. But we also show that a simple expression for the value of the effective Debye length, obtained within a crude approximation, remains accurate even as the system size is reduced to nanoscopic dimensions, and well beyond the validity range typically associated with the solution of the PBE.
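
    As context for the effective Debye length discussed above, the standard open-system screening length for a symmetric univalent electrolyte is

      \lambda_{D} = \sqrt{\frac{\varepsilon k_{B} T}{2 n_{0} e^{2}}},

    and in the closed-system case the same functional form applies with the reservoir number density n0 replaced by an effective concentration fixed by ion conservation (including the counterions contributed by the charged surface); this is a sketch of the idea, not the authors' exact expression.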

  14. Experiments and modeling of dilution jet flow fields

    NASA Technical Reports Server (NTRS)

    Holdeman, James D.

    1986-01-01

    Experimental and analytical results of the mixing of single, double, and opposed rows of jets with an isothermal or variable-temperature main stream in a straight duct are presented. This study was performed to investigate flow and geometric variations typical of the complex, three-dimensional flow field in the dilution zone of gas-turbine-engine combustion chambers. The principal results, shown experimentally and analytically, were the following: (1) variations in orifice size and spacing can have a significant effect on the temperature profiles; (2) similar distributions can be obtained, independent of orifice diameter, if momentum-flux ratio and orifice spacing are coupled; (3) a first-order approximation of the mixing of jets with a variable-temperature main stream can be obtained by superimposing the main-stream and jets-in-an-isothermal-crossflow profiles; (4) the mixing of the jets is slower and is asymmetric with respect to the jet centerplanes, which shift laterally with increasing downstream distance; (5) double rows of jets give temperature distributions similar to those from a single row of equally spaced, equal-area circular holes; (6) for opposed rows of jets, with the orifice centerlines in line, the optimum ratio of orifice spacing to duct height is one-half the optimum value for single-side injection at the same momentum-flux ratio; and (7) for opposed rows of jets, with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is twice the optimum value for single-side injection at the same momentum-flux ratio.

  15. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
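
    A heavily simplified sketch of the calibration idea follows: a generic random-walk Metropolis sampler matching long-term Tb means and standard deviations. It is an illustration of the general MCMC approach, not the GEOS-5/SMOS implementation, and all names, bounds and error scales are placeholders.

      # Sketch: random-walk Metropolis over RTM-like parameters, with a
      # Gaussian misfit on long-term Tb means and standard deviations.
      import numpy as np

      def log_post(params, sim_stats, obs_stats, sigma=(3.1, 2.4), bounds=None):
          if bounds is not None and not all(lo <= p <= hi
                                            for p, (lo, hi) in zip(params, bounds)):
              return -np.inf                      # flat prior within bounds
          mean_sim, std_sim = sim_stats(params)   # user-supplied forward model
          mean_obs, std_obs = obs_stats
          return -0.5 * (((mean_sim - mean_obs) / sigma[0]) ** 2
                         + ((std_sim - std_obs) / sigma[1]) ** 2)

      def metropolis(sim_stats, obs_stats, x0, steps=5000, scale=0.02,
                     bounds=None, seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          lp = log_post(x, sim_stats, obs_stats, bounds=bounds)
          chain = []
          for _ in range(steps):
              prop = x + scale * rng.standard_normal(x.size)
              lp_prop = log_post(prop, sim_stats, obs_stats, bounds=bounds)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
                  x, lp = prop, lp_prop
              chain.append(x.copy())
          return np.array(chain)                  # posterior samples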

  16. Modeling and validation of spectral BRDF on material surface of space target

    NASA Astrophysics Data System (ADS)

    Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei

    2014-11-01

    The modeling and validation methods for the spectral BRDF of the material surface of a space target are presented. First, the microscopic characteristics of the space target's material surface were analyzed; a fiber-optic spectrometer was used to measure the directional reflectivity of typical material surfaces. To determine whether the material surface of the space target is isotropic, atomic force microscopy was used to measure the surface structure of the material and to obtain a Gaussian distribution model of the microscopic surface-element height. Then, a spectral BRDF model was constructed based on the material surface being isotropic and on the micro-facets following the Gaussian distribution obtained above; the model characterizes both smooth and rough surfaces well and is appropriate for describing the material surface of a space target. Finally, a laboratory spectral BRDF measurement platform was set up, comprising a tungsten-halogen lamp illumination system, a fiber-optic spectrometer detection system and a mechanical measurement system, with the entire experimental measurement and data collection controlled automatically by computer. A yellow thermal-control material and a solar cell were measured, showing the relationship between the reflection angle and the BRDF values at three wavelengths (380 nm, 550 nm and 780 nm), and the difference between the theoretical model values and the measured data was evaluated by the relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.

  17. Cloud overlapping parameter obtained from CloudSat/CALIPSO dataset and its application in AGCM with McICA scheme

    NASA Astrophysics Data System (ADS)

    Jing, Xianwen; Zhang, Hua; Peng, Jie; Li, Jiangnan; Barker, Howard W.

    2016-03-01

    Vertical decorrelation length (Lcf), as used to determine the overlap of cloudy layers in GCMs, was obtained from CloudSat/CALIPSO measurements made between 2007 and 2010, and analyzed in terms of monthly means. Global distributions of Lcf were produced for several cross-sectional lengths. Results show that: Lcf over the tropical convective regions typically exceeds 2 km and shifts meridionally with season; the smallest Lcf (< 1 km) tends to occur in regions dominated by marine stratiform clouds; Lcf for mid-to-high latitude continents of the Northern Hemisphere (NH) ranges from 5-6 km during winter to 2-3 km during summer; and there are marked differences between continental and oceanic values of Lcf in the mid-latitudes of the NH. These monthly gridded, observationally based Lcf values were then used by the Monte Carlo Independent Column Approximation (McICA) radiation routines within the Beijing Climate Center's GCM (BCC_AGCM2.0.1). Additionally, the GCM was run with two other descriptions of Lcf: one varied with latitude only, and the other was simply 2 km everywhere all the time. It is shown that using the observationally based Lcf in the GCM led to local and seasonal changes in total cloud fraction and shortwave (longwave) cloud radiative effects that serve mostly to reduce model biases. This indicates that usage of Lcf values that vary according to location and time has the potential to improve climate simulations.
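
    For reference, a decorrelation length is commonly used to blend maximum and random overlap of two cloudy layers in the exponential-random overlap formulation; the following relations are quoted as general context and are not necessarily the exact form implemented in BCC_AGCM2.0.1:

      \alpha(\Delta z) = \exp\!\left(-\frac{\Delta z}{L_{cf}}\right), \qquad C_{12} = \alpha\, C_{max} + (1-\alpha)\, C_{rand},

    where C_max = max(C1, C2), C_rand = C1 + C2 - C1 C2 and Δz is the vertical separation of the two layers; closely spaced layers (α → 1) overlap maximally, while widely separated layers (α → 0) overlap randomly.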

  18. The "Flexi-Chamber": A Novel Cost-Effective In Situ Respirometry Chamber for Coral Physiological Measurements.

    PubMed

    Camp, Emma F; Krause, Sophie-Louise; Santos, Lourianne M F; Naumann, Malik S; Kikuchi, Ruy K P; Smith, David J; Wild, Christian; Suggett, David J

    2015-01-01

    Coral reefs are threatened worldwide, with environmental stressors increasingly affecting the ability of reef-building corals to sustain growth from calcification (G), photosynthesis (P) and respiration (R). These processes support the foundation of coral reefs by directly influencing biogeochemical nutrient cycles and complex ecological interactions and therefore represent key knowledge required for effective reef management. However, metabolic rates are not trivial to quantify and typically rely on the use of cumbersome in situ respirometry chambers and/or the need to remove material and examine ex situ, thereby fundamentally limiting the scale, resolution and possibly the accuracy of the rate data. Here we describe a novel low-cost in situ respirometry bag that mitigates many constraints of traditional glass and plexi-glass incubation chambers. We subsequently demonstrate the effectiveness of our novel "Flexi-Chamber" approach via two case studies: 1) the Flexi-Chamber provides values of P, R and G for the reef-building coral Siderastrea cf. stellata collected from reefs close to Salvador, Brazil, which were statistically similar to values collected from a traditional glass respirometry vessel; and 2) wide-scale application of obtaining P, R and G rates for different species across different habitats to obtain inter- and intra-species differences. Our novel cost-effective design allows us to increase sampling scale of metabolic rate measurements in situ without the need for destructive sampling and thus significantly expands on existing research potential, not only for corals as we have demonstrated here, but also other important benthic groups.
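
    As an illustration of how closed-chamber incubation data of this kind are typically converted to area-normalized rates, a generic calculation (with hypothetical variable names, not the authors' protocol) is sketched below.

      # Sketch: net photosynthesis (light) or respiration (dark) rate from a
      # closed-chamber incubation, normalized to coral surface area.
      def incubation_rate(o2_start_umol_L, o2_end_umol_L, chamber_volume_L,
                          coral_volume_L, duration_h, surface_area_cm2):
          water_volume_L = chamber_volume_L - coral_volume_L  # subtract displacement
          delta_o2 = o2_end_umol_L - o2_start_umol_L
          # result in umol O2 per cm^2 per hour; positive in the light
          # (net photosynthesis), negative in the dark (respiration)
          return delta_o2 * water_volume_L / (duration_h * surface_area_cm2)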

  19. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
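
    A minimal sketch of the recommended analysis, using statsmodels' mixed-effects routine with animal as the grouping factor, is shown below; the file and column names are hypothetical.

      # Sketch: mixed-effects model for Sholl data with a random intercept per
      # animal, contrasted with a naive linear model that ignores clustering.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("sholl_data.csv")   # hypothetical columns: intersections,
                                           # radius, sex, animal_id

      # Naive model: treats every neuron as independent (anti-conservative).
      ols_fit = smf.ols("intersections ~ radius * sex", data=df).fit()

      # Mixed model: a random intercept per animal accounts for intra-class
      # correlation among neurons sampled from the same animal.
      mixed_fit = smf.mixedlm("intersections ~ radius * sex", data=df,
                              groups=df["animal_id"]).fit()
      print(mixed_fit.summary())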

  20. Actual consumption amount of personal care products reflecting Japanese cosmetic habits.

    PubMed

    Yamaguchi, Masahiko; Araki, Daisuke; Kanamori, Takeshi; Okiyama, Yasuko; Seto, Hirokazu; Uda, Masaki; Usami, Masahito; Yamamoto, Yutaka; Masunaga, Takuji; Sasa, Hitoshi

    2017-01-01

    Safety assessments of cosmetics are carried out by identifying possible harmful effects of substances in cosmetic products and assessing the exposure to products containing these substances. The present study provided data on the amounts of cosmetic products consumed in Japan to enhance and complement the existing data from Europe and the United States, i.e., the West. The outcomes of this study increase the accuracy of exposure assessments and enable more sophisticated risk assessment as a part of the safety assessment of cosmetic products. Actual amounts of products applied were calculated by determining the difference in the weight of products before and after use by approximately 300 subjects. The results of the study of skincare products revealed that in comparison with the West, large amounts of lotions and emulsions were applied, whereas lower amounts of cream and essence were applied in Japan. In the study of sunscreen products, actual measured values during outdoor leisure use were obtained, and these were lower than the values from the West. The study of the use of facial mask packs yielded data on typical Japanese sheet-type impregnated masks and revealed that high amounts were applied. Furthermore, data were obtained on cleansing foams, makeup removers and makeup products. The data from the present study enhance and complement existing information and will facilitate more sophisticated risk assessments. The present results should be extremely useful in safety assessments of newly developed cosmetic products and to regulatory authorities in Japan and around the world.

  1. Fault-tolerant feature-based estimation of space debris rotational motion during active removal missions

    NASA Astrophysics Data System (ADS)

    Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo

    2018-05-01

    One of the key functionalities required by an Active Debris Removal mission is the assessment of the target's kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. Plenty of methods, based on Kalman filtering, are available for the estimation of the target's state from feature positions; however, to guarantee filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur, and the assumption of having some a-priori knowledge about the shape of the target could be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a three-dimensional parameter which entirely describes the finite screw motion of the debris and which, importantly, is invariant to the particular set of features considered. Missing values of the parameter are completely restored by exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are herein used in a completely original fashion for retrieving a kinematic signal that appears sparse in the frequency domain. Because of this invariance with respect to the features, no assumptions are needed about the target's shape or about the continuity of tracking. The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. The results of the computer simulations showed good robustness of the method and its potential applicability for general motion conditions of the target.
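
    The compressed-sensing step invoked above can be illustrated with a heavily simplified, generic sketch: missing samples of a frequency-sparse signal are recovered by L1-regularized regression over a DCT basis. This is an illustration of the general idea, not the authors' algorithm, and all names and settings are placeholders.

      # Sketch: recover missing samples of a frequency-sparse signal via L1
      # minimization over DCT coefficients (generic compressed-sensing demo).
      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import Lasso

      N = 512
      coeffs_true = np.zeros(N)
      coeffs_true[[14, 46]] = [1.0, 0.5]            # a sparse DCT spectrum
      Psi = idct(np.eye(N), axis=0, norm="ortho")   # DCT synthesis basis
      signal = Psi @ coeffs_true                    # full, periodic-like signal

      rng = np.random.default_rng(1)
      observed = rng.choice(N, size=N // 4, replace=False)   # keep 25% of samples
      A = Psi[observed, :]                          # rows at the observed times

      lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
      lasso.fit(A, signal[observed])                # sparse coefficient estimate
      recovered = Psi @ lasso.coef_                 # full-length reconstruction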

  2. Temperature behavior of the antiferromagnetic susceptibility of nanoferrihydrite from the measurements of the magnetization curves in fields of up to 250 kOe

    NASA Astrophysics Data System (ADS)

    Balaev, D. A.; Popkov, S. I.; Krasikov, A. A.; Balaev, A. D.; Dubrovskiy, A. A.; Stolyar, S. V.; Yaroslavtsev, R. N.; Ladygina, V. P.; Iskhakov, R. S.

    2017-10-01

    The problem of the temperature dependence of the antiferromagnetic susceptibility of ferrihydrite nanoparticles is considered. Iron ions Fe3+ in ferrihydrite are ordered antiferromagnetically; however, the existence of defects on the surface and in the bulk of the nanoparticles induces an uncompensated magnetic moment that leads to the typical superparamagnetic behavior of an ensemble of nanoparticles with a characteristic blocking temperature. In the unblocked state, the magnetization curves of such objects are described as a superposition of the Langevin function and the linear-in-field contribution of the antiferromagnetic "core" of the nanoparticles. According to many studies of the magnetization curves performed on ferrihydrite (and related ferritin) nanoparticles in fields of up to 60 kOe, the dependence χAF(T) decreases as temperature increases, which was previously attributed to the superantiferromagnetism effect. As the magnetic field range increases to 250 kOe, the values of χAF obtained from an analysis of the magnetization curves become lower in magnitude; however, the character of the temperature evolution of χAF changes: the dependence χAF(T) is now an increasing function. The latter is typical of a system of antiferromagnetic particles with random orientation of the crystallographic axes. To correctly determine the antiferromagnetic susceptibility of antiferromagnetic nanoparticles (at least, ferrihydrite) and to search for effects related to superantiferromagnetism, it is necessary to use magnetic fields significantly higher than the standard value of 60 kOe used in most experiments. The study of the temperature evolution of the magnetization curves shows that the observed crossover is due to the existence of small magnetic moments in the samples.
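
    A minimal sketch of the decomposition described above: fit the measured M(H) to a Langevin (superparamagnetic) term plus a linear antiferromagnetic term and read off χAF from the linear slope. The data and parameter values below are synthetic, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit handled explicitly."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    out[small] = x[small] / 3.0
    out[~small] = 1.0 / np.tanh(x[~small]) - 1.0 / x[~small]
    return out

def magnetization(H, n_mu, mu_over_kT, chi_af):
    # superparamagnetic particle moments (Langevin) + linear AF "core" contribution
    return n_mu * langevin(mu_over_kT * H) + chi_af * H

# Synthetic M(H) curve standing in for measured data (arbitrary units, field in Oe).
H = np.linspace(1e3, 250e3, 200)
rng = np.random.default_rng(1)
M = magnetization(H, 12.0, 4e-5, 1.5e-5) + rng.normal(0, 0.05, H.size)

popt, _ = curve_fit(magnetization, H, M, p0=(10.0, 1e-5, 1e-5))
print("fitted chi_AF:", popt[2])
```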

  3. Marangoni convection in molten salts

    NASA Astrophysics Data System (ADS)

    Cramer, A.; Landgraf, S.; Beyer, E.; Gerbeth, G.

    2011-02-01

    Marangoni convection is involved in many technological processes. The substances of industrial interest are often governed by diffusive heat transport, and their physical modelling is limited with respect to the Prandtl number Pr. The present paper addresses this deficiency. Studies were made on molten salts having Pr values in an intermediate range well below that of the typically employed organics. Since some of the selected species have a relatively high melting point, a high-temperature facility which allows studying thermocapillary convection at temperatures in excess of 1,000°C was built. The results presented here were obtained in a cylindrical geometry, although the equipment that was built is not restricted to this configuration because of its modular construction. Modelled after some applications, the fluid was heated centrally on top. The bulk was embedded in a large thermostatically controlled reservoir so as to establish the lower ambient reference temperature. A characteristic size of the experimental cell was chosen such that, on the one hand, the dynamic Bond number Bo did not become too high; on the other hand, the liquid had to have a certain depth to allow particle image velocimetry. The complicated balance between body forces and thermocapillary forces in the case of intermediate Bo was found to result in a distinct local separation into a bulk motion governed by natural convection with a recirculating Marangoni flow on top. In contrast to low-viscosity organics, whose vapour pressure increases considerably with decreasing Pr, high values of the Marangoni number can be reached. Comparisons of the topology of Marangoni vortices between molten salts with 2.3 ⩽ Pr ⩽ 6.4 and a silicone oil with Pr typically one order of magnitude higher suggest that the regime of non-negligible heat diffusion is entered.
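
    For reference, the dimensionless groups named above are usually defined as in the sketch below; the property values are placeholders rather than data for any particular salt, and length-scale conventions for the dynamic Bond number vary between papers.

```python
# Standard textbook definitions of the dimensionless groups discussed above
# (property values are placeholders, not data for a specific molten salt).
def prandtl(nu, alpha):
    """Pr = kinematic viscosity / thermal diffusivity."""
    return nu / alpha

def marangoni(dsigma_dT, dT, L, mu, alpha):
    """Ma = |dsigma/dT| * dT * L / (mu * alpha)."""
    return abs(dsigma_dT) * dT * L / (mu * alpha)

def dynamic_bond(rho, beta, g, L, dsigma_dT):
    """Bo_dyn = rho * g * beta * L**2 / |dsigma/dT| (buoyancy vs. thermocapillarity)."""
    return rho * g * beta * L**2 / abs(dsigma_dT)

print(prandtl(nu=1.0e-6, alpha=2.5e-7))
print(marangoni(dsigma_dT=-7e-5, dT=30.0, L=0.01, mu=2.0e-3, alpha=2.5e-7))
print(dynamic_bond(rho=2000.0, beta=3e-4, g=9.81, L=0.01, dsigma_dT=-7e-5))
```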

  4. The Dynamic Features of Lip Corners in Genuine and Posed Smiles

    PubMed Central

    Guo, Hui; Zhang, Xiao-Hui; Liang, Jun; Yan, Wen-Jing

    2018-01-01

    The smile is a frequently expressed facial expression that typically conveys a positive emotional state and friendly intent. However, human beings have also learned how to fake smiles, typically by controlling the mouth to provide a genuine-looking expression. This is often accompanied by inaccuracies that can allow others to determine that the smile is false. Mouth movement is one of the most striking features of the smile, yet our understanding of its dynamic elements is still limited. The present study analyzes the dynamic features of lip corners, and considers how they differ between genuine and posed smiles. Employing computer vision techniques, we investigated elements such as the duration, intensity, speed, and symmetry of the lip corners, and certain irregularities, in genuine and posed smiles obtained from the UvA-NEMO Smile Database. After utilizing the facial analysis tool OpenFace, we further propose a new approach to segmenting the onset, apex, and offset phases of smiles, as well as a means of measuring irregularities and symmetry in facial expressions. We extracted these features according to 2D and 3D coordinates and conducted an analysis. The results reveal that genuine smiles have higher values for onset, offset, apex, and total durations, as well as offset displacement and a variable we termed Irregularity-b (the SD of the apex phase), than do posed smiles. Conversely, values tended to be lower for onset and offset speeds, Irregularity-a (the rate of peaks), Symmetry-a (the correlation between left and right facial movements), and Symmetry-d (differences in onset frame numbers between the left and right faces). The findings from the present study are compared to those of previous research, and certain speculations are made. PMID:29515508
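
    A minimal sketch of one way to segment a lip-corner displacement trace into onset, apex, and offset phases by thresholding against the peak displacement; the rule and the synthetic trace below are illustrative and are not the segmentation approach proposed in the paper.

```python
import numpy as np

def segment_phases(displacement, apex_fraction=0.9):
    """Split a lip-corner displacement trace into onset / apex / offset frames.
    Frames whose displacement exceeds `apex_fraction` of the peak form the apex
    phase; earlier frames are onset, later frames are offset. This is an
    illustrative rule, not the segmentation used in the paper."""
    displacement = np.asarray(displacement, dtype=float)
    peak = displacement.max()
    above = np.where(displacement >= apex_fraction * peak)[0]
    apex_start, apex_end = above[0], above[-1]
    return {
        "onset": np.arange(0, apex_start),
        "apex": np.arange(apex_start, apex_end + 1),
        "offset": np.arange(apex_end + 1, displacement.size),
    }

# Synthetic smile-like trace: rise, plateau, fall.
trace = np.concatenate([np.linspace(0, 1, 20), np.ones(15), np.linspace(1, 0, 30)])
phases = segment_phases(trace)
print({name: (idx[0], idx[-1]) for name, idx in phases.items() if idx.size})
```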

  5. Dust in the small Magellanic Cloud. 2: Dust models from interstellar polarization and extinction data

    NASA Technical Reports Server (NTRS)

    Rodrigues, C. V.; Magalhaes, A. M.; Coyne, G. V.

    1995-01-01

    We study the dust in the Small Magellanic Cloud using our polarization and extinction data (Paper 1) and existing dust models. The data suggest that the monotonic SMC extinction curve is related to values of lambda(sub max), the wavelength of maximum polarization, which are on average smaller than the mean for the Galaxy. On the other hand, AZV 456, a star with an extinction similar to that for the Galaxy, shows a value of lambda(sub max) similar to the mean for the Galaxy. We discuss simultaneous dust model fits to extinction and polarization. Fits to the wavelength-dependent polarization data are possible for stars with small lambda(sub max). In general, they imply dust size distributions which are narrower and have smaller mean sizes compared to typical size distributions for the Galaxy. However, stars with lambda(sub max) close to the Galactic norm, which also have a narrower polarization curve, cannot be fit adequately. This holds true for all of the dust models considered. The best fits to the extinction curves are obtained with a power law size distribution by assuming that the cylindrical and spherical silicate grains have a volume distribution which is continuous from the smaller spheres to the larger cylinders. The size distribution for the cylinders is taken from the fit to the polarization. The 'typical', monotonic SMC extinction curve can be fit well with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. However, amorphous carbon and silicate grains also fit the data well. AZV 456, which has an extinction curve similar to that for the Galaxy, has a UV bump which is too blue to be fit by spherical graphite grains.
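
    Wavelength-dependent interstellar polarization of this kind is commonly summarized by a Serkowski-type law peaking at lambda(sub max); the sketch below evaluates that empirical curve for a small lambda(sub max). It illustrates the parameterization only, not the physical grain-model fitting performed in the paper, and the K relation used is an often-quoted Galactic approximation.

```python
import numpy as np

def serkowski(wavelength_um, p_max, lam_max_um, K=None):
    """Empirical Serkowski law P(lambda) = P_max * exp(-K * ln^2(lambda_max / lambda)).
    If K is not given, use the commonly quoted Galactic relation K ~ 1.66 * lambda_max
    (lambda_max in microns). Illustrative only; the paper fits physical grain models."""
    if K is None:
        K = 1.66 * lam_max_um
    return p_max * np.exp(-K * np.log(lam_max_um / wavelength_um) ** 2)

wavelengths = np.linspace(0.3, 1.0, 8)                       # microns
print(serkowski(wavelengths, p_max=2.0, lam_max_um=0.45))    # smaller lambda_max than the Galactic mean
```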

  6. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections as demonstrated in this work.
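
    A minimal sketch of the IAM-style initialization described above: fill the object support with a single average μ value so that the magnitude of the initial forward projection is already close to the reference. The mask and the soft-tissue-like value of 0.096 cm^-1 at 511 keV are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

def iam_initial_mu_map(object_mask, average_mu):
    """Initialize the attenuation map with a uniform 'average mu' inside the object,
    in the spirit of the IAM initialization described above (values illustrative)."""
    mu = np.zeros(object_mask.shape, dtype=float)
    mu[object_mask] = average_mu
    return mu

# Toy 2D example: a circular "patient outline" filled with a value roughly
# representative of soft tissue at 511 keV (~0.096 1/cm, an assumption here).
yy, xx = np.mgrid[:128, :128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2
mu0 = iam_initial_mu_map(mask, average_mu=0.096)
print(mu0.sum(), mu0.max())
```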

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
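
    A short numerical sketch of the quantities discussed above: draw a random mixed state by partial-tracing a random bipartite pure state and compute its relative entropy of coherence, C_r(rho) = S(rho_diag) - S(rho), in the computational basis. The dimensions and sample count are arbitrary choices for illustration.

```python
import numpy as np

def random_mixed_state(dim, env_dim, rng):
    """Random mixed state obtained by partial-tracing a random bipartite pure state."""
    psi = rng.normal(size=(dim, env_dim)) + 1j * rng.normal(size=(dim, env_dim))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T            # reduced density matrix, unit trace

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    # C_r(rho) = S(diag(rho)) - S(rho), in the incoherent (computational) basis
    diag = np.diag(np.real(np.diag(rho)))
    return von_neumann_entropy(diag) - von_neumann_entropy(rho)

rng = np.random.default_rng(0)
samples = [relative_entropy_of_coherence(random_mixed_state(16, 16, rng)) for _ in range(200)]
print("average coherence of random mixed states:", np.mean(samples))
```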

  8. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  9. The magnitude and colour of noise in genetic negative feedback systems

    PubMed Central

    Voliotis, Margaritis; Bowsher, Clive G.

    2012-01-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
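
    The "relative variance" discussed above (variance per molecule of mean expression, i.e., the Fano factor) can be illustrated with a minimal Gillespie simulation of a single-species birth-death model whose production rate is repressed by the protein itself. The Hill-type rate law and all parameter values are illustrative assumptions, not the models analyzed in the paper.

```python
import numpy as np

def gillespie_autorepression(k_max=50.0, K=20.0, n_hill=2.0, gamma=1.0,
                             t_end=2000.0, seed=0):
    """Birth-death protein model with negative feedback on the production rate:
    birth rate k_max / (1 + (p/K)^n), death rate gamma * p. Returns the
    time-averaged mean and Fano factor (variance / mean) of the copy number."""
    rng = np.random.default_rng(seed)
    t, p = 0.0, 0
    weighted_sum = weighted_sq = total_time = 0.0
    while t < t_end:
        birth = k_max / (1.0 + (p / K) ** n_hill)
        death = gamma * p
        rate = birth + death
        dt = rng.exponential(1.0 / rate)
        # accumulate time-weighted moments of the copy number
        weighted_sum += p * dt
        weighted_sq += p * p * dt
        total_time += dt
        t += dt
        p += 1 if rng.random() < birth / rate else -1
    mean = weighted_sum / total_time
    var = weighted_sq / total_time - mean ** 2
    return mean, var / mean

mean, fano = gillespie_autorepression()
print(f"mean copy number ~ {mean:.1f}, variance/mean ~ {fano:.2f}")
```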

  10. Stock optimizing in choice when a token deposit is the operant.

    PubMed

    Widholm, J J; Silberberg, A; Hursh, S R; Imam, A A; Warren-Boulton, F R

    2001-11-01

    Each of 2 monkeys typically earned their daily food ration by depositing tokens in one of two slots. Tokens deposited in one slot dropped into a bin where they were kept (token kept). Deposits to a second slot dropped into a bin where they could be obtained again (token returned). In Experiment 1, a fixed-ratio (FR) 5 schedule that provided two food pellets was associated with each slot. Both monkeys preferred the token-returned slot. In Experiment 2, both subjects chose between unequal FR schedules with the token-returned slot always associated with the leaner schedule. When the FRs were 2 versus 3 and 2 versus 6, preferences were maintained for the token-returned slot; however, when the ratios were 2 versus 12, preference shifted to the token-kept slot. In Experiment 3, both monkeys chose between equal-valued concurrent variable-interval variable-interval schedules. Both monkeys preferred the slot that returned tokens. In Experiment 4, both monkeys chose between FRs that typically differed in size by a factor of 10. Both monkeys preferred the FR schedule that provided more food per trial. These data show that monkeys will choose so as to increase the number of reinforcers earned (stock optimizing) even when this preference reduces the rate of reinforcement (all reinforcers divided by session time).

  11. Texture and mechanical properties of Al-0.5Mg-1.0Si-0.5Cu alloy sheets manufactured via a cross rolling method

    NASA Astrophysics Data System (ADS)

    Jeon, Jae-Yeol; Son, Hyeon-Taek; Woo, Kee-Do; Lee, Kwang-Jin

    2012-04-01

    The relationship between the texture and mechanical properties of 6xxx aluminum alloy sheets processed via cross rolling was investigated. The microstructures of the conventional rolled and cross rolled sheets after annealing were analyzed using optical micrographs (OM). The texture distribution across the thickness in the Al-Mg-Si-Cu alloy, conventional rolled sheets, and cross rolled sheets both before and after annealing was investigated via X-ray texture measurements. The texture was analyzed in three layers from the surface to the center of the sheet. The β-fiber texture of the conventional rolled sheet was typical of the texture obtained in aluminum rolling. After annealing, the typical β-fiber orientations were changed to recrystallization textures: cube {001}<100> and normal direction (ND)-rotated cube orientations. However, the texture of the cross rolled sheet was composed of asymmetrical, rolling direction (RD)-rotated cube orientations. After annealing, the asymmetrical orientations in the cross rolled sheet were changed to a randomized texture. The average R-value of the annealed cross rolled sheets was higher than that of the conventional rolled sheets. The limit dome height (LDH) test results demonstrated that cross rolling is effective in improving the formability of the Al-Mg-Si-Cu alloy sheets.

  12. Prognostic implications of telomerase expression in pituitary adenomas.

    PubMed

    Tortosa, F; Webb, S M

    2018-04-01

    To analyse the prognostic value of telomerase expression in patients with pituitary adenomas (PAs) followed up for at least 8 years, a retrospective study was conducted of samples from 51 PAs (40 typical and 11 atypical) from patients who underwent transsphenoidal surgery between 2006 and 2008 and from 10 normal pituitary glands obtained by autopsy. Telomerase expression was assessed by immunohistochemistry, correlating the expression with that of Ki-67 and p53. We observed telomerase expression in 43 PAs (84.3%): in 32 of the 40 typical PAs and in all 11 atypical PAs. Expression was higher in the clinically nonfunctioning cases (P=.0034) and very rare in the patients with acromegaly (P=.0001). There was a significant association between the percentage of positive tumour cells (>10%) and the recurrence of the adenoma (P=.039). There was no correlation with the expression of Ki-67 and p53 (P=.4986), and there were no differences according to age, sex, tumour size and invasiveness. A telomerase expression rate greater than 10% in the pituitary tumour tissue was associated with recurrence or progression of the PA, especially in the nonfunctioning cases. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Medicina Interna (SEMI). All rights reserved.

  13. A detailed analysis of the energy levels configuration existing in the band gap of supersaturated silicon with titanium for photovoltaic applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pérez, E.; Dueñas, S.; Castán, H.

    2015-12-28

    The energy levels created in n-type silicon substrates supersaturated with implanted titanium, in an attempt to create an intermediate band in their band gap, are studied in detail. Two titanium ion implantation doses (10^13 cm^-2 and 10^14 cm^-2) are studied in this work by the conductance transient technique and admittance spectroscopy. Conductance transients have been measured at temperatures of around 100 K. The particular shape of these transients is due to the formation of energy barriers in the conduction band, as a consequence of the band-gap narrowing induced by the high titanium concentration. Moreover, stationary admittance spectroscopy results suggest the existence of different energy level configurations, depending on the local titanium concentration. A continuum energy level band is formed when the titanium concentration is above the Mott limit. On the other hand, when the titanium concentration is lower than the Mott limit but much higher than the donor impurity density, a quasi-continuum energy level distribution appears. Finally, a single deep center appears for low titanium concentration. In the n-type substrate, the experimental results obtained by means of thermal admittance spectroscopy at high reverse bias reveal the presence of single levels located at around E_c - 425 meV and E_c - 275 meV for implantation doses of 10^13 cm^-2 and 10^14 cm^-2, respectively. At low reverse bias voltage, quasi-continuously distributed energy levels between the conduction band minimum E_c and E_c - 450 meV are obtained for both doses. Conductance transients detected at low temperatures reveal that the high impurity concentration induces a band-gap narrowing which leads to the formation of a barrier in the conduction band. In addition, the relationship between the activation energy and the capture cross section values of all the energy levels fits the Meyer-Neldel rule very well. As is known, the Meyer-Neldel rule typically appears in processes involving multiple excitations, such as carrier capture and emission in deep levels, and it is generally observed in disordered systems. The obtained Meyer-Neldel energy value, 15.19 meV, is very close to the value obtained in multicrystalline silicon samples contaminated with iron (13.65 meV), meaning that this energy could be associated with the phonon energy in this kind of substrate.
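
    The Meyer-Neldel rule mentioned above states that the emission/capture prefactor grows exponentially with the activation energy, nu0 = nu00 * exp(Ea / E_MN), so E_MN follows from the slope of ln(nu0) versus Ea. The sketch below recovers E_MN from synthetic (Ea, prefactor) pairs; the numbers are not the paper's data.

```python
import numpy as np

# Meyer-Neldel rule: prefactor nu0 = nu00 * exp(Ea / E_MN), so ln(nu0) is linear
# in Ea with slope 1/E_MN. The (Ea, nu0) pairs below are synthetic, generated
# with E_MN = 15 meV to mimic the kind of value reported above.
E_MN_true = 0.015                                   # eV
Ea = np.array([0.275, 0.325, 0.375, 0.425])         # activation energies, eV
nu0 = 1e6 * np.exp(Ea / E_MN_true)                  # prefactors, 1/s

slope, intercept = np.polyfit(Ea, np.log(nu0), 1)
print(f"fitted Meyer-Neldel energy: {1.0 / slope * 1000:.2f} meV")
```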

  14. Repulsive particles under a general external potential: Thermodynamics by neglecting thermal noise.

    PubMed

    Ribeiro, Mauricio S; Nobre, Fernando D

    2016-08-01

    A recent proposal of an effective temperature θ, conjugated to a generalized entropy s_{q}, typical of nonextensive statistical mechanics, has led to a consistent thermodynamic framework in the case q=2. The proposal was explored for repulsively interacting vortices, currently used for modeling type-II superconductors. In these systems, the variable θ presents values much higher than those of typical room temperatures T, so that the thermal noise can be neglected (T/θ≃0). The whole procedure was developed for an equilibrium state obtained after a sufficiently long-time evolution, associated with a nonlinear Fokker-Planck equation and approached due to a confining external harmonic potential, ϕ(x)=αx^{2}/2 (α>0). Herein, the thermodynamic framework is extended to a quite general confining potential, namely ϕ(x)=α|x|^{z}/z (z>1). It is shown that the main results of the previous analyses hold for any z>1: (i) The definition of the effective temperature θ conjugated to the entropy s_{2}. (ii) The construction of a Carnot cycle, whose efficiency is shown to be η=1-(θ_{2}/θ_{1}), where θ_{1} and θ_{2} are the effective temperatures associated with two isothermal transformations, with θ_{1}>θ_{2}. The special character of the Carnot cycle is indicated by analyzing another cycle that presents an efficiency depending on z. (iii) Applying Legendre transformations for a distinct pair of variables, different thermodynamic potentials are obtained, and furthermore, Maxwell relations and response functions are derived. The present approach shows a consistent thermodynamic framework, suggesting that these results should hold for a general confining potential ϕ(x), increasing the possibility of experimental verifications.

  15. Limit load solution for electron beam welded joints with single edge weld center crack in tension

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Shi, Yaowu; Li, Xiaoyan; Lei, Yongping

    2012-05-01

    Limit loads are widely studied, and several limit load solutions have been proposed for typical weldment geometries. However, no limit load solutions exist for single edge cracked weldments in tension (SEC(T)), which is also a typical geometry in fracture analysis. The mis-matching limit loads for thick plates with SEC(T) are investigated, and dedicated limit load solutions are proposed based on the available mis-matching limit load solutions and systematic finite element analyses. The real weld configurations are simplified as a strip, and different weld strength mis-match ratios M, crack depth/width ratios a/W and weld widths 2H are considered. As a result, it is found that there is excellent agreement between the limit load solutions and the FE results for almost all values of the mis-match ratio M, a/W and the ligament-to-weld width ratio (W-a)/H. Moreover, useful recommendations are given for evaluating the limit loads of the EBW structure with SEC(T), as encoded in the sketch below. For the EBW joints with SEC(T), the mis-matching limit loads can be obtained by assuming that the components are wholly made of base metal when M ranges from 1.6 to 0.6. When M decreases to 0.4, the mis-matching limit loads can be obtained by assuming that the components are wholly made of base metal only for large values of (W-a)/H. These recommendations may be useful for evaluating the limit loads of EBW structures with SEC(T). Engineering simplifications are given for assessing the limit loads of electron beam welded structures with SEC(T).
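
    A tiny helper encoding the recommendation stated above (treat the joint as all base metal for 0.6 <= M <= 1.6, and for M around 0.4 only when (W-a)/H is large). The numerical cut-off for "large" is an assumed placeholder, since the abstract does not give one.

```python
def base_metal_limit_load_applies(M, ligament_to_weld_ratio, large_ratio=5.0):
    """Return True when the abstract's simplification (treat the joint as all base
    metal) is stated to hold. The cut-off `large_ratio` for a 'large' (W-a)/H is an
    assumed placeholder, not a value given in the abstract."""
    if 0.6 <= M <= 1.6:
        return True
    if abs(M - 0.4) < 1e-9:
        return ligament_to_weld_ratio >= large_ratio
    return False

print(base_metal_limit_load_applies(M=1.2, ligament_to_weld_ratio=2.0))   # True
print(base_metal_limit_load_applies(M=0.4, ligament_to_weld_ratio=8.0))   # True
```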

  16. Plant Functional Diversity and Species Diversity in the Mongolian Steppe

    PubMed Central

    Liu, Guofang; Xie, Xiufang; Ye, Duo; Ye, Xuehua; Tuvshintogtokh, Indree; Mandakh, Bayart; Huang, Zhenying; Dong, Ming

    2013-01-01

    Background The Mongolian steppe is one of the most important grasslands in the world but suffers from aridization and damage from anthropogenic activities. Understanding the structure and function of this community is important for ecological conservation, but has seldom been investigated. Methodology/Principal Findings In this study, a total of 324 quadrats located on the three main types of Mongolian steppe were surveyed. Early-season perennial forbs (37% of total importance value), late-season annual forbs (33%) and late-season perennial forbs (44%) were dominant in the meadow, typical and desert steppes, respectively. Species richness, diversity and plant functional type (PFT) richness decreased from the meadow, via typical, to desert steppes, but evenness increased; PFT diversity in the desert and meadow steppes was higher than that in the typical steppe. However, above-ground net primary productivity (ANPP) was far lower in the desert steppe than in the other two steppes. In addition, the slope of the relationship between species richness and PFT richness increased from the meadow, via typical, to desert steppes. Similarly, with an increase in species diversity, PFT diversity increased more quickly in both the desert and typical steppes than in the meadow steppe. Random resampling suggested that this coordination was partly due to a sampling effect of diversity. Conclusions/Significance These results indicate that the desert steppe should be strictly protected because of its limited functional redundancy, which makes its ecological functioning sensitive to species loss. In contrast, despite the high potential forage production shared by the meadow and typical steppes, management of these two types of steppes should differ: the meadow steppe should be preserved due to its higher conservation value, characterized by more species redundancy and higher spatial heterogeneity, while the typical steppe could be utilized moderately because its dominant grass genus Stipa is resistant to herbivory and drought. PMID:24116233
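
    Species diversity and evenness figures of the kind reported here are most often computed as Shannon diversity H' and Pielou evenness J' = H'/ln(S). The sketch below shows the calculation for a single quadrat with illustrative abundances; the abstract does not state which indices were used, so this is only a generic example.

```python
import numpy as np

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions p_i."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log(p)))

def pielou_evenness(abundances):
    """Pielou evenness J' = H' / ln(S), with S the number of species present."""
    s = np.count_nonzero(abundances)
    return shannon_diversity(abundances) / np.log(s)

quadrat = [12, 7, 3, 1, 1]      # illustrative per-species abundances in one quadrat
print(shannon_diversity(quadrat), pielou_evenness(quadrat))
```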

  17. Examining the reinforcing value of stimuli within social and non-social contexts in children with and without high-functioning autism

    PubMed Central

    Goldberg, Melissa C; Allman, Melissa J; Hagopian, Louis P; Triggs, Mandy M; Frank-Crawford, Michelle A; Mostofsky, Stewart H; Denckla, Martha B; DeLeon, Iser G

    2018-01-01

    One of the key diagnostic criteria for autism spectrum disorder includes impairments in social interactions. This study compared the extent to which boys with high-functioning autism and typically developing boys “value” engaging in activities with a parent or alone. Two different assessments that can empirically determine the relative reinforcing value of social and non-social stimuli were employed: paired-choice preference assessments and progressive-ratio schedules. There were no significant differences between boys with high-functioning autism and typically developing boys on either measure. Moreover, there was a strong correspondence in performance across these two measures for participants in each group. These results suggest that the relative reinforcing value of engaging in activities with a primary caregiver is not diminished for children with autism spectrum disorder. PMID:27368350

  18. 41 CFR 102-36.35 - What is the typical process for disposing of excess personal property?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 What is the typical... agency property or by obtaining excess property from other federal agencies in lieu of new procurements... eligible non-federal activities. Title 40 of the United States Code requires that surplus personal property...

  19. The Process of Including Elementary Students with Autism and Intellectual Impairments in Their Typical Classrooms.

    ERIC Educational Resources Information Center

    Downing, June E.; And Others

    A qualitative case study methodology was used to examine the process of including three students with autism, intellectual impairments, and behavioral challenges in age-appropriate typical classrooms and home schools. Data were obtained over a 9-month period from field notes of a participant researcher and three paraeducators, structured…

  20. Severity of Emotional and Behavioral Problems among Poor and Typical Readers.

    ERIC Educational Resources Information Center

    Arnold, Elizabeth Mayfield; Goldston, David B.; Walsh, Adam K.; Reboussin, Beth A.; Daniel, Stephanie Sergent; Hickman, Enith; Wood, Frank B.

    2005-01-01

    The purpose of this study was to examine the severity of behavioral and emotional problems among adolescents with poor and typical single word reading ability (N = 188) recruited from public schools and followed for a median of 2.4 years. Youth and parents were repeatedly assessed to obtain information regarding the severity and course of symptoms…

  1. The Rest-Frame Optical Luminosity Functions of Galaxies at 2<=z<=3.5

    NASA Astrophysics Data System (ADS)

    Marchesini, D.; van Dokkum, P.; Quadri, R.; Rudnick, G.; Franx, M.; Lira, P.; Wuyts, S.; Gawiser, E.; Christlein, D.; Toft, S.

    2007-02-01

    We present the rest-frame optical (B, V, and R band) luminosity functions (LFs) of galaxies at 2<=z<=3.5, measured from a K-selected sample constructed from the deep NIR MUSYC, the ultradeep FIRES, and the GOODS-CDFS. This sample is unique for its combination of area and range of luminosities. The faint-end slopes of the LFs at z>2 are consistent with those at z~0. The characteristic magnitudes are significantly brighter than the local values (e.g., ~1.2 mag in the R band), while the measured values for Φ* are typically ~5 times smaller. The B-band luminosity density at z~2.3 is similar to the local value, and in the R band it is ~2 times smaller than the local value. We present the LF of distant red galaxies (DRGs), which we compare to that of non-DRGs. While DRGs and non-DRGs are characterized by similar LFs at the bright end, the faint-end slope of the non-DRG LF is much steeper than that of DRGs. The contribution of DRGs to the global densities down to the faintest probed luminosities is 14%-25% in number and 22%-33% in luminosity. From the derived rest-frame U-V colors and stellar population synthesis models, we estimate the mass-to-light ratios (M/L) of the different subsamples. The M/L ratios of DRGs are ~5 times higher (in the R and V bands) than those of non-DRGs. The global stellar mass density at 2<=z<=3.5 appears to be dominated by DRGs, whose contribution is of order ~60%-80% of the global value. Qualitatively similar results are obtained when the population is split by rest-frame U-V color instead of observed J-K color. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. Also based on observations collected at the European Southern Observatories on Paranal, Chile as part of the ESO program 164.O-0612.
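
    Luminosity functions of this kind are conventionally parameterized by a Schechter function with characteristic magnitude M*, normalization Phi*, and faint-end slope alpha. The sketch below evaluates the magnitude form of the Schechter function to show how a brighter M* and a smaller Phi* reshape the LF; the parameter values are illustrative, not the fits reported in the paper.

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute magnitudes:
    Phi(M) = 0.4 ln(10) phi* [10^{-0.4(M-M*)}]^{alpha+1} exp(-10^{-0.4(M-M*)})."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

M = np.linspace(-24, -18, 7)
local = schechter_mag(M, phi_star=5e-3, M_star=-21.0, alpha=-1.2)   # illustrative z~0 values
highz = schechter_mag(M, phi_star=1e-3, M_star=-22.2, alpha=-1.2)   # brighter M*, ~5x smaller phi*
print(np.round(highz / local, 3))
```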

  2. Transient thermal and stress analysis of maxillary second premolar tooth using an exact three-dimensional model.

    PubMed

    Hashemipour, Maryam Alsadat; Mohammadpour, Ali; Nassab, Seiied Abdolreza Gandjalikhan

    2010-01-01

    In this paper, the temperature and stress distributions in an exact 3D model of a restored maxillary second premolar tooth are obtained with a finite element approach. Carious teeth need to be restored with appropriate restorative materials. Many restorative materials can be used in place of tooth structures; since tooth structures are being replaced, the restorative materials should be as similar to the original structure as possible. In the present study, a Mesial Occlusal Distal (MOD) type of restoration is chosen and applied to a sound tooth model. Four cases of restoration are investigated: two cases in which a base is used under the restorative material and two cases in which the base is omitted. The restorative materials are amalgam and composite, and glass-ionomer is used as the base material. Modeling is done in the SolidWorks environment by means of exact measurements of typical human tooth dimensions. Tooth behavior under thermal load due to consuming hot liquids is analyzed by means of a three-dimensional finite element method using ANSYS software. The highest values of tensile and compressive stresses are compared with the tensile and compressive strength of the tooth and restorative materials, and the value of shear stress at the tooth-restoration junctions is compared with the bond strength. A sound tooth under the same thermal load is also analyzed, and the results are compared with those obtained for the restored models. Temperature and stress distributions in the tooth are calculated for each case, with special consideration of the vicinity of the pulp and the restoration region. Numerical results show that, in the two cases with amalgam, using the base material (glass-ionomer) under the restorative material decreases the maximum temperature in the restored tooth. In the stress analysis, it is seen that the principal stress has its maximum values in composite restorations. The maximum temperatures are found in the case of amalgam restoration without a base. Moreover, it is found that restoration has no influence on the stress values at the DEJ, such that for all cases these values are close to the sound tooth results.

  3. Distribution of Radioactive Cesium during Milling and Cooking of Contaminated Buckwheat.

    PubMed

    Hachinohe, Mayumi; Nihei, Naoto; Kawamoto, Shinichi; Hamamatsu, Shioka

    2018-06-01

    To clarify the behavior of radioactive cesium (Cs) in buckwheat grains during milling and cooking processes, parameters such as the processing factor (Pf) and food processing retention factor (Fr) were evaluated in two lots of buckwheat grains, R1 and R2, with different concentrations of radioactive Cs. Three milling fractions, the husk, bran, and flour fractions, were obtained using a mill and electric sieve. The radioactive Cs (134Cs + 137Cs) concentrations in husk and bran were higher than that in grain, whereas the concentration in flour was lower than that in grain. Pf values for the flours of R1 and R2 were 0.60 and 0.80, respectively. Fr values for the flours of R1 and R2 were 0.28 and 0.53, respectively. Raw buckwheat noodles (soba) were prepared using a mixture of buckwheat flour and wheat flour according to the typical recipe and were cooked in boiling water for 0.5, 1, and 2 min, followed by rinsing with water. Pf values for the soba boiled for 2 min (optimal for eating) made with R1 and R2 were 0.34 and 0.40, respectively. Fr values for these R1 and R2 samples were 0.55 and 0.66, respectively. Pf and Fr values for soba boiled for different times for both R1 and R2 were less than 0.6 and 0.8, respectively. Thus, buckwheat flour and its product, soba, cooked by boiling, are considered acceptable for human consumption according to the standard limit for radioactive Cs in buckwheat grains.
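
    Under the definitions commonly used for these parameters (assumed here; the abstract does not restate them), the processing factor is the concentration ratio of processed to raw food, and the food processing retention factor additionally accounts for the mass change. A small worked sketch with illustrative numbers:

```python
# Processing factor and food processing retention factor, under the commonly used
# definitions (assumed here; the abstract does not restate them):
#   Pf = C_processed / C_raw
#   Fr = Pf * (m_processed / m_raw)
def processing_factor(c_processed, c_raw):
    return c_processed / c_raw

def retention_factor(c_processed, c_raw, m_processed, m_raw):
    return processing_factor(c_processed, c_raw) * (m_processed / m_raw)

# Illustrative numbers only (Bq/kg and kg), not values from the study.
print(processing_factor(60.0, 100.0))                       # Pf = 0.6
print(retention_factor(60.0, 100.0, 0.7, 1.0))              # Fr = 0.42
```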

  4. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
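
    A much-simplified sketch of the first step described above: score the similarity between replicate membership (Q) matrices after aligning cluster labels, then group runs whose similarity exceeds a threshold. Brute-force label alignment is only feasible for small K, and the greedy grouping here stands in for, but is not, the Markov clustering used by Clumpak.

```python
import itertools
import numpy as np

def aligned_similarity(Q1, Q2):
    """Similarity between two N x K membership matrices, maximized over
    permutations of Q2's cluster labels (feasible only for small K)."""
    K = Q1.shape[1]
    best = -np.inf
    for perm in itertools.permutations(range(K)):
        best = max(best, 1.0 - np.mean(np.abs(Q1 - Q2[:, list(perm)])))
    return best

def group_runs(runs, threshold=0.95):
    """Greedy grouping of replicate runs whose aligned similarity exceeds `threshold`."""
    groups = []
    for i, run in enumerate(runs):
        for group in groups:
            if aligned_similarity(runs[group[0]], run) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(3), size=50)                   # one clustering "mode"
runs = [base + rng.normal(0, 0.01, base.shape) for _ in range(4)]
runs.append(rng.dirichlet(np.ones(3), size=50))             # a distinct mode
print(group_runs(runs))                                     # e.g. [[0, 1, 2, 3], [4]]
```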

  5. Distribution analysis of airborne nicotine concentrations in hospitality facilities.

    PubMed

    Schorp, Matthias K; Leyden, Donald E

    2002-02-01

    A number of publications report statistical summaries for environmental tobacco smoke (ETS) concentrations. Despite compelling evidence for the data not being normally distributed, these publications typically report the arithmetic mean and standard deviation of the data, thereby losing important information related to the distribution of values contained in the original data. We were interested in the frequency distributions of reported nicotine concentrations in hospitality environments and subjected available data to distribution analyses. The distribution of experimental indoor airborne nicotine concentration data taken from hospitality facilities worldwide was fit to lognormal, Weibull, exponential, Pearson (Type V), logistic, and loglogistic distribution models. Comparison of goodness of fit (GOF) parameters and indications from the literature verified the selection of a lognormal distribution as the overall best model. When individual data were not reported in the literature, statistical summaries of results were used to model sets of lognormally distributed data that are intended to mimic the original data distribution. Grouping the data into various categories led to 31 frequency distributions that were further interpreted. The median values in nonsmoking environments are about half of the median values in smoking sections. When different continents are compared, Asian, European, and North American median values in restaurants are about a factor of three below levels encountered in other hospitality facilities. On a comparison of nicotine concentrations in North American smoking sections and nonsmoking sections, median values are about one-third of the European levels. The results obtained may be used to address issues related to exposure to ETS in the hospitality sector.
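
    A minimal sketch of the distribution-fitting comparison described above: fit several candidate distributions to (synthetic) concentration data and rank them by a goodness-of-fit score, here AIC from the fitted log-likelihoods rather than the exact GOF parameters used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic airborne nicotine concentrations (ug/m3), standing in for literature data.
data = rng.lognormal(mean=1.0, sigma=0.9, size=400)

candidates = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                  # fix location at 0 for concentrations
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                              # loc was held fixed
    aic = 2 * k - 2 * loglik
    print(f"{name:12s} AIC = {aic:.1f}")

s, loc, scale = stats.lognorm.fit(data, floc=0)
print("fitted lognormal median:", scale)             # median of a zero-loc lognormal is its scale
```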

  6. "You Be the Judge."

    ERIC Educational Resources Information Center

    Black, Susan

    1995-01-01

    Although teachers at all levels are encouraged to use role-playing and simulation, they usually overestimate role-playing's learning value. Teachers use these methods mainly to change behavior (and values), not reinforce curriculum content. Sociodramas (scenes based on typical situations facing children) are more effective role-playing activities…

  7. Diamagnetic Corrections and Pascal's Constants

    ERIC Educational Resources Information Center

    Bain, Gordon A.; Berry, John F.

    2008-01-01

    Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. These tabulated values can be problematic since many sources contain incomplete and conflicting data.…
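
    The correction works by summing tabulated atomic (Pascal) constants to estimate the diamagnetic contribution and subtracting it from the measured susceptibility, chi_para = chi_measured - chi_dia. The constants and formula counts below are approximate, illustrative values; real work should use the tabulations discussed in the article.

```python
# chi_para = chi_measured - chi_dia, with chi_dia estimated as a sum of tabulated
# Pascal constants. The constants below are approximate illustrative values
# (cm^3/mol); use the tabulations discussed in the article for real work.
PASCAL = {"C": -6.00e-6, "H": -2.93e-6, "N(ring)": -4.61e-6}

def diamagnetic_correction(atom_counts):
    return sum(PASCAL[atom] * n for atom, n in atom_counts.items())

# Example: a hypothetical ligand C10H8N2 (counts illustrative).
chi_dia = diamagnetic_correction({"C": 10, "H": 8, "N(ring)": 2})
chi_measured = 1.20e-3          # cm^3/mol, hypothetical measurement
chi_para = chi_measured - chi_dia
print(f"chi_dia = {chi_dia:.2e}  chi_para = {chi_para:.2e} cm^3/mol")
```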

  8. Economic value added: can it apply to an S corporation medical practice?

    PubMed

    Shapiro, Michael D

    2007-08-01

    Typically, owners of medical practices use financial formulas such as ROI and net present value to evaluate the financial benefit of new projects. However, economic value added, a concept used by many large corporations to define and maximize return, may add greater benefit in helping medical practice owners realize a reasonable return on their core business.
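
    For context, economic value added is typically computed as net operating profit after taxes minus a capital charge (invested capital times the cost of capital). A tiny worked example with hypothetical practice figures:

```python
def economic_value_added(nopat, invested_capital, cost_of_capital):
    """EVA = NOPAT - (invested capital x weighted-average cost of capital)."""
    return nopat - invested_capital * cost_of_capital

# Hypothetical S-corporation practice figures, for illustration only.
print(economic_value_added(nopat=420_000, invested_capital=1_500_000, cost_of_capital=0.12))
# -> 240000.0: the practice earned $240k above its capital charge this year.
```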

  9. Relaxor properties of barium titanate crystals grown by Remeika method

    NASA Astrophysics Data System (ADS)

    Roth, Michel; Tiagunov, Jenia; Dul'kin, Evgeniy; Mojaev, Evgeny

    2017-06-01

    Barium titanate (BaTiO3, BT) crystals have been grown by the Remeika method using both the regular KF and mixed KF-NaF (0.6-0.4) solvents. Typical acute-angle "butterfly wing" BT crystals have been obtained, and they were characterized using X-ray diffraction, scanning electron microscopy (including energy dispersive spectroscopy), and conventional dielectric and acoustic emission methods. A typical wing has a triangular plate shape, up to 0.5 mm thick with a 10-15 mm2 area. The plate has a (001) habit and an atomically smooth outer surface. Both K+ and F- solvent ions are incorporated as dopants into the crystal lattice during growth, substituting for Ba2+ and O2- ions, respectively. The dopant distribution is found to be inhomogeneous, their content being almost an order of magnitude higher (up to 2 mol%) at the outer surface of the plate relative to the bulk. A few-μm-thick surface layer is formed in which a multidomain ferroelectric net is confined between two ≤1 μm thick dopant-rich surfaces. The layer as a whole possesses relaxor ferroelectric properties, which is apparent from the appearance of additional broad maxima, Tm, in the temperature dependence of the dielectric permittivity around the ferroelectric phase transition. Intense acoustic emission responses detected at temperatures corresponding to the Tm values allow observation of the shift of Tm to lower temperatures at higher frequencies, i.e., the dispersion typical of relaxor ferroelectrics. The outer surface of the BT wing can thus serve as a relaxor thin film for various electronic applications, such as capacitors, or as a substrate for BT-based multiferroic structures. Crystals grown from KF-NaF fluxes contain sodium atoms as an additional impurity, but the crystal yield is much smaller, and while the ferroelectric transition peak is diffuse, it does not show any sign of the dispersion typical of relaxor behavior.

  10. Alkaline hydrothermal liquefaction of swine carcasses to bio-oil.

    PubMed

    Zheng, Ji-Lu; Zhu, Ming-Qiang; Wu, Hai-tang

    2015-09-01

    It is imperative that swine carcasses are disposed of safely, practically and economically. Alkaline hydrothermal liquefaction of swine carcasses to bio-oil was performed. Firstly, the effects of temperature, reaction time and pH value on the yield of each liquefaction product were determined. Secondly, the liquefaction products, including bio-oil and solid residue, were characterized. Finally, the energy recovery ratio (ERR), defined as the energy of the resultant products compared with the energy input of the material, was investigated. Our experiments show that reaction time had some influence on the yield of liquefaction products, but temperature and pH value had a larger influence. Yields of 62.2 wt% bio-oil, having a high heating value of 32.35 MJ/kg and a viscosity of 305 cP, and 22 wt% solid residue were realized at a liquefaction temperature of 250°C, a reaction time of 60 min and a pH value of 9.0. The bio-oil contained up to hundreds of different chemical components that may be classified according to functional groups. Typical compound classes in the bio-oil were hydrocarbons, organic acids, esters, ketones and heterocyclics. The energy recovery ratio (ERR) reached 93.63%. The bio-oil is expected to contribute to fossil fuel replacement in stationary applications, including boilers and furnaces, and upgrading processes for the bio-oil may be used to obtain liquid transport fuels. Copyright © 2015 Elsevier Ltd. All rights reserved.
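
    The ERR defined above compares the energy contained in the products with the energy input of the feedstock. The sketch below computes a simplified, bio-oil-only version of this ratio; the feedstock heating value and the single-product basis are assumptions, so the result is not expected to reproduce the paper's 93.63%.

```python
def energy_recovery_ratio(yield_fraction, hhv_product, hhv_feed):
    """ERR = (product mass fraction x product heating value) / feed heating value,
    on a dry-feed, single-product basis. This simplified basis is an assumption;
    the paper's definition may also count other products and process energy inputs."""
    return yield_fraction * hhv_product / hhv_feed

# 62.2 wt% bio-oil at 32.35 MJ/kg (from the abstract); a feed heating value of
# 23 MJ/kg is an assumed placeholder.
print(f"{energy_recovery_ratio(0.622, 32.35, 23.0):.1%}")
```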

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, R; Le, Y; Armour, E

    Purpose: Dose-response studies in radiation therapy typically use single response values for tumors across ensembles of tumors. Using the high dose rate (HDR) treatment plan dose grid and pre- and post-therapy FDG-PET images, we look for correlations between voxelized dose and FDG uptake response in individual tumors. Methods: Fifteen patients were treated for localized rectal cancer using 192Ir HDR brachytherapy in conjunction with surgery. FDG-PET images were acquired before HDR therapy and 6-8 weeks after treatment (prior to surgery). Treatment planning was done on a commercial workstation and the dose grid was calculated. The two PETs and the treatment dose grid were registered to each other using non-rigid registration. The difference in PET SUV values before and after HDR was plotted versus absorbed radiation dose for each voxel. The voxels were then separated into bins for every 400 cGy of absorbed dose and the bin average values plotted similarly. Results: Individual voxel doses did not correlate with PET response; however, when grouped into tumor subregions corresponding to dose bins, eighty percent of the patients showed a significant positive correlation (R2 > 0) between the PET uptake difference in the targeted region and the absorbed dose. Conclusion: By considering larger ensembles of voxels, such as organ average absorbed dose or the dose bins considered here, valuable information may be obtained. The dose-response correlations as measured by FDG-PET difference potentially underline the importance of FDG-PET as a measure of response, as well as the value of voxelized information.
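
    A minimal sketch of the binning analysis described above: assign voxels to 400 cGy dose bins, average the pre/post SUV difference within each bin, and test for a linear trend across bins. The arrays below are synthetic stand-ins for registered dose and PET-difference maps.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dose_cgy = rng.uniform(0, 4000, size=20000)                          # per-voxel absorbed dose
delta_suv = 0.0004 * dose_cgy + rng.normal(0, 1.0, dose_cgy.size)    # synthetic response

bin_edges = np.arange(0, 4400, 400)                                  # 400 cGy bins
bin_index = np.digitize(dose_cgy, bin_edges) - 1
bin_centers, bin_means = [], []
for b in range(len(bin_edges) - 1):
    in_bin = bin_index == b
    if in_bin.any():
        bin_centers.append(bin_edges[b] + 200.0)
        bin_means.append(delta_suv[in_bin].mean())

slope, intercept, r, p, _ = stats.linregress(bin_centers, bin_means)
print(f"R^2 = {r**2:.3f}, p = {p:.2g}")
```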

  12. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

    In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of the growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
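
    The idea can be illustrated with a Monte Carlo stand-in for the multivariate-normal correction: draw correlated null z-scores using the LD correlation matrix of the cis variants and estimate how often the best null p value beats the observed minimum. This sampling sketch conveys the concept only; the paper's method is analytical and includes small-sample corrections not shown here.

```python
import numpy as np
from scipy import stats

def gene_level_p(min_p_observed, ld_corr, n_draws=50000, seed=0):
    """Monte Carlo version of the multivariate-normal correction: draw correlated
    null z-scores with the variants' LD correlation matrix and ask how often the
    best (smallest) p value beats the observed one."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(ld_corr.shape[0]), ld_corr, size=n_draws)
    min_p_null = 2.0 * stats.norm.sf(np.abs(z).max(axis=1))
    return float(np.mean(min_p_null <= min_p_observed))

# Toy LD matrix for 5 cis variants with exchangeable correlation 0.6.
m = 5
ld = np.full((m, m), 0.6) + 0.4 * np.eye(m)
print(gene_level_p(min_p_observed=1e-3, ld_corr=ld))
```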

  13. Fourier transform spectroscopy of the CO-stretching band of O-18 methanol

    NASA Astrophysics Data System (ADS)

    Lees, R. M.; Murphy, Reba-Jean; Moruzzi, Giovanni; Predoi-Cross, Adriana; Xu, Li-Hong; Appadoo, D. R. T.; Billinghurst, B.; Goulding, R. R. J.; Zhao, Saibei

    2009-07-01

    The high-resolution Fourier transform spectrum of the ν8 CO-stretching band of CH3 18OH between 900 and 1100 cm^-1 has been recorded at the Canadian Light Source (CLS) synchrotron facility in Saskatoon, and the majority of the torsion-rotation structure has been analyzed. For the νt = 0 torsional ground state, subbands have been identified for K values from 0 to 11 for A and E torsional symmetries up to J values typically well over 30. For νt = 1, A and E subbands have been assigned up to K = 7, and several νt = 2 subbands have also been identified. Upper-state term values determined from the assigned transitions using the Ritz program have been fitted to J(J+1) power-series expansions to obtain substate origins and sets of state-specific parameters giving a compact representation of the substate J-dependence. The νt = 0 subband origins have been fitted to effective molecular constants for the excited CO-stretching state, and a torsional barrier of 377.49(32) cm^-1 is found, representing a 0.89% increase over the ground-state value. The vibrational energy for the CO-stretch state was found to be 1007.49(7) cm^-1. A number of subband-wide and J-localized perturbations have been seen in the spectrum, arising both from anharmonic and Coriolis interactions, and several of the interacting states have been identified.
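
    A minimal sketch of the power-series fit mentioned above: expand substate term values in x = J(J+1), E(J) ~ sigma + B*x - D*x^2 + ..., and recover the substate origin and effective constants by least squares. The constants and noise level below are synthetic, not values from the analysis.

```python
import numpy as np

# Term values expanded in x = J(J+1):  E(J) = sigma + B*x - D*x**2 + ...
J = np.arange(2, 31)
x = J * (J + 1.0)
sigma_true, B_true, D_true = 1034.50, 0.823, 2.0e-6          # cm^-1, synthetic values
E = sigma_true + B_true * x - D_true * x**2 + np.random.default_rng(0).normal(0, 1e-4, J.size)

# Least-squares fit of the power series (quadratic in x here).
coeffs = np.polyfit(x, E, deg=2)            # returns [ -D, B, sigma ]
print("substate origin ~", round(coeffs[2], 4), "cm^-1;  B ~", round(coeffs[1], 5))
```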

  14. Physical Processes Controlling the Spatial Distributions of Relative Humidity in the Tropical Tropopause Layer Over the Pacific

    NASA Technical Reports Server (NTRS)

    Jensen, Eric J.; Thornberry, Troy D.; Rollins, Andrew W.; Ueyama, Rei; Pfister, Leonhard; Bui, Thaopaul; Diskin, Glenn S.; Digangi, Joshua P.; Hintsa, Eric; Gao, Ru-Shan

    2017-01-01

    The spatial distribution of relative humidity with respect to ice (RHI) in the boreal wintertime tropical tropopause layer (TTL, ~14-18 km) over the Pacific is examined with the measurements provided by the NASA Airborne Tropical TRopopause EXperiment. We also compare the measured RHI distributions with results from a transport and microphysical model driven by meteorological analysis fields. Notable features in the distribution of RHI versus temperature and longitude include (1) the common occurrence of RHI values near ice saturation over the western Pacific in the lower to middle TTL; (2) low RHI values in the lower TTL over the central and eastern Pacific; (3) common occurrence of RHI values following a constant mixing ratio in the middle to upper TTL (temperatures between 190 and 200 K); (4) RHI values typically near ice saturation in the coldest airmasses sampled; and (5) RHI values typically near 100% across the TTL temperature range in air parcels with ozone mixing ratios less than 50 ppbv. We suggest that the typically saturated air in the lower TTL over the western Pacific is likely driven by a combination of the frequent occurrence of deep convection and the predominance of rising motion in this region. The nearly constant water vapor mixing ratios in the middle to upper TTL likely result from the combination of slow ascent (resulting in long residence times) and wave-driven temperature variability. The numerical simulations generally reproduce the observed RHI distribution features, and sensitivity tests further emphasize the strong influence of convective input and vertical motions on TTL relative humidity.
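
    For reference, RHI is the ratio of the water vapor partial pressure to the saturation vapor pressure over ice. The sketch below uses a Magnus-type approximation for the latter; this is one common formulation and is not necessarily the one used in the ATTREX analysis, and the input values are merely TTL-like examples.

```python
import numpy as np

def saturation_vapor_pressure_ice_hpa(T_kelvin):
    """Magnus-type approximation for saturation vapor pressure over ice (hPa).
    One common form; other formulations (e.g., Murphy & Koop) differ slightly."""
    t_c = T_kelvin - 273.15
    return 6.112 * np.exp(22.46 * t_c / (272.62 + t_c))

def rhi_percent(w_ppmv, pressure_hpa, T_kelvin):
    """RHI from water vapor volume mixing ratio (ppmv), pressure (hPa), temperature (K)."""
    e = w_ppmv * 1e-6 * pressure_hpa          # vapor partial pressure, hPa
    return 100.0 * e / saturation_vapor_pressure_ice_hpa(T_kelvin)

# TTL-like values: ~4 ppmv water vapor near 100 hPa and 192 K.
print(f"{rhi_percent(4.0, 100.0, 192.0):.0f}% RHI")
```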

  15. Decay of Kadomtsev-Petviashvili lumps in dissipative media

    NASA Astrophysics Data System (ADS)

    Clarke, S.; Gorshkov, K.; Grimshaw, R.; Stepanyants, Y.

    2018-03-01

    The decay of Kadomtsev-Petviashvili lumps is considered for a few typical dissipation mechanisms: Rayleigh dissipation, Reynolds dissipation, Landau damping, Chezy bottom friction, viscous dissipation in the laminar boundary layer, and radiative losses caused by large-scale dispersion. It is shown that the straight-line motion of lumps is unstable under the influence of dissipation. The lump trajectories are calculated for the two most typical models of dissipation: the Rayleigh and Reynolds dissipations. A comparison of analytical results obtained within the framework of asymptotic theory with direct numerical calculations of the Kadomtsev-Petviashvili equation is presented. Good agreement between the theoretical and numerical results is obtained.
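
    For reference, a standard normalization of the Kadomtsev-Petviashvili equation is given below; its KP-I branch is the one that supports the lump solutions discussed above. Sign and scaling conventions differ between papers, so this is only the textbook form.

```latex
\[
  \frac{\partial}{\partial x}\left( u_t + 6\,u\,u_x + u_{xxx} \right) + 3\,\sigma^{2}\, u_{yy} = 0,
  \qquad \sigma^{2} = -1 \ \text{(KP-I, supports lumps)}, \qquad \sigma^{2} = +1 \ \text{(KP-II)}.
\]
```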

  16. Missing value imputation for gene expression data by tailored nearest neighbors.

    PubMed

    Faisal, Shahla; Tutz, Gerhard

    2017-04-25

    High dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed for genes that are apt to contribute to the accuracy of imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs if local methods such as nearest neighbors are applied in high dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
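
    A minimal sketch of distance-weighted nearest-neighbor imputation: for each missing entry, weight donor samples by a distance computed on co-observed genes and average their values. The gene weighting that defines "apt" genes in the proposed method is not reproduced here; this is only the plain weighted-kNN baseline.

```python
import numpy as np

def wknn_impute(X, k=5, eps=1e-9):
    """Impute missing values (NaN) in a samples x genes matrix X with a distance-weighted
    average over the k nearest samples. Distances use only co-observed genes. This is a
    plain weighted-kNN sketch, not the tailored gene selection proposed in the paper."""
    X = X.astype(float).copy()
    for i, j in np.argwhere(np.isnan(X)):
        donors = np.where(~np.isnan(X[:, j]))[0]
        dists = []
        for d in donors:
            shared = ~np.isnan(X[i]) & ~np.isnan(X[d])
            if shared.any():
                dists.append((np.sqrt(np.mean((X[i, shared] - X[d, shared]) ** 2)), d))
        dists.sort()
        nearest = dists[:k]
        weights = np.array([1.0 / (dist + eps) for dist, _ in nearest])
        values = np.array([X[d, j] for _, d in nearest])
        X[i, j] = np.sum(weights * values) / np.sum(weights)
    return X

# Tiny example: 6 samples x 4 genes with two missing entries.
rng = np.random.default_rng(0)
data = rng.normal(size=(6, 4))
data[0, 1] = np.nan
data[3, 2] = np.nan
print(np.round(wknn_impute(data, k=3), 2))
```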

  17. Uncinate fasciculus fractional anisotropy correlates with typical use of reappraisal in women but not men

    PubMed Central

    Zuurbier, Lisette A.; Nikolova, Yuliya S.; Ahs, Fredrik; Hariri, Ahmad R.

    2014-01-01

    Emotion regulation refers to strategies through which individuals influence their experience and expression of emotions. Two typical strategies are reappraisal, a cognitive strategy for reframing the context of an emotional experience, and suppression, a behavioral strategy for inhibiting emotional responses. Functional neuroimaging studies have revealed that regions of the prefrontal cortex modulate amygdala reactivity during both strategies, but relatively greater down-regulation of the amygdala occurs during reappraisal. Moreover, these studies demonstrated that engagement of this modulatory circuitry varies as a function of gender. The uncinate fasciculus is a major structural pathway connecting regions of the anterior temporal lobe, including the amygdala, to inferior frontal regions, especially the orbitofrontal cortex. The objective of the current study was to map variability in the structural integrity of the uncinate fasciculus onto individual differences in self-reported typical use of reappraisal and suppression. Diffusion tensor imaging was used in 194 young adults to derive regional fractional anisotropy values for the right and left uncinate fasciculus. All participants also completed the Emotion Regulation Questionnaire. In women but not men, self-reported typical reappraisal use was positively correlated with fractional anisotropy values in a region of the left uncinate fasciculus within the orbitofrontal cortex. In contrast, typical use of suppression was not significantly correlated with fractional anisotropy in any region of the uncinate fasciculus in either men or women. Our data suggest that in women typical reappraisal use is specifically related to the integrity of white matter pathways linking the amygdala and prefrontal cortex. PMID:23398586

  18. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
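
    The specific modifications described in the article are not reproduced here, but as a minimal sketch of the underlying idea, the following symbolic loop (using sympy; all names are chosen for illustration) implements classical Picard iteration for an initial value problem y' = f(x, y), y(x0) = y0.

      import sympy as sp

      x = sp.symbols('x')

      def picard(f, x0, y0, iterations=4):
          # Successive approximations y_{n+1}(x) = y0 + integral from x0 to x of f(t, y_n(t)) dt.
          t = sp.symbols('t')
          y = sp.sympify(y0)                 # start from the constant initial guess y_0(x) = y0
          approximations = [y]
          for _ in range(iterations):
              y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
              approximations.append(sp.expand(y))
          return approximations

      # Example: y' = y, y(0) = 1; the iterates are the partial sums of exp(x).
      for expr in picard(lambda t, y: y, 0, 1):
          print(expr)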

  19. Carbon isotope analyses of n-alkanes released from rapid pyrolysis of oil asphaltenes in a closed system.

    PubMed

    Chen, Shasha; Jia, Wanglu; Peng, Ping'an

    2016-08-15

    Carbon isotope analysis of n-alkanes produced by the pyrolysis of oil asphaltenes is a useful tool for characterizing and correlating oil sources. Low-temperature (320-350°C) pyrolysis lasting 2-3 days is usually employed in such studies. Establishing a rapid pyrolysis method is necessary to reduce the time taken for the pretreatment process in isotope analyses. One asphaltene sample was pyrolyzed in sealed ampoules for different durations (60-120 s) at 610°C. The δ13C values of the pyrolysates were determined by gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS). The molecular characteristics and isotopic signatures of the pyrolysates were investigated for the different pyrolysis durations and compared with results obtained using the normal pyrolysis method, to determine the optimum time interval. Several asphaltene samples derived from various sources were analyzed using this method. The asphaltene pyrolysates of each sample were similar to those obtained by the flash pyrolysis method on similar samples. However, the molecular characteristics of the pyrolysates obtained over durations longer than 90 s showed intensified secondary reactions. The carbon isotopic signatures of individual compounds obtained at pyrolysis durations less than 90 s were consistent with those obtained from typical low-temperature pyrolysis. Several asphaltene samples from various sources released n-alkanes with distinct carbon isotopic signatures. This easy-to-use pyrolysis method, combined with a subsequent purification procedure, can be used to rapidly obtain clean n-alkanes from oil asphaltenes. The carbon isotopic signatures of n-alkanes released from oil asphaltenes from different sources demonstrate the potential application of this method in 'oil-oil' and 'oil-source' correlations. Copyright © 2016 John Wiley & Sons, Ltd.
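
    For readers unfamiliar with the notation, the reported δ13C values are conventionally expressed relative to the VPDB standard as

      \[
      \delta^{13}\mathrm{C} = \left( \frac{\bigl({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\bigr)_{\text{sample}}}{\bigl({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\bigr)_{\text{VPDB}}} - 1 \right) \times 1000\ \text{per mil},
      \]

    so differences of a few per mil between n-alkanes from different asphaltenes constitute the signal exploited in the oil-oil and oil-source correlations discussed above.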

  20. Daytime Secretion of Salivary Cortisol and Alpha-Amylase in Preschool-Aged Children with Autism and Typically Developing Children

    ERIC Educational Resources Information Center

    Kidd, Sharon A.; Corbett, Blythe A.; Granger, Douglas A.; Boyce, W. Thomas; Anders, Thomas F.; Tager, Ira B.

    2012-01-01

    We examined daytime salivary cortisol and salivary alpha-amylase (sAA) secretion levels and variability in preschool-aged children with autism (AUT) and typically developing children (TYP). Fifty-two subjects (26 AUT and 26 TYP) were enrolled. Salivary samples were obtained at waking, midday, and bedtime on two consecutive days at three phases…
