Sample records for random output variation

  1. Mining and Querying Multimedia Data

    DTIC Science & Technology

    2011-09-29

    able to capture more subtle spatial variations such as repetitiveness. Local feature descriptors such as SIFT [74] and SURF [12] have also been widely … empirically set to s = 90%, r = 50%, K = 20, where small variations lead to little perturbation of the output. The pseudo-code of the algorithm is … by constructing a three-layer graph based on clustering outputs, and executing a slight variation of the random walk with restart algorithm. It provided

  2. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.

  3. Simulation of random road microprofile based on specified correlation function

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.

    2018-03-01

    The paper aims to develop a numerical simulation method and an algorithm for a random microprofile of special roads based on a specified correlation function. The paper uses methods of correlation, spectral, and numerical analysis. It proves that, for known expressions of the input spectrum and the output filter characteristics, the transfer function of the generating filter can be calculated using a theorem on nonnegative fractional-rational factorization and integral transformation. The model of the random function equivalent to the real road surface microprofile enables us to assess springing system parameters and identify ranges of variation.
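
The shaping-filter idea in this record, driving a filter with white noise so the output reproduces a specified correlation function, can be sketched numerically. A minimal illustration, assuming a first-order exponential correlation model R(τ) = σ²·exp(−α|τ|) rather than the paper's specific road model (all parameter values below are hypothetical):

```python
import numpy as np

# Pass white noise through a first-order recursive (AR(1)) shaping filter so
# the output has the exponential correlation R(tau) = sigma^2 * exp(-alpha*|tau|).
rng = np.random.default_rng(3)
sigma, alpha, dt, n = 0.01, 0.5, 0.1, 200_000

rho = np.exp(-alpha * dt)                      # one-step correlation
w = rng.normal(0.0, sigma * np.sqrt(1 - rho**2), n)   # white-noise input
h = np.empty(n)
h[0] = rng.normal(0.0, sigma)                  # start in the stationary state
for k in range(1, n):
    h[k] = rho * h[k - 1] + w[k]               # recursive shaping filter

# The sample lag-1 correlation of h should be close to rho, and its
# standard deviation close to sigma.
```

The same recursion, with coefficients derived from the road's measured correlation function instead of an assumed exponential, is the essence of the simulation method the abstract describes.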

  4. Effect of Random Circuit Fabrication Errors on Small Signal Gain and Phase in Helix Traveling Wave Tubes

    NASA Astrophysics Data System (ADS)

    Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.

    2007-11-01

    Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch; C, the gain parameter; and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to the standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.

  5. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is provided for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response, whose mean and variance values are related to those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
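
The workflow this record describes, fitting a response model per subsystem, coupling the models, then randomly redrawing the fitted coefficients to propagate variance to the system output, can be sketched with made-up linear models. All coefficients, uncertainties, and factor settings below are hypothetical, not the patent's:

```python
import numpy as np

# Two coupled subsystems: subsystem 2 takes subsystem 1's output as an input.
# Redrawing the fitted coefficients from their estimated distributions
# propagates the fitting variance through to the system-level output.
rng = np.random.default_rng(1)
n = 100_000

signal, control = 1.0, 0.5          # hypothetical factor settings

# Subsystem 1: fitted coefficients with their standard errors.
b0 = rng.normal(2.0, 0.10, n)
b1 = rng.normal(1.5, 0.05, n)
y1 = b0 + b1 * signal               # subsystem 1 response

# Subsystem 2: its own fitted coefficients, fed by subsystem 1's output.
c0 = rng.normal(0.5, 0.08, n)
c1 = rng.normal(0.9, 0.04, n)
y2 = c0 + c1 * (y1 + control)       # system-level output

print(f"system output: mean {y2.mean():.3f}, std {y2.std(ddof=1):.3f}")
```

Optimizing the signal and control settings against a criterion would then amount to repeating this propagation inside a search loop.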

  6. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
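
The stratify-then-randomly-pair scheme described above can be sketched directly. This is an illustrative NumPy implementation of the basic LHS idea (without the library's correlation restriction or its 30 distribution types), not the library's own code:

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Draw an n_samples x n_vars Latin hypercube sample on [0, 1).

    The range of each variable is split into n_samples equal-probability
    intervals; one point is drawn uniformly inside each interval, and the
    interval order is shuffled independently per variable so the pairing
    across variables is random.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each of the n_samples strata, per variable.
    u = rng.random((n_samples, n_vars))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Shuffle the strata independently for each variable.
    for j in range(n_vars):
        rng.shuffle(strata[:, j])
    return strata

samples = latin_hypercube(10, 3, rng=42)
# Each column contains exactly one point in each of the 10 intervals
# [k/10, (k+1)/10), which is the defining LHS property.
```

Non-uniform distributions are then obtained by pushing each column through the inverse CDF of the desired distribution.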

  7. Apparatus and Method for Compensating for Process, Voltage, and Temperature Variation of the Time Delay of a Digital Delay Line

    NASA Technical Reports Server (NTRS)

    Seefeldt, James (Inventor); Feng, Xiaoxin (Inventor); Roper, Weston (Inventor)

    2013-01-01

    A process, voltage, and temperature (PVT) compensation circuit and a method of continuously generating a delay measure are provided. The compensation circuit includes two delay lines, each delay line providing a delay output. The two delay lines may each include a number of delay elements, which in turn may include one or more current-starved inverters. The number of delay elements may differ between the two delay lines. The delay outputs are provided to a combining circuit that determines an offset pulse based on the two delay outputs and then averages the voltage of the offset pulse to determine a delay measure. The delay measure may be one or more currents or voltages indicating an amount of PVT compensation to apply to input or output signals of an application circuit, such as a memory-bus driver, a dynamic random access memory (DRAM), a synchronous DRAM, a processor, or other clocked circuit.

  8. Solar Pumped Laser

    DTIC Science & Technology

    1976-09-01

    1 dB into 50 ohm load, output VSWR less than 1.5. Phase variation relative to the optical pulse train less than +A.5 Rod Temperature … design of the PSQM laser. All phases of design, mechanical, electronic and optical, borrowed heavily from the EFM lamp pumped laser … optical power input change for the germanium device is twice that for the silicon device, its random phase noise for a typical input of 1 mW optical

  9. Identification of biomechanical nonlinearity in whole-body vibration using a reverse path multi-input-single-output method

    NASA Astrophysics Data System (ADS)

    Huang, Ya; Ferguson, Neil S.

    2018-04-01

    The study implements a classic signal analysis technique, typically applied to structural dynamics, to examine the nonlinear characteristics seen in the apparent mass of a recumbent person during whole-body horizontal random vibration. The nonlinearity in the present context refers to the amount of 'output' that is not correlated or coherent with the 'input', usually indicated by values of the coherence function that are less than unity. The analysis is based on the longitudinal horizontal inline and vertical cross-axis apparent mass of twelve human subjects exposed to 0.25-20 Hz random acceleration vibration at 0.125 and 1.0 m s⁻² r.m.s. The conditioned reverse path frequency response functions (FRF) reveal that the uncorrelated 'linear' relationship between physical input (acceleration) and outputs (inline and cross-axis forces) has much greater variation around the primary resonance frequency between 0.5 and 5 Hz. By reversing the input and outputs of the physical system, it is possible to assemble additional mathematical inputs from the physical output forces and mathematical constructs (e.g. square root of inline force). Depending on the specific construct, this can improve the summed multiple coherence at frequencies where the response magnitude is low. In the present case this is between 6 and 20 Hz. The statistical measures of the response force time histories of each of the twelve subjects indicate that there are potential anatomical 'end-stops' for the sprung mass in the inline axis. No previous study has applied this reverse path multi-input-single-output approach to human vibration kinematic and kinetic data. The implementation demonstrated in the present study will allow new and existing data to be examined using this different analytical tool.

  10. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. 
    Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
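
The error budget in this record is consistent with combining components in quadrature: taking 95% limits as twice the coefficient of variation, a random CV of about 5% gives ±10.0%, a systematic CV of 5.8% gives ±11.6%, and √(10.0² + 11.6²) ≈ ±15.3%; averaging triplicate readings shrinks only the random term, by √3, giving ≈ ±13.0%. A sketch of that arithmetic (the quadrature rule is inferred from the reported numbers, not stated explicitly in the abstract):

```python
import math

def total_precision_error(cv_random_pct, cv_systematic_pct, n_readings=1):
    """Combine independent random and systematic error components in quadrature.

    95% limits are taken as 2 * CV; averaging n readings shrinks only the
    random component, by sqrt(n).
    """
    random_95 = 2.0 * cv_random_pct / math.sqrt(n_readings)
    systematic_95 = 2.0 * cv_systematic_pct
    return math.hypot(random_95, systematic_95)

print(total_precision_error(5.0, 5.8))                # single reading: ~15.3
print(total_precision_error(5.0, 5.8, n_readings=3))  # triplicate: ~13.0
```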

  11. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
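
One of the appendix topics, how many runs are necessary for verification of requirements, has a standard binomial answer that is worth stating: if a failure mode occurs with probability p per run, the smallest run count N for which an all-success outcome would occur with probability below α satisfies N ≥ ln(α)/ln(1 − p). This is the generic argument, not necessarily the exact derivation used in the TP's appendices:

```python
import math

def runs_required(p, alpha=0.10):
    """Smallest N such that (1 - p)**N < alpha: seeing zero failures in N
    runs then rules out, at confidence 1 - alpha, a per-run failure
    probability as large as p."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

print(runs_required(0.01))  # → 230 runs to cover a 1% failure mode at 90% confidence
```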

  12. ASDTIC: A feedback control innovation

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (Analog Signal to Discrete Time Interval Converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  13. ASDTIC - A feedback control innovation.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (analog signal to discrete time interval converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  14. Investigating output and energy variations and their relationship to delivery QA results using Statistical Process Control for helical tomotherapy.

    PubMed

    Binny, Diana; Mezzenga, Emilio; Lancaster, Craig M; Trapp, Jamie V; Kairn, Tanya; Crowe, Scott B

    2017-06-01

    The aims of this study were to investigate machine beam parameters using the TomoTherapy quality assurance (TQA) tool, to establish a correlation to patient delivery quality assurance results, and to evaluate the relationship between energy variations detected using different TQA modules. TQA daily measurement results from two treatment machines for periods of up to 4 years were acquired. Analyses of beam quality and of helical and static output variations were made. Variations from planned dose were also analysed using the Statistical Process Control (SPC) technique, and their relationship to output trends was studied. Energy variations appeared to be one of the contributing factors to the delivery output dose seen in the analysis. Ion chamber measurements were reliable indicators of energy and output variations and were linear with patient dose verifications.

  15. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, G; Cao, F; Szpala, S

    2016-06-15

    Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse with normalization values ranging from 92.80% to 102% to mimic machine outputs ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3 mm) and (2%, 2 mm). Calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criteria. Results: The gamma index (3%, 3 mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration), and showed failures when the machine output deviation reached 3% or more. Gamma (2%, 2 mm) was found to be more sensitive to output variation than gamma (3%, 3 mm), and showed failures when the output deviation reached 1.7% or more. The variation of the gamma indices with output variability also showed dependence upon the plan parameters (e.g. MLC movement and gantry rotation). The variation of the percentage of points passing the gamma criteria with output variation followed a non-linear decrease beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2 mm) is more sensitive to output fluctuations than gamma (3%, 3 mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may make it possible to quantify the dependence of the gammas on machine output, and hence the effect on the quality of delivered RapidArc plans.
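
The gamma comparison used in this record can be illustrated with a simplified global 1-D version. The clinical tools evaluate full 3-D dose grids with interpolation; this brute-force sketch only shows the combined (dose %, distance mm) metric itself, on a hypothetical 5-point profile:

```python
import numpy as np

def gamma_1d(dose_eval, dose_ref, dx_mm, dose_pct=3.0, dist_mm=3.0):
    """Simplified global 1-D gamma index: for each evaluated point, the
    minimum combined (dose difference, distance) metric against the
    reference profile. A point passes when gamma <= 1."""
    x = np.arange(len(dose_ref)) * dx_mm
    d_norm = dose_pct / 100.0 * dose_ref.max()      # global dose criterion
    gammas = np.empty(len(dose_eval))
    for i, (xi, de) in enumerate(zip(x, dose_eval)):
        term = ((x - xi) / dist_mm) ** 2 + ((dose_ref - de) / d_norm) ** 2
        gammas[i] = np.sqrt(term.min())
    return gammas

ref = np.array([10.0, 50.0, 100.0, 50.0, 10.0])     # hypothetical dose profile
# A uniform 2% output scaling stays within the (3%, 3 mm) criterion...
pass_rate = (gamma_1d(ref * 1.02, ref, dx_mm=2.0) <= 1.0).mean()
```

Scaling the evaluated profile by larger output errors drives points above gamma = 1, which is the mechanism behind the pass-rate failures the abstract reports.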

  16. Influence of environmental temperature on 40 km cycling time-trial performance.

    PubMed

    Peiffer, Jeremiah J; Abbiss, Chris R

    2011-06-01

    The purpose of this study was to examine the effect of environmental temperature on variability in power output, self-selected pacing strategies, and performance during a prolonged cycling time trial. Nine trained male cyclists randomly completed four 40 km cycling time trials in an environmental chamber at 17°C, 22°C, 27°C, and 32°C (40% RH). During the time trials, heart rate, core body temperature, and power output were recorded. The variability in power output was assessed with the use of exposure variation analysis. Mean 40 km power output was significantly lower during 32°C (309 ± 35 W) compared with 17°C (329 ± 31 W), 22°C (324 ± 34 W), and 27°C (322 ± 32 W). In addition, greater variability in power production was observed at 32°C compared with 17°C, as evidenced by a lower (P = .03) standard deviation of the exposure variation matrix (2.9 ± 0.5 vs 3.5 ± 0.4 units, respectively). Core temperature was greater (P < .05) at 32°C compared with 17°C and 22°C from 30 to 40 km, and the rate of rise in core temperature throughout the 40 km time trial was greater (P < .05) at 32°C (0.06 ± 0.04°C·km⁻¹) compared with 17°C (0.05 ± 0.05°C·km⁻¹). This study showed that time-trial performance is reduced under hot environmental conditions, and is associated with a shift in the composition of power output. These findings provide insight into the control of pacing strategies during exercise in the heat.

  17. Assessing the Impact of Socioeconomic Variables on Small Area Variations in Suicide Outcomes in England

    PubMed Central

    Congdon, Peter

    2012-01-01

    Ecological studies of suicide and self-harm have established the importance of area variables (e.g., deprivation, social fragmentation) in explaining variations in suicide risk. However, there are likely to be unobserved influences on risk, typically spatially clustered, which can be modeled as random effects. Regression impacts may be biased if no account is taken of spatially structured influences on risk. Furthermore a default assumption of linear effects of area variables may also misstate or understate their impact. This paper considers variations in suicide outcomes for small areas across England, and investigates the impact on them of area socio-economic variables, while also investigating potential nonlinearity in their impact and allowing for spatially clustered unobserved factors. The outcomes are self-harm hospitalisations and suicide mortality over 6,781 Middle Level Super Output Areas. PMID:23271304

  18. Assessing the impact of socioeconomic variables on small area variations in suicide outcomes in England.

    PubMed

    Congdon, Peter

    2012-12-27

    Ecological studies of suicide and self-harm have established the importance of area variables (e.g., deprivation, social fragmentation) in explaining variations in suicide risk. However, there are likely to be unobserved influences on risk, typically spatially clustered, which can be modeled as random effects. Regression impacts may be biased if no account is taken of spatially structured influences on risk. Furthermore a default assumption of linear effects of area variables may also misstate or understate their impact. This paper considers variations in suicide outcomes for small areas across England, and investigates the impact on them of area socio-economic variables, while also investigating potential nonlinearity in their impact and allowing for spatially clustered unobserved factors. The outcomes are self-harm hospitalisations and suicide mortality over 6,781 Middle Level Super Output Areas.

  19. Localization noise in deep subwavelength plasmonic devices

    NASA Astrophysics Data System (ADS)

    Ghoreyshi, Ali; Victora, R. H.

    2018-05-01

    The grain shape dependence of absorption has been investigated in metal-insulator thin films. We demonstrate that randomness in the size and shape of plasmonic particles can lead to Anderson localization of polarization modes in the deep subwavelength regime. These localized modes can contribute to significant variation in the local field. In the case of plasmonic nanodevices, the effects of the localized modes have been investigated by mapping an electrostatic Hamiltonian onto the Anderson Hamiltonian in the presence of a random vector potential. We show that local behavior of the optical beam can be understood in terms of the weighted local density of the localized modes of the depolarization field. Optical nanodevices that operate on a length scale with high variation in the density of states of localized modes will experience a previously unidentified localized noise. This localization noise contributes uncertainty to the output of plasmonic nanodevices and limits their scalability. In particular, the resulting impact on heat-assisted magnetic recording is discussed.

  20. Method and apparatus for in-situ characterization of energy storage and energy conversion devices

    DOEpatents

    Christophersen, Jon P [Idaho Falls, ID]; Motloch, Chester G [Idaho Falls, ID]; Morrison, John L [Butte, MT]; Albrecht, Weston [Layton, UT]

    2010-03-09

    Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.

  1. Survey of the variation in ultraviolet outputs from ultraviolet A sunbeds in Bradford.

    PubMed

    Wright, A L; Hart, G C; Kernohan, E; Twentyman, G

    1996-02-01

    Concerns have been expressed for some time regarding the growth of the cosmetic suntanning industry and the potential harmful effects resulting from these exposures. Recently published work has appeared to confirm a link between sunbed use and skin cancer. A previous survey in Oxford some years ago demonstrated significant output variations, and we have attempted to extend and update that work. Ultraviolet A, UVB and blue-light output measurements were made on 50 sunbeds using a radiometer fitted with broad-band filters and detectors. A number of irradiance measurements were made on each sunbed within each waveband so that the uniformity of the output could also be assessed. UVA outputs varied by a factor of 3, with a mean of 13.5 mW/cm²; UVB outputs varied by a factor of 60, with a mean of 19.2 µW/cm²; and blue-light outputs varied by a factor of 2.5, with a mean of 2.5 mW/cm². Outputs fall on average to 80% of the central value at either end of the sunbed. Facial units in sunbeds ranged in output between 18 and 45 mW/cm². Output uniformity shows wide variation, with 16% of the sunbeds having an axial coefficient of variation > 10%. UVB output is highly tube-specific. Eyewear used in sunbeds should also protect against blue light.

  2. Uncertainty in Measurement: A Review of Monte Carlo Simulation Using Microsoft Excel for the Calculation of Uncertainties Through Functional Relationships, Including Uncertainties in Empirically Derived Constants

    PubMed Central

    Farrance, Ian; Frenkel, Robert

    2014-01-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. 
Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835

  3. Uncertainty in measurement: a review of monte carlo simulation using microsoft excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants.

    PubMed

    Farrance, Ian; Frenkel, Robert

    2014-02-01

    The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM, however, does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates, the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
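    The spreadsheet-based MCS procedure described above translates directly into a few lines of Python. The functional relationship, nominal values, and uncertainties below are invented purely for illustration; the 'constants' a and b stand in for empirically derived constants with their own uncertainties.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

# Hypothetical functional relationship y = a * x / b, with
# IQC-derived standard uncertainties treated as Gaussian.
x = rng.normal(loc=10.0, scale=0.2, size=N)   # measured input quantity
a = rng.normal(loc=1.25, scale=0.01, size=N)  # empirical 'constant'
b = rng.normal(loc=2.0, scale=0.05, size=N)   # empirical 'constant'

# Propagate every trial through the functional relationship; the
# resulting sample approximates the output probability distribution.
y = a * x / b

print(f"mean = {y.mean():.3f}")
print(f"combined standard uncertainty = {y.std(ddof=1):.3f}")
```

The sample standard deviation of y is the combined standard uncertainty, with no differential equations required, exactly as the article argues for the Excel version.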

  4. Radiation Transport in Random Media With Large Fluctuations

    NASA Astrophysics Data System (ADS)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

    Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
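    The lognormal-cross-section construction above can be sketched as follows. A Cholesky factorization stands in here for the Karhunen-Loève decomposition (which plays the same sampling role more efficiently); the grid, correlation length, and log-field variance are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D spatial grid; exponential covariance for the underlying Gaussian.
x = np.linspace(0.0, 5.0, 200)
corr_len = 0.5
sigma_g = 0.4   # std dev of the Gaussian log-field (assumed)
mu_g = 0.0
cov = sigma_g**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Sample the Gaussian process (Cholesky for brevity; a Karhunen-Loeve
# truncation would generate realizations more cheaply).
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(x)))
g = mu_g + L @ rng.standard_normal(len(x))

# Memoryless nonlinear transform: a lognormal random cross section,
# guaranteed positive everywhere.
sigma_t = np.exp(g)

# Ensemble mean is exp(mu_g + sigma_g**2 / 2) ~ 1.08; a single
# realization's spatial mean scatters around that value.
print(sigma_t.mean())
```

Each realization would then be handed to a Woodcock Monte Carlo transport sweep; repeating over many realizations builds up the flux statistics.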

  5. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  6. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  7. Comparisons of Three-Dimensional Variational Data Assimilation and Model Output Statistics in Improving Atmospheric Chemistry Forecasts

    NASA Astrophysics Data System (ADS)

    Ma, Chaoqun; Wang, Tijian; Zang, Zengliang; Li, Zhijin

    2018-07-01

    Atmospheric chemistry models usually perform badly in forecasting wintertime air pollution because of their uncertainties. Generally, such uncertainties can be decreased effectively by techniques such as data assimilation (DA) and model output statistics (MOS). However, the relative importance and combined effects of the two techniques have not been clarified. Here, a one-month air quality forecast with the Weather Research and Forecasting-Chemistry (WRF-Chem) model was carried out in a virtually operational setup focusing on Hebei Province, China. Meanwhile, three-dimensional variational (3DVar) DA and MOS based on one-dimensional Kalman filtering were implemented separately and simultaneously to investigate their performance in improving the model forecast. Comparison with observations shows that the chemistry forecast with MOS outperforms that with 3DVar DA, which could be seen in all the species tested over the whole 72 forecast hours. Combined use of both techniques does not guarantee a better forecast than MOS only, with the improvements and degradations being small and appearing rather randomly. Results indicate that the implementation of MOS is more suitable than 3DVar DA in improving the operational forecasting ability of WRF-Chem.
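    A minimal sketch of MOS via one-dimensional Kalman filtering, the technique named above: a scalar filter tracks the slowly varying forecast bias and subtracts it from each new forecast. The noise parameters and synthetic data are assumptions for illustration, not the WRF-Chem study's configuration.

```python
import numpy as np

def kalman_mos(forecasts, observations, q=0.05, r=1.0):
    """Track the slowly varying forecast bias with a scalar Kalman
    filter and return bias-corrected forecasts (a minimal sketch)."""
    bias, p = 0.0, 1.0              # bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)  # correct with the current estimate
        p += q                      # predict: bias modeled as random walk
        k = p / (p + r)             # Kalman gain
        bias += k * ((f - o) - bias)  # update with the observed error
        p *= (1.0 - k)
    return np.array(corrected)

# Synthetic example: model output with a constant +5 bias.
rng = np.random.default_rng(1)
truth = 50 + 10 * np.sin(np.linspace(0, 6, 200))
obs = truth + rng.normal(0, 0.5, 200)
fcst = truth + 5.0 + rng.normal(0, 0.5, 200)

raw_err = np.abs(fcst - obs).mean()
mos_err = np.abs(kalman_mos(fcst, obs) - obs).mean()
print(raw_err, mos_err)  # the MOS correction cuts the mean error
```

Because the filter only needs past forecast-observation pairs, it is cheap to run per station and per species, which is one reason MOS competes so well with full 3DVar DA here.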

  8. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.

  9. High power tunable mid-infrared optical parametric oscillator enabled by random fiber laser.

    PubMed

    Wu, Hanshuo; Wang, Peng; Song, Jiaxin; Ye, Jun; Xu, Jiangming; Li, Xiao; Zhou, Pu

    2018-03-05

    The random fiber laser, a novel type of fiber laser that utilizes random distributed feedback together with Raman gain, has become a research focus owing to its wavelength flexibility, modeless property and output stability. Herein, a tunable optical parametric oscillator (OPO) enabled by a random fiber laser is reported for the first time. By exploiting a tunable random fiber laser to pump the OPO, the central wavelength of the idler light can be continuously tuned from 3977.34 to 4059.65 nm with stable temporal average output power. The maximal output power achieved is 2.07 W. To the best of our knowledge, this is the first demonstration of a continuous-wave tunable OPO pumped by a tunable random fiber laser, which could not only provide a new approach for achieving tunable mid-infrared (MIR) emission, but also extend the application scenarios of random fiber lasers.

  10. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
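    The selection logic of the patent reduces to a simple software analogue: if the interval between activation and deactivation jitters over many counter cycles, the stopped count is effectively uniform on 0..N-1, and a count of zero selects the item with probability 1/N.

```python
import random

def one_of_n_selected(n, rng=random):
    """Software analogue of the patent's selector: a counter cycles
    0..n-1 very fast, and the item is 'selected' if the counter
    happens to read zero when the (environmentally jittered)
    activation window closes."""
    # The window duration varies by much more than one counter cycle,
    # so the stopping count is uniformly distributed.
    stop_count = rng.randrange(n)
    return stop_count == 0

# On average one item in N is selected.
rng = random.Random(7)
n = 5
trials = 100_000
hits = sum(one_of_n_selected(n, rng) for _ in range(trials))
print(hits / trials)  # ~ 1/5
```

In the hardware, the uniformity comes for free: temperature-sensitive components and human timing variation smear the window duration across many counter periods.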

  11. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion chamber based devices over three years and for fifteen linear accelerators of varying make and model, were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2-3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out-of-control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that output of machines is maintained within set tolerances and interventions are required less frequently.
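    The seasonal characterization described above amounts to fitting an annual sinusoid to daily output data. A least-squares sketch on synthetic data (amplitude and peak day chosen to mimic the reported ~1% early-April peak; noise level assumed) recovers the amplitude and phase.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily 6 MV output (% deviation): ~1% seasonal sinusoid
# peaking near day 95 (early April), plus measurement noise.
days = np.arange(3 * 365)
true_amp, peak_day = 1.0, 95
output = true_amp * np.cos(2 * np.pi * (days - peak_day) / 365)
output += rng.normal(0, 0.3, days.size)

# Linear least squares on a cos/sin basis with a fixed annual period.
A = np.column_stack([np.cos(2 * np.pi * days / 365),
                     np.sin(2 * np.pi * days / 365),
                     np.ones(days.size)])
coef, *_ = np.linalg.lstsq(A, output, rcond=None)
amp = np.hypot(coef[0], coef[1])                      # seasonal amplitude
phase_day = (np.arctan2(coef[1], coef[0]) * 365 / (2 * np.pi)) % 365

print(amp, phase_day)  # recovered amplitude ~1%, peak near day 95
```

Subtracting the fitted sinusoid from the runtime plot is what separates innate seasonal behaviour from the linear trends flagged as true out-of-control states.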

  13. Method and apparatus for determining position using global positioning satellites

    NASA Technical Reports Server (NTRS)

    Ward, John (Inventor); Ward, William S. (Inventor)

    1998-01-01

    A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band pass filter and a low noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause the pseudo-random code generator to sequentially output pseudo-random codes corresponding to satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer is indicative of the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slows the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor will output pseudo-range information according to an interrupt routine in response to detection of correlation. The pseudo-range information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.

  14. Scram signal generator

    DOEpatents

    Johanson, Edward W.; Simms, Richard

    1981-01-01

    A scram signal generating circuit for nuclear reactor installations monitors a flow signal representing the flow rate of the liquid sodium coolant which is circulated through the reactor, and initiates reactor shutdown for a rapid variation in the flow signal, indicative of fuel motion. The scram signal generating circuit includes a long-term drift compensation circuit which processes the flow signal and generates an output signal representing the flow rate of the coolant. The output signal remains substantially unchanged for small variations in the flow signal, attributable to long term drift in the flow rate, but a rapid change in the flow signal, indicative of a fast flow variation, causes a corresponding change in the output signal. A comparator circuit compares the output signal with a reference signal, representing a given percentage of the steady state flow rate of the coolant, and generates a scram signal to initiate reactor shutdown when the output signal equals the reference signal.
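    The drift-compensation idea can be sketched as a slowly tracking baseline plus a threshold comparator: the baseline absorbs long-term drift, while a rapid drop in the compensated signal trips the scram. The tracking rate and trip fraction below are illustrative, not the patent's component values.

```python
def scram_monitor(flow_samples, alpha=0.001, trip_fraction=0.9):
    """Sketch of the patent's idea: a slowly tracking baseline absorbs
    long-term drift, while a rapid flow change trips the scram when
    the signal falls below a set fraction of the baseline.
    Returns the sample index of the trip, or None."""
    baseline = flow_samples[0]
    for t, f in enumerate(flow_samples):
        baseline += alpha * (f - baseline)   # follows long-term drift
        if f < trip_fraction * baseline:     # fast drop -> scram signal
            return t
    return None

# Slow drift alone never trips; a sudden 20% flow drop does.
slow = [100 - 0.001 * t for t in range(5000)]
assert scram_monitor(slow) is None
sudden = [100.0] * 100 + [78.0] * 10
print(scram_monitor(sudden))  # trips at the step, index 100
```

The small tracking rate is the software analogue of the long-term drift compensation circuit: it lets the reference follow gradual flow-rate changes while remaining effectively constant over the timescale of a fast flow variation.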

  15. Scram signal generator

    DOEpatents

    Johanson, E.W.; Simms, R.

    A scram signal generating circuit for nuclear reactor installations monitors a flow signal representing the flow rate of the liquid sodium coolant which is circulated through the reactor, and initiates reactor shutdown for a rapid variation in the flow signal, indicative of fuel motion. The scram signal generating circuit includes a long-term drift compensation circuit which processes the flow signal and generates an output signal representing the flow rate of the coolant. The output signal remains substantially unchanged for small variations in the flow signal, attributable to long term drift in the flow rate, but a rapid change in the flow signal, indicative of a fast flow variation, causes a corresponding change in the output signal. A comparator circuit compares the output signal with a reference signal, representing a given percentage of the steady state flow rate of the coolant, and generates a scram signal to initiate reactor shutdown when the output signal equals the reference signal.

  16. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
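    A Toeplitz-hashing extractor like the one applied above can be sketched as binary matrix multiplication over GF(2); an m x n Toeplitz matrix needs only n + m - 1 seed bits. The input/output sizes here are illustrative, not the experiment's 5.12-bits-per-sample configuration.

```python
import numpy as np

def toeplitz_extract(raw_bits, m, seed_bits):
    """Toeplitz-hashing randomness extractor (sketch): multiply n raw
    bits by a random m x n binary Toeplitz matrix over GF(2). The
    matrix is fully specified by n + m - 1 seed bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    # Row i of the Toeplitz matrix is the window seed_bits[i : i + n];
    # any fixed sliding-window convention works for this sketch.
    rows = np.array([seed_bits[i:i + n] for i in range(m)])
    return rows.dot(raw_bits) % 2

rng = np.random.default_rng(5)
raw = rng.integers(0, 2, size=64)            # raw (possibly biased) bits
seed = rng.integers(0, 2, size=64 + 32 - 1)  # seed for the Toeplitz matrix
out = toeplitz_extract(raw, 32, seed)
print(out.shape)  # 32 extracted bits
```

The ratio m/n is set from the estimated min-entropy of the raw data; in the experiment above the Gaussian-distributed raw samples yielded 5.12 unbiased bits per sample.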

  17. Source-Independent Quantum Random Number Generation

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit string. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5×10³ bits/s.

  18. Apparatus for synthesis of a solar spectrum

    DOEpatents

    Sopori, Bhushan L.

    1993-01-01

    A xenon arc lamp and a tungsten filament lamp provide light beams that together contain all the wavelengths required to accurately simulate a solar spectrum. Suitable filter apparatus selectively direct visible and ultraviolet light from the xenon arc lamp into two legs of a trifurcated randomized fiber optic cable. Infrared light selectively filtered from the tungsten filament lamp is directed into the third leg of the fiber optic cable. The individual optic fibers from the three legs are brought together in a random fashion into a single output leg. The output beam emanating from the output leg of the trifurcated randomized fiber optic cable is extremely uniform and contains wavelengths from each of the individual filtered light beams. This uniform output beam passes through suitable collimation apparatus before striking the surface of the solar cell being tested. Adjustable aperture apparatus located between the lamps and the input legs of the trifurcated fiber optic cable can be selectively adjusted to limit the amount of light entering each leg, thereby providing a means of "fine tuning" or precisely adjusting the spectral content of the output beam. Finally, an adjustable aperture apparatus may also be placed in the output beam to adjust the intensity of the output beam without changing the spectral content and distribution of the output beam.

  19. Unbiased All-Optical Random-Number Generator

    NASA Astrophysics Data System (ADS)

    Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja

    2017-10-01

    The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.

  20. Improving electrofishing catch consistency by standardizing power

    USGS Publications Warehouse

    Burkhardt, Randy W.; Gutreuter, Steve

    1995-01-01

    The electrical output of electrofishing equipment is commonly standardized by using either constant voltage or constant amperage. However, simplified circuit and wave theories of electricity suggest that standardization of power (wattage) available for transfer from water to fish may be critical for effective standardization of electrofishing. Electrofishing with standardized power ensures that constant power is transferable to fish regardless of water conditions. The in situ performance of standardized power output is poorly known. We used data collected by the interagency Long Term Resource Monitoring Program (LTRMP) in the upper Mississippi River system to assess the effectiveness of standardizing power output. The data consisted of 278 electrofishing collections, comprising 9,282 fishes in eight species groups, obtained during 1990 from main channel border, backwater, and tailwater aquatic areas in four reaches of the upper Mississippi River and one reach of the Illinois River. Variation in power output explained an average of 14.9% of catch variance for night electrofishing and 12.1% for day electrofishing. Three patterns in catch per unit effort were observed for different species: increasing catch with increasing power, decreasing catch with increasing power, and no power-related pattern. Therefore, in addition to reducing catch variation, controlling power output may provide some capability to select particular species. The LTRMP adopted standardized power output beginning in 1991; standardized power output is adjusted for variation in water conductivity and water temperature by reference to a simple chart. Our data suggest that by standardizing electrofishing power output, the LTRMP has eliminated substantial amounts of catch variation at virtually no additional cost.
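    Standardizing power can be sketched with an impedance-matching-style transfer ratio: when water and fish conductivities differ, more applied power is needed to keep the power transferable to fish constant. The functional form and the effective fish conductivity (115 µS/cm) are assumptions for illustration; the LTRMP itself uses a simple lookup chart.

```python
def standardized_power(goal_power_w, water_cond_us, fish_cond_us=115.0):
    """Applied power needed so that a constant power is transferable
    to fish, using a conductivity-matching ratio (a sketch; the LTRMP
    chart values are not reproduced here)."""
    cm = water_cond_us / fish_cond_us
    # Power-transfer ratio peaks at 1 when conductivities match,
    # by analogy with impedance matching.
    ratio = 4.0 * cm / (1.0 + cm) ** 2
    return goal_power_w / ratio

# Matched water needs the goal power; mismatched water needs more.
print(standardized_power(100.0, 115.0))  # matched: 100.0 W
print(standardized_power(100.0, 500.0))  # mismatched: > 100 W required
```

Adjusting the applied wattage this way, per sampling site, is what holds the effective dose to the fish constant across water conditions.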

  1. Shortcomings with Tree-Structured Edge Encodings for Neural Networks

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2004-01-01

    In evolutionary algorithms a common method for encoding neural networks is to use a tree structured assembly procedure for constructing them. Since node operators have difficulties in specifying edge weights and these operators are execution-order dependent, an alternative is to use edge operators. Here we identify three problems with edge operators: in the initialization phase most randomly created genotypes produce an incorrect number of inputs and outputs; variation operators can easily change the number of input/output (I/O) units; and units have a connectivity bias based on their order of creation. Instead of creating I/O nodes as part of the construction process we propose using parameterized operators to connect to preexisting I/O units. Results from experiments show that these parameterized operators greatly improve the probability of creating and maintaining networks with the correct number of I/O units, remove the connectivity bias with I/O units and produce better controllers for a goal-scoring task.

  2. Lagrangian Turbulence and Transport in Semi-Enclosed Basins and Coastal Regions

    DTIC Science & Technology

    2008-09-30

    P.M. Poulain, R. Signell, J. Chiggiato, S. Carniel, 2008: Variational analysis of drifter positions and model outputs for the reconstruction of surface currents in the Central

  3. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    NASA Astrophysics Data System (ADS)

    Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.

    2014-12-01

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.
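    The random-switching core of the DSMT can be sketched numerically: static per-gyro gain errors (assumed values) become wideband noise once the selected gyro changes randomly every sample, and a low-pass filter then suppresses the out-of-band part.

```python
import numpy as np

rng = np.random.default_rng(9)

# Four nominally identical gyros, each with a small random scale
# error from manufacturing (values assumed for illustration).
n_gyros, n_samples = 4, 4096
scale_err = 1.0 + rng.normal(0, 0.01, n_gyros)   # per-gyro gain error
true_rate = np.ones(n_samples)                   # constant input rate

readings = scale_err[:, None] * true_rate        # each gyro's output

# DSMT: at every sample, randomly switch which gyro feeds the output,
# spreading the static gain-error noise across the spectrum.
pick = rng.integers(0, n_gyros, n_samples)
dsmt_out = readings[pick, np.arange(n_samples)]

# A low-pass filter (moving average here) then removes the spread
# out-of-band noise, leaving the ensemble-average behavior.
k = 64
filtered = np.convolve(dsmt_out, np.ones(k) / k, mode='valid')

print(dsmt_out.std(), filtered.std())  # filtering shrinks the noise
```

The residual output converges toward the ensemble mean of the gyro gains rather than any single gyro's error, which is the matching benefit the papers quantify analytically.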

  4. A dynamic system matching technique for improving the accuracy of MEMS gyroscopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubberud, Peter A., E-mail: stubber@ee.unlv.edu; Stubberud, Stephen C., E-mail: scstubberud@ieee.org; Stubberud, Allen R., E-mail: stubberud@att.net

    A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.

  5. The effects of goal-directed fluid therapy based on dynamic parameters on post-surgical outcome: a meta-analysis of randomized controlled trials.

    PubMed

    Benes, Jan; Giglio, Mariateresa; Brienza, Nicola; Michard, Frederic

    2014-10-28

    Dynamic predictors of fluid responsiveness, namely systolic pressure variation, pulse pressure variation, stroke volume variation and pleth variability index have been shown to be useful to identify in advance patients who will respond to a fluid load by a significant increase in stroke volume and cardiac output. As a result, they are increasingly used to guide fluid therapy. Several randomized controlled trials have tested the ability of goal-directed fluid therapy (GDFT) based on dynamic parameters (GDFTdyn) to improve post-surgical outcome. These studies have yielded conflicting results. Therefore, we performed this meta-analysis to investigate whether the use of GDFTdyn is associated with a decrease in post-surgical morbidity. A systematic literature review, using MEDLINE, EMBASE, and The Cochrane Library databases through September 2013 was conducted. Data synthesis was obtained by using odds ratio (OR) and weighted mean difference (WMD) with 95% confidence interval (CI) by random-effects model. In total, 14 studies met the inclusion criteria (961 participants). Post-operative morbidity was reduced by GDFTdyn (OR 0.51; CI 0.34 to 0.75; P <0.001). This effect was related to a significant reduction in infectious (OR 0.45; CI 0.27 to 0.74; P = 0.002), cardiovascular (OR 0.55; CI 0.36 to 0.82; P = 0.004) and abdominal (OR 0.56; CI 0.37 to 0.86; P = 0.008) complications. It was associated with a significant decrease in ICU length of stay (WMD -0.75 days; CI -1.37 to -0.12; P = 0.02). In surgical patients, we found that GDFTdyn decreased post-surgical morbidity and ICU length of stay. Because of the heterogeneity of studies analyzed, large prospective clinical trials would be useful to confirm our findings.
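    The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator; the 2x2 tables below are hypothetical, not the trial data analyzed above.

```python
import math

def pooled_or_random_effects(studies):
    """DerSimonian-Laird random-effects pooling of odds ratios from
    2x2 tables given as (events_trt, n_trt, events_ctl, n_ctl);
    a generic sketch, not the meta-analysis's actual data."""
    y, w = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c
        y.append(math.log((a * d) / (b * c)))   # log odds ratio
        w.append(1.0 / (1/a + 1/b + 1/c + 1/d))  # inverse-variance weight
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # heterogeneity
    k = len(y)
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c_)          # between-study variance
    w_star = [1.0 / (1.0 / wi + tau2) for wi in w]
    pooled = sum(ws * yi for ws, yi in zip(w_star, y)) / sum(w_star)
    return math.exp(pooled)

# Hypothetical trials: (events_trt, n_trt, events_ctl, n_ctl)
trials = [(10, 100, 20, 100), (8, 80, 15, 80), (12, 120, 18, 110)]
print(pooled_or_random_effects(trials))  # OR < 1 favours treatment
```

The between-study variance tau² is what distinguishes the random-effects model from a fixed-effect analysis; with heterogeneous trials it widens the weights and the resulting confidence interval.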

  6. Efficiency of static core turn-off in a system-on-a-chip with variation

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-29

    A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
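
    In software terms, the comparison step of this claim reduces to an agreement check between the design-stage and test-stage analyses. A minimal sketch; the behavior on disagreement (returning None) is an assumption of this sketch, since the claim only specifies the agreement case:

```python
def select_core_to_turn_off(design_stage_core, test_stage_core):
    """Return the design-stage candidate core only when the design-stage
    and testing-stage turn-off analyses name the same core to turn off."""
    if design_stage_core == test_stage_core:
        return design_stage_core        # the claim's "third output"
    return None                         # disagreement case: assumed behavior

print(select_core_to_turn_off(3, 3))   # → 3
```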

  7. Effects of gustatory stimulants of salivary secretion on salivary pH and flow: a randomized controlled trial.

    PubMed

    da Mata, A D S P; da Silva Marques, D N; Silveira, J M L; Marques, J R O F; de Melo Campos Felino, E T; Guilherme, N F R P M

    2009-04-01

    To compare salivary pH changes and stimulation efficacy of two different gustatory stimulants of salivary secretion (GSSS). Portuguese Dental Faculty Clinic. Double blind randomized controlled trial. One hundred and twenty volunteers were randomized to two intervention groups. Sample size was calculated using an alpha error of 0.05 and a beta of 0.20. Participants were randomly assigned to receive a new gustatory stimulant of salivary secretion containing a weaker malic acid, fluoride and xylitol, or a traditional citric acid-based one. Saliva collection was obtained by established methods at different times. The salivary pH of the samples was determined with a pH meter and a microelectrode. Salivary pH variations and counts of subjects with pH below 5.5 for over 1 min and stimulated salivary flow were the main outcome measures. Both GSSS significantly stimulated salivary output without significant differences between the two groups. The new gustatory stimulant of salivary secretion presented a risk reduction of 80 +/- 10.6% (95% CI) when compared with the traditional one. Gustatory stimulants of salivary secretion with fluoride, xylitol and lower acid content maintain similar salivary stimulation capacity while significantly reducing the predicted dental erosion potential.

  8. Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1997-01-01

    A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis.
Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.

  9. Random pulse generator

    NASA Technical Reports Server (NTRS)

    Lindsey, R. S., Jr. (Inventor)

    1975-01-01

    An exemplary embodiment of the present invention provides a source of random width and random spaced rectangular voltage pulses whose mean or average frequency of operation is controllable within prescribed limits of about 10 hertz to 1 megahertz. A pair of thin-film metal resistors are used to provide a differential white noise voltage pulse source. Pulse shaping and amplification circuitry provide relatively short duration pulses of constant amplitude which are applied to anti-bounce logic circuitry to prevent ringing effects. The pulse outputs from the anti-bounce circuits are then used to control two one-shot multivibrators whose output comprises the random length and random spaced rectangular pulses. Means are provided for monitoring, calibrating and evaluating the relative randomness of the generator.
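
    A software analogue of this noise-driven pulse source can illustrate the idea: Gaussian samples stand in for the thin-film resistor noise, and a threshold crossing stands in for the pulse-shaping stage that triggers the one-shots. All parameter values here are illustrative choices, not those of the patented circuit:

```python
import random

def random_pulse_train(n_samples, threshold=1.0, seed=42):
    """Simulate a noise-driven random pulse generator: Gaussian 'resistor
    noise' samples trigger an event whenever the noise exceeds the
    threshold, yielding randomly spaced pulse edges."""
    rng = random.Random(seed)
    edges = [t for t in range(n_samples) if rng.gauss(0.0, 1.0) > threshold]
    # spacings between events (analogous to the random pulse spacing)
    gaps = [b - a for a, b in zip(edges, edges[1:])]
    return edges, gaps

edges, gaps = random_pulse_train(10_000)
print(len(edges), min(gaps), max(gaps))
```

    Raising the threshold lowers the mean event rate, which is loosely analogous to the patent's control of the mean operating frequency within prescribed limits.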

  10. Active flutter suppression using optimal output feedback digital controllers

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A convergent algorithm is employed to determine constrained control law parameters that minimize an infinite time discrete quadratic performance index. Low order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Sample rate variation, prefilter pole variation, control structure variation and gain scheduling are discussed. A digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.

  11. Extracting random numbers from quantum tunnelling through a single diode.

    PubMed

    Bernardo-Gavito, Ramón; Bagci, Ibrahim Ethem; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J; Woodhead, Christopher S; Missous, Mohamed; Roedig, Utz; Young, Robert J

    2017-12-19

    Random number generation is crucial in many aspects of everyday life, as online security and privacy depend ultimately on the quality of random numbers. Many current implementations are based on pseudo-random number generators, but information security requires true random numbers for sensitive applications like key generation in banking, defence or even social media. True random number generators are systems whose outputs cannot be determined, even if their internal structure and response history are known. Sources of quantum noise are thus ideal for this application due to their intrinsic uncertainty. In this work, we propose using resonant tunnelling diodes as practical true random number generators based on a quantum mechanical effect. The output of the proposed devices can be directly used as a random stream of bits or can be further distilled using randomness extraction algorithms, depending on the application.
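
    The "further distilled using randomness extraction algorithms" step can be illustrated with the classic von Neumann extractor. This is not necessarily the extractor the authors use, but it shows how a biased (yet independent) raw bit stream from a physical source is debiased:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: consume raw bits in non-overlapping pairs;
    emit the first bit of each unequal pair, discard equal pairs.
    Output is unbiased if the input bits are independent, even if biased."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]
print(von_neumann_extract(raw))  # → [0, 1, 0]
```

    The cost is throughput: a fraction 2p(1-p) of pairs survive, so heavily biased sources yield few output bits per raw bit.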

  12. TU-F-CAMPUS-T-04: Variations in Nominally Identical Small Fields From Photon Jaw Reproducibility and Associated Effects On Small Field Dosimetric Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B R; McEwen, M R

    2015-06-15

    Purpose: To investigate uncertainties in small field output factors and detector specific correction factors from variations in field size for nominally identical fields using measurements and Monte Carlo simulations. Methods: Repeated measurements of small field output factors are made with the Exradin W1 (plastic scintillation detector) and the PTW microDiamond (synthetic diamond detector) in beams from the Elekta Precise linear accelerator. We investigate corrections for a 0.6x0.6 cm{sup 2} nominal field size shaped with secondary photon jaws at 100 cm source to surface distance (SSD). Measurements of small field profiles are made in a water phantom at 10 cm depth using both detectors and are subsequently used for accurate detector positioning. Supplementary Monte Carlo simulations with EGSnrc are used to calculate the absorbed dose to the detector and absorbed dose to water under the same conditions when varying field size. The jaws in the BEAMnrc model of the accelerator are varied by a reasonable amount to investigate the same situation without the influence of measurement uncertainties (such as detector positioning or variation in beam output). Results: For both detectors, small field output factor measurements differ by up to 11% when repeated measurements are made in nominally identical 0.6x0.6 cm{sup 2} fields. Variations in the FWHM of measured profiles are consistent with field size variations reported by the accelerator. Monte Carlo simulations of the dose to detector vary by up to 16% under worst case variations in field size. These variations are also present in calculations of absorbed dose to water. However, calculated detector specific correction factors are within 1% when varying field size because of cancellation of effects.
Conclusion: Clinical physicists should be aware of potentially significant uncertainties in measured output factors required for dosimetry of small fields due to field size variations for nominally identical fields.

  13. Transfer of Timing Information from RGC to LGN Spike Trains

    NASA Astrophysics Data System (ADS)

    Teich, Malvin C.; Lowen, Steven B.; Saleh, Bahaa E. A.; Kaplan, Ehud

    1998-03-01

    We have studied the firing patterns of retinal ganglion cells (RGCs) and their target lateral geniculate nucleus (LGN) cells. We find that clusters of spikes in the RGC neural firing pattern appear at the LGN output essentially unchanged, while isolated RGC firing events are more likely to be eliminated; the LGN action-potential sequence is therefore not merely a randomly deleted version of the RGC spike train. Employing information-theoretic techniques we developed for point processes [B. E. A. Saleh and M. C. Teich, Phys. Rev. Lett. 58, 2656-2659 (1987)], we are able to estimate the information efficiency of the LGN neuronal output, i.e., the proportion of the variation in the LGN firing pattern that carries information about its associated RGC input. A suitably modified integrate-and-fire neural model reproduces both the enhanced clustering in the LGN data (which accounts for the increased coefficient of variation) and the measured value of information efficiency, as well as mimicking the results of other observed statistical measures. Reliable information transmission therefore coexists with fractal fluctuations, which appear in both RGC and LGN firing patterns [M. C. Teich, C. Heneghan, S. B. Lowen, T. Ozaki, and E. Kaplan, J. Opt. Soc. Am. A 14, 529-546 (1997)].

  14. Note: The design of thin gap chamber simulation signal source based on field programmable gate array.

    PubMed

    Hu, Kun; Lu, Houbing; Wang, Xu; Li, Feng; Liang, Futian; Jin, Ge

    2015-01-01

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. To match the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array that randomly outputs 256 channels of simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniform in histogram and that the whole system has high reliability.
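
    A toy model can make the jitter-based entropy source concrete: each ring-oscillator period is a nominal value plus Gaussian jitter, and sampling the oscillator's phase (here, the parity of the accumulated cycle count) at coarse intervals yields bits that depend on the accumulated, unpredictable jitter. All parameters below are illustrative, not taken from the design described above:

```python
import random

def jitter_bits(n_bits, period=1.0, jitter=0.2, sample_every=100.0, seed=1):
    """Toy ring-oscillator TRNG model: accumulate jittered periods and
    sample the parity of the cycle count once per coarse interval."""
    rng = random.Random(seed)           # stands in for physical jitter
    t = 0.0
    bits = []
    for _ in range(n_bits):
        target = t + sample_every
        cycles = 0
        while t < target:
            t += period + rng.gauss(0.0, jitter)   # jittered period
            cycles += 1
        bits.append(cycles & 1)         # parity of cycle count = phase bit
    return bits

bits = jitter_bits(1000)
print(sum(bits) / len(bits))            # fraction of ones, roughly 0.5
```

    Because phase uncertainty grows with the accumulated jitter, coarser sampling intervals give better-mixed (closer to uniform) bits, which is why real designs sample ring oscillators far below their oscillation frequency.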

  15. Note: The design of thin gap chamber simulation signal source based on field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Kun; Wang, Xu; Li, Feng

    The Thin Gap Chamber (TGC) is an important part of the ATLAS detector at the LHC accelerator. To match the characteristics of the TGC detector's output signal, we have designed a simulation signal source. The core of the design is based on a field programmable gate array that randomly outputs 256 channels of simulation signals. The signals are generated by a true random number generator whose source of randomness is the timing jitter in ring oscillators. The experimental results show that the random numbers are uniform in histogram and that the whole system has high reliability.

  16. Time-varying output performances of piezoelectric vibration energy harvesting under nonstationary random vibrations

    NASA Astrophysics Data System (ADS)

    Yoon, Heonjun; Kim, Miso; Park, Choon-Su; Youn, Byeng D.

    2018-01-01

    Piezoelectric vibration energy harvesting (PVEH) has received much attention as a potential solution that could ultimately realize self-powered wireless sensor networks. Since most ambient vibrations in nature are inherently random and nonstationary, the output performances of PVEH devices also randomly change with time. However, little attention has been paid to investigating the randomly time-varying electroelastic behaviors of PVEH systems both analytically and experimentally. The objective of this study is thus to make a step forward towards a deep understanding of the time-varying performances of PVEH devices under nonstationary random vibrations. Two typical cases of nonstationary random vibration signals are considered: (1) randomly-varying amplitude (amplitude modulation; AM) and (2) randomly-varying amplitude with randomly-varying instantaneous frequency (amplitude and frequency modulation; AM-FM). In both cases, this study pursues well-balanced correlations of analytical predictions and experimental observations to deduce the relationships between the time-varying output performances of the PVEH device and two primary input parameters, such as a central frequency and an external electrical resistance. We introduce three correlation metrics to quantitatively compare analytical prediction and experimental observation, including the normalized root mean square error, the correlation coefficient, and the weighted integrated factor. Analytical predictions are in an excellent agreement with experimental observations both mechanically and electrically. This study provides insightful guidelines for designing PVEH devices to reliably generate electric power under nonstationary random vibrations.
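
    Two of the three correlation metrics named above, the normalized root mean square error and the Pearson correlation coefficient, are standard and easy to sketch (the weighted integrated factor is specific to the paper and omitted here; the prediction/observation data below are hypothetical):

```python
import math

def nrmse(pred, obs):
    """Root mean square error normalized by the observed range."""
    mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    return math.sqrt(mse) / (max(obs) - min(obs))

def corrcoef(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical predicted vs. measured output voltages
pred = [1.0, 2.1, 2.9, 4.2]
obs = [1.1, 2.0, 3.1, 4.0]
print(round(nrmse(pred, obs), 3), round(corrcoef(pred, obs), 3))
```

    Low NRMSE together with a correlation coefficient near 1 is the kind of "excellent agreement" between analytical prediction and experiment that the abstract reports.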

  17. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Depending on detector number, there are random fluctuations in the background level for spectral band 1 of magnitudes ranging from 2 to 3.5 digital numbers (DN). Similar variability is observed in all the other reflective bands, but with smaller magnitude in the range 0.5 to 2.5 DN. Observations of background reference levels show that line dependent variations in raw TM image data and in the associated calibration data can be measured and corrected within an operational environment by applying simple offset corrections on a line-by-line basis. The radiometric calibration procedure defined by the Canadian Center for Remote Sensing was revised accordingly in order to prevent striping in the output product.
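
    The line-by-line offset correction described here can be sketched as follows, assuming, purely for illustration, that each scan line carries a couple of dark-reference samples at its start from which the background level is estimated:

```python
def destripe(image, reference_level):
    """Line-by-line offset correction: estimate each scan line's background
    from its leading dark-reference pixels and shift the whole line so the
    background matches a common reference level (removes striping)."""
    corrected = []
    for line in image:
        background = sum(line[:2]) / 2.0        # mean of dark-reference pixels
        offset = reference_level - background
        corrected.append([px + offset for px in line])
    return corrected

# toy 3-line image whose backgrounds are shifted by 0, +2 and -1 DN
img = [[5, 5, 40, 41],
       [7, 7, 42, 43],
       [4, 4, 39, 40]]
out = destripe(img, reference_level=5.0)
print(out)
```

    After correction, all three lines share the same background, so line-to-line striping of a few DN (as reported for the TM bands) disappears from the output product.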

  18. Method of controlling cyclic variation in engine combustion

    DOEpatents

    Davis, L.I. Jr.; Daw, C.S.; Feldkamp, L.A.; Hoard, J.W.; Yuan, F.; Connolly, F.T.

    1999-07-13

    Cyclic variation in combustion of a lean burning engine is reduced by detecting an engine combustion event output such as torsional acceleration in a cylinder (i) at a combustion event (k), using the detected acceleration to predict a target acceleration for the cylinder at the next combustion event (k+1), modifying the target output by a correction term that is inversely proportional to the average phase of the combustion event output of cylinder (i) and calculating a control output such as fuel pulse width or spark timing necessary to achieve the target acceleration for cylinder (i) at combustion event (k+1) based on anti-correlation with the detected acceleration and spill-over effects from fueling. 27 figs.

  19. Method of controlling cyclic variation in engine combustion

    DOEpatents

    Davis, Jr., Leighton Ira; Daw, Charles Stuart; Feldkamp, Lee Albert; Hoard, John William; Yuan, Fumin; Connolly, Francis Thomas

    1999-01-01

    Cyclic variation in combustion of a lean burning engine is reduced by detecting an engine combustion event output such as torsional acceleration in a cylinder (i) at a combustion event (k), using the detected acceleration to predict a target acceleration for the cylinder at the next combustion event (k+1), modifying the target output by a correction term that is inversely proportional to the average phase of the combustion event output of cylinder (i) and calculating a control output such as fuel pulse width or spark timing necessary to achieve the target acceleration for cylinder (i) at combustion event (k+1) based on anti-correlation with the detected acceleration and spill-over effects from fueling.

  20. The drivers of facility-based immunization performance and costs. An application to Moldova.

    PubMed

    Maceira, Daniel; Goguadze, Ketevan; Gotsadze, George

    2015-05-07

    This paper identifies factors that affect the cost and performance of the routine immunization program in Moldova through an analysis of facility-based data collected as part of a multi-country costing and financing study of routine immunization (EPIC). A nationally representative sample of health care facilities (50) was selected through multi-stage, stratified random sampling. Data on inputs, unit prices and facility outputs were collected during October 3rd 2012-January 14th 2013 using a pre-tested structured questionnaire. Ordinary least squares (OLS) regression analysis was performed to determine factors affecting facility outputs (number of doses administered and fully immunized children) and explaining variation in total facility costs. The study found that the number of working hours, vaccine wastage rates, and whether or not a doctor worked at a facility (among other factors) were positively and significantly associated with output levels. In addition, the level of output, price of inputs and share of the population with university education were significantly associated with higher facility costs. A 1% increase in fully immunized children would increase total cost by 0.7%. Few costing studies of primary health care services in developing countries evaluate the drivers of performance and cost. This exercise attempted to fill this knowledge gap and helped to identify organizational and managerial factors at a primary care district and national level that could be addressed by improved program management aimed at improved performance. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Low settings of the ventricular pacing output in patients dependent on a pacemaker: are they really safe?

    PubMed

    Schuchert, Andreas; Frese, Jens; Stammwitz, Ekkehard; Novák, Miroslav; Schleich, Arthur; Wagner, Stefan M; Meinertz, Thomas

    2002-06-01

    It is generally acknowledged that pacemaker output must be adjusted with a 100% voltage safety margin above the pacing threshold to avoid ineffective pacing, especially in patients dependent on pacemakers. The aim of this prospective crossover study was to assess the beat-to-beat safety of low outputs in patients who are dependent on a pacemaker between 2 follow-up examinations. The study included 12 patients who had received a DDD pacemaker with an automatic beat-to-beat capture verification function. The ventricular output at 0.4 milliseconds pulse duration was programmed independently of the actual pacing threshold in a crossover randomization to 1.0 V, 1.5 V, and 2.5 V for 6 weeks each. At each follow-up, the diagnostic counters were interrogated and the pacing threshold at 0.4 milliseconds was determined in 0.1-V steps. The diagnostic pacemaker counters depict the frequency of back-up pulses delivered because of a loss of capture. During the randomization to 1.0-V output, we evaluated whether the adjustment of the output under consideration of the >100% voltage safety margin reduced the frequency of back-up pulses. Pacing thresholds at the randomization to 1.0-V, 1.5-V, and 2.5-V output were not significantly different, with 0.7 +/- 0.3 V at 2.5-V output, 0.6 +/- 0.2 V at 1.5-V output, and 0.6 +/- 0.2 V at 1.0-V output. The frequency of back-up pulses was similar at 2.5-V and 1.5-V output, 2.2% +/- 1.9% and 2.0% +/- 2.0%, respectively. The frequency of back-up pulses significantly increased at 1.0-V output to 5.8% +/- 6.4% (P <.05). Back-up pulses >5% of the time between the 2 follow-ups were observed in no patient at 2.5 V, in 1 patient at 1.5 V, and in 5 patients at 1.0 V. At the randomization to the 1.0-V output, 6 patients had pacing thresholds of 0.5 V or less, and 6 patients had pacing thresholds >0.5 V. The frequency of back-up pulses in the 2 groups was not significantly different, 6.4% +/- 8.6% and 5.7% +/- 2.6%. 
The frequency of back-up pulses was significantly higher at 1.0-V output than at 1.5-V and 2.5-V output. This also applied to patients with pacing thresholds of 0.5 V or less. Fixed low outputs seem not to be absolutely safe between 2 follow-ups in patients who are dependent on a pacemaker, even when the output has a 100% voltage safety margin above the pacing threshold. When patients with pacemakers programmed to a low ventricular output have symptoms of ineffective pacing, an intermittent increase of the pacing threshold should be carefully ruled out.

  2. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo random numbers (A sub K). There is an exponentiator in GF(2 sup m), operating on the normal basis representation of elements of the finite field GF(2 sup m), each represented by m binary digits, having two inputs and an output from which the sequence (A sub K) of pseudo random numbers is taken. One of the two inputs is connected to receive the output (E sub K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A sub O) in GF(2 sup m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. After the exponentiator initially receives the primitive element (A sub O) in GF(2 sup m) through the switch, the switch can be switched to cause the exponentiator to receive as its input its own delayed output A(K-1), thereby generating (A sub K) continuously at the output of the exponentiator. The exponentiator in GF(2 sup m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to perform the function U(sub i) = A(sup 2(sup i)) (for n(sub i) = 1) or 1 (for n(sub i) = 0).
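
    The construction can be illustrated at toy scale. The sketch below uses GF(2^5) in a polynomial basis (the patent's normal-basis Massey-Omura hardware is replaced by ordinary software field arithmetic), a 4-stage maximal-length shift register for the exponent stream E_K, and the feedback A_K = (A_{K-1})^{E_K}. The field size, taps and seeds are illustrative choices, not the patent's:

```python
POLY = 0b100101  # x^5 + x^2 + 1, a primitive polynomial defining GF(2^5)

def gf32_mul(a, b):
    """Multiply two elements of GF(2^5) (polynomial basis), reducing by POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b100000:
            a ^= POLY
        b >>= 1
    return r

def gf32_pow(a, e):
    """Square-and-multiply: combine the partial results U_i = a^(2^i)
    for each set bit n_i of the exponent e (U_i = 1 when n_i = 0)."""
    result, square = 1, a
    while e:
        if e & 1:
            result = gf32_mul(result, square)
        square = gf32_mul(square, square)
        e >>= 1
    return result

def lfsr_step(state):
    """4-stage maximal-length Fibonacci LFSR (x^4 + x^3 + 1), period 15."""
    fb = ((state >> 3) ^ state) & 1
    return ((state << 1) | fb) & 0b1111

# exponent stream E_K from the shift register; A_K fed back through a delay
state, a = 0b0001, 2        # LFSR seed; A_0 = x, a primitive element
seq = []
for _ in range(20):
    state = lfsr_step(state)    # E_K in 1..15, never 0 mod 31
    a = gf32_pow(a, state)      # A_K = (A_{K-1})^{E_K}
    seq.append(a)
print(seq)
```

    Choosing m = 5 makes the multiplicative group order 2^5 - 1 = 31 prime, so with exponents confined to 1..15 the running exponent product never vanishes mod 31 and the sequence never collapses to the identity element.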

  3. Modelling health and output at business cycle horizons for the USA.

    PubMed

    Narayan, Paresh Kumar

    2010-07-01

    In this paper we employ a theoretical framework - a simple macro model augmented with health - that draws guidance from the Keynesian view of business cycles to examine the relative importance of permanent and transitory shocks in explaining variations in health expenditure and output at business cycle horizons for the USA. The variance decomposition analysis of shocks reveals that at business cycle horizons permanent shocks explain the bulk of the variations in output, while transitory shocks explain the bulk of the variations in health expenditures. We undertake a shock decomposition analysis for private health expenditures versus public health expenditures and interestingly find that while transitory shocks are more important for private sector expenditures, permanent shocks dominate public health expenditures. Copyright (c) 2009 John Wiley & Sons, Ltd.

  4. Dual Rate Adaptive Control for an Industrial Heat Supply Process Using Signal Compensation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Tianyou; Jia, Yao; Wang, Hong

    The industrial heat supply process (HSP) is a highly nonlinear cascaded process which uses a steam valve opening as its control input, the steam flow-rate as its inner loop output and the supply water temperature as its outer loop output. The relationship between the heat exchange rate and the model parameters, such as steam density, entropy, fouling correction factor and heat exchange efficiency, is unknown and nonlinear. Moreover, these model parameters vary in line with steam pressure, ambient temperature and the residuals caused by the quality variations of the circulation water. When the steam pressure and the ambient temperature are high and are subjected to frequent external random disturbances, the supply water temperature and the steam flow-rate interact with each other and fluctuate significantly. This is also true when the process exhibits unknown characteristic variations of its dynamics caused by unexpected changes of the heat exchange residuals. As a result, it is difficult to keep the supply water temperature and the rate of change of the steam flow-rate well inside their targeted ranges. In this paper, a novel compensation-signal-based dual rate adaptive controller is developed by representing the unknown variations of dynamics as unmodeled dynamics. In the proposed controller design, a compensation signal is constructed and added onto the control signal obtained from the linear deterministic model based feedback control design. This compensation signal aims at eliminating the effects of the unmodeled dynamics and of the rate of change of the currently sampled unmodeled dynamics. A successful industrial application is carried out, where it has been shown that both the supply water temperature and the rate of change of the steam flow-rate can be controlled well inside their targeted ranges when the process is subjected to unknown variations of its dynamics.

  5. Use of the single-breath method of estimating cardiac output during exercise-stress testing.

    NASA Technical Reports Server (NTRS)

    Buderer, M. C.; Rummel, J. A.; Sawin, C. F.; Mauldin, D. G.

    1973-01-01

    The single-breath cardiac output measurement technique of Kim et al. (1966) has been modified for use in obtaining cardiac output measurements during exercise-stress tests on Apollo astronauts. The modifications involve the use of a respiratory mass spectrometer for data acquisition and a digital computer program for data analysis. The variation of the modified method for triplicate steady-state cardiac output measurements was plus or minus 1 liter/min. The combined physiological and methodological variation seen during a set of three exercise tests on a series of subjects was 1 to 2.5 liter/min. Comparison of the modified method with the direct Fick technique showed that although the single-breath values were consistently low, the scatter of data was small and the correlation between the two methods was high. Possible reasons for the low single-breath cardiac output values are discussed.

  6. European Community Respiratory Health Survey calibration project of dosimeter driving pressures.

    PubMed

    Ward, R J; Ward, C; Johns, D P; Skoric, B; Abramson, M; Walters, E H

    2002-02-01

    Two potential sources of systematic variation in output from Mefar dosimeters, the system used in the European Community Respiratory Health Survey (ECRHS) study, have been evaluated: individual nebulizer characteristics and dosimeter driving pressure. Output variation from 366 new nebulizers produced in two batches for the second ECRHS was evaluated, using a solute tracer method, at a fixed driving pressure. The relationship between nebulizer output and dosimeter driving pressure was then characterized, and between-centre variation in dosimeter driving pressure was evaluated in an Internet-based survey. A systematic difference between nebulizers manufactured in the two batches was identified. Batch one had a mean+/-SD output of 7.0+/-0.8 mg x s(-1) and batch two, 6.3+/-0.7 mg x s(-1) (p<0.005). There was a wide range of driving pressures generated by Mefar dosimeters as set, ranging between 70-245 kPa, with most outside the quoted manufacturer's specification of 180+/-5%. Nebulizer output was confirmed as linearly related to dosimeter driving pressure (coefficient of determination (R2)=0.99, output=0.0377 x driving pressure-0.4151). The range in driving pressures observed was estimated as consistent with a variation of about one doubling in the provocative dose causing a 20% fall in forced expiratory volume in one second. Systematic variation has been identified that constitutes a potentially significant confounder for between-centre comparisons of airway responsiveness in the European Community Respiratory Health Survey, with the dosimeter driving pressure representing the most serious issue. This work confirms the need for appropriate quality control of both nebulizer output and dosimeter driving pressure in laboratories undertaking field measurements of airway responsiveness. In particular, appropriate data on driving pressures need to be collected and factored into between-centre comparisons.
Comprehensive collection of such data to optimize quality control is practicable and has been instigated by the organizing committee for the European Community Respiratory Health Survey II.
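
    The reported linear fit (output = 0.0377 x driving pressure - 0.4151, R2 = 0.99) makes the pressure sensitivity easy to quantify. A small sketch; the inverse relation is added here purely for illustration:

```python
def nebulizer_output(pressure_kpa):
    """Nebulizer output (mg/s) predicted from dosimeter driving pressure
    (kPa) using the linear fit reported in the abstract:
    output = 0.0377 * pressure - 0.4151."""
    return 0.0377 * pressure_kpa - 0.4151

def pressure_for_output(target_mg_s):
    """Invert the fit: driving pressure (kPa) giving a target output."""
    return (target_mg_s + 0.4151) / 0.0377

# the observed pressure span of 70-245 kPa maps to a wide output range
print(round(nebulizer_output(70), 2), round(nebulizer_output(245), 2))
```

    Over the observed 70-245 kPa span the predicted output runs from roughly 2.2 to 8.8 mg/s, which makes concrete why uncontrolled driving pressure is the most serious confounder identified.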

  7. Method and apparatus for stabilizing pulsed microwave amplifiers

    DOEpatents

    Hopkins, Donald B.

    1993-01-01

    Phase and amplitude variations at the output of a high power pulsed microwave amplifier arising from instabilities of the driving electron beam are suppressed with a feed-forward system that can stabilize pulses which are too brief for regulation by conventional feedback techniques. Such variations tend to be similar during successive pulses. The variations are detected during each pulse by comparing the amplifier output with the low power input signal to obtain phase and amplitude error signals. This enables storage of phase and amplitude correction signals which are used to make compensating changes in the low power input signal during the following amplifier output pulse which suppress the variations. In the preferred form of the invention, successive increments of the correction signals for each pulse are stored in separate channels of a multi-channel storage. Sequential readout of the increments during the next pulse provides variable control voltages to a voltage controlled phase shifter and voltage controlled amplitude modulator in the amplifier input signal path.
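
    The pulse-to-pulse feed-forward idea, storing per-increment corrections measured during one pulse and applying them during the next, can be sketched numerically. Here the disturbance repeats identically from pulse to pulse (the key physical assumption stated above), and the gain, increment count and disturbance values are illustrative:

```python
def feed_forward_run(n_pulses, n_increments=8, gain=0.5):
    """Sketch of pulse-to-pulse feed-forward: per-increment errors measured
    during one pulse update stored corrections applied on the next pulse."""
    disturbance = [0.4, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, 0.5]  # repeatable
    corrections = [0.0] * n_increments
    max_errors = []
    for _ in range(n_pulses):
        # measured error during this pulse (desired output deviation is 0)
        errors = [disturbance[i] + corrections[i] for i in range(n_increments)]
        # update the stored per-channel corrections for the *next* pulse
        for i in range(n_increments):
            corrections[i] -= gain * errors[i]
        max_errors.append(max(abs(e) for e in errors))
    return max_errors

errs = feed_forward_run(10)
print(errs[0], errs[-1])
```

    With a repeatable disturbance, each pulse's residual error shrinks by the factor (1 - gain): with gain 0.5 it halves every pulse, and a gain of 1.0 would cancel the disturbance after a single pulse.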

  8. Method and apparatus for stabilizing pulsed microwave amplifiers

    DOEpatents

    Hopkins, D.B.

    1993-01-26

    Phase and amplitude variations at the output of a high power pulsed microwave amplifier arising from instabilities of the driving electron beam are suppressed with a feed-forward system that can stabilize pulses which are too brief for regulation by conventional feedback techniques. Such variations tend to be similar during successive pulses. The variations are detected during each pulse by comparing the amplifier output with the low power input signal to obtain phase and amplitude error signals. This enables storage of phase and amplitude correction signals which are used to make compensating changes in the low power input signal during the following amplifier output pulse which suppress the variations. In the preferred form of the invention, successive increments of the correction signals for each pulse are stored in separate channels of a multi-channel storage. Sequential readout of the increments during the next pulse provides variable control voltages to a voltage controlled phase shifter and voltage controlled amplitude modulator in the amplifier input signal path.

  9. Learning in the machine: The symmetries of the deep learning channel.

    PubMed

    Baldi, Pierre; Sadowski, Peter; Lu, Zhiqin

    2017-11-01

    In a physical neural system, learning rules must be local both in space and time. In order for learning to occur, non-local information must be communicated to the deep synapses through a communication channel, the deep learning channel. We identify several possible architectures for this learning channel (Bidirectional, Conjoined, Twin, Distinct) and six symmetry challenges: (1) symmetry of architectures; (2) symmetry of weights; (3) symmetry of neurons; (4) symmetry of derivatives; (5) symmetry of processing; and (6) symmetry of learning rules. Random backpropagation (RBP) addresses the second and third symmetries, and some of its variations, such as skipped RBP (SRBP), address the first and fourth symmetries. Here we address the last two desirable symmetries, showing through simulations that they can be achieved and that the learning channel is particularly robust to symmetry variations. Specifically, random backpropagation and its variations can be performed with the same non-linear neurons used in the main input-output forward channel, and the connections in the learning channel can be adapted using the same algorithm used in the forward channel, removing the need for any specialized hardware in the learning channel. Finally, we provide mathematical results in simple cases showing that the learning equations in the forward and backward channels converge to fixed points, for almost any initial conditions. In symmetric architectures, if the weights in both channels are small at initialization, adaptation in both channels leads to weights that are essentially symmetric during and after learning. Biological connections are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
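A minimal sketch of the RBP idea on a toy regression task (our own illustration, not the authors' code): the backward pass routes the output error through a fixed random matrix `B` instead of the transpose of the output weights, so no weight symmetry is needed, yet learning still occurs.

```python
import numpy as np

# Toy illustration of random backpropagation (RBP); our own sketch, not the
# authors' code. The backward pass routes the output error through a fixed
# random matrix B instead of W2.T, so no weight symmetry is required.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.tanh(X @ rng.standard_normal((5, 1)))       # toy regression targets

W1 = 0.1 * rng.standard_normal((5, 20))
W2 = 0.1 * rng.standard_normal((20, 1))
B = rng.standard_normal((1, 20))                   # fixed random feedback matrix
lr = 0.05

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

loss0 = loss()
for _ in range(500):
    H = np.tanh(X @ W1)                            # hidden activations
    err = H @ W2 - y                               # output-layer error
    W2 -= lr * H.T @ err / len(X)                  # exact gradient step for W2
    dH = (err @ B) * (1.0 - H ** 2)                # RBP: error routed through B
    W1 -= lr * X.T @ dH / len(X)
print(loss0, loss())
```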

  10. Evaluating the use of fibrin glue for sealing low-output enterocutaneous fistulas: study protocol for a randomized controlled trial.

    PubMed

    Wu, Xiuwen; Ren, Jianan; Wang, Gefei; Wang, Jianzhong; Wang, Feng; Fan, Yueping; Li, Yuanxin; Han, Gang; Zhou, Yanbing; Song, Xiaofei; Quan, Bin; Yao, Min; Li, Jieshou

    2015-10-07

    The management of an enterocutaneous fistula poses a significant challenge to surgeons and is often associated with a costly hospital stay and long-term discomfort. The use of fibrin glue in the fistula tract has been shown to promote closure of low-output enterocutaneous fistulas. Our previous nonrandomized study demonstrated that autologous platelet-rich fibrin glue treatment significantly decreased time to fistula closure and promoted closure rates. However, there were several limitations in that study, which may have led to bias in our conclusion. Thus, a multicenter, randomized, controlled clinical trial is required. The study is designed as a randomized, open-label, three-arm, multicenter study in nine Chinese academic hospitals for evaluating the efficacy and safety of fibrin glue for sealing low-output fistulas. An established number of 171 fistula patients will undergo prospective random assignment to autologous fibrin glue, commercial porcine fibrin sealants or drainage cessation (1:1:1). The primary endpoint is fistula closure time (defined as the interval between the day of enrollment and the day of fistula closure) during the 14-day treatment period. To our knowledge, this is the first study to evaluate the safety and efficacy of both autologous and commercial fibrin glue sealing for patients with low-output volume fistulas. NCT01828892. Registration date: April 2013.

  11. Modeling the variations of reflection coefficient of Earth's lower ionosphere using very low frequency radio wave data by artificial neural network

    NASA Astrophysics Data System (ADS)

    Ghanbari, Keyvan; Khakian Ghomi, Mehdi; Mohammadi, Mohammad; Marbouti, Marjan; Tan, Le Minh

    2016-08-01

    The ionized atmosphere lying from 50 to 600 km above the surface, known as the ionosphere, contains a high density of electrons and ions. Very Low Frequency (VLF) radio waves, with frequencies between 3 and 30 kHz, are reflected from the lower ionosphere, specifically the D-region. Many applications in long-range communications and navigation systems have been inspired by this characteristic of the ionosphere. Several factors affect the ionization rate in this region, such as time of day (presence of the sun in the sky), solar zenith angle (seasons) and solar activity. Due to the nonlinear response of the ionospheric reflection coefficient to these factors, finding an accurate relation between these parameters and the reflection coefficient is an arduous task. Numerical methods are employed to model these kinds of nonlinear functionalities; one of these methods is the artificial neural network (ANN). In this paper, the VLF radio wave data of 4 sudden ionospheric disturbance (SID) stations are given to a multi-layer perceptron ANN in order to simulate the variations of the reflection coefficient of the D-region ionosphere. After training, validating and testing the ANN, the outputs of the ANN and the observed values are plotted together for 2 random cases from each station. Evaluating the results using the Pearson correlation coefficient and the root mean square error, a satisfying agreement was found between the ANN outputs and the real observed data.

  12. Modeling and Simulation of Linear and Nonlinear MEMS Scale Electromagnetic Energy Harvesters for Random Vibration Environments

    PubMed Central

    Sassani, Farrokh

    2014-01-01

    The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063
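The statistical-linearization step mentioned above can be illustrated for a Duffing spring under white-noise forcing (all parameter values below are made up): the cubic stiffness is replaced by an equivalent linear stiffness that depends on the response variance, and the two are solved self-consistently by fixed-point iteration. The converged `k_eq > k` reflects the upward shift of the resonant frequency reported in the abstract.

```python
import numpy as np

# Statistical-linearization sketch for a Duffing spring under white-noise
# forcing (all parameter values are made up). For a linearized SDOF with
# damping c and equivalent stiffness k_eq, the stationary response variance
# under a two-sided force PSD S0 is sigma2 = pi*S0/(c*k_eq), while the
# Duffing spring k*x + k3*x**3 linearizes to k_eq = k + 3*k3*sigma2.
k, k3, c, S0 = 1.0, 0.5, 0.1, 0.01

sigma2 = np.pi * S0 / (c * k)             # start from the linear (k3 = 0) case
for _ in range(100):                       # fixed-point iteration
    k_eq = k + 3.0 * k3 * sigma2
    sigma2 = np.pi * S0 / (c * k_eq)

print(k_eq, sigma2)                        # k_eq > k: resonance shifts upward
```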

  13. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    algorithm begins by computing the GLCMs, GIN and GOUT , of the input image (e.g., image of local environment) and output image (randomly generated...respectively. The algorithm randomly selects a pixel from the output image and cycles its gray-level through all values. For each value, GOUT is updated...The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between GIN and GOUT . Without selecting a
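The pixel-cycling step described in this fragment can be sketched as follows (a toy reconstruction from the snippet; `glcm` and all other names are ours, and a real implementation would update the output GLCM incrementally rather than recompute it for every gray level):

```python
import numpy as np

# Toy reconstruction of the pixel-cycling step from the fragment above
# (glcm() and all names are ours; a real implementation would update the
# output GLCM incrementally instead of recomputing it each time).
LEVELS = 8

def glcm(img, levels=LEVELS):
    """Gray-level co-occurrence counts for horizontally adjacent pixels."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m

rng = np.random.default_rng(1)
target = rng.integers(0, LEVELS, (16, 16))   # stand-in for the input image
out = rng.integers(0, LEVELS, (16, 16))      # randomly generated output image

g_in = glcm(target)
i, j = rng.integers(0, 16, size=2)           # randomly selected output pixel
errs = []
for v in range(LEVELS):                      # cycle its gray level
    out[i, j] = v
    errs.append(np.sum((glcm(out) - g_in) ** 2))
out[i, j] = int(np.argmin(errs))             # keep the error-minimizing value
```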

  14. Analog graphic display method and apparatus

    DOEpatents

    Kronberg, J.W.

    1991-08-13

    Disclosed are an apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal. 21 figures.
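The comparator scheme can be sketched numerically (an illustrative model, not the patented circuit): against a triangular modulating waveform, the fraction of time the comparator output is high tracks the input level.

```python
import numpy as np

# Illustrative model of the comparator scheme (not the patented circuit):
# a triangular modulating waveform is compared against the input level, and
# the fraction of time the comparator output is high -- the LED duty
# cycle -- tracks the signal level.
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
triangle = np.abs(2.0 * (t * 50.0 % 1.0) - 1.0)   # 50 Hz triangle wave in [0, 1]

def duty_cycle(signal_level):
    """Fraction of time the comparator (and hence the LED) is on."""
    return float(np.mean(signal_level > triangle))

print(duty_cycle(0.25), duty_cycle(0.75))          # duty cycle ~ signal level
```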

  15. Analog graphic display method and apparatus

    DOEpatents

    Kronberg, James W.

    1991-01-01

    An apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal.

  16. Designing an agricultural vegetative waste-management system under uncertain prices of treatment-technology output products.

    PubMed

    Broitman, D; Raviv, O; Ayalon, O; Kan, I

    2018-05-01

    Setting up a sustainable agricultural vegetative waste-management system is a challenging investment task, particularly when markets for output products of waste-treatment technologies are not well established. We conduct an economic analysis of possible investments in treatment technologies of agricultural vegetative waste, while accounting for fluctuating output prices. Under a risk-neutral approach, we find the range of output-product prices within which each considered technology becomes most profitable, using average final prices as the exclusive factor. Under a risk-averse perspective, we rank the treatment technologies based on their computed certainty-equivalent profits as functions of the coefficient of variation of the technologies' output prices. We find the ranking of treatment technologies based on average prices to be robust to output-price fluctuations provided that the coefficient of variation of the output prices is below about 0.4, that is, approximately twice as high as that of well-established recycled-material markets such as glass, paper and plastic. We discuss some policy implications that arise from our analysis regarding vegetative waste management and its associated risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
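The ranking logic can be illustrated with a simple mean-variance certainty equivalent (the functional form, risk-aversion value and profit figures are our assumptions, not the paper's model): ranking by mean profit survives moderate output-price CVs but flips once the CV is large.

```python
# Hedged illustration of the ranking logic (the mean-variance certainty
# equivalent, risk-aversion value and profit figures are our assumptions,
# not the paper's model): CE = mean - 0.5 * r * variance.
def certainty_equivalent(mean_profit, cv, risk_aversion=0.01):
    """Certainty-equivalent profit for a given output-price coefficient of variation."""
    variance = (cv * mean_profit) ** 2
    return mean_profit - 0.5 * risk_aversion * variance

# Technology B earns more on average than A, but its output prices are riskier.
ce_a = certainty_equivalent(100.0, 0.1)
for cv in (0.1, 0.4, 0.8):
    ce_b = certainty_equivalent(110.0, cv)
    print(f"CV={cv}: {'B' if ce_b > ce_a else 'A'} preferred")
```

With these made-up numbers, B stays preferred at CVs of 0.1 and 0.4 but loses to the safer A at 0.8, echoing the robustness-threshold idea above.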

  17. Optimized MPPT algorithm for boost converters taking into account the environmental variables

    NASA Astrophysics Data System (ADS)

    Petit, Pierre; Sawicki, Jean-Paul; Saint-Eve, Frédéric; Maufay, Fabrice; Aillerie, Michel

    2016-07-01

    This paper presents a study of the specific behavior of the boost DC-DC converters generally used for power conversion of PV panels connected to a HVDC (High Voltage Direct Current) bus. It follows earlier work pointing out that the converter's MPPT (Maximum Power Point Tracker) is severely perturbed by output voltage variations, owing to the physical interdependency of parameters such as the input voltage, the output voltage and the duty cycle of the MPPT's PWM switching control. As a direct consequence, many converters connected together to the same load perturb each other through the output voltage variations induced by fluctuations on the HVDC bus, which are essentially due to a non-negligible bus impedance. In this paper we show that it is possible to include an internal computed variable that compensates for local and external variations, taking the environmental variables into account.
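For context, the baseline MPPT behaviour that such compensation refines can be sketched with a generic perturb-and-observe loop (the toy PV curve and step size are our assumptions, not the paper's converter):

```python
# Generic perturb-and-observe MPPT sketch (toy PV curve and step size are our
# assumptions, not the paper's converter): nudge the operating voltage in the
# direction that last increased the extracted power.
def pv_power(v):
    """Toy PV power curve with a single maximum near v = 15.2 V."""
    return max(0.0, v * (5.0 - 5.0 * (v / 20.0) ** 8))

v, step, direction = 10.0, 0.2, +1
p_prev = pv_power(v)
for _ in range(200):
    v += direction * step
    p = pv_power(v)
    if p < p_prev:                # power fell: reverse the perturbation
        direction = -direction
    p_prev = p

print(v, pv_power(v))             # settles into oscillation around the MPP
```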

  18. Variations in laser energy outputs over a series of simulated treatments.

    PubMed

    Lister, T S; Brewin, M P

    2014-10-01

    Test patches are routinely employed to determine the likely efficacy and the risk of adverse effects from cutaneous laser treatments. However, the degree to which these represent a full treatment has not been investigated in detail. This study aimed to determine the variability in pulse-to-pulse output energy from a representative selection of cutaneous laser systems in order to assess the value of laser test patches. The output energies of each pulse from seven cutaneous laser systems were measured using a pyroelectric measurement head over a 2-h period, employing a regime of 10-min simulated treatments followed by a 5-min rest period (between patients). Each laser system appeared to demonstrate a different pattern of variation in output energy per pulse over the period measured. The output energies from a range of cutaneous laser systems have been shown to vary considerably between a representative test patch and a full treatment, and over the course of an entire simulated clinic list. © 2014 British Association of Dermatologists.

  19. Programmable electronic synthesized capacitance

    NASA Technical Reports Server (NTRS)

    Kleinberg, Leonard L. (Inventor)

    1987-01-01

    A predetermined and variable synthesized capacitance which may be incorporated into the resonant portion of an electronic oscillator for the purpose of tuning the oscillator comprises a programmable operational amplifier circuit. The operational amplifier circuit has its output connected to its inverting input, in a follower configuration, by a network which is low impedance at the operational frequency of the circuit. The output of the operational amplifier is also connected to the noninverting input by a capacitor. The noninverting input appears as a synthesized capacitance which may be varied with a variation in gain-bandwidth product of the operational amplifier circuit. The gain-bandwidth product may, in turn, be varied with a variation in input set current with a digital to analog converter whose output is varied with a command word. The output impedance of the circuit may also be varied by the output set current. This circuit may provide very small ranges in oscillator frequency with relatively large control voltages unaffected by noise.

  20. Optimum systems design with random input and output applied to solar water heating

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, L. L.

    1980-03-01

    Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied from the Sun to a household. Since solar water heating systems have random input and output queueing theory, birth and death processes were the major tools in developing the models of evaluation. Microeconomics methods help in determining the optimum size of the solar water heating system design parameters, i.e., the water tank volume and the collector area.

  1. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.

  2. An application of quantile random forests for predictive mapping of forest attributes

    Treesearch

    E.A. Freeman; G.G. Moisen

    2015-01-01

    Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...

  3. Post-Secondary Science Students' Explanations of "Randomness" and "Variation" and Implications for Science Learning

    ERIC Educational Resources Information Center

    Gougis, Rebekka Darner; Stomberg, Janet F.; O'Hare, Alicia T.; O'Reilly, Catherine M.; Bader, Nicholas E.; Meixner, Thomas; Carey, Cayelan C.

    2017-01-01

    The concepts of randomness and variation are pervasive in science. The purpose of this study was to document how post-secondary life science students explain randomness and variation, infer relationships between their explanations, and ability to describe and identify appropriate and inappropriate variation, and determine if students can identify…

  4. High-power ultralong-wavelength Tm-doped silica fiber laser cladding-pumped with a random distributed feedback fiber laser

    PubMed Central

    Jin, Xiaoxi; Du, Xueyuan; Wang, Xiong; Zhou, Pu; Zhang, Hanwei; Wang, Xiaolin; Liu, Zejin

    2016-01-01

    We demonstrated a high-power ultralong-wavelength Tm-doped silica fiber laser operating at 2153 nm with an output power exceeding 18 W and a slope efficiency of 25.5%. A random distributed feedback fiber laser with a center wavelength of 1173 nm was employed as the pump source of a Tm-doped fiber laser for the first time. No amplified spontaneous emissions or parasitic oscillations were observed when the maximum output power was reached, which indicates that employing a 1173 nm random distributed feedback fiber laser as the pump laser is a feasible and promising scheme to achieve high-power emission from a long-wavelength Tm-doped fiber laser. The output power of this Tm-doped fiber laser could be further improved by optimizing the length of the active fiber and the reflectivity of the FBGs, increasing the optical efficiency of the pump laser, and using better temperature management. We also compared the operation of 2153 nm Tm-doped fiber lasers pumped with 793 nm laser diodes, where the maximum output powers were limited to ~2 W by strong amplified spontaneous emission and parasitic oscillation in the range of 1900–2000 nm. PMID:27416893

  5. High-power ultralong-wavelength Tm-doped silica fiber laser cladding-pumped with a random distributed feedback fiber laser.

    PubMed

    Jin, Xiaoxi; Du, Xueyuan; Wang, Xiong; Zhou, Pu; Zhang, Hanwei; Wang, Xiaolin; Liu, Zejin

    2016-07-15

    We demonstrated a high-power ultralong-wavelength Tm-doped silica fiber laser operating at 2153 nm with an output power exceeding 18 W and a slope efficiency of 25.5%. A random distributed feedback fiber laser with a center wavelength of 1173 nm was employed as the pump source of a Tm-doped fiber laser for the first time. No amplified spontaneous emissions or parasitic oscillations were observed when the maximum output power was reached, which indicates that employing a 1173 nm random distributed feedback fiber laser as the pump laser is a feasible and promising scheme to achieve high-power emission from a long-wavelength Tm-doped fiber laser. The output power of this Tm-doped fiber laser could be further improved by optimizing the length of the active fiber and the reflectivity of the FBGs, increasing the optical efficiency of the pump laser, and using better temperature management. We also compared the operation of 2153 nm Tm-doped fiber lasers pumped with 793 nm laser diodes, where the maximum output powers were limited to ~2 W by strong amplified spontaneous emission and parasitic oscillation in the range of 1900-2000 nm.

  6. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory with a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugated distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Therefore our memory has the merit that unlawful accesses can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays an important role in transforming the input image into white noise and in preventing decryption of the white noise back to the input image by the blind deconvolution method. Without this mask, if unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the same intensity distribution as the Fourier transform of the input image, and the encrypted image can then be decrypted easily using the blind deconvolution method. With the mask, however, even if unauthorized users observe the output beam using the same method, the encrypted image cannot be decrypted, because the observed intensity distribution is dispersed at random by the mask; the robustness is thus increased. In this report, we compare the correlation coefficient between the output image and the input image, which represents the degree to which the output image is white noise, with and without this mask. We show that the robustness of this encryption method is increased, as the correlation coefficient is improved from 0.3 to 0.1 by using the mask.

  7. Hydroclimatic projections for the Murray-Darling Basin based on an ensemble derived from Intergovernmental Panel on Climate Change AR4 climate models

    NASA Astrophysics Data System (ADS)

    Sun, Fubao; Roderick, Michael L.; Lim, Wee Ho; Farquhar, Graham D.

    2011-12-01

    We assess hydroclimatic projections for the Murray-Darling Basin (MDB) using an ensemble of 39 Intergovernmental Panel on Climate Change AR4 climate model runs based on the A1B emissions scenario. The raw model output for precipitation, P, was adjusted using a quantile-based bias correction approach. We found that the projected change, ΔP, between two 30 year periods (2070-2099 less 1970-1999) was little affected by bias correction. The range for ΔP among models was large (˜±150 mm yr-1) with all-model run and all-model ensemble averages (4.9 and -8.1 mm yr-1) near zero, against a background climatological P of ˜500 mm yr-1. We found that the time series of actually observed annual P over the MDB was indistinguishable from that generated by a purely random process. Importantly, nearly all the model runs showed similar behavior. We used these facts to develop a new approach to understanding variability in projections of ΔP. By plotting ΔP versus the variance of the time series, we could easily identify model runs with projections for ΔP that were beyond the bounds expected from purely random variations. For the MDB, we anticipate that a purely random process could lead to differences of ±57 mm yr-1 (95% confidence) between successive 30 year periods. This is equivalent to ±11% of the climatological P and translates into variations in runoff of around ±29%. This sets a baseline for gauging modeled and/or observed changes.
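The "purely random" baseline can be checked directly: for i.i.d. annual totals with standard deviation sigma, the difference between two independent 30-year means has standard deviation sigma*sqrt(2/30). A sketch (the sigma below is chosen so the analytic bound reproduces the ±57 mm yr-1 quoted above; it is not taken from the paper):

```python
import numpy as np

# Baseline check: for i.i.d. annual precipitation with standard deviation
# sigma, the difference of two independent 30-year means has standard
# deviation sigma*sqrt(2/30). sigma below is chosen so the analytic bound
# reproduces the +/-57 mm/yr figure quoted above; it is not from the paper.
sigma, n = 113.0, 30
bound95 = 1.96 * sigma * np.sqrt(2.0 / n)          # analytic 95% bound, mm/yr

rng = np.random.default_rng(0)
means_a = rng.standard_normal((20000, n)).mean(axis=1) * sigma
means_b = rng.standard_normal((20000, n)).mean(axis=1) * sigma
mc95 = np.quantile(np.abs(means_a - means_b), 0.95)

print(round(float(bound95), 1), round(float(mc95), 1))  # the two agree closely
```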

  8. Eating marshmallows reduces ileostomy output: a randomized crossover trial.

    PubMed

    Clarebrough, E; Guest, G; Stupart, D

    2015-12-01

    Anecdotally, many ostomates believe that eating marshmallows can reduce ileostomy effluent. There is a plausible mechanism for this, as the gelatine contained in marshmallows may thicken small bowel fluid, but there is currently no evidence that this is effective. This was a randomized crossover trial. Adult patients with well-established ileostomies were included. Ileostomy output was measured for 1 week during which three marshmallows were consumed three times daily, and for one control week where marshmallows were not eaten. There was a 2-day washout period. Patients were randomly allocated to whether the control or intervention week occurred first. In addition, a questionnaire was administered regarding patient's subjective experience of their ileostomy function. Thirty-one participants were recruited; 28 completed the study. There was a median reduction in ileostomy output volume of 75 ml per day during the study period (P = 0.0054, 95% confidence interval 23.4-678.3) compared with the control week. Twenty of 28 subjects (71%) experienced a reduction in their ileostomy output, two had no change and six reported an increase. During the study period, participants reported fewer ileostomy bag changes (median five per day vs six in the control period, P = 0.0255). Twenty of 28 (71%) reported that the ileostomy effluent was thicker during the study week (P = 0.023). Overall 19 (68%) participants stated they would use marshmallows in the future if they wanted to reduce or thicken their ileostomy output. Eating marshmallows leads to a small but statistically significant reduction in ileostomy output. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giovannetti, Vittorio; Maccone, Lorenzo; Shapiro, Jeffrey H.

    The minimum Renyi and Wehrl output entropies are found for bosonic channels in which the signal photons are either randomly displaced by a Gaussian distribution (classical-noise channel), or coupled to a thermal environment through lossy propagation (thermal-noise channel). It is shown that the Renyi output entropies of integer orders z >= 2 and the Wehrl output entropy are minimized when the channel input is a coherent state.

  10. More efficient optimization of long-term water supply portfolios

    NASA Astrophysics Data System (ADS)

    Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.

    2009-03-01

    The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
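The control variate method named above can be shown in miniature (a generic textbook example, not the authors' water-supply model): knowledge of a correlated input with known mean is used to cancel most of the noise in the output estimate.

```python
import numpy as np

# Textbook control-variate illustration (not the paper's water-supply model):
# estimate E[exp(U)] for U ~ Uniform(0,1), using U itself, whose mean 0.5 is
# known, as the control variate.
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
y = np.exp(u)                                  # noisy "model output"
c = u - 0.5                                    # control variate, known zero mean

beta = np.cov(y, c)[0, 1] / np.var(c)          # near-optimal coefficient
y_cv = y - beta * c                            # variance-reduced estimator

print(y.mean(), y_cv.mean())                   # both estimate e - 1
print(np.var(y) / np.var(y_cv))                # variance reduction factor
```

Because U and exp(U) are highly correlated, the corrected estimator reaches the same mean with a small fraction of the variance, which is exactly the computing-time saving exploited above.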

  11. Systematic and random variations in digital Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Duggin, M. J. (Principal Investigator); Sakhavat, H.

    1985-01-01

    Radiance recorded by any remote sensing instrument will contain noise which will consist of both systematic and random variations. Systematic variations may be due to sun-target-sensor geometry, atmospheric conditions, and the interaction of the spectral characteristics of the sensor with those of upwelling radiance. Random variations in the data may be caused by variations in the nature and in the heterogeneity of the ground cover, by variations in atmospheric transmission, and by the interaction of these variations with the sensing device. It is important to be aware of the extent of random and systematic errors in recorded radiance data across ostensibly uniform ground areas in order to assess the impact on quantitative image analysis procedures for both the single-date and the multidate cases. It is the intention here to examine the systematic and the random variations in digital radiance data recorded in each band by the Thematic Mapper over crop areas which are ostensibly uniform and which are free from visible cloud.

  12. Postoperative Hydrocortisone Infusion Reduces the Prevalence of Low Cardiac Output Syndrome After Neonatal Cardiopulmonary Bypass.

    PubMed

    Robert, Stephen M; Borasino, Santiago; Dabal, Robert J; Cleveland, David C; Hock, Kristal M; Alten, Jeffrey A

    2015-09-01

    Neonatal cardiac surgery with cardiopulmonary bypass is often complicated by morbidity associated with inflammation and low cardiac output syndrome. Hydrocortisone "stress dosing" is reported to provide hemodynamic benefits in some patients with refractory shock. Development of cardiopulmonary bypass-induced adrenal insufficiency may provide further rationale for postoperative hydrocortisone administration. We sought to determine whether prophylactic, postoperative hydrocortisone infusion could decrease prevalence of low cardiac output syndrome after neonatal cardiac surgery with cardiopulmonary bypass. Double-blind, randomized control trial. Pediatric cardiac ICU and operating room in tertiary care center. Forty neonates undergoing cardiac surgery with cardiopulmonary bypass were randomized (19 hydrocortisone and 21 placebo). Demographics and known risk factors were similar between groups. After cardiopulmonary bypass separation, bolus hydrocortisone (50 mg/m²) or placebo was administered, followed by continuous hydrocortisone infusion (50 mg/m²/d) or placebo tapered over 5 days. Adrenocorticotropic hormone stimulation testing (1 μg) was performed before and after cardiopulmonary bypass, prior to steroid administration. Blood was collected for cytokine analysis before and after cardiopulmonary bypass. Subjects receiving hydrocortisone were less likely to develop low cardiac output syndrome (5/19, 26% vs 12/21, 57%; p = 0.049). Hydrocortisone group had more negative net fluid balance at 48 hours (-114 vs -64 mL/kg; p = 0.01) and greater urine output at 0-24 hours (2.7 vs 1.2 mL/kg/hr; p = 0.03). Hydrocortisone group weaned off catecholamines and vasopressin sooner than placebo, with a difference in inotrope-free subjects apparent after 48 hours (p = 0.033). Five placebo subjects (24%) compared with no hydrocortisone subjects required rescue steroids (p = 0.02). Thirteen (32.5%) had adrenal insufficiency after cardiopulmonary bypass. 
Patients with adrenal insufficiency randomized to receive hydrocortisone had lower prevalence of low cardiac output syndrome compared with patients with adrenal insufficiency randomized to placebo (1/6 vs 6/7, respectively; p = 0.02). Hydrocortisone significantly reduced proinflammatory cytokines. Ventilator-free days, hospital length of stay, and kidney injury were similar. Prophylactic, postoperative hydrocortisone reduces low cardiac output syndrome, improves fluid balance and urine output, and attenuates inflammation after neonatal cardiopulmonary bypass surgery. Further studies are necessary to show if these benefits lead to improvements in more important clinical outcomes.

  13. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
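    The mean-square-error decomposition used above to attribute 70% of the model-observation discrepancy to random error can be illustrated numerically. The sketch below assumes a Theil-style partition into bias, slope, and random-error fractions; the abstract does not name the exact decomposition used:

```python
import numpy as np

def mse_decomposition(pred, obs):
    """Partition MSE into bias, slope (variance), and random-error
    fractions (Theil-style partition; assumed, not taken from the paper)."""
    pred = np.asarray(pred, float)
    obs = np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    sp, so = pred.std(), obs.std()          # population SDs (ddof=0)
    r = np.corrcoef(pred, obs)[0, 1]
    bias = (pred.mean() - obs.mean()) ** 2  # systematic offset
    slope = (sp - so) ** 2                  # mismatch in spread
    random_err = 2.0 * (1.0 - r) * sp * so  # unexplained scatter
    return {"bias": bias / mse, "slope": slope / mse, "random": random_err / mse}
```

    The three fractions sum to one, so a large "random" share, as reported for the Mysis model, indicates that the residual disagreement is mostly noise rather than systematic error.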

  14. Damage Propagation Modeling for Aircraft Engine Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil

    2008-01-01

    This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermo-dynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant and the failure criterion is reached when health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
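    The degradation recipe described above (random initial deterioration, exponential growth at a bounded random rate, failure when the health index reaches zero) can be caricatured in a few lines. This is a toy sketch with invented constants and a single margin, not the paper's thermodynamic engine simulation:

```python
import numpy as np

def simulate_degradation(rng, max_rate=0.01, margin0=1.0, max_cycles=500):
    """Toy damage-propagation trajectory: efficiency loss grows
    exponentially from a random initial deterioration at a random,
    upper-bounded rate; the health index is the remaining margin."""
    rate = rng.uniform(0.002, max_rate)   # random wear rate, capped
    d0 = rng.uniform(0.01, 0.05)          # random initial deterioration
    health = []
    for t in range(max_cycles):
        loss = d0 * np.exp(rate * t)      # exponentially worsening fault
        h = margin0 - loss                # single operational margin here
        health.append(h)
        if h <= 0.0:                      # failure criterion: index hits zero
            break
    return np.array(health)

rng = np.random.default_rng(42)
trajectory = simulate_degradation(rng)
```

    Repeating this over many random seeds yields run-to-failure time series analogous in spirit to the challenge data described, with each unit failing at a different, randomly determined cycle count.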

  15. Analysis of a utility-interactive wind-photovoltaic hybrid system with battery storage using neural network

    NASA Astrophysics Data System (ADS)

    Giraud, Francois

    1999-10-01

    This dissertation investigates the application of neural network theory to the analysis of a 4-kW Utility-interactive Wind-Photovoltaic System (WPS) with battery storage. The hybrid system comprises a 2.5-kW photovoltaic generator and a 1.5-kW wind turbine. The wind power generator produces power at variable speed and variable frequency (VSVF). The wind energy is converted into dc power by a controlled, three-phase, full-wave, bridge rectifier. The PV power is maximized by a Maximum Power Point Tracker (MPPT), a dc-to-dc chopper, switching at a frequency of 45 kHz. The whole dc power of both subsystems is stored in the battery bank or conditioned by a single-phase self-commutated inverter to be sold to the utility at a predetermined amount. First, the PV is modeled using an Artificial Neural Network (ANN). To reduce model uncertainty, the open-circuit voltage VOC and the short-circuit current ISC of the PV are chosen as model input variables of the ANN. These input variables have the advantage of incorporating the effects of the quantifiable and non-quantifiable environmental variants affecting the PV power. Then, a simplified way to predict accurately the dynamic responses of the grid-linked WPS to gusty winds using a Recurrent Neural Network (RNN) is investigated. The RNN is a single-output feedforward backpropagation network with external feedback, which allows past responses to be fed back to the network input. In the third step, a Radial Basis Functions (RBF) Network is used to analyze the effects of clouds on the Utility-Interactive WPS. Using the irradiance as input signal, the network models the effects of random cloud movement on the output current, the output voltage, and the output power of the PV system, as well as the electrical output variables of the grid-linked inverter. Fourthly, using RNN, the combined effects of a random cloud and a wind gust on the system are analyzed.
For short period intervals, the wind speed and the solar radiation are considered as the sole sources of power, whose variations influence the system variables. Since both subsystems have different dynamics, their respective responses are expected to impact differently the whole system behavior. The dispatchability of the battery-supported system as well as its stability and reliability during gusts and/or cloud passage is also discussed. In the fifth step, the goal is to determine to what extent the overall power quality of the grid would be affected by a proliferation of utility-interactive hybrid systems and whether recourse to bulk or individual filtering and voltage controllers is necessary. The final stage of the research includes a steady-state analysis of two-year operation (May 96--Apr 98) of the system, with a discussion on system reliability, on any loss of supply probability, and on the effects of the randomness in the wind and solar radiation upon the system design optimization.

  16. Territorial Patterns of Diversity in Education.

    ERIC Educational Resources Information Center

    Ryba, Raymond

    1979-01-01

    The author examines within-nation and between-nation diversity in educational inputs, processes, and outputs, and the interdependence of input, output, and cultural context. Implications are drawn for planning equal educational opportunities in light of these complex regional variations. (SJL)

  17. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    NASA Astrophysics Data System (ADS)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, power market reform puts forward higher requirements for the reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that accounts for wind power randomness with an integrated cost allocation method to price reactive power. Weighing the advantages and disadvantages of present cost allocation methods and of marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, on the premise of guaranteeing the balance of reactive power pricing. Finally, through multi-scenario examples and stochastic simulation of wind power outputs, the article compares the model's prices with marginal cost prices, showing that the model is accurate and effective.

  18. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
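    The replacement scheme for lost packets (substitute the prediction built from previously received information) can be illustrated with a toy scalar filter. The Bernoulli arrival indicators match the abstract's channel model, but the decay `phi` and fixed gain `k` below are assumptions, not the paper's covariance-based recursions:

```python
import numpy as np

def filter_with_dropouts(z, gamma, phi=0.95, k=0.5):
    """Scalar filter over a packet-erasure channel: measurement z[t] is
    available only when gamma[t] == 1; otherwise the one-step prediction
    stands in for the lost output (illustrative sketch only)."""
    est, out = 0.0, []
    for zt, gt in zip(z, gamma):
        pred = phi * est                  # one-step prediction
        used = zt if gt == 1 else pred    # lost packet -> use prediction
        est = pred + k * (used - pred)    # innovation-style update
        out.append(est)
    return np.array(out)

rng = np.random.default_rng(1)
t = np.arange(200)
signal = np.sin(0.05 * t)
z = signal + rng.normal(0.0, 0.3, t.size)   # noisy sensor outputs
gamma = rng.binomial(1, 0.8, t.size)        # Bernoulli arrivals, 20% loss
est = filter_with_dropouts(z, gamma)
```

    Even with a fifth of the packets lost, the filtered track stays closer to the underlying signal than the raw measurements, which is the qualitative point of the replacement strategy.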

  19. Evaluation of between-cow variation in milk urea and rumen ammonia nitrogen concentrations and the association with nitrogen utilization and diet digestibility in lactating cows.

    PubMed

    Huhtanen, P; Cabezas-Garcia, E H; Krizsan, S J; Shingfield, K J

    2015-05-01

    Concentrations of milk urea N (MUN) are influenced by dietary crude protein concentration and intake and could therefore be used as a biomarker of the efficiency of N utilization for milk production (milk N/N intake; MNE) in lactating cows. In the present investigation, data from milk-production trials (production data set; n=1,804 cow/period observations from 21 change-over studies) and metabolic studies involving measurements of nutrient flow at the omasum in lactating cows (flow data set; n=450 cow/period observations from 29 studies) were used to evaluate the influence of between-cow variation on the relationship of MUN with MNE, urinary N (UN) output, and diet digestibility. All measurements were made on cows fed diets based on grass silage supplemented with a range of protein supplements. Data were analyzed by mixed-model regression analysis with diet within experiment and period within experiment as random effects, allowing the effect of diet and period to be excluded. Between-cow coefficient of variation in MUN concentration and MNE was 0.13 and 0.07 in the production data set and 0.11 and 0.08 in the flow data set, respectively. Based on residual variance, the best model for predicting MNE developed from the production data set was MNE (g/kg) = 238 + 7.0 × milk yield (MY; kg/d) - 0.064 × MY² - 2.7 × MUN (mg/dL) - 0.10 × body weight (kg). For the flow data set, including both MUN and rumen ammonia N concentration with MY in the model accounted for more variation in MNE than when either term was used with MY alone. The best model for predicting UN excretion developed from the production data set (n=443) was UN (g/d) = -29 + 4.3 × dry matter intake (kg/d) + 4.3 × MUN + 0.14 × body weight. Between-cow variation had a smaller influence on the association of MUN with MNE and UN output than published estimates of these relationships based on treatment means, in which differences in MUN generally arise from variation in dietary crude protein concentration.
For the flow data set, between-cow variation in MUN and rumen ammonia N concentrations was positively associated with total-tract organic matter digestibility. In conclusion, evaluation of phenotypic variation in MUN indicated that between-cow variation in MUN had a smaller effect on MNE compared with published responses of MUN to dietary crude protein concentration, suggesting that a closer control over diet composition relative to requirements has greater potential to improve MNE and lower UN on farm than genetic selection. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
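    The two prediction equations reported above translate directly into code (inputs in the abstract's units: milk yield MY in kg/d, MUN in mg/dL, dry matter intake in kg/d, body weight in kg):

```python
def predict_mne(milk_yield, mun, body_weight):
    """MNE (g/kg) from the production data set equation in the abstract."""
    return (238 + 7.0 * milk_yield - 0.064 * milk_yield ** 2
            - 2.7 * mun - 0.10 * body_weight)

def predict_un(dry_matter_intake, mun, body_weight):
    """Urinary N excretion (g/d) from the reported production equation."""
    return -29 + 4.3 * dry_matter_intake + 4.3 * mun + 0.14 * body_weight
```

    The signs follow the biology described: for a fixed milk yield, higher MUN lowers predicted N-use efficiency and raises predicted urinary N output.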

  20. Effects of Random Circuit Fabrication Errors on Small Signal Gain and on Output Phase In a Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.

    2011-10-01

    Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.

  1. High dose-per-pulse electron beam dosimetry: Commissioning of the Oriatron eRT6 prototype linear accelerator for preclinical use.

    PubMed

    Jaccard, Maud; Durán, Maria Teresa; Petersson, Kristoffer; Germond, Jean-François; Liger, Philippe; Vozenin, Marie-Catherine; Bourhis, Jean; Bochud, François; Bailat, Claude

    2018-02-01

    The Oriatron eRT6 is an experimental high dose-per-pulse linear accelerator (linac) which was designed to deliver an electron beam with variable dose-rates, ranging from a few Gy/min up to hundreds of Gy/s. It was built to study the radiobiological effects of high dose-per-pulse/dose-rate electron beam irradiation, in the context of preclinical and cognitive studies. In this work, we report on the commissioning and beam monitoring of the Oriatron eRT6 prototype linac. The beam was characterized in different steps. The output stability was studied by performing repeated measurements over a period of 20 months. The relative output variations caused by changing beam parameters, such as the temporal electron pulse width, the pulse repetition frequency and the pulse amplitude were also analyzed. Finally, depth dose curves and field sizes were measured for two different beam settings, resulting in one beam with a conventional radiotherapy dose-rate and one with a much higher dose-rate. Measurements were performed with Gafchromic EBT3 films and with a PTW Advanced Markus ionization chamber. In addition, we developed a beam current monitoring system based on the signals from an induction torus positioned at the beam exit of the waveguide and from a graphite beam collimator. The stability of the output over repeated measurements was found to be good, with a standard deviation smaller than 1%. However, non-negligible day-to-day variations of the beam output were observed. Those output variations showed different trends depending on the dose-rate. The analysis of the relative output variation as a function of various beam parameters showed that in a given configuration, the dose-rate could be reliably varied over three orders of magnitude. Interdependence effects on the output variation between the parameters were also observed. The beam energy and field size were found to be slightly dose-rate-dependent and suitable mainly for small animal irradiation. 
The beam monitoring system was able to measure in a reproducible way the total charge of electrons that exit the machine, as long as the electron pulse amplitude remains above a given threshold. Furthermore, we were able to relate the charge measured with the monitoring system to the absorbed dose in a solid water phantom. The Oriatron eRT6 was successfully commissioned for preclinical use and is currently in full operation, with studies being performed on the radiobiological effects of high dose-per-pulse irradiation. © 2017 American Association of Physicists in Medicine.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
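    A greatly simplified stand-in can convey the "tightest prediction containing all observations" idea: fit a polynomial mean, then take the smallest number of residual standard deviations whose symmetric band covers every data point. This least-squares sketch is only a caricature of the papers' optimization-based RPM formulations:

```python
import numpy as np

def tightest_band(x, y, degree=2):
    """Polynomial mean plus the smallest symmetric band, measured in
    residual SDs, that contains every observation (illustrative sketch)."""
    coeffs = np.polyfit(x, y, degree)
    mean = np.polyval(coeffs, x)
    resid = y - mean
    c = np.max(np.abs(resid)) / resid.std()   # SD multiples covering all data
    return mean - c * resid.std(), mean + c * resid.std(), c

x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - x**2 + 0.1 * np.sin(37.0 * x)   # deterministic test data
lower, upper, c = tightest_band(x, y)
```

    In the papers' formulations the band is obtained by convex optimization over the parameter distribution, which is what permits the rigorous reliability bound; the sketch above only mimics the coverage property.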

  4. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifier onboard the Hard X-ray Modulation Telescope (HXMT). Implementing the design on an FPGA (Field Programmable Gate Array) in VHDL and adding a random constituent, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
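    The two statistical ingredients of such a generator, a double-exponential pulse shape and negative-exponentially distributed intervals between adjacent pulses, can be sketched in software. The time constants, event rate, and Gaussian amplitude choice below are illustrative assumptions, not the HXMT front-end parameters:

```python
import numpy as np

def pulse(t, amp, tau_rise=0.05e-6, tau_fall=5e-6):
    """Double-exponential pulse shape for t >= 0 (assumed time constants)."""
    t = np.asarray(t, float)
    return amp * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

def event_times(rate_hz, duration_s, rng):
    """Poisson arrivals: adjacent events are separated by exponentially
    distributed intervals, matching the generator's interval statistics."""
    dts = rng.exponential(1.0 / rate_hz, size=int(3 * rate_hz * duration_s) + 10)
    times = np.cumsum(dts)
    return times[times < duration_s]

rng = np.random.default_rng(7)
times = event_times(rate_hz=1e4, duration_s=1e-2, rng=rng)
amps = rng.normal(1.0, 0.1, times.size)   # programmable amplitude distribution
```

    Summing `pulse(t - t_i, a_i)` over all events then yields a waveform qualitatively like the pre-amplifier output the FPGA design emulates in hardware.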

  5. Modeling heterogeneous responsiveness of intrinsic apoptosis pathway

    PubMed Central

    2013-01-01

    Background Apoptosis is a cell suicide mechanism that enables multicellular organisms to maintain homeostasis and to eliminate individual cells that threaten the organism’s survival. Depending on the type of stimulus, apoptosis can be propagated by the extrinsic pathway or the intrinsic pathway. The comprehensive understanding of the molecular mechanism of apoptotic signaling allows for development of mathematical models, aiming to elucidate dynamical and systems properties of apoptotic signaling networks. There have been extensive efforts in modeling deterministic apoptosis networks accounting for the average behavior of a population of cells. Cellular networks, however, are inherently stochastic, and significant cell-to-cell variability in apoptosis response has been observed at the single-cell level. Results To address the inevitable randomness in the intrinsic apoptosis mechanism, we develop a theoretical and computational modeling framework of the intrinsic apoptosis pathway at the single-cell level, accounting for both deterministic and stochastic behavior. Our deterministic model, adapted from the well-accepted Fussenegger model, shows that an additional positive feedback between the executioner caspase and the initiator caspase plays a fundamental role in yielding the desired property of bistability. We then examine the impact of intrinsic fluctuations of biochemical reactions, viewed as intrinsic noise, and natural variation of protein concentrations, viewed as extrinsic noise, on the behavior of the intrinsic apoptosis network. Histograms of the steady-state output at varying input levels show that the intrinsic noise could elicit a wider region of bistability over that of the deterministic model.
However, the system stochasticity due to intrinsic fluctuations, such as the noise of steady-state response and the randomness of response delay, shows that the intrinsic noise in general is insufficient to produce significant cell-to-cell variations at physiologically relevant level of molecular numbers. Furthermore, the extrinsic noise represented by random variations of two key apoptotic proteins, namely Cytochrome C and inhibitor of apoptosis proteins (IAP), is modeled separately or in combination with intrinsic noise. The resultant stochasticity in the timing of intrinsic apoptosis response shows that the fluctuating protein variations can induce cell-to-cell stochastic variability at a quantitative level agreeing with experiments. Finally, simulations illustrate that the mean abundance of fluctuating IAP protein is positively correlated with the degree of cellular stochasticity of the intrinsic apoptosis pathway. Conclusions Our theoretical and computational study shows that the pronounced non-genetic heterogeneity in intrinsic apoptosis responses among individual cells plausibly arises from extrinsic rather than intrinsic origin of fluctuations. In addition, it predicts that the IAP protein could serve as a potential therapeutic target for suppression of the cell-to-cell variation in the intrinsic apoptosis responsiveness. PMID:23875784

  6. Magnetic switch coupling to synchronize magnetic modulators

    DOEpatents

    Reed, K.W.; Kiekel, P.

    1999-04-27

    Apparatus for synchronizing the output pulses from a pair of magnetic switches is disclosed. An electrically conductive loop is provided between the pair of switches with the loop having windings about the core of each of the magnetic switches. The magnetic coupling created by the loop removes voltage and timing variations between the outputs of the two magnetic switches caused by any of a variety of factors. The only remaining variation is a very small fixed timing offset caused by the geometry and length of the loop itself. 13 figs.

  7. Statistical study of defects caused by primary knock-on atoms in fcc Cu and bcc W using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.

    2015-12-01

    We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids input/output (I/O) of several terabytes of atomic position data when exploring a large number of random directions, and there is no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
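    The "how many random directions until the average is steady" question can be posed as a running-mean convergence check. The criterion below (the running mean stays within a relative tolerance of the final mean from some sample onward) is our sketch; the abstract does not state the authors' exact criterion:

```python
import numpy as np

def directions_to_converge(counts, tol=0.05):
    """Smallest number of sampled PKA directions after which the running
    mean defect count stays within a relative tolerance of the final mean."""
    counts = np.asarray(counts, float)
    running = np.cumsum(counts) / np.arange(1, counts.size + 1)
    within = np.abs(running - running[-1]) <= tol * abs(running[-1])
    for i in range(counts.size):
        if within[i:].all():              # steady from direction i+1 onward
            return i + 1
    return counts.size

rng = np.random.default_rng(5)
frenkel_counts = rng.poisson(30, 200).astype(float)   # synthetic cascade counts
n_steady = directions_to_converge(frenkel_counts)
```

    Applied to per-direction Frenkel-pair counts, such a criterion yields sample sizes in the tens of directions, consistent with the 50-80 directions reported for W and Cu.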

  8. A Polymer Optical Fiber Temperature Sensor Based on Material Features.

    PubMed

    Leal-Junior, Arnaldo; Frizera-Neto, Anselmo; Marques, Carlos; Pontes, Maria José

    2018-01-19

    This paper presents a polymer optical fiber (POF)-based temperature sensor. The operation principle of the sensor is the variation in the POF mechanical properties with the temperature variation. Such mechanical property variation leads to a variation in the POF output power when a constant stress is applied to the fiber due to the stress-optical effect. The fiber mechanical properties are characterized through a dynamic mechanical analysis, and the output power variation with different temperatures is measured. The stress is applied to the fiber by means of a 180° curvature, and supports are positioned on the fiber to inhibit the variation in its curvature with the temperature variation. Results show that the sensor proposed has a sensitivity of 1.04 × 10⁻³ °C⁻¹, a linearity of 0.994, and a root mean squared error of 1.48 °C, which indicates a relative error below 2%, lower than those obtained for intensity-variation-based temperature sensors. Furthermore, the sensor is able to operate at temperatures up to 110 °C, which is higher than those reported for similar POF sensors in the literature.
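    The reported figures of merit (sensitivity, linearity, RMSE of the recovered temperature) come from a straight-line calibration, which can be reproduced as follows. The calibration data here are fabricated for illustration, with a slope chosen near the reported sensitivity:

```python
import numpy as np

def calibrate(temperature_c, norm_power):
    """Fit power vs temperature; return sensitivity (slope), R^2, and the
    RMSE (in °C) of temperatures recovered by inverting the fit."""
    t = np.asarray(temperature_c, float)
    p = np.asarray(norm_power, float)
    slope, intercept = np.polyfit(t, p, 1)
    fit = slope * t + intercept
    r2 = 1.0 - np.sum((p - fit) ** 2) / np.sum((p - p.mean()) ** 2)
    t_back = (p - intercept) / slope      # invert the sensor model
    rmse = np.sqrt(np.mean((t_back - t) ** 2))
    return slope, r2, rmse

rng = np.random.default_rng(2)
t = np.linspace(20.0, 110.0, 19)
p = 1.0 - 1.04e-3 * t + rng.normal(0.0, 0.0015, t.size)   # fabricated data
sens, r2, rmse = calibrate(t, p)
```

    The RMSE computed this way measures how well temperature can be read back from output power, which is the sense in which the paper quotes 1.48 °C.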

  9. The relative nature of fertilization success: Implications for the study of post-copulatory sexual selection

    PubMed Central

    2008-01-01

    Background The determination of genetic variation in sperm competitive ability is fundamental to distinguish between post-copulatory sexual selection models based on good genes vs compatible genes. The sexy-sperm and the good-sperm hypotheses for the evolution of polyandry require additive (intrinsic) effects of genes influencing sperm competitiveness, whereas the genetic incompatibility hypothesis invokes non-additive genetic effects. A male's sperm competitive ability is typically estimated from his fertilization success, a measure that is dependent on the ability of rival sperm competitors to fertilize the ova. It is well known that fertilization success may be conditional on genotypic interactions among males as well as between males and females. However, the consequences of effects arising from the random sampling of sperm competitors upon the estimation of genetic variance in sperm competitiveness have been overlooked. Here I perform simulations of mating trials in the context of sibling analysis to investigate whether the ability to detect additive genetic variance underlying the sperm competitiveness phenotype is hindered by the relative nature of fertilization success measurements. Results Fertilization success values yield biased estimates of sperm competitive ability. Furthermore, asymmetries among males in the errors committed when estimating sperm competitive abilities are likely to exist as long as males exhibit variation in sperm competitiveness. Critically, random effects arising from the relative nature of fertilization success lead to an underestimation of the underlying additive genetic variance in sperm competitive ability. Conclusion The results show that, regardless of the existence of genotypic interactions affecting the output of sperm competition, fertilization success is not a perfect predictor of sperm competitive ability because of the stochasticity of the background used to obtain fertilization success measures.
Random effects need to be considered in the debate over the maintenance of genetic variation in sperm competitiveness, and when testing good-genes and compatible-genes processes as explanations of polyandrous behaviour using repeatability/heritability data in sperm competitive ability. These findings support the notion that the genetic incompatibility hypothesis needs to be treated as an alternative hypothesis, rather than a null hypothesis, in studies that fail to detect intrinsic sire effects on the sperm competitiveness phenotype. PMID:18474087
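    The central point, that success against a randomly drawn rival is a noisy proxy for intrinsic competitive ability, is easy to reproduce in a toy simulation. The gamma ability distribution and the "fair raffle" success rule below are arbitrary illustrative choices, not the paper's simulation design:

```python
import numpy as np

def relative_success(ability, rng):
    """Each male's fertilization success against one randomly sampled
    rival: own ability / (own + rival), a 'fair raffle' sketch."""
    rivals = rng.choice(ability, size=ability.size)
    return ability / (ability + rivals)

rng = np.random.default_rng(3)
ability = rng.gamma(5.0, 1.0, 1000)        # latent competitive abilities
success = relative_success(ability, rng)
# success tracks ability only imperfectly: the luck of the rival draw adds
# noise, so variance estimated from success understates the true variance
r = np.corrcoef(ability, success)[0, 1]
```

    Averaging success over more rivals per male tightens the correlation, which mirrors the paper's conclusion that the stochastic competitive background, not the phenotype itself, limits heritability estimates.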

  10. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    NASA Technical Reports Server (NTRS)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

  11. Cortical activity predicts good variation in human motor output.

    PubMed

    Babikian, Sarine; Kanso, Eva; Kutch, Jason J

    2017-04-01

    Human movement patterns have been shown to be particularly variable if many combinations of activity in different muscles all achieve the same task goal (i.e., are goal-equivalent). The nervous system appears to automatically vary its output among goal-equivalent combinations of muscle activity to minimize muscle fatigue or distribute tissue loading, but the neural mechanism of this "good" variation is unknown. Here we use a bimanual finger task, electroencephalography (EEG), and machine learning to determine if cortical signals can predict goal-equivalent variation in finger force output. 18 healthy participants applied left and right index finger forces to repeatedly perform a task that involved matching a total (sum of right and left) finger force. As in previous studies, we observed significantly more variability in goal-equivalent muscle activity across task repetitions compared to variability in muscle activity that would not achieve the goal: participants achieved the task in some repetitions with more right finger force and less left finger force (right > left) and in other repetitions with less right finger force and more left finger force (left > right). We found that EEG signals from the 500 milliseconds (ms) prior to each task repetition could make a significant prediction of which repetitions would have right > left and which would have left > right. We also found that cortical maps of sites contributing to the prediction contain both motor and pre-motor representation in the appropriate hemisphere. Thus, goal-equivalent variation in motor output may be implemented at a cortical level.

  12. The use of low output laser therapy to accelerate healing of diabetic foot ulcers: a randomized prospective controlled trial

    NASA Astrophysics Data System (ADS)

    Naidu, S. V. L. G.; Subapriya, S.; Yeoh, C. N.; Soosai, S.; Shalini, V.; Harwant, S.

    2005-11-01

    The aim of this study was to assess the effects of low output laser therapy as an adjuvant treatment in grade 1 diabetic foot ulcers. Methods: Sixteen patients were randomly divided equally into two groups. Group A had daily dressing only, while group B had low output laser therapy instituted five days a week in addition to daily dressing. Serial measurement of the ulcer was done weekly using digital photography and analyzed. Results: The rate of healing in group A was 10.42 mm²/week, and in group B was 66.14 mm²/week. The difference in the rates of healing was statistically significant, p<0.05. Conclusion: Laser therapy as an adjuvant treatment accelerated diabetic ulcer healing approximately six-fold over the six-week study period.

  13. Thermionic converter output as a function of collector temperature

    NASA Technical Reports Server (NTRS)

    Stark, G.; Saunders, M.; Lieb, D.

    1980-01-01

    Surprisingly few data are available on the variation of thermionic converter output with collector temperature. In this study the output power density has been measured as a function of collector temperature (at a fixed emitter temperature of 1650 K) for six converters with different electrode combinations. Collector temperatures ranged from 750 to 1100 K. For collector temperatures below 900 K, converters built with sublimed molybdenum oxide collectors gave the best performance.

  14. Pseudo-Random Number Generator Based on Coupled Map Lattices

    NASA Astrophysics Data System (ADS)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative application of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity in finite-precision computer realizations, and fast random number generation. This pseudo-random number generator can serve as an ideal synchronous and self-synchronizing stream cipher system for secure communications.
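    The coupled-map-lattice idea above can be illustrated with a minimal sketch. The lattice size, coupling strength, transient length, and bit-extraction rule below are illustrative assumptions, not the parameters used by Lü, Wang and Hu:

```python
def cml_prng_bits(n_bits, sites=8, eps=0.95, seed=0.123456789):
    """Pseudo-random bits from a one-way (ring-)coupled logistic map lattice.

    Illustrative sketch only: lattice size, coupling strength `eps`,
    and the bit-extraction rule are assumptions, not the paper's values.
    """
    f = lambda x: 4.0 * x * (1.0 - x)   # fully chaotic logistic map on [0, 1]
    # stagger the initial site values so the lattice does not start uniform
    x = [(seed + i * 0.0137) % 1.0 for i in range(sites)]
    # discard a transient so the lattice settles onto the chaotic attractor
    for _ in range(1000):
        x = [(1 - eps) * f(x[i]) + eps * f(x[i - 1]) for i in range(sites)]
    bits = []
    while len(bits) < n_bits:
        # one-way coupling: each site is driven by its left neighbour
        x = [(1 - eps) * f(x[i]) + eps * f(x[i - 1]) for i in range(sites)]
        for xi in x:
            # take a middling-significance bit of each site's value
            bits.append(int(xi * 2 ** 16) & 1)
    return bits[:n_bits]

bits = cml_prng_bits(10000)
print(sum(bits) / len(bits))   # should be close to 0.5
```

    The list comprehension updates all sites synchronously from the previous state, which is what makes the coupling genuinely one-way rather than sequential.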

  15. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable output data rate. In this paper, the principles of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
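    The photon-statistics principle can be sketched in simulation. The Poisson model for photon arrivals is standard, but the parity extraction rule and mean photon count below are assumptions of this sketch, not the paper's design:

```python
import math
import random

def photon_parity_bits(n, mean_photons=1.0, seed=42):
    """Bits from simulated single-photon counting.

    Illustrative stand-in for a QIS jot: photon counts are drawn from a
    Poisson distribution (standard arrival model); taking the parity of
    each count as a raw bit is an assumption of this sketch."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        # draw a Poisson photon count by multiplying uniforms (Knuth's method)
        L, k, p = math.exp(-mean_photons), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        bits.append(k & 1)   # parity of the photon count
    return bits

bits = photon_parity_bits(20000)
p1 = sum(bits) / len(bits)
# Shannon entropy per raw bit (1.0 would be ideal)
H = -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)
print(round(p1, 3), round(H, 3))
```

    For a mean of λ photons, P(odd count) = (1 − e^(−2λ))/2 ≈ 0.43 at λ = 1, so the raw parity bits are slightly biased; this is one reason practical generators follow the counting stage with entropy assessment and randomness extraction.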

  16. Analysis of Summertime Convective Initiation in Central Alabama Using the Land Information System

    NASA Technical Reports Server (NTRS)

    James, Robert S.; Case, Jonathan L.; Molthan, Andrew L.; Jedlovec, Gary J.

    2011-01-01

    During the summer months in the southeastern United States, convective initiation presents a frequent challenge to operational forecasters. Thunderstorm development has traditionally been regarded as random because of its disorganized, sporadic appearance and the lack of large-scale atmospheric forcing. Horizontal variations in land surface characteristics such as soil moisture, soil type, land cover, and vegetation cover could serve as a focusing mechanism for afternoon convection during the summer months. The NASA Land Information System (LIS) provides a stand-alone land surface modeling framework that incorporates these varying soil and vegetation properties, antecedent precipitation, and atmospheric forcing to represent the soil state at high resolution. The use of LIS as a diagnostic tool may help forecasters identify boundaries in land surface characteristics that correlate with favored regions of convective initiation. The NASA Short-term Prediction Research and Transition (SPoRT) team has been collaborating with the National Weather Service Office in Birmingham, AL to help incorporate LIS products into their operational forecasting methods. This paper highlights selected convective case dates from summer 2009 when synoptic forcing was weak, and identifies boundaries in land surface characteristics that may have contributed to convective initiation. The LIS output depicts the effects of increased sensible heat flux from urban areas on the development of convection, as well as convection along gradients in land surface characteristics and in surface sensible and latent heat fluxes. These features may promote mesoscale circulations and/or feedback processes that can either enhance or inhibit convection. With such output previously unavailable to operational forecasters, LIS provides a new tool to help reduce the apparent randomness of summertime convective initiation.

  17. An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    1987-01-01

    A concept was developed for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories arbitrarily close to the unconstrained gain solution at each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation; many aerospace systems fall into this category.

  18. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining these indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices.
The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale parameter causes significant variation in energy production. The presence of the two distribution parameters (the Weibull shape and scale) among the top three influential variables emphasizes the importance of (a) choosing an appropriate distribution to model the wind regime at a site and (b) accurately estimating its parameters. This is the most important conclusion of this research because it opens a field for further research that the authors believe could change the wind energy field tremendously.
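    The "brute force" first-order index named above can be sketched as a double-loop Monte Carlo estimator. The toy model and sample sizes below are assumptions of this sketch, not the wind-assessment model from the study:

```python
import random

def first_order_index(model, i, dim, n_outer=500, n_inner=200, seed=1):
    """Brute-force (double-loop) estimate of the first-order sensitivity
    index S_i = Var(E[Y | X_i]) / Var(Y), for independent U(0,1) inputs.

    Minimal sketch of the 'brute force method' on a toy model; the
    model and sample sizes are illustrative assumptions."""
    rng = random.Random(seed)

    def var(v):
        m = sum(v) / len(v)
        return sum((y - m) ** 2 for y in v) / len(v)

    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                        # fix X_i at one value
        ys = []
        for _ in range(n_inner):                 # average over the other inputs
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            ys.append(model(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    return var(cond_means) / var(all_y)

model = lambda x: x[0] + 2.0 * x[1]              # analytically S1 = 0.2, S2 = 0.8
print(round(first_order_index(model, 0, 2), 2),
      round(first_order_index(model, 1, 2), 2))
```

    The double loop makes the estimator's cost quadratic in sample size, which is why the abstract contrasts it with more economical "best practice" estimation procedures.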

  19. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, J.W.; Biscardi, R.W.

    1991-03-19

    An electronic measurement circuit is disclosed for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage to control the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals. 2 figures.

  20. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, John W.; Biscardi, Richard W.

    1991-01-01

    An electronic measurement circuit for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage to control the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals.

  1. Spray outputs from a variable-rate sprayer manipulated with PWM solenoid valves

    USDA-ARS?s Scientific Manuscript database

    Pressure fluctuations during variable-rate spray applications can affect nozzle flow rate fluctuations, resulting in spray outputs that do not coincide with the prescribed canopy structure volume. Variations in total flow rate discharged from 40 nozzles, each coupled with a pulse-width-modulated (PW...

  2. Response Modality Variations Affect Determinations of Children's Learning Styles.

    ERIC Educational Resources Information Center

    Janowitz, Jeffrey M.

    The Swassing-Barbe Modality Index (SBMI) uses visual, auditory, and tactile inputs, but only reconstructed output, to measure children's modality strengths. In this experiment, the SBMI's three input modalities were crossed with two output modalities (spoken and drawn) in addition to the reconstructed standard to result in nine treatment…

  3. A random utility based estimation framework for the household activity pattern problem.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper develops a random utility based estimation framework for the Household Activity Pattern Problem (HAPP). Based on the realization that outputs of complex activity-travel decisions form a continuous pattern in the space-time dimension, the es...

  4. Thickness related textural properties of retinal nerve fiber layer in color fundus images.

    PubMed

    Odstrcilik, Jan; Kolar, Radim; Tornow, Ralf-Peter; Jan, Jiri; Budai, Attila; Mayer, Markus; Vodakova, Martina; Laemmer, Robert; Lamos, Martin; Kuna, Zdenek; Gazarek, Jiri; Kubena, Tomas; Cernosek, Pavel; Ronzhina, Marina

    2014-09-01

    Images of the ocular fundus are routinely utilized in ophthalmology. Since an examination using a fundus camera is a relatively fast and cheap procedure, it can serve as a diagnostic tool for screening of retinal diseases such as glaucoma. One of the glaucoma symptoms is progressive atrophy of the retinal nerve fiber layer (RNFL), resulting in variations of the RNFL thickness. Here, we introduce a novel approach to capture these variations using computer-aided analysis of the RNFL textural appearance in standard and easily available color fundus images. The proposed method uses features based on Gaussian Markov random fields and local binary patterns, together with various regression models, for prediction of the RNFL thickness. The approach allows description of the changes in RNFL texture that directly reflect variations in the RNFL thickness. Evaluation of the method is carried out on 16 normal ("healthy") and 8 glaucomatous eyes. We achieved significant correlation (normal: ρ=0.72±0.14, p≪0.05; glaucomatous: ρ=0.58±0.10, p≪0.05) between the model-predicted output and the RNFL thickness measured by optical coherence tomography, which is currently regarded as a standard glaucoma assessment device. The evaluation thus revealed good applicability of the proposed approach to measuring possible RNFL thinning. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
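    Of the two texture feature families named above, the local binary pattern is the simpler and can be sketched directly; the Gaussian Markov random field features and the regression models of the paper are not reproduced here:

```python
def lbp_image(img):
    """8-neighbour local binary pattern codes for a 2-D grayscale image.

    Minimal sketch of the classic LBP operator: each interior pixel gets
    an 8-bit code, one bit per neighbour that is >= the centre value."""
    h, w = len(img), len(img[0])
    # clockwise neighbour offsets, starting at the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= center:
                    code |= 1 << bit   # set the bit for this neighbour
            row.append(code)
        out.append(row)
    return out

patch = [[9, 9, 9],
         [1, 5, 9],
         [1, 1, 1]]
print(lbp_image(patch))   # → [[15]]  (the four upper/right neighbours win)
```

    A histogram of such codes over an image region is the usual LBP texture descriptor fed to a classifier or, as in the paper, a regression model.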

  5. From samples to populations in retinex models

    NASA Astrophysics Data System (ADS)

    Gianini, Gabriele

    2017-05-01

    Some spatial color algorithms, such as Brownian Milano retinex (MI-retinex) and random spray retinex (RSR), are based on sampling. In Brownian MI-retinex, memoryless random walks (MRWs) explore the neighborhood of a pixel and are then used to compute its output. Considering the relative redundancy and inefficiency of MRW exploration, the algorithm RSR replaced the walks by samples of points (the sprays). Recent works point to the fact that a mapping from the sampling formulation to the probabilistic formulation of the corresponding sampling process can offer useful insights into the models, at the same time featuring intrinsically noise-free outputs. The paper continues the development of this concept and shows that the population-based versions of RSR and Brownian MI-retinex can be used to obtain analytical expressions for the outputs of some test images. The comparison of the two analytic expressions from RSR and from Brownian MI-retinex demonstrates not only that the two outputs are, in general, different but also that they depend in a qualitatively different way upon the features of the image.

  6. Dose uniformity analysis among ten 16-slice same-model CT scanners.

    PubMed

    Erdi, Yusuf Emre

    2012-01-01

    With the introduction of multislice scanners, computed tomographic (CT) dose optimization has become important. The patient-absorbed dose may differ among scanners even when they are the same type and model. To investigate the dose output variation of CT scanners, we designed this study to analyze the dose outputs of 10 same-model CT scanners using 3 clinical protocols. Ten GE Lightspeed (GE Healthcare, Waukesha, Wis) 16-slice scanners located at the main campus and various satellite locations of our institution were included in this study. All dose measurements were performed using poly(methyl methacrylate) (PMMA) head (diameter, 16 cm) and body (diameter, 32 cm) phantoms manufactured by Radcal (Radcal Corp, Monrovia, Calif), using a 9095 multipurpose analyzer with a 10 × 9-3CT ion chamber, both from the same manufacturer. The ion chamber was inserted at the peripheral and central axis locations, and the volume CT dose index (CTDIvol) was calculated as a weighted average of the doses at those locations. Three clinical protocol settings, for adult head, high-resolution chest, and adult abdomen, were used for dose measurements. We observed a CTDIvol variation of up to 9.4% for the adult head protocol, the largest variation among the protocols; the head protocol, however, uses higher milliampere-second values than the other 2 protocols. Most of the measured values were less than the system-stored CTDIvol values. It is important to note that a reduction in dose output from tubes as they age is expected, in addition to the intrinsic radiation output fluctuations of a given scanner. Although the same model of CT scanner was used throughout this study, CTDIvol variation can be seen in standard patient scanning protocols for the head, chest, and abdomen. The compound effect of the dose variation may be larger with higher milliampere settings and multiphase, multilocation CT scans.
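    The weighted average mentioned above follows the standard CTDI convention (one third centre, two thirds periphery, divided by pitch). The phantom readings below are invented illustrative numbers, not the paper's measurements:

```python
def ctdi_vol(center_dose, periphery_dose, pitch=1.0):
    """Volume CT dose index (mGy) from centre and peripheral phantom doses.

    Uses the standard weighting CTDIw = center/3 + 2*periphery/3 and
    CTDIvol = CTDIw / pitch; the calling values below are hypothetical."""
    ctdi_w = center_dose / 3.0 + 2.0 * periphery_dose / 3.0
    return ctdi_w / pitch

# hypothetical head-protocol readings (mGy) from two scanners of one model
scanner_a = ctdi_vol(60.0, 62.0)
scanner_b = ctdi_vol(55.0, 57.0)
variation = abs(scanner_a - scanner_b) / scanner_a * 100.0
print(round(scanner_a, 2), round(scanner_b, 2), round(variation, 1))
```

    Comparing CTDIvol computed this way across same-model scanners is exactly the kind of percent-variation figure the study reports (up to 9.4% for the head protocol).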

  7. Flight test evaluation of predicted light aircraft drag, performance, and stability

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.

    1979-01-01

    A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver, such as a pull-up (from V_max to V_stall) and pushover (to V_max for level flight). The technique, which extends the parameter identification methods of Iliff and Taylor to nonlinear equations of motion and includes provisions for internal data compatibility improvement as well, was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. Flow charts, listings, and sample inputs and outputs for the relevant routines are provided as appendices. The technique was applied to flight data taken on the ATLIT aircraft. Lack of adequate knowledge of the correct full-throttle variation of thrust horsepower with true airspeed, together with considerable internal data inconsistency, made it impossible to apply the trajectory matching features of the technique.

  8. A nonlinear control scheme based on dynamic evolution path theory for improved dynamic performance of boost PFC converter working on nonlinear features.

    PubMed

    Mohanty, Pratap Ranjan; Panda, Anup Kumar

    2016-11-01

    This paper is concerned with improving the dynamic performance of a boost PFC converter under large random load fluctuations, ensuring unity power factor (UPF) at the source end and regulated voltage at the load side. To obtain such performance, a nonlinear controller based on dynamic evolution path theory is designed, and its robustness is examined under both heavy and light loading conditions. The total harmonic distortion (%THD) and the zero-crossover dead zone of the input current are significantly reduced, and very short response times of the input current and output voltage to load and reference variations are observed. A simulation model of the proposed system is designed and realized using a dSPACE 1104 signal processor for a 390 V DC, 500 W prototype. The relevant experimental and simulation waveforms are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  10. An Evaluation of Two Methods for Generating Synthetic HL7 Segments Reflecting Real-World Health Information Exchange Transactions

    PubMed Central

    Mwogi, Thomas S.; Biondich, Paul G.; Grannis, Shaun J.

    2014-01-01

    Motivated by the need for readily available data for testing an open-source health information exchange platform, we developed and evaluated two methods for generating synthetic messages. The methods used HL7 version 2 messages obtained from the Indiana Network for Patient Care. Data from both methods were analyzed to assess how effectively the output reflected the original 'real-world' data. The Markov Chain method (MCM) used an algorithm based on a transition probability matrix, while the Music Box model (MBM) randomly selected messages of a particular trigger type from the original data to generate new messages. The MBM was faster, generated shorter messages, and exhibited less variation in message length. The MCM required more computational power and generated longer messages with more length variability. Both methods exhibited adequate coverage, producing a high proportion of messages consistent with the original messages, and both yielded similar rates of valid messages. PMID:25954458
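    The transition-probability idea behind the MCM can be sketched at the level of HL7 segment-type sequences. Real HL7 v2 parsing and the paper's exact matrix construction are omitted, and the segment sequences below are invented examples:

```python
import random
from collections import defaultdict

def train_transitions(sequences):
    """First-order transition probabilities between segment types,
    with an artificial 'END' state appended to each training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:] + ["END"]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def generate(trans, start="MSH", rng=random.Random(7), max_len=20):
    """Walk the chain from `start` until 'END' (or a length cap)."""
    seq, state = [start], start
    while state != "END" and len(seq) < max_len:
        r, acc = rng.random(), 0.0
        for state, p in trans[state].items():   # roulette-wheel draw
            acc += p
            if r < acc:
                break
        if state != "END":
            seq.append(state)
    return seq

# invented segment-type sequences standing in for parsed HL7 v2 messages
corpus = [["MSH", "PID", "PV1", "OBX"], ["MSH", "PID", "OBX", "OBX"]]
trans = train_transitions(corpus)
print(generate(trans))
```

    Because the chain is trained on observed adjacencies, generated sequences stay locally plausible while still varying in length, matching the abstract's observation that MCM output is longer and more variable than MBM's resampling.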

  11. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

    Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension: the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural networks, (4) the cortex of the cerebellum, (5) the Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
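    The two-matrix structure described above (fixed random A, modifiable C) can be sketched at toy scale; the dimensions and activation radius below are assumptions chosen to make the example run quickly, not values from Kanerva's design:

```python
import random

def make_sdm(n_hard=200, dim=64, radius=28, rng=random.Random(0)):
    """Tiny sparse distributed memory: a fixed random address matrix A
    and a modifiable counter matrix C. Sizes are toy assumptions."""
    A = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_hard)]
    C = [[0] * dim for _ in range(n_hard)]
    return A, C, radius

def _active(A, radius, addr):
    # hard locations within Hamming distance `radius` of the address
    return [i for i, a in enumerate(A)
            if sum(x != y for x, y in zip(a, addr)) <= radius]

def write(mem, addr, data):
    A, C, radius = mem
    for i in _active(A, radius, addr):
        for j, bit in enumerate(data):
            C[i][j] += 1 if bit else -1   # bipolar counter update

def read(mem, addr):
    A, C, radius = mem
    act = _active(A, radius, addr)
    sums = [sum(C[i][j] for i in act) for j in range(len(C[0]))]
    return [1 if s > 0 else 0 for s in sums]

rng = random.Random(1)
mem = make_sdm()
pattern = [rng.randint(0, 1) for _ in range(64)]
write(mem, pattern, pattern)          # autoassociative store
noisy = pattern[:]
noisy[0] ^= 1                         # flip one bit of the retrieval cue
print(read(mem, noisy) == pattern)    # → True
```

    Because many hard locations activate for both the original address and a slightly corrupted cue, the summed counters still vote for the stored pattern, which is the content-addressable behaviour that distinguishes SDM from conventional computer memory.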

  12. Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  13. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
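    The postprocessing chain named above (periodic sampling, self-differencing, selecting least-significant bits) can be sketched in software. A logistic map stands in for the chaotic laser intensity, and the ADC depth, delay, and number of kept bits are assumptions of this sketch:

```python
import random

def chaotic_intensity(n, seed=3):
    """Stand-in chaotic waveform (logistic map); the experiment samples a
    photodetected laser intensity instead."""
    rng = random.Random(seed)
    x, out = rng.random(), []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)
        out.append(x)
    return out

def random_bits(samples, adc_bits=8, delay=1, keep=5):
    """Self-differencing plus least-significant-bit selection:
    digitize, subtract a delayed copy modulo the ADC range, then keep
    the `keep` LSBs of each difference. Parameter values are assumed."""
    levels = 2 ** adc_bits
    codes = [min(int(s * levels), levels - 1) for s in samples]
    bits = []
    for a, b in zip(codes[delay:], codes):
        diff = (a - b) % levels        # self-differenced sample
        for k in range(keep):          # select the `keep` LSBs
            bits.append((diff >> k) & 1)
    return bits

bits = random_bits(chaotic_intensity(4000))
print(round(sum(bits) / len(bits), 2))
```

    Self-differencing removes the slowly varying envelope of the waveform, and keeping only low-order bits discards the deterministic large-scale structure; in the paper this is what lets 5 bits per sample survive the NIST statistical tests.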

  14. An improved output feedback control of flexible large space structures

    NASA Technical Reports Server (NTRS)

    Lin, Y. H.; Lin, J. G.

    1980-01-01

    A special output feedback control design technique for flexible large space structures is proposed. It is shown that the technique will increase both the damping and frequency of selected modes for more effective control. It is also able to effect integrated control of elastic and rigid-body modes and, in particular, closed-loop system stability and robustness to modal truncation and parameter variation. The technique is seen as marking an improvement over previous work concerning large space structures output feedback control.

  15. Private randomness expansion with untrusted devices

    NASA Astrophysics Data System (ADS)

    Colbeck, Roger; Kent, Adrian

    2011-03-01

    Randomness is an important resource for many applications, from gambling to secure communication. However, guaranteeing that the output from a candidate random source could not have been predicted by an outside party is a challenging task, and many supposedly random sources used today provide no such guarantee. Quantum solutions to this problem exist, for example a device which internally sends a photon through a beamsplitter and observes on which side it emerges, but, presently, such solutions require the user to trust the internal workings of the device. Here, we seek to go beyond this limitation by asking whether randomness can be generated using untrusted devices—even ones created by an adversarial agent—while providing a guarantee that no outside party (including the agent) can predict it. Since this is easily seen to be impossible unless the user has an initially private random string, the task we investigate here is private randomness expansion. We introduce a protocol for private randomness expansion with untrusted devices which is designed to take as input an initially private random string and produce as output a longer private random string. We point out that private randomness expansion protocols are generally vulnerable to attacks that can render the initial string partially insecure, even though that string is used only inside a secure laboratory; our protocol is designed to remove this previously unconsidered vulnerability by privacy amplification. We also discuss extensions of our protocol designed to generate an arbitrarily long random string from a finite initially private random string. The security of these protocols against the most general attacks is left as an open question.

  16. Comparison of a new integrated current source with the modified Howland circuit for EIT applications.

    PubMed

    Hong, Hongwei; Rahal, Mohamad; Demosthenous, Andreas; Bayford, Richard H

    2009-10-01

    Multi-frequency electrical impedance tomography (MF-EIT) systems require current sources that are accurate over a wide frequency range (up to 1 MHz) and under large load impedance variations. The most commonly employed current source design in EIT systems is the modified Howland circuit (MHC). The MHC requires tight matching of resistors to achieve high output impedance and may suffer from instability over a wide frequency range in an integrated solution. In this paper, we introduce a new integrated current source design in CMOS technology and compare its performance with the MHC. The new integrated design has advantages over the MHC in terms of power consumption and area. The output current and the output impedance of both circuits were determined through simulations and measurements over the frequency range of 10 kHz to 1 MHz. For frequencies up to 1 MHz, the measured maximum variation of the output current for the integrated current source is 0.8%, whereas for the MHC the corresponding value is 1.5%. Although the integrated current source has an output impedance greater than 1 MΩ up to 1 MHz in simulations, in practice the impedance is greater than 160 kΩ up to 1 MHz due to the presence of stray capacitance.

  17. The analysis of temperature effect and temperature compensation of MOEMS accelerometer based on a grating interferometric cavity

    NASA Astrophysics Data System (ADS)

    Han, Dandan; Bai, Jian; Lu, Qianbo; Lou, Shuqi; Jiao, Xufen; Yang, Guoguang

    2016-08-01

    An accelerometer exhibits a temperature drift, attributable to temperature variation, which adversely influences its output performance. In this paper, a quantitative analysis of the temperature effect, and a temperature compensation scheme, are proposed for a MOEMS accelerometer composed of a grating interferometric cavity and a micromachined sensing chip. A finite-element-method (FEM) approach is applied to simulate the deformation of the sensing chip at temperatures from -20°C to 70°C. The deformation changes the distance between the grating and the sensing chip, which in turn modulates the output intensities. A static temperature model describing the temperature characteristics of the accelerometer is established from the simulation results, and a temperature compensation based on this model is put forward, which improves the output performance of the accelerometer. The model permits estimation of the temperature effect for this type of accelerometer containing a micromachined sensing chip. Comparison of the output intensities with and without temperature compensation indicates that the compensation improves the stability of the output intensities of the MOEMS accelerometer based on a grating interferometric cavity.
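
    A static temperature model of the kind described can be sketched, under assumptions, as a polynomial fit of output drift versus temperature that is subtracted from the raw reading during operation. The calibration numbers below are synthetic, purely for illustration.

    ```python
    import numpy as np

    # Hypothetical calibration data: output drift versus temperature (°C)
    temps = np.array([-20, -5, 10, 25, 40, 55, 70], dtype=float)
    drift = 0.02 * temps - 0.5        # synthetic linear drift, for illustration

    # Static temperature model: low-order polynomial fit of drift vs. temperature
    model = np.polynomial.Polynomial.fit(temps, drift, deg=2)

    def compensate(raw_output, temperature):
        """Subtract the modeled thermal drift from the raw reading."""
        return raw_output - model(temperature)
    ```

    In practice the model order and calibration grid would be chosen from the FEM simulation and measured data, not assumed as here.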

  18. Ecosystem processes and nitrogen export in northern U.S. watersheds.

    USGS Publications Warehouse

    Stottlemyer, R.

    2001-01-01

    There is much interest in the relationship of atmospheric nitrogen (N) inputs to ecosystem outputs as an indicator of possible "nitrogen saturation" by human activity. Longer-term, ecosystem-level mass balance studies suggest that the relationship is not clear and that other ecosystem processes may dominate variation in N outputs. We have been studying small, forested watershed ecosystems in five northern watersheds for periods up to 35 years. Here I summarize the research on ecosystem processes and the N budget. During the past 2 decades, average wet-precipitation N inputs ranged from about 0.1 to 6 kg N ha(-1) year(-1) among sites. In general, sites with the lowest N inputs had the highest output-to-input ratios. In the Alaska watersheds, streamwater N output exceeded inputs by 70 to 250%. The ratio of mean monthly headwater nitrate (NO3-) concentration to precipitation NO3- concentration declined with increased precipitation concentration. A series of ecosystem processes have been studied and related to N outputs. The most important appear to be seasonal change in hydrologic flowpath, soil freezing, seasonal forest-floor inorganic N pools resulting from over-winter mineralization beneath the snowpack, spatial variation in watershed forest-floor inorganic N pools, the degree to which snowmelt percolates soils, and gross soil N mineralization rates.

  19. Amplitude- and rise-time-compensated filters

    DOEpatents

    Nowlin, Charles H.

    1984-01-01

    An amplitude-compensated rise-time-compensated filter for a pulse time-of-occurrence (TOOC) measurement system is disclosed. The filter converts an input pulse, having the characteristics of random amplitudes and random, non-zero rise times, to a bipolar output pulse wherein the output pulse has a zero-crossing time that is independent of the rise time and amplitude of the input pulse. The filter differentiates the input pulse, along the linear leading edge of the input pulse, and subtracts therefrom a pulse fractionally proportional to the input pulse. The filter of the present invention can use discrete circuit components and avoids the use of delay lines.
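
    The zero-crossing independence claimed here can be checked numerically: for a linear leading edge x(t) = A·t/T, the output dx/dt − k·x equals (A/T)(1 − k·t), which crosses zero at t = 1/k regardless of amplitude A or rise time T. A minimal simulation (the parameter values are arbitrary, and this is a sketch of the principle, not the patented circuit):

    ```python
    import numpy as np

    def zero_crossing(amplitude, rise_time, k=2.0, dt=1e-4):
        """Differentiate a linear-ramp leading edge, subtract a fraction k
        of the pulse, and return the bipolar output's zero-crossing time."""
        t = np.arange(0.0, rise_time, dt)
        x = amplitude * t / rise_time          # linear leading edge
        y = np.gradient(x, dt) - k * x         # bipolar output pulse
        i = np.where(np.diff(np.sign(y)) < 0)[0][0]
        # linear interpolation between the two samples straddling zero
        return t[i] - y[i] * dt / (y[i + 1] - y[i])

    # Same crossing time (1/k = 0.5) for very different input pulses:
    t1 = zero_crossing(amplitude=1.0, rise_time=0.8)
    t2 = zero_crossing(amplitude=7.5, rise_time=2.0)
    ```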

  20. Random Deep Belief Networks for Recognizing Emotions from Speech Signals.

    PubMed

    Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from low recognition accuracy in real applications because they lack rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To take full advantage of this ability, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is fed to a DBN to yield higher-level features, which are input to a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
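
    The ensemble skeleton described (random feature subspaces plus majority-vote fusion) can be sketched as follows. This illustrates only the framework; the DBN feature learner is omitted, and the feature counts and labels are invented.

    ```python
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)

    def make_subspaces(n_features, n_members, dim):
        """One random feature-index subset per ensemble member."""
        return [rng.choice(n_features, size=dim, replace=False)
                for _ in range(n_members)]

    def fuse(labels):
        """Majority vote over the per-member emotion labels."""
        return Counter(labels).most_common(1)[0][0]

    # Hypothetical setup: 384 low-level features, 5 members, 40 features each
    subspaces = make_subspaces(n_features=384, n_members=5, dim=40)
    final = fuse(["happy", "angry", "happy", "sad", "happy"])
    ```

    In the paper's method, each subspace would feed a DBN whose top-level features go to a classifier; here those per-member labels are simply given.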

  1. Random Deep Belief Networks for Recognizing Emotions from Speech Signals

    PubMed Central

    Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Now the human emotions can be recognized from speech signals using machine learning methods; however, they are challenged by the lower recognition accuracies in real applications due to lack of the rich representation ability. Deep belief networks (DBN) can automatically discover the multiple levels of representations in speech signals. To make full of its advantages, this paper presents an ensemble of random deep belief networks (RDBN) method for speech emotion recognition. It firstly extracts the low level features of the input speech signal and then applies them to construct lots of random subspaces. Each random subspace is then provided for DBN to yield the higher level features as the input of the classifier to output an emotion label. All outputted emotion labels are then fused through the majority voting to decide the final emotion label for the input speech signal. The conducted experimental results on benchmark speech emotion databases show that RDBN has better accuracy than the compared methods for speech emotion recognition. PMID:28356908

  2. Fast counting electronics for neutron coincidence counting

    DOEpatents

    Swansen, James E.

    1987-01-01

    An amplifier-discriminator is tailored to output a very short pulse upon an above-threshold input from a detector which may be a .sup.3 He detector. The short pulse output is stretched and energizes a light emitting diode (LED) to provide a visual output of operation and pulse detection. The short pulse is further fed to a digital section for processing and possible ORing with other like generated pulses. Finally, the output (or ORed output ) is fed to a derandomizing buffer which converts the rapidly and randomly occurring pulses into synchronized and periodically spaced-apart pulses for the accurate counting thereof. Provision is also made for the internal and external disabling of each individual channel of amplifier-discriminators in an ORed plurality of same.
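
    The derandomizing buffer's behavior — queuing randomly timed pulses and re-emitting them on a fixed-period clock — can be sketched in software. This is an illustration of the concept only, not the patented circuit; the arrival times and period are arbitrary.

    ```python
    import math

    def derandomize(arrival_times, period):
        """FIFO 'derandomizer': each incoming pulse is re-emitted in the
        next free slot of a fixed-period clock, so closely spaced random
        pulses come out synchronized and periodically spaced."""
        out, next_free = [], period
        for t in sorted(arrival_times):
            slot = math.ceil(max(t, next_free) / period) * period
            out.append(slot)
            next_free = slot + period
        return out

    # Two bursts of nearly coincident pulses become evenly spaced outputs:
    pulses = derandomize([0.12, 0.15, 0.90, 0.91], period=0.5)
    ```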

  3. Fast counting electronics for neutron coincidence counting

    DOEpatents

    Swansen, J.E.

    1985-03-05

    An amplifier-discriminator is tailored to output a very short pulse upon an above-threshold input from a detector which may be a /sup 3/He detector. The short pulse output is stretched and energizes a light emitting diode (LED) to provide a visual output of operation and pulse detection. The short pulse is further fed to a digital section for processing and possible ORing with other like generated pulses. Finally, the output (or ORed output) is fed to a derandomizing buffer which converts the rapidly and randomly occurring pulses into synchronized and periodically spaced-apart pulses for the accurate counting thereof. Provision is also made for the internal and external disabling of each individual channel of amplifier-discriminators in an ORed plurality of same.

  4. Calculation of gas turbine characteristic

    NASA Astrophysics Data System (ADS)

    Mamaev, B. I.; Murashko, V. L.

    2016-04-01

    The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.

  5. Effects of range-wide variation in climate and isolation on floral traits and reproductive output of Clarkia pulchella.

    PubMed

    Bontrager, Megan; Angert, Amy L

    2016-01-01

    Plant mating systems and geographic range limits are conceptually linked by shared underlying drivers, including landscape-level heterogeneity in climate and in species' abundance. Studies of how geography and climate interact to affect plant traits that influence mating system and population dynamics can lend insight to ecological and evolutionary processes shaping ranges. Here, we examined how spatiotemporal variation in climate affects reproductive output of a mixed-mating annual, Clarkia pulchella. We also tested the effects of population isolation and climate on mating-system-related floral traits across the range. We measured reproductive output and floral traits on herbarium specimens collected across the range of C. pulchella. We extracted climate data associated with specimens and derived a population isolation metric from a species distribution model. We then examined how predictors of reproductive output and floral traits vary among populations of increasing distance from the range center. Finally, we tested whether reproductive output and floral traits vary with increasing distance from the center of the range. Reproductive output decreased as summer precipitation decreased, and low precipitation may contribute to limiting the southern and western range edges of C. pulchella. High spring and summer temperatures are correlated with low herkogamy, but these climatic factors show contrasting spatial patterns in different quadrants of the range. Limiting factors differ among different parts of the range. Due to the partial decoupling of geography and environment, examining relationships between climate, reproductive output, and mating-system-related floral traits reveals spatial patterns that might be missed when focusing solely on geographic position. © 2016 Botanical Society of America.

  6. Coherent random lasing from liquid waveguide gain channels with biological scatters

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Feng, Guoying; Wang, Shutong; Yang, Chao; Yin, Jiajia; Zhou, Shouhuan

    2014-12-01

    A unidirectional coherent random laser based on liquid waveguide gain channels with biological scatters is demonstrated. The optical feedback of the random laser is provided by both light scattering and waveguide confinement. This waveguide-scattering-feedback scheme not only reduces the pump threshold but also makes the output of random laser directional. The threshold of our random laser is about 11 μJ. The emission spectra can be sensitively tuned by changing pump position due to the micro/nano-scale randomness of butterfly wings. It shows the potential applications of optofluidic random lasers for bio-chemical sensors on-chip.

  7. Effect of incremental flaxseed supplementation of an herbage diet on methane output and ruminal fermentation in continuous culture

    USDA-ARS?s Scientific Manuscript database

    A 4-unit dual-flow continuous culture fermentor system was used to assess the effect of increasing flax supplementation of an herbage-based diet on nutrient digestibility, bacterial N synthesis and methane output. Treatments were randomly assigned to fermentors in a 4 x 4 Latin square design with 7 ...

  8. Sampling estimators of total mill receipts for use in timber product output studies

    Treesearch

    John P. Brown; Richard G. Oderwald

    2012-01-01

    Data from the 2001 timber product output study for Georgia was explored to determine new methods for stratifying mills and finding suitable sampling estimators. Estimators for roundwood receipts totals comprised several types: simple random sample, ratio, stratified sample, and combined ratio. Two stratification methods were examined: the Dalenius-Hodges (DH) square...

  9. The Effects of Practice Modality on Pragmatic Development in L2 Chinese

    ERIC Educational Resources Information Center

    Li, Shuai; Taguchi, Naoko

    2014-01-01

    This study investigated the effects of input-based and output-based practice on the development of accuracy and speed in recognizing and producing request-making forms in L2 Chinese. Fifty American learners of Chinese with intermediate level proficiency were randomly assigned to an input-based training group, an output-based training group, or a…

  10. Effect of feeding warm-season annuals with orchardgrass on ruminal fermentation and methane output in continuous culture

    USDA-ARS?s Scientific Manuscript database

    A 4-unit, dual-flow continuous culture fermentor system was used to assess nutrient digestibility, volatile fatty acids (VFA) production, bacterial protein synthesis and CH4 output of warm-season summer annual grasses. Treatments were randomly assigned to fermentors in a 4 × 4 Latin square design us...

  11. Is the whole more than the sum of its parts? Evolutionary trade-offs between burst and sustained locomotion in lacertid lizards.

    PubMed

    Vanhooydonck, B; James, R S; Tallis, J; Aerts, P; Tadic, Z; Tolley, K A; Measey, G J; Herrel, A

    2014-02-22

    Trade-offs arise when two functional traits impose conflicting demands on the same design trait. Consequently, excellence in one comes at the cost of performance in the other. One of the most widely studied performance trade-offs is the one between sprint speed and endurance. Although biochemical, physiological and (bio)mechanical correlates of either locomotor trait conflict with each other, results at the whole-organism level are mixed. Here, we test whether burst (speed, acceleration) and sustained locomotion (stamina) trade off at both the isolated-muscle and whole-organism levels among 17 species of lacertid lizards. In addition, we test for a mechanical link between the organismal and muscular (power output, fatigue resistance) performance traits. We find weak evidence for a trade-off between burst and sustained locomotion at the whole-organism level; however, there is a significant trade-off between muscle power output and fatigue resistance at the isolated-muscle level. Variation in whole-animal sprint speed can be convincingly explained by variation in muscular power output. The variation in locomotor stamina at the whole-organism level does not relate to the variation in muscle fatigue resistance, suggesting that whole-organism stamina depends not only on muscle contractile performance but probably also on the performance of the circulatory and respiratory systems.

  12. Is the whole more than the sum of its parts? Evolutionary trade-offs between burst and sustained locomotion in lacertid lizards

    PubMed Central

    Vanhooydonck, B.; James, R. S.; Tallis, J.; Aerts, P.; Tadic, Z.; Tolley, K. A.; Measey, G. J.; Herrel, A.

    2014-01-01

    Trade-offs arise when two functional traits impose conflicting demands on the same design trait. Consequently, excellence in one comes at the cost of performance in the other. One of the most widely studied performance trade-offs is the one between sprint speed and endurance. Although biochemical, physiological and (bio)mechanical correlates of either locomotor trait conflict with each other, results at the whole-organism level are mixed. Here, we test whether burst (speed, acceleration) and sustained locomotion (stamina) trade off at both the isolated-muscle and whole-organism levels among 17 species of lacertid lizards. In addition, we test for a mechanical link between the organismal and muscular (power output, fatigue resistance) performance traits. We find weak evidence for a trade-off between burst and sustained locomotion at the whole-organism level; however, there is a significant trade-off between muscle power output and fatigue resistance at the isolated-muscle level. Variation in whole-animal sprint speed can be convincingly explained by variation in muscular power output. The variation in locomotor stamina at the whole-organism level does not relate to the variation in muscle fatigue resistance, suggesting that whole-organism stamina depends not only on muscle contractile performance but probably also on the performance of the circulatory and respiratory systems. PMID:24403334

  13. Signal processing and analysis for copper layer thickness measurement within a large variation range in the CMP process.

    PubMed

    Li, Hongkai; Zhao, Qian; Lu, Xinchun; Luo, Jianbin

    2017-11-01

    In the copper (Cu) chemical mechanical planarization (CMP) process, accurately determining when the process reaches its end point is of great importance. Based on eddy current technology, in situ thickness measurement of the Cu layer is feasible. Previous research has focused on applying the eddy current method to metal layer thickness measurement or endpoint detection. In this paper, an independently developed in situ measurement system based on the eddy current method is applied to the actual Cu CMP process. A series of experiments were conducted to further analyze the dynamic response characteristic of the output signal within different thickness variation ranges. In this study, the voltage difference of the output signal represents the thickness of the Cu layer, and the proposed data processing algorithm quickly extracts the voltage difference variations from the output signal. The results show that, in the conventional measurement range, the voltage difference decreases as thickness decreases while the sensitivity increases. However, there exists a thickness threshold, and the correlation becomes negative when the thickness exceeds that threshold. Furthermore, by creating two calibration tables, the in situ measurement system can be used over a larger Cu layer thickness variation range.
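
    The two-calibration-table idea can be sketched as a pair of voltage-to-thickness lookup tables, one per side of the threshold (since the voltage/thickness correlation flips sign there), inverted by linear interpolation. The calibration points below are hypothetical, not the paper's data.

    ```python
    import numpy as np

    # Hypothetical calibration points: Cu thickness (µm) vs. voltage difference (V).
    # Below the threshold the correlation is positive; above it, negative.
    thin_um  = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
    thin_v   = np.array([0.10, 0.25, 0.55, 0.80, 1.00])   # increasing
    thick_um = np.array([2.0, 3.0, 4.0, 5.0])
    thick_v  = np.array([1.00, 0.85, 0.72, 0.60])          # decreasing

    def thickness_from_voltage(v, above_threshold):
        """Invert the appropriate calibration table by linear interpolation."""
        if above_threshold:
            # np.interp requires ascending x, so flip the decreasing branch
            return np.interp(v, thick_v[::-1], thick_um[::-1])
        return np.interp(v, thin_v, thin_um)
    ```

    Which table applies would have to be decided from process context (e.g. elapsed polish time), since a single voltage reading is ambiguous near the threshold.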

  14. Experimental research on the stability and the multilongitudinal mode interference of bidirectional outputs of LD-pumped solid state ring laser

    NASA Astrophysics Data System (ADS)

    Wan, Shunping; Tian, Qian; Sun, Liqun; Yao, Minyan; Mao, Xianhui; Qiu, Hongyun

    2004-05-01

    This paper reports experimental research on the stability of the bidirectional outputs and the multi-longitudinal-mode interference of a laser-diode end-pumped Nd:YVO4 solid-state ring laser (DPSSL). Bidirectional, multi-longitudinal-mode, TEM00 continuous-wave outputs are obtained; the output powers are measured and their stabilities analyzed. The spectral characteristic of the outputs is measured. The interference pattern of the bidirectional longitudinal-mode outputs is obtained and analyzed with the ring cavity rotating. The movement of the interference fringes of the multi-longitudinal modes is very sensitive to deformation of the setup base and fluctuation of the intracavity air, but is stationary or randomly dithers when the stage is rotating.

  15. Effects of gustatory stimulants of salivary secretion on salivary pH and flow in patients with Sjögren's syndrome: a randomized controlled trial.

    PubMed

    da Silva Marques, Duarte Nuno; da Mata, António Duarte Sola Pereira; Patto, José Maria Vaz; Barcelos, Filipe Alexandre Duarte; de Almeida Rato Amaral, João Pedro; de Oliveira, Miguel Constantino Mendes; Ferreira, Cristina Gutierrez Castanheira

    2011-11-01

    To compare salivary pH changes and stimulation efficacy of two different gustatory stimulants of salivary secretion (GSSS) in patients with primary Sjögren syndrome. Portuguese Institute for Rheumatological Diseases. Double-blind randomized controlled trial. Eighty patients were randomized to two intervention groups. Sample size was calculated using an alpha error of 0.05 and a beta of 0.20. Participants were randomly assigned to receive either a new GSSS containing the weaker malic acid plus fluoride and xylitol or a traditional citric acid-based one. Saliva was collected by established methods at different times. The salivary pH of the samples was determined with a pH meter and a microelectrode. Salivary pH variations, counts of subjects with pH below 4.5 for over 1 min, and stimulated salivary flow were the main outcome measures. Both GSSS significantly stimulated salivary output, with no significant differences between the two groups. The new gustatory stimulant of salivary secretion presented an absolute risk reduction of 52.78% [33.42-72.13 (95% CI)] when compared with the traditional one. In xerostomic primary Sjögren syndrome patients, gustatory stimulants of salivary secretion based on malic acid with fluoride and xylitol present salivary stimulation capacity similar to that of citric acid-based ones, while significantly reducing the number of salivary pH drops below 4.5. This could be related to a diminished risk of dental erosion and should be confirmed in further studies. © 2011 John Wiley & Sons A/S.

  16. Enhancement of DRPE performance with a novel scheme based on new RAC: Principle, security analysis and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.

    2016-02-01

    The double random phase encryption (DRPE) method is a well-known all-optical architecture with many advantages, especially in terms of encryption efficiency. However, it presents some vulnerabilities to attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to both the real and imaginary components of the DRPE output plane. The technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well while strengthening the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique can process video content and is compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: its cryptographic properties are enhanced while a compression rate of one-sixth is achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
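
    The general idea of fusing compression with pseudo-random encryption can be sketched generically. This is not the paper's RAC (which randomizes the arithmetic coder itself); it is a hedged stand-in that compresses first and then XORs the coder output with a counter-mode keystream, using standard-library pieces.

    ```python
    import hashlib
    import zlib

    def keystream(key: bytes, n: int) -> bytes:
        """Counter-mode pseudo-random stream derived from the key."""
        out = bytearray()
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(out[:n])

    def compress_encrypt(data: bytes, key: bytes) -> bytes:
        """Compress first, then XOR with the keystream.
        The order matters: already-encrypted data would not compress."""
        packed = zlib.compress(data)
        return bytes(a ^ b for a, b in zip(packed, keystream(key, len(packed))))

    def decrypt_decompress(blob: bytes, key: bytes) -> bytes:
        packed = bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))
        return zlib.decompress(packed)
    ```

    The RAC's appeal over this two-stage sketch is that randomization happens inside the entropy coder, so no separate cipher pass is needed.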

  17. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
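
    The BASIC source of RANVAR is not reprinted in this record; as a hedged modern sketch, here are inverse-transform (and product-form) generators for three of its seven distributions — exponential, triangular, and Poisson — in Python.

    ```python
    import math
    import random

    def exponential(mean):
        """Inverse-transform method: -mean * ln(1 - U)."""
        return -mean * math.log(1.0 - random.random())

    def triangular(a, b, c):
        """Triangular on [a, b] with mode c, by inverting the CDF."""
        u = random.random()
        if u < (c - a) / (b - a):
            return a + math.sqrt(u * (b - a) * (c - a))
        return b - math.sqrt((1.0 - u) * (b - a) * (b - c))

    def poisson(lam):
        """Knuth's method: count uniform draws until their running
        product falls below exp(-lam)."""
        limit, k, prod = math.exp(-lam), 0, random.random()
        while prod > limit:
            k += 1
            prod *= random.random()
        return k
    ```

    The uniform, normal, binomial, and Pascal (negative binomial) variates follow the same pattern; Python's `random` module also provides several of them directly.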

  18. Differential-Mode Biosensor Using Dual Extended-Gate Metal-Oxide-Semiconductor Field-Effect Transistors

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeon; Lee, Hee Ho; Ahn, Jungil; Seo, Sang-Ho; Shin, Jang-Kyoo

    2012-06-01

    In this paper, we present a differential-mode biosensor using dual extended-gate metal-oxide-semiconductor field-effect transistors (MOSFETs), which possesses the advantages of both the extended-gate structure and the differential-mode operation. The extended-gate MOSFET was fabricated using a 0.6 µm standard complementary metal oxide semiconductor (CMOS) process. The Au extended gate is the sensing gate on which biomolecules are immobilized, while the Pt extended gate is the dummy gate for use in the differential-mode detection circuit. The differential-mode operation offers many advantages such as insensitivity to the variation of temperature and light, as well as low noise. The outputs were measured using a semiconductor parameter analyzer in a phosphate buffered saline (PBS; pH 7.4) solution. A standard Ag/AgCl reference electrode was used to apply the gate bias. We measured the variation of output voltage with time, temperature, and light intensity. The bindings of self-assembled monolayer (SAM), streptavidin, and biotin caused a variation in the output voltage of the differential-mode detection circuit and this was confirmed by surface plasmon resonance (SPR) experiment. Biotin molecules could be detected up to a concentration of as low as 0.001 µg/ml.

  19. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals

    PubMed Central

    Baker, Christa A.; Ma, Lisa; Casareale, Chelsea R.

    2016-01-01

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8–12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. 
SIGNIFICANCE STATEMENT The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. PMID:27559179

  20. Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals.

    PubMed

    Baker, Christa A; Ma, Lisa; Casareale, Chelsea R; Carlson, Bruce A

    2016-08-24

    In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision. 
The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals. Copyright © 2016 the authors.

  1. Regulation of the Output Voltage of an Inverter in Case of Load Variation

    NASA Astrophysics Data System (ADS)

    Diouri, Omar; Errahimi, Fatima; Es-Sbai, Najia

    2018-05-01

    In DC/AC photovoltaic applications, the stability of the inverter's output voltage plays a very important role in electrical systems. Such a photovoltaic system consists of an inverter that converts DC energy into the AC energy used by systems operating at a voltage of 230 V. The output of this inverter can supply a single load or several; when a second load is added in parallel with the first, a voltage drop appears at the output of the inverter. This problem affects the proper functioning of the electrical loads. Our contribution is therefore to compensate this voltage drop using a boost converter at the input of the inverter. The boost converter acts as a compensator, supplying the inverter with the voltage needed to raise the voltage across the loads. However, using this boost converter without control is not sufficient, because it generates a voltage that depends on the duty cycle of its control signal. To stabilize the output voltage of the inverter, we used a Proportional-Integral-Derivative (PID) controller, which generates the control signal the boost converter needs to achieve good regulation of the inverter's output voltage. With this approach, the voltage drop is compensated even under load variation.
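    The regulation loop described in this abstract (a PID controller shaping the boost converter's control signal so the inverter output holds its setpoint) can be sketched as a discrete-time simulation. The first-order plant model, the gains `kp`/`ki`/`kd`, and the time constant `tau` below are illustrative assumptions, not values from the paper; a real boost stage would be driven through its duty cycle rather than a directly commanded voltage.

```python
def regulate(setpoint=230.0, v0=180.0, steps=3000, dt=1e-3,
             kp=2.0, ki=40.0, kd=0.0, tau=0.01):
    """PID loop driving a hypothetical first-order plant toward setpoint."""
    v, integral, prev_err = v0, 0.0, setpoint - v0
    for _ in range(steps):
        err = setpoint - v
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        v += dt * (u - v) / tau   # plant: output relaxes toward command u
        prev_err = err
    return v
```

    With these toy gains the integral term absorbs the steady-state offset that a proportional-only controller would leave, which is the role the PID plays for the boost converter in the paper.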

  2. Experimentally generated randomness certified by the impossibility of superluminal signals.

    PubMed

    Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K

    2018-04-01

    From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
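    The "per-trial violation of Bell inequalities" that the protocol quantifies can be illustrated with the CHSH combination. The sketch below evaluates the CHSH value S for the ideal singlet-state correlator at the textbook optimal angles; this is an idealization for intuition, not the experiment's photonic data.

```python
import math

def chsh(E, a, ap, b, bp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

def singlet(x, y):
    """Ideal singlet-state spin correlator for analyzer angles x, y (radians)."""
    return -math.cos(x - y)

# At the standard optimal angles, |S| reaches the Tsirelson bound 2*sqrt(2),
# beyond the local-realist (classical) bound of 2.
S = chsh(singlet, 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
```

    Any |S| > 2 rules out local hidden-variable models, which is what lets the protocol treat the measured violation as evidence of unpredictability.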

  3. The Impact of Pushed Output on Accuracy and Fluency of Iranian EFL Learners' Speaking

    ERIC Educational Resources Information Center

    Sadeghi Beniss, Aram Reza; Edalati Bazzaz, Vahid

    2014-01-01

    The current study attempted to establish baseline quantitative data on the impacts of pushed output on two components of speaking (i.e., accuracy and fluency). To achieve this purpose, 30 female EFL learners were selected from a whole population pool of 50 based on the standard test of IELTS interview and were randomly assigned into an…

  4. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  5. Self-correcting random number generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
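    The patent abstract describes monitoring output characteristics and correcting toward a performance criterion. One classic, illustrative flavor of such post-correction (not necessarily the patented method) is von Neumann debiasing, which turns a stream of independent but biased bits into unbiased output:

```python
import random

def von_neumann_debias(bits):
    """Map non-overlapping pairs 01 -> 0 and 10 -> 1; discard 00 and 11.
    For independent, identically biased bits the output is unbiased."""
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

# Simulated raw source with a strong bias toward 1 (illustrative only).
rng = random.Random(1)
biased = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
corrected = von_neumann_debias(biased)
mean = sum(corrected) / len(corrected)   # close to 0.5 after correction
```

    The price of the correction is throughput: only pairs with differing bits produce output, so a heavily biased source yields fewer corrected bits.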

  6. A randomized controlled trial of levosimendan to reduce mortality in high-risk cardiac surgery patients (CHEETAH): Rationale and design.

    PubMed

    Zangrillo, Alberto; Alvaro, Gabriele; Pisano, Antonio; Guarracino, Fabio; Lobreglio, Rosetta; Bradic, Nikola; Lembo, Rosalba; Gianni, Stefano; Calabrò, Maria Grazia; Likhvantsev, Valery; Grigoryev, Evgeny; Buscaglia, Giuseppe; Pala, Giovanni; Auci, Elisabetta; Amantea, Bruno; Monaco, Fabrizio; De Vuono, Giovanni; Corcione, Antonio; Galdieri, Nicola; Cariello, Claudia; Bove, Tiziana; Fominskiy, Evgeny; Auriemma, Stefano; Baiocchi, Massimo; Bianchi, Alessandro; Frontini, Mario; Paternoster, Gianluca; Sangalli, Fabio; Wang, Chew-Yin; Zucchetti, Maria Chiara; Biondi-Zoccai, Giuseppe; Gemma, Marco; Lipinski, Michael J; Lomivorotov, Vladimir V; Landoni, Giovanni

    2016-07-01

    Patients undergoing cardiac surgery are at risk of perioperative low cardiac output syndrome due to postoperative myocardial dysfunction. Myocardial dysfunction in patients undergoing cardiac surgery is a potential indication for the use of levosimendan, a calcium sensitizer with 3 beneficial cardiovascular effects (inotropic, vasodilatory, and anti-inflammatory), which appears effective in improving clinically relevant outcomes. Double-blind, placebo-controlled, multicenter randomized trial. Tertiary care hospitals. Cardiac surgery patients (n = 1,000) with postoperative myocardial dysfunction (defined as patients with intraaortic balloon pump and/or high-dose standard inotropic support) will be randomized to receive a continuous infusion of either levosimendan (0.05-0.2 μg/[kg min]) or placebo for 24-48 hours. The primary end point will be 30-day mortality. Secondary end points will be mortality at 1 year, time on mechanical ventilation, acute kidney injury, decision to stop the study drug due to adverse events or to start open-label levosimendan, and length of intensive care unit and hospital stay. We will test the hypothesis that levosimendan reduces 30-day mortality in cardiac surgery patients with postoperative myocardial dysfunction. This trial is planned to determine whether levosimendan could improve survival in patients with postoperative low cardiac output syndrome. The results of this double-blind, placebo-controlled randomized trial may provide important insights into the management of low cardiac output in cardiac surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Lagrangian Turbulence and Transport in Semi-enclosed Basins and Coastal Regions

    DTIC Science & Technology

    2007-09-30

    Adriatic Sea using NCOM. Ocean Modelling, 17, 68-91 Taillandier V., A. Griffa, P.M. Poulain, R. Signell, J. Chiggiato , S. Carniel. Variational...A. Griffa, P.M. Poulain, R. Signell, J. Chiggiato , S. Carniel. Variational analysis of drifter positions and model outputs for the reconstruction of

  8. Does player unavailability affect football teams' match physical outputs? A two-season study of the UEFA champions league.

    PubMed

    Windt, Johann; Ekstrand, Jan; Khan, Karim M; McCall, Alan; Zumbo, Bruno D

    2018-05-01

    Player unavailability negatively affects team performance in elite football. However, whether player unavailability and its concomitant performance decrement is mediated by any changes in teams' match physical outputs is unknown. We examined whether the number of players injured (i.e. unavailable for match selection) was associated with any changes in teams' physical outputs. Prospective cohort study. Between-team variation was calculated by correlating average team availability with average physical outputs. Within-team variation was quantified using linear mixed modelling, using physical outputs - total distance, sprint count (efforts over 20km/h), and percent of distance covered at high speeds (>14km/h) - as outcome variables, and player unavailability as the independent variable of interest. To control for other factors that may influence match physical outputs, stage (group stage/knockout), venue (home/away), score differential, ball possession (%), team ranking (UEFA Club Coefficient), and average team age were all included as covariates. Teams' average player unavailability was positively associated with the average number of sprints they performed in matches across two seasons. Multilevel models similarly demonstrated that having 4 unavailable players was associated with 20.8 more sprints during matches in 2015/2016, and with an estimated 0.60-0.77% increase in the proportion of total distance run above 14km/h in both seasons. Player unavailability had a possibly positive and likely positive association with total match distances in the two respective seasons. Having more players injured and unavailable for match selection was associated with an increase in teams' match physical outputs. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  9. Generation of standard gas mixtures of halogenated, aliphatic, and aromatic compounds and prediction of the individual output rates based on molecular formula and boiling point.

    PubMed

    Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten

    2012-11-01

    In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for those compounds which are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement of the predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of the predictability of the output rates are explored and observed to result in an underprediction of the output rates when very thin capillaries are used. It is demonstrated that pressure variations are responsible for the observed deviation of the output rates. To overcome the influence of pressure variations and at the same time establish a suitable test gas source for highly volatile compounds, the usability of permeation sources is also explored, for example for the generation of molecular bromine test gases.

  10. Control logic to track the outputs of a command generator or randomly forced target

    NASA Technical Reports Server (NTRS)

    Trankle, T. L.; Bryson, A. E., Jr.

    1977-01-01

    A procedure is presented for synthesizing time-invariant control logic to cause the outputs of a linear plant to track the outputs of an unforced (or randomly forced) linear dynamic system. The control logic uses feed-forward of the reference system state variables and feedback of the plant state variables. The feed-forward gains are obtained from the solution of a linear algebraic matrix equation of the Liapunov type. The feedback gains are the usual regulator gains, determined to stabilize (or augment the stability of) the plant, possibly including integral control. The method is applied here to the design of control logic for a second-order servomechanism to follow a linearly increasing (ramp) signal, an unstable third-order system with two controls to track two separate ramp signals, and a sixth-order system with two controls to track a constant signal and an exponentially decreasing signal (aircraft landing-flare or glide-slope-capture with constant velocity).
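    As a minimal illustration of the feed-forward-plus-feedback structure, the sketch below simulates the paper's first example, a servomechanism following a ramp, using a double-integrator plant and hand-picked gains. The actual gains in the paper come from the Liapunov-type matrix equation and regulator design, which this toy omits; `k1` and `k2` here are illustrative.

```python
def track_ramp(t_end=10.0, dt=0.001, k1=4.0, k2=4.0):
    """Double-integrator plant x'' = u made to follow the ramp r(t) = t
    via feedback of the plant state and feed-forward of the reference
    state (position r and velocity r' = 1)."""
    x, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        r, rdot = t, 1.0                       # reference and its derivative
        u = -k1 * (x - r) - k2 * (v - rdot)    # feedback + feed-forward
        x += v * dt                            # forward-Euler plant update
        v += u * dt
        t += dt
    return x - t   # residual tracking error at the final time
```

    Because the reference acceleration is zero for a ramp, feed-forward of the reference position and velocity is enough to drive the tracking error to zero without integral action.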

  11. Motion control of musculoskeletal systems with redundancy.

    PubMed

    Park, Hyunjoo; Durand, Dominique M

    2008-12-01

    Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.

  12. Statistics, Uncertainty, and Transmitted Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
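    The first-order picture of transmitted variation is the delta method: input variance reaches the output scaled by the squared local slope of the response function, Var[f(X)] ≈ f'(μ)² Var[X]. The sketch below checks that approximation against Monte Carlo for an assumed response f(x) = exp(x); the function and moments are illustrative choices, not values from the presentation.

```python
import math
import random

def transmitted_variance(f, mu, sigma, h=1e-5):
    """Delta-method estimate Var[f(X)] ~ f'(mu)^2 * Var[X]."""
    deriv = (f(mu + h) - f(mu - h)) / (2 * h)   # central difference
    return deriv ** 2 * sigma ** 2

f = math.exp                  # assumed response function (illustrative)
mu, sigma = 1.0, 0.05         # input mean and standard deviation
approx = transmitted_variance(f, mu, sigma)

# Monte Carlo check of the same transmitted variance.
rng = random.Random(42)
samples = [f(rng.gauss(mu, sigma)) for _ in range(200_000)]
m = sum(samples) / len(samples)
mc = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
```

    The two estimates agree closely here because the input spread is small relative to the curvature of f; for strongly nonlinear responses or large input variance, higher-order terms matter and the first-order estimate degrades.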

  13. Position sensor for a fuel injection element in an internal combustion engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulkerson, D.E.; Geske, M.L.

    1987-08-18

    This patent describes an electronic circuit for dynamically sensing and processing signals representative of changes in a magnetic field, the circuit comprising: means for sensing a change in a magnetic field external to the circuit and providing an output representative of the change; circuit means electronically coupled with the output of the sensing means for providing an output indicating the presence of the magnetic field change; and a nulling circuit coupled with the output of the sensing means and across the indicating circuit means for nulling the electronic circuit responsive to the sensing means output, to thereby avoid ambient magnetic fields, temperature, and process variations, and wherein the nulling circuit comprises a capacitor coupled to the output of the nulling circuit, means for charging and discharging the capacitor responsive to any imbalance in the input to the nulling circuit, and circuit means coupling the capacitor with the output of the sensing means for nulling any imbalance during the charging or discharging of the capacitor.

  14. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
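    The idea of exploiting node-nested quadratures to estimate the quadrature error in a moment statistic can be sketched with a one-dimensional stand-in: compute the mean of an output quantity over a uniform random parameter on a coarse grid and on a node-nested refinement, and use the Richardson-style difference as an error estimate. The integrand and the O(h²) correction factor below are illustrative assumptions, far simpler than the sparse tensor quadratures in the chapter.

```python
import math

def trapezoid_mean(f, a, b, n):
    """Mean of f over Uniform(a, b) via an n-panel composite trapezoid rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h / (b - a)

def f(x):
    return math.sin(x) ** 2       # stand-in for a CFD output quantity

coarse = trapezoid_mean(f, 0.0, 1.0, 8)    # coarse grid
fine = trapezoid_mean(f, 0.0, 1.0, 16)     # node-nested refinement reuses nodes
err_est = abs(fine - coarse) / 3.0         # Richardson estimate for an O(h^2) rule
```

    Because the fine grid contains every coarse node, the refinement reuses existing evaluations, which is the efficiency that hierarchical node-nested quadratures buy in higher dimensions.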

  15. Sex differences in salivary cortisol reactivity to the Trier Social Stress Test (TSST): A meta-analysis.

    PubMed

    Liu, Jenny J W; Ein, Natalie; Peck, Katlyn; Huang, Vivian; Pruessner, Jens C; Vickers, Kristin

    2017-08-01

    Some, but not all studies using the Trier Social Stress Test (TSST) have demonstrated evidence in support of sex differences in salivary cortisol. The aim of the current meta-analysis is to examine sex differences in salivary cortisol following exposure to the TSST. We further explored the effects of modifications to the TSST protocol and procedural variations as potential moderators. We searched articles published from January, 1993 to February, 2016 in MEDLINE, PsycINFO, and ProQuest Theses and Dissertations. This meta-analysis is based on 34 studies, with a total sample size of 1350 individuals (640 women and 710 men). Using a random effects model, we found significant heterogeneity in salivary cortisol output across sexes, such that men were observed to have higher cortisol values at peak and recovery following the TSST compared to women. Modifications to the sampling trajectory of cortisol (i.e., duration of acclimation, peak sampling time, and duration of recovery) significantly moderated the heterogeneity across both sexes. Further, there are observed sex differences at various time points of the reactive cortisol following the TSST. Lastly, current results suggest that these sex differences can be, at least in part, attributed to variations in methodological considerations across studies. Future research could advance this line of inquiry by using other methods of analyses (e.g., area under the curve; AUC), in order to better understand the effects of methodological variations and their implications for research design. Copyright © 2017 Elsevier Ltd. All rights reserved.
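    Random-effects pooling of the kind used in such a meta-analysis is commonly implemented with the DerSimonian-Laird estimator of the between-study variance τ². The sketch below shows that standard computation on toy effect sizes and within-study variances (not the study's data); it is one common estimator, not necessarily the one these authors used.

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird between-study variance tau^2 and the
    random-effects pooled estimate for a set of study effect sizes."""
    w = [1.0 / v for v in variances]               # fixed-effect weights
    sw = sum(w)
    mean_fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - mean_fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return tau2, pooled
```

    When the observed heterogeneity Q does not exceed its degrees of freedom, τ² truncates to zero and the estimate collapses to the fixed-effect pooled mean.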

  16. A domain analysis approach to clear-air turbulence forecasting using high-density in-situ measurements

    NASA Astrophysics Data System (ADS)

    Abernethy, Jennifer A.

    Pilots' ability to avoid clear-air turbulence (CAT) during flight affects the safety of the millions of people who fly commercial airlines and other aircraft, and turbulence costs millions in injuries and aircraft maintenance every year. Forecasting CAT is not straightforward, however; microscale features like the turbulence eddies that affect aircraft (~100 m) are below the current resolution of operational numerical weather prediction (NWP) models, and the only evidence of CAT episodes, until recently, has been sparse, subjective reports from pilots known as PIREPs. To forecast CAT, researchers use a simple weighted sum of top-performing turbulence indicators derived from NWP model outputs, termed diagnostics, based on their agreement with current PIREPs. However, a new, quantitative source of observation data (high-density measurements made by sensor equipment and software on aircraft, called in-situ measurements) is now available. The main goal of this thesis is to develop new data analysis and processing techniques to apply to the model and new observation data, in order to improve CAT forecasting accuracy. This thesis shows that using in-situ data improves forecasting accuracy and that automated machine learning algorithms such as support vector machines (SVM), logistic regression, and random forests, can match current performance while eliminating almost all hand-tuning. Feature subset selection is paired with the new algorithms to choose diagnostics that predict well as a group rather than individually. Specializing forecasts and choice of diagnostics by geographic region further improves accuracy because of the geographic variation in turbulence sources. This work uses random forests to find climatologically-relevant regions based on these variations and implements a forecasting system testbed which brings these techniques together to rapidly prototype new, regionalized versions of operational CAT forecasting systems.

  17. A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors

    DTIC Science & Technology

    2015-09-30

    stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these

  18. Reduction of thermal damage in photodynamic therapy by laser irradiation techniques.

    PubMed

    Lim, Hyun Soo

    2012-12-01

    General application of continuous-wave (CW) laser irradiation modes in photodynamic therapy can cause thermal damage to normal tissues in addition to tumors. A new photodynamic laser therapy system using a pulse irradiation mode was optimized to reduce nonspecific thermal damage. In in vitro tissue specimens, tissue energy deposition rates were measured in three irradiation modes, CW, pulse, and burst-pulse. In addition, methods were tested for reducing variations in laser output and specific wavelength shifts using a thermoelectric cooler and thermistor. The average temperature elevation per 10 J/cm2 was 0.27°C, 0.09°C, and 0.08°C using the three methods, respectively, in pig muscle tissue. Variations in laser output were controlled within ± 0.2%, and specific wavelength shift was limited to ± 3 nm. Thus, optimization of a photodynamic laser system was achieved using a new pulse irradiation mode and controlled laser output to reduce potential thermal damage during conventional CW-based photodynamic therapy.

  19. Explaining outputs of primary health care: population and practice factors.

    PubMed Central

    Baker, D; Klein, R

    1991-01-01

    OBJECTIVE--To examine whether variations in the activities of general practice among family health service authorities can be explained by the population's characteristics and the organisation and resourcing of general practice. DESIGN--The family health services authorities were treated as discrete primary health care systems. Nineteen performance indicators reflecting the size, distribution, and characteristics of the population served; the organisation of general practice (inputs); and the activities generated by general practitioners and their staff (output) were analysed by stepwise regression. SETTING--90 family health services authorities in England. MAIN OUTCOME MEASURES--Rates of cervical smear testing, immunisation, prescribing, and night visiting. RESULTS--53% of the variation in uptake of cervical cytology was accounted for by Jarman score (t = -3.3), list inflation (-0.41), the proportion of practitioners over 65 (-0.64), and the number of ancillary staff per practitioner (2.5), and 70% of the variation in immunisation rates by standardised mortality ratios (-6.6), the proportion of practitioners aged over 65 (-4.8), and the number of practice nurses per practitioner (3.5). Standardised mortality ratios (8.4), the number of practitioners (2.3), the proportion over 65 (2.2), and the number of ancillary staff per practitioner (-3.1) accounted for 69% of variation in prescribing rates. 54% of the variation in night visiting was explained by standardised mortality ratios (7.1), the proportion of practitioners with list sizes below 1000 (-2.2), the proportion aged over 65 (-0.4), and the number of practice nurses per practitioner (-2.5). CONCLUSIONS--Family health services authorities are appropriate systems for studying output of general practice. Their performance indicators need to be refined and to be linked to other relevant factors, notably the performance of hospital, community, and social services. PMID:1653065

  20. Advanced thermopower wave in novel ZnO nanostructures/fuel composite.

    PubMed

    Lee, Kang Yeol; Hwang, Hayoung; Choi, Wonjoon

    2014-09-10

    Thermopower wave is a new concept of energy conversion from chemical to thermal to electrical energy, produced from the chemical reaction in well-designed hybrid structures between nanomaterials and combustible fuels. The enhancement and optimization of energy generation is essential to make it useful for future applications. In this study, we demonstrate that simple solution-based synthesized zinc oxide (ZnO) nanostructures, such as nanorods and nanoparticles are capable of generating high output voltage from thermopower waves. In particular, an astonishing improvement in the output voltage (up to 3 V; average 2.3 V) was achieved in a ZnO nanorods-based composite film with a solid fuel (collodion, 5% nitrocellulose), which generated an exothermic chemical reaction. Detailed analyses of thermopower waves in ZnO nanorods- and cube-like nanoparticles-based hybrid composites have been reported in which nanostructures, output voltage profile, wave propagation velocities, and surface temperature have been characterized. The average combustion velocities for a ZnO nanorods/fuel and a ZnO cube-like nanoparticles/fuel composites were 40.3 and 30.0 mm/s, while the average output voltages for these composites were 2.3 and 1.73 V. The high output voltage was attributed to the amplified temperature in intermixed composite of ZnO nanostructures and fuel due to the confined diffusive heat transfer in nanostructures. Moreover, the extended interfacial areas between ZnO nanorods and fuel induced large amplification in the dynamic change of the chemical potential, and it resulted in the enhanced output voltage. The differences of reaction velocity and the output voltage between ZnO nanorods- and ZnO cube-like nanoparticles-based composites were attributed to variations in electron mobility and grain boundary, as well as thermal conductivities of ZnO nanorods and particles. 
Understanding this astonishing increase, and how the output voltage and reaction velocity vary with precisely controlled ZnO nanostructures, will help in formulating specific strategies for obtaining enhanced energy generation from thermopower waves.

  1. False Operation of Static Random Access Memory Cells under Alternating Current Power Supply Voltage Variation

    NASA Astrophysics Data System (ADS)

    Sawada, Takuya; Takata, Hidehiro; Nii, Koji; Nagata, Makoto

    2013-04-01

    Static random access memory (SRAM) cores exhibit susceptibility against power supply voltage variation. False operation is investigated among SRAM cells under sinusoidal voltage variation on power lines introduced by direct RF power injection. A standard SRAM core of 16 kbyte in a 90 nm 1.5 V technology is diagnosed with built-in self test and on-die noise monitor techniques. The sensitivity of bit error rate is shown to be high against the frequency of injected voltage variation, while it is not greatly influenced by the difference in frequency and phase against SRAM clocking. It is also observed that the distribution of false bits is substantially random in a cell array.

  2. Surface applicator calibration and commissioning of an electronic brachytherapy system for nonmelanoma skin cancer treatment.

    PubMed

    Rong, Yi; Welsh, James S

    2010-10-01

    The Xoft Axxent x-ray source has been used for treating nonmelanoma skin cancer since the surface applicators became clinically available in 2009. The authors report comprehensive calibration procedures for the electronic brachytherapy (eBx) system with the surface applicators. The Xoft miniature tube (model S700) generates 50 kVp low-energy x rays. The new surface applicators are available in four sizes of 10, 20, 35, and 50 mm in diameter. The authors' tests include measurements of dose rate, air-gap factor, output stability, depth dose verification, beam flatness and symmetry, and treatment planning with patient-specific cutout factors. The TG-61 in-air method was used as a guideline for acquiring nominal dose-rate output at the skin surface. A soft x-ray parallel-plate chamber (PTW T34013) and an electrometer were used for the output commissioning. GafChromic EBT films were used for testing the properties of the treatment fields with the skin applicators. Solid water slabs were used to verify the depth dose and cutout factors. Patients with basal cell or squamous cell carcinoma were treated with eBx using a calibrated Xoft system with the low-energy x-ray source and the skin applicators. The average nominal dose-rate output at the skin surface for the 35 mm applicator is 1.35 Gy/min with ±5% variation for 16 sources. The dose-rate output and stability (within ±5% variation) were also measured for the remaining three applicators. For the same source, the output variation is within 2%. The effective source-surface distance was calculated based on the air-gap measurements for the four applicator sizes. The field flatness and symmetry are well within 5%. Percentage depth dose in water was provided by factory measurements and can be verified using solid water slabs. Treatment duration was calculated based on the nominal dose rate, the prescription fraction size, the depth dose percentage, and the cutout factor. The output factor needs to be measured for each case with varying shapes of cutouts. Together with TG-61, the authors' methodology provides comprehensive calibration procedures for medical physicists using the Xoft eBx system and skin applicators for nonmelanoma skin cancer treatments.

  3. Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem

    NASA Astrophysics Data System (ADS)

    Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady

    2017-12-01

    Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.

  4. Realization of a Quantum Random Generator Certified with the Kochen-Specker Theorem.

    PubMed

    Kulikov, Anatoly; Jerger, Markus; Potočnik, Anton; Wallraff, Andreas; Fedorov, Arkady

    2017-12-15

    Random numbers are required for a variety of applications from secure communications to Monte Carlo simulation. Yet randomness is an asymptotic property, and no output string generated by a physical device can be strictly proven to be random. We report an experimental realization of a quantum random number generator (QRNG) with randomness certified by quantum contextuality and the Kochen-Specker theorem. The certification is not performed in a device-independent way but through a rigorous theoretical proof of each outcome being value indefinite even in the presence of experimental imperfections. The analysis of the generated data confirms the incomputable nature of our QRNG.

  5. Quantifying the performance of in vivo portal dosimetry in detecting four types of treatment parameter variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bojechko, C.; Ford, E. C., E-mail: eford@uw.edu

    Purpose: To quantify the ability of electronic portal imaging device (EPID) dosimetry used during treatment (in vivo) in detecting variations that can occur in the course of patient treatment. Methods: Images of transmitted radiation from in vivo EPID measurements were converted to a 2D planar dose at isocenter and compared to the treatment planning dose using a prototype software system. Using the treatment planning system (TPS), four different types of variability were modeled: overall dose scaling, shifting the positions of the multileaf collimator (MLC) leaves, shifting of the patient position, and changes in the patient body contour. The gamma pass rate was calculated for the modified and unmodified plans and used to construct a receiver operator characteristic (ROC) curve to assess the detectability of the different parameter variations. The detectability is given by the area under the ROC curve (AUC). The TPS was also used to calculate the impact of the variations on the target dose–volume histogram. Results: Nine intensity modulated radiation therapy plans were measured for four different anatomical sites, consisting of 70 separate fields. Results show that in vivo EPID dosimetry was most sensitive to variations in the machine output, AUC = 0.70–0.94, changes in patient body habitus, AUC = 0.67–0.88, and systematic shifts in the MLC bank positions, AUC = 0.59–0.82. These deviations are expected to have a relatively small clinical impact [planning target volume (PTV) D{sub 99} change <7%]. Larger variations have even higher detectability. Displacements in the patient’s position and random variations in MLC leaf positions were not readily detectable, AUC < 0.64. The D{sub 99} of the PTV changed by up to 57% for the patient position shifts considered here. Conclusions: In vivo EPID dosimetry is able to detect relatively small variations in overall dose, systematic shifts of the MLCs, and changes in the patient habitus. Shifts in the patient’s position, which can introduce large changes in the target dose coverage, were not readily detected.
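
    The detectability metric in this record (the AUC) can be computed from gamma pass rates with a rank-based estimator. The sketch below uses made-up pass rates, not the study's data; lower pass rates are treated as evidence of a deviation.

```python
# Sketch: detectability of a treatment variation as the area under the
# ROC curve (AUC), computed from hypothetical gamma pass rates.

def auc_from_scores(baseline, modified):
    """Rank-based AUC: the probability that a randomly chosen modified
    plan has a lower (more suspicious) pass rate than a randomly chosen
    baseline plan, with ties counted as half."""
    wins = 0.0
    for b in baseline:
        for m in modified:
            if m < b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(baseline) * len(modified))

# Hypothetical gamma pass rates (%) for unmodified and dose-scaled plans
baseline = [98.1, 97.5, 99.0, 96.8, 98.4]
modified = [93.2, 95.0, 96.9, 91.8, 94.5]

print(auc_from_scores(baseline, modified))
```

An AUC near 0.5 means the variation is indistinguishable from baseline noise, while values approaching 1.0 correspond to the readily detectable variations reported above.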

  6. Identification of the structure parameters using short-time non-stationary stochastic excitation

    NASA Astrophysics Data System (ADS)

    Jarczewska, Kamila; Koszela, Piotr; Śniady, PaweŁ; Korzec, Aleksandra

    2011-07-01

    In this paper, we propose an approach to identifying the flexural stiffness or eigenfrequencies of a linear structure using a non-stationary stochastic excitation process. The idea of the proposed approach lies within time-domain input-output methods. The proposed method is based on transforming the dynamical problem into a static one by integrating the input and the output signals. The output signal is the structural response, i.e. the displacements due to a short-time, irregular load of random type. Systems with single and multiple degrees of freedom, as well as continuous systems, are considered.
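
    The integration idea can be illustrated on a single-degree-of-freedom system: integrating m x'' + k x = f(t) twice from zero initial conditions gives the static relation m x(t) + k ∬x = ∬f, so the stiffness follows from a least-squares fit. A minimal numerical sketch, with an assumed unit mass and an illustrative pulse load rather than the authors' formulation:

```python
def identify_stiffness(m=1.0, k_true=4.0, dt=0.001, t_end=2.0):
    """Simulate m*x'' + k*x = f(t) for a short force pulse, then recover k
    from the double-integral relation  k * (∬x) = (∬f) - m*x  by
    accumulating a one-parameter least-squares fit."""
    n = int(t_end / dt)
    x = v = 0.0
    ix = i2x = i_f = i2f = 0.0
    num = den = 0.0
    for step in range(n):
        t = step * dt
        f = 1.0 if t < 0.1 else 0.0        # short, pulse-like load
        # running single and double time integrals of output x and input f
        ix += x * dt
        i2x += ix * dt
        i_f += f * dt
        i2f += i_f * dt
        # least-squares accumulation for k in  k*(∬x) = (∬f) - m*x
        num += i2x * (i2f - m * x)
        den += i2x * i2x
        # advance the simulated response (semi-implicit Euler)
        v += dt * (f - k_true * x) / m
        x += dt * v
    return num / den

print(identify_stiffness())   # close to the true stiffness k = 4
```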

  7. Randomness Amplification under Minimal Fundamental Assumptions on the Devices

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ravishankar; Brandão, Fernando G. S. L.; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Wojewódka, Hanna

    2016-12-01

    Recently, a physically realistic protocol for amplifying the randomness of Santha-Vazirani sources to produce cryptographically secure random bits was proposed; however, for reasons of practical relevance, the crucial question remained open of whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components and in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two no-signaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate, and we prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test [a single input-output pair (u*, x*) for which the conditional probability P(x*|u*) is bounded away from 1 for all no-signaling strategies that optimally violate the Bell inequality] can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test that ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.

  8. Sunspot random walk and 22-year variation

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua

    2012-01-01

    We examine two stochastic models for consistency with observed long-term secular trends in sunspot number and a faint, but semi-persistent, 22-yr signal: (1) a null hypothesis, a simple one-parameter random-walk model of sunspot-number cycle-to-cycle change, and, (2) an alternative hypothesis, a two-parameter random-walk model with an imposed 22-yr alternating amplitude. The observed secular trend in sunspots, seen from solar cycle 5 to 23, would not be an unlikely result of the accumulation of multiple random-walk steps. Statistical tests show that a 22-yr signal can be resolved in historical sunspot data; that is, the probability is low that it would be realized from random data. On the other hand, the 22-yr signal has a small amplitude compared to random variation, and so it has a relatively small effect on sunspot predictions. Many published predictions for cycle 24 sunspots fall within the dispersion of previous cycle-to-cycle sunspot differences. The probability is low that the Sun will, with the accumulation of random steps over the next few cycles, walk down to a Dalton-like minimum. Our models support published interpretations of sunspot secular variation and 22-yr variation resulting from cycle-to-cycle accumulation of dynamo-generated magnetic energy.
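
    The two competing models can be sketched as simple simulations: a one-parameter random walk in cycle-to-cycle amplitude (the null hypothesis), and the same walk with an imposed two-cycle (22-yr) alternation (the alternative). The step size, alternation amplitude and starting value below are illustrative, not fitted to the sunspot record.

```python
import random

def simulate_cycles(n_cycles, step_sigma, alt_amp=0.0, start=100.0, seed=1):
    """Cycle-to-cycle random walk in sunspot-cycle amplitude.
    alt_amp > 0 superimposes a 22-yr (two-cycle) alternating signal,
    as in the two-parameter alternative model; amplitudes are clipped
    at zero since sunspot numbers cannot be negative."""
    rng = random.Random(seed)
    amps = [start]
    for k in range(1, n_cycles):
        step = rng.gauss(0.0, step_sigma)
        alternation = alt_amp * (1 if k % 2 == 0 else -1)
        amps.append(max(0.0, amps[-1] + step + alternation))
    return amps

null_model = simulate_cycles(19, step_sigma=25.0)             # cycles 5..23
alt_model = simulate_cycles(19, step_sigma=25.0, alt_amp=8.0)
print(len(null_model), len(alt_model))
```

Comparing many such null-model realizations against the observed record is the kind of test that shows a secular trend can arise from accumulated random steps, while a genuine 22-yr alternation is unlikely to appear in purely random data.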

  9. Output Beam Polarisation of X-ray Lasers with Transient Inversion

    NASA Astrophysics Data System (ADS)

    Janulewicz, K. A.; Kim, C. M.; Matouš, B.; Stiel, H.; Nishikino, M.; Hasegawa, N.; Kawachi, T.

    It is commonly accepted that X-ray lasers, as devices based on amplified spontaneous emission (ASE), do not show any specific polarization in the output beam. Theoretical analysis within the uniform (single-mode) approximation suggested that the output radiation should show some defined polarization feature, albeit one changing randomly from shot to shot. This hypothesis has been verified by experiment using the traditional double-pulse scheme of transient inversion. A membrane beam-splitter was used as a polarization selector. It was found that the output radiation has a significant component of p-polarisation in each shot. To explain the effect and place it in line with the available, but scarce, data, propagation and kinetic effects in the non-uniform plasma have been analysed.

  10. Open-Access Mega-Journals: A Bibliometric Profile.

    PubMed

    Wakeling, Simon; Willett, Peter; Creaser, Claire; Fry, Jenny; Pinfield, Stephen; Spezi, Valérie

    2016-01-01

    In this paper we present the first comprehensive bibliometric analysis of eleven open-access mega-journals (OAMJs). OAMJs are a relatively recent phenomenon, and have been characterised as having four key characteristics: large size; broad disciplinary scope; a Gold-OA business model; and a peer-review policy that seeks to determine only the scientific soundness of the research rather than evaluate the novelty or significance of the work. Our investigation focuses on four key modes of analysis: journal outputs (the number of articles published and changes in output over time); OAMJ author characteristics (nationalities and institutional affiliations); subject areas (the disciplinary scope of OAMJs, and variations in sub-disciplinary output); and citation profiles (the citation distributions of each OAMJ, and the impact of citing journals). We found that while the total output of the eleven mega-journals grew by 14.9% between 2014 and 2015, this growth is largely attributable to the increased output of Scientific Reports and Medicine. We also found substantial variation in the geographical distribution of authors. Several journals have a relatively high proportion of Chinese authors, and we suggest this may be linked to these journals' high Journal Impact Factors (JIFs). The mega-journals were also found to vary in subject scope, with several journals publishing disproportionately high numbers of articles in certain sub-disciplines. Our citation analysis offers support for Björk & Catani's suggestion that OAMJs' citation distributions can be similar to those of traditional journals, while noting considerable variation in citation rates across the eleven titles. We conclude that while the OAMJ term is useful as a means of grouping journals which share a set of key characteristics, there is no such thing as a "typical" mega-journal, and we suggest several areas for additional research that might help us better understand the current and future role of OAMJs in scholarly communication.

  11. Open-Access Mega-Journals: A Bibliometric Profile

    PubMed Central

    Willett, Peter; Creaser, Claire; Fry, Jenny; Pinfield, Stephen; Spezi, Valérie

    2016-01-01

    In this paper we present the first comprehensive bibliometric analysis of eleven open-access mega-journals (OAMJs). OAMJs are a relatively recent phenomenon, and have been characterised as having four key characteristics: large size; broad disciplinary scope; a Gold-OA business model; and a peer-review policy that seeks to determine only the scientific soundness of the research rather than evaluate the novelty or significance of the work. Our investigation focuses on four key modes of analysis: journal outputs (the number of articles published and changes in output over time); OAMJ author characteristics (nationalities and institutional affiliations); subject areas (the disciplinary scope of OAMJs, and variations in sub-disciplinary output); and citation profiles (the citation distributions of each OAMJ, and the impact of citing journals). We found that while the total output of the eleven mega-journals grew by 14.9% between 2014 and 2015, this growth is largely attributable to the increased output of Scientific Reports and Medicine. We also found substantial variation in the geographical distribution of authors. Several journals have a relatively high proportion of Chinese authors, and we suggest this may be linked to these journals’ high Journal Impact Factors (JIFs). The mega-journals were also found to vary in subject scope, with several journals publishing disproportionately high numbers of articles in certain sub-disciplines. Our citation analysis offers support for Björk & Catani’s suggestion that OAMJs’ citation distributions can be similar to those of traditional journals, while noting considerable variation in citation rates across the eleven titles. We conclude that while the OAMJ term is useful as a means of grouping journals which share a set of key characteristics, there is no such thing as a “typical” mega-journal, and we suggest several areas for additional research that might help us better understand the current and future role of OAMJs in scholarly communication. PMID:27861511

  12. Population pressure and agricultural productivity in Bangladesh.

    PubMed

    Chaudhury, R H

    1983-01-01

    The relationship between population pressure or density and agricultural productivity is examined by analyzing the changes in the land-man ratio and the changes in the level of land yield in the 17 districts of Bangladesh from 1961-64 to 1974-77. The earlier years were pre-Green Revolution, whereas in the later years new technology had been introduced in some parts of the country. Net sown area, value of total agricultural output, and number of male agricultural workers were the main variables. For the country as a whole, agricultural output grew by 1.2%/year during 1961-64 to 1974-77, while the number of male agricultural workers grew at 1.5%/year. The major source of agricultural growth during the 1960s was found to be increased land yield associated with a higher ratio of labor to land. The findings imply that a more intensified pattern of land use, resulting in both higher yield and higher labor input per unit of land, is the main source of growth of output and employment in agriculture. There is very little scope for extending the arable area in Bangladesh; increased production must come from multiple cropping, especially through expansion of irrigation and drainage, and from increases in per acre yields, principally through adoption of high-yield varieties, which explained 87% of the variation in output per acre during the 1970s. Regional variation in output was also associated with variation in cropping intensity and the proportion of land given to high-yield varieties. There is considerable room for modernizing agricultural technology in Bangladesh: in 1975-76 less than 9% of total crop land was irrigated and only 12% of total acreage was under high-yield varieties. The adoption of new food-grain technology and increased use of high-yield varieties in Bangladesh's predominantly subsistence-based agriculture would require far-reaching institutional and organizational changes and more capital. Without effective population control, expansion of the area under high-yield varieties would not improve the employment situation in the foreseeable future.

  13. Datasets for supplier selection and order allocation with green criteria, all-unit quantity discounts and varying number of suppliers.

    PubMed

    Hamdan, Sadeque; Cheaitou, Ali

    2017-08-01

    This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discounts and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, with the problem size varied, and are then used by the optimization problem to obtain the corresponding optimal outputs and the corresponding computation time.

  14. An Evaluation of Compressed Work Schedules and Their Impact on Electricity Use

    DTIC Science & Technology

    2010-03-01

    problems by introducing uncertainty to the known parameters of a given process (Sobol, 1975). The MCS output represents approximate values of the...process within the observed parameters; the output is provided within a statistical distribution of likely outcomes (Sobol, 1975). In this...The Monte Carlo method is appropriate for “any process whose development is affected by random factors” (Sobol, 1975:10). MCS introduces

  15. Applications of Probabilistic Combiners on Linear Feedback Shift Register Sequences

    DTIC Science & Technology

    2016-12-01

    on the resulting output strings show a drastic increase in complexity, while simultaneously passing the stringent randomness tests required by the...a three-variable function. Our tests on the resulting output strings show a drastic increase in complexity, while simultaneously passing the...10001101 01000010 11101001 Decryption of a message that has been encrypted using bitwise XOR is quite simple. Since each bit is its own additive inverse

  16. Software Obfuscation With Symmetric Cryptography

    DTIC Science & Technology

    2008-03-01

    of y = a * b + c Against Random Functions...Appendix C: Black-box Analysis of Fibonacci Against Random Functions...Metric...Figure 19. Standard Deviations of All Fibonacci Output Bits by Metric...Figure 20...caveat to encryption strength is that what may be strong presently may not always be strong; the Data Encryption Standard (DES) was once considered

  17. Accuracy of indirect estimation of power output from uphill performance in cycling.

    PubMed

    Millet, Grégoire P; Tronche, Cyrille; Grappe, Frédéric

    2014-09-01

    To use measurement by cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely influenced by windy conditions, that the bias would be diminished in steeper climbs, and that windy conditions would induce larger bias in Pest. Sixteen well-trained cyclists performed 15 uphill-cycling trials (range: length 1.3-6.3 km, slope 4.4-10.7%) in a random order. Trials included different riding positions in a group (lead or follow) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with a methodology used by journalists reporting on the Tour de France. Overall, the difference between Pmes and Pest was -0.95% (95%CI: -10.4%, +8.5%) for all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was considered trivial. Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most confounding factor. The mean estimated values are close to the power-output values measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be higher. This observation calls into question the validity of releasing individual values without reporting the range of random errors.
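
    The indirect estimate works by summing the power needed against gravity, rolling resistance and aerodynamic drag at the observed climbing speed. A sketch of this kind of model follows; the drag, rolling-resistance and drivetrain-efficiency parameters are illustrative assumptions, not the values used in the study.

```python
def estimated_uphill_power(mass_kg, speed_ms, slope, crr=0.004,
                           cda=0.3, rho=1.2, drivetrain_eff=0.975):
    """Indirect power estimate from climbing speed. mass_kg is rider plus
    bike, slope is the grade as a fraction, crr/cda/rho/drivetrain_eff
    are assumed typical values (rolling coefficient, drag area, air
    density, drivetrain efficiency)."""
    g = 9.81
    p_gravity = mass_kg * g * speed_ms * slope   # lifting against gravity
    p_rolling = mass_kg * g * speed_ms * crr     # rolling resistance
    p_aero = 0.5 * rho * cda * speed_ms ** 3     # still-air drag
    return (p_gravity + p_rolling + p_aero) / drivetrain_eff

# 78 kg rider + bike climbing an 8% grade at 5 m/s (18 km/h)
print(round(estimated_uphill_power(78.0, 5.0, 0.08)))
```

The still-air drag term is exactly where the ±6-10% random error enters: any unmeasured wind or drafting changes p_aero while leaving the observed climbing speed unchanged.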

  18. N-state random switching based on quantum tunnelling

    NASA Astrophysics Data System (ADS)

    Bernardo Gavito, Ramón; Jiménez Urbanos, Fernando; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J.; Woodhead, Christopher S.; Missous, Mohamed; Roedig, Utz; Young, Robert J.

    2017-08-01

    In this work, we show how the hysteretic behaviour of resonant tunnelling diodes (RTDs) can be exploited for new functionalities. In particular, the RTDs exhibit a stochastic 2-state switching mechanism that could be useful for random number generation and cryptographic applications. This behaviour can be scaled to N-bit switching by connecting various RTDs in series. The InGaAs/AlAs RTDs used in our experiments display very sharp negative differential resistance (NDR) peaks at room temperature which show hysteresis cycles that, rather than having a fixed switching threshold, exhibit a probability distribution about a central value. We propose to use this intrinsic uncertainty emerging from the quantum nature of the RTDs as a source of randomness. We show that a combination of two RTDs in series results in devices with three-state outputs and discuss the possibility of scaling to N-state devices by subsequent series connections of RTDs, which we demonstrate for up to the 4-state case. We suggest that the intrinsic uncertainty in the conduction paths of resonant tunnelling diodes can behave as a source of randomness that can be integrated into current electronics to produce on-chip true random number generators. The N-shaped I-V characteristic of RTDs results in a two-level random voltage output when driven with current pulse trains. Electrical characterisation and randomness testing of the devices were conducted in order to determine the validity of the true-randomness assumption. Based on the results obtained for the single-RTD case, we suggest the possibility of using multi-well devices to generate N-state random switching devices for use in random number generation or multi-valued logic devices.

  19. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.

    Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with a high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005, for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin valued currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation’s efforts in nuclear power, be it for electricity or nuclear weapons, are an enterprise with a large economic footprint: an effort so large that it should discernibly perturb coarse country-level economic data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country’s or region’s economy in terms of the requirements of industries to produce the current level of economic output. An IO table row shows the distribution of an industry’s output to the industrial sectors, while a table column shows the input required of each industrial sector by a given industry.
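
    The classification step (separating countries by a handful of informative IO-table cells) can be illustrated with a toy tree ensemble. This is only a sketch of the voting idea behind Random Forest, applied to hypothetical, scaled IO cells rather than the authors' data; in practice a library implementation such as scikit-learn's RandomForestClassifier would be used.

```python
import random

def train_stump(rows, labels, feature):
    """Single-feature threshold split: the threshold is the midpoint
    between the two class means (a toy stand-in for a tree split)."""
    mean0 = sum(r[feature] for r, y in zip(rows, labels) if y == 0) / labels.count(0)
    mean1 = sum(r[feature] for r, y in zip(rows, labels) if y == 1) / labels.count(1)
    return feature, (mean0 + mean1) / 2.0

def forest_predict(stumps, row):
    """Majority vote of the stumps, as in a tree ensemble."""
    votes = sum(1 for f, thr in stumps if row[f] > thr)
    return 1 if 2 * votes >= len(stumps) else 0

# Hypothetical scaled IO-table cells: class-1 economies show elevated
# activity in three indicative sectors.
rows = [[1, 2, 1], [2, 1, 2], [1, 1, 1], [5, 6, 5], [6, 5, 6], [5, 5, 5]]
labels = [0, 0, 0, 1, 1, 1]

rng = random.Random(0)
features = rng.sample(range(3), 3)   # random feature subsampling, RF-style
stumps = [train_stump(rows, labels, f) for f in features]
print(forest_predict(stumps, [6, 6, 6]), forest_predict(stumps, [1, 2, 1]))
```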

  1. Speech Output Technologies in Interventions for Individuals with Autism Spectrum Disorders: A Scoping Review.

    PubMed

    Schlosser, Ralf W; Koul, Rajinder K

    2015-01-01

    The purpose of this scoping review was to (a) map the research evidence on the effectiveness of augmentative and alternative communication (AAC) interventions using speech output technologies (e.g., speech-generating devices, mobile technologies with AAC-specific applications, talking word processors) for individuals with autism spectrum disorders, (b) identify gaps in the existing literature, and (c) posit directions for future research. Outcomes related to speech, language, and communication were considered. A total of 48 studies (47 single case experimental designs and 1 randomized control trial) involving 187 individuals were included. Results were reviewed in terms of three study groupings: (a) studies that evaluated the effectiveness of treatment packages involving speech output, (b) studies comparing one treatment package with speech output to other AAC modalities, and (c) studies comparing the presence with the absence of speech output. The state of the evidence base is discussed and several directions for future research are posited.

  2. Theoretical modeling, simulation and experimental study of hybrid piezoelectric and electromagnetic energy harvester

    NASA Astrophysics Data System (ADS)

    Li, Ping; Gao, Shiqiao; Cong, Binglong

    2018-03-01

    In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation and experimental test. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current and power are derived. The performance of the harvester is then simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy-storage circuit. The results show that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester can enhance the output power and harvesting efficiency. Furthermore, under harmonic excitation the output power increases linearly with increasing acceleration amplitude, while under random excitation it increases with increasing acceleration spectral density. In addition, the stronger the coupling, the larger the output power, and there is an optimal load resistance at which the harvester delivers maximal power.
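
    The existence of an optimal load resistance follows from maximum power transfer: a source with internal resistance delivers peak power into a matched load. A minimal illustration with assumed values (not the harvester's actual parameters):

```python
def load_power(v_oc, r_int, r_load):
    """Power delivered to a resistive load by a source with open-circuit
    voltage v_oc and internal resistance r_int: P = I^2 * R_load."""
    i = v_oc / (r_int + r_load)
    return i * i * r_load

# Assumed internal resistance 50 ohm: sweep three candidate loads
r_int = 50.0
powers = {r: load_power(2.0, r_int, r) for r in (10.0, 50.0, 250.0)}
best = max(powers, key=powers.get)
print(best)   # the matched load, equal to r_int
```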

  3. Near real-time analysis of extrinsic Fabry-Perot interferometric sensors under damped vibration using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Dua, Rohit; Watkins, Steve E.

    2009-03-01

    Strain analysis under vibration can provide insight into structural health. An Extrinsic Fabry-Perot Interferometric (EFPI) sensor under vibrational strain generates a non-linearly modulated output. Advanced signal processing techniques are required to demodulate this non-linear output and extract important information such as absolute strain. Past research has employed Artificial Neural Networks (ANNs) and Fast Fourier Transforms (FFTs) to demodulate the EFPI sensor under limited conditions: the demodulation systems could handle only variations in the absolute value of strain and in the frequency of actuation during a vibration event. This project uses an ANN approach to extend the demodulation system to include variation in the damping coefficient of the actuating vibration in a near real-time vibration scenario. A computer simulation of the theoretical output of the EFPI sensor provides training and testing data to demonstrate the approaches. The FFT is performed on a window of the EFPI output data; a small window of observation that still maintains low absolute-strain prediction errors was chosen heuristically. Results are obtained and compared for different ANN architectures, including a multi-layered feedforward ANN trained using backpropagation (BPNN) and Generalized Regression Neural Networks (GRNNs). A two-layered algorithm-fusion system is developed and tested that yields better results.

  4. Hemostatic techniques following multilevel posterior lumbar spine surgery: a randomized control trial.

    PubMed

    Wu, Jian; Jin, Yongming; Zhang, Jun; Shao, Haiyu; Yang, Di; Chen, Jinping

    2014-12-01

    This was a prospective, randomized controlled clinical study. The objective was to determine the efficacy of absorbable gelatin sponge in reducing blood loss, as well as shortening the length of hospital stay, in patients undergoing multilevel posterior lumbar spinal surgery. Absorbable gelatin sponge is reported to decrease postoperative drain output and the length of hospital stay after multilevel posterior cervical spine surgery. However, there is a dearth of prospective studies of the efficacy of absorbable gelatin sponge in reducing postoperative blood loss and shortening the length of hospital stay in patients undergoing multilevel posterior lumbar spinal surgery. A total of 82 consecutive patients who underwent multilevel posterior lumbar fusion or posterior lumbar interbody fusion between June 2011 and June 2012 were prospectively randomized into one of two groups according to whether absorbable gelatin sponge was used for postoperative blood management. Demographic distribution, total drain output, blood transfusion rate, length of stay, number of readmissions, and postoperative complications were analyzed. Total drain output averaged 173 mL in the study group and 392 mL in the control group (P=0.000). Perioperative allogeneic blood transfusion rates were lower in the Gelfoam group (34.1% vs. 58.5%, P=0.046); moreover, length of stay in patients with the use of absorbable gelatin sponge (12.58 d) was significantly shorter (P=0.009) than for patients in the control group (14.46 d). No patient developed adverse reactions attributable to the absorbable gelatin sponge. Application of absorbable gelatin sponge at the end of multilevel posterior lumbar fusion can significantly decrease postoperative drain output and length of hospital stay.

  5. Robustness, evolvability, and the logic of genetic regulation.

    PubMed

    Payne, Joshua L; Moore, Jason H; Wagner, Andreas

    2014-01-01

    In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene's cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: For the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, so that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype.

  6. Robustness, Evolvability, and the Logic of Genetic Regulation

    PubMed Central

    Payne, Joshua L.; Moore, Jason H.; Wagner, Andreas

    2014-01-01

    In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene’s cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: for the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, such that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype. PMID:23373974
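
The enumeration-and-robustness idea above can be sketched in a small computational model. The following is an illustrative toy, not the paper's actual model: a two-gene Boolean circuit whose genotype is the pair of 4-row truth tables (8 bits, so 256 genotypes), whose phenotype is the limit cycle reached from a fixed initial state, and whose genotypic robustness is the fraction of single-bit truth-table mutations that preserve that phenotype.

```python
import itertools

# Toy two-gene Boolean circuit (an illustrative simplification of the paper's
# model): each gene's signal-integration logic is a 4-row truth table over the
# current states of both genes; a genotype is the 8 concatenated table bits.
def attractor(g1, g2, init=(1, 0)):
    """Phenotype: the limit cycle reached from a fixed initial state."""
    seen = {}
    s = init
    while s not in seen:
        seen[s] = len(seen)
        idx = 2 * s[0] + s[1]           # row index into each truth table
        s = (g1[idx], g2[idx])          # synchronous update of both genes
    first = seen[s]                     # start of the limit cycle
    ordered = sorted(seen, key=seen.get)
    return tuple(ordered[first:])

def robustness(bits):
    """Fraction of single-bit genotype mutations that preserve the phenotype."""
    ph = attractor(bits[:4], bits[4:])
    same = 0
    for i in range(8):
        m = list(bits)
        m[i] ^= 1                       # flip one truth-table entry
        if attractor(tuple(m[:4]), tuple(m[4:])) == ph:
            same += 1
    return same / 8

genotypes = list(itertools.product((0, 1), repeat=8))   # all 256 genotypes
rob = [robustness(g) for g in genotypes]
mean_robustness = sum(rob) / len(rob)
```

Exhaustive enumeration is feasible here only because the toy space is tiny; the point is the many-to-one genotype-to-phenotype map, in which several genotypes share an attractor while differing in how many of their one-bit neighbors keep it.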

  7. Metastable dynamical patterns and their stabilization in arrays of bidirectionally coupled sigmoidal neurons

    NASA Astrophysics Data System (ADS)

    Horikawa, Yo

    2013-12-01

    Transient patterns in a bistable ring of bidirectionally coupled sigmoidal neurons were studied. When the system had a pair of spatially uniform steady solutions, the instability of unstable spatially nonuniform steady solutions decreased exponentially with the number of neurons because of the symmetry of the system. As a result, transient spatially nonuniform patterns showed dynamical metastability: Their duration increased exponentially with the number of neurons and the duration of randomly generated patterns obeyed a power-law distribution. However, these metastable dynamical patterns were easily stabilized in the presence of small variations in coupling strength. Metastable rotating waves and their pinning in the presence of asymmetry in the direction of coupling and the disappearance of metastable dynamical patterns due to asymmetry in the output function of a neuron were also examined. Further, in a two-dimensional array of neurons with nearest-neighbor coupling, intrinsically one-dimensional patterns were dominant in transients, and self-excitation in these neurons affected the metastable dynamical patterns.

  8. An ultra-low power output capacitor-less low-dropout regulator with slew-rate-enhanced circuit

    NASA Astrophysics Data System (ADS)

    Cheng, Xin; Zhang, Yu; Xie, Guangjun; Yang, Yizhong; Zhang, Zhang

    2018-03-01

    An ultra-low power output-capacitorless low-dropout (LDO) regulator with a slew-rate-enhanced (SRE) circuit is introduced. The increased slew rate is achieved by sensing the transient output voltage of the LDO and then charging (or discharging) the gate capacitor quickly. In addition, a buffer with ultra-low output impedance is presented to improve line and load regulations. The design is fabricated in SMIC 0.18 μm CMOS technology. Experimental results show that the proposed LDO regulator consumes an ultra-low quiescent current of only 1.2 μA. The output current range is from 10 μA to 200 mA, and the corresponding variation of output voltage is less than 40 mV. Moreover, the measured line regulation and load regulation are 15.38 mV/V and 0.4 mV/mA, respectively. Project supported by the National Natural Science Foundation of China (Nos. 61401137, 61404043, 61674049).

  9. A Solar-luminosity Model and Climate

    NASA Technical Reports Server (NTRS)

    Perry, Charles A.

    1990-01-01

    Although the mechanisms of climatic change are not completely understood, the potential causes include changes in the Sun's luminosity. Solar activity in the form of sunspots, flares, proton events, and radiation fluctuations has displayed periodic tendencies. Two types of proxy climatic data that can be related to periodic solar activity are varved geologic formations and freshwater diatom deposits. A model for solar luminosity was developed by using the geometric progression of harmonic cycles that is evident in solar and geophysical data. The model assumes that variation in global energy input is a result of many periods of individual solar-luminosity variations. The 0.1-percent variation of the solar constant measured during the last sunspot cycle provided the basis for determining the amplitude of each luminosity cycle. Model output is a summation of the amplitudes of each cycle of a geometric progression of harmonic sine waves that are referenced to the 11-year average solar cycle. When the last eight cycles in Emiliani's oxygen-18 variations from deep-sea cores were standardized to the average length of glaciations during the Pleistocene (88,000 years), correlation coefficients with the model output ranged from 0.48 to 0.76. In order to calibrate the model to real time, model output was graphically compared to indirect records of glacial advances and retreats during the last 24,000 years and with sea-level rises during the Holocene. Carbon-14 production during the last millennium and elevations of the Great Salt Lake for the last 140 years demonstrate significant correlations with modeled luminosity. Major solar flares during the last 90 years match well with the time-calibrated model.
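
The harmonic-summation structure of such a model can be sketched as follows. Only the anchoring ideas come from the abstract: periods referenced to the 11-year average sunspot cycle and an amplitude scale set by the 0.1-percent variation of the solar constant. The progression ratio and number of harmonics here are illustrative assumptions, not Perry's published values.

```python
import math

# Hypothetical sketch of a harmonic-summation luminosity model: the anomaly is
# a sum of sine waves whose periods form a geometric progression referenced to
# the 11-year sunspot cycle. Ratio and harmonic count are illustrative guesses.
BASE_PERIOD = 11.0                        # years, average sunspot cycle
SOLAR_CONSTANT = 1361.0                   # W/m^2
BASE_AMPLITUDE = 0.001 * SOLAR_CONSTANT   # 0.1-percent solar-constant variation

def luminosity_anomaly(t_years, n_harmonics=6, ratio=3.0):
    """Summed anomaly (W/m^2) of the harmonic progression at time t_years."""
    return sum(
        BASE_AMPLITUDE * math.sin(2.0 * math.pi * t_years / (BASE_PERIOD * ratio ** k))
        for k in range(n_harmonics)
    )

series = [luminosity_anomaly(t) for t in range(0, 1000, 10)]
```

Because the periods form a geometric progression, long-period terms slowly modulate the familiar short solar cycles, which is how such a model can be compared against both decadal and glacial-scale proxy records.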

  10. Temperature compensated and self-calibrated current sensor

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-09-25

    A method is described to provide temperature compensation and reduction of drift due to aging for a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. The offset voltage signal generated by each magnetic field sensor is used to correct variations in the output signal due to temperature variations and aging.

  11. Flame detector operable in presence of proton radiation

    NASA Technical Reports Server (NTRS)

    Walker, D. J.; Turnage, J. E.; Linford, R. M. F.; Cornish, S. D. (Inventor)

    1974-01-01

    A detector of ultraviolet radiation for operation in a space vehicle which orbits through high intensity radiation areas is described. Two identical ultraviolet sensor tubes are mounted within a shield which limits to acceptable levels the amount of proton radiation reaching the sensor tubes. The shield has an opening which permits ultraviolet radiation to reach one of the sensing tubes. The shield keeps ultraviolet radiation from reaching the other sensor tube, designated the reference tube. The circuitry of the detector subtracts the output of the reference tube from the output of the sensing tube, and any portion of the output of the sensing tube which is due to proton radiation is offset by the output of the reference tube. A delay circuit in the detector prevents false alarms by keeping statistical variations in the proton radiation sensed by the two sensor tubes from developing an output signal.

  12. Optical Limiting Using the Two-Photon Absorption Electrical Modulation Effect in HgCdTe Photodiode

    PubMed Central

    Cui, Haoyang; Yang, Junjie; Zeng, Jundong; Tang, Zhong

    2013-01-01

    The electrical modulation properties of the output intensity of two-photon absorption (TPA) pumping were analyzed in this paper. The frequency-dispersion dependence and the electric-field dependence of TPA were calculated using the Wherrett and Garcia theoretical models, respectively. Both predicted a dramatic variation of the TPA coefficient, which was attributed to the increase in the transition rate. The output intensity of a laser pulse propagating in the pn-junction device was calculated using the function-transfer method. The results show that the output intensity increases nonlinearly with increasing incident intensity and eventually reaches saturation. The output saturation intensity depends on the electric field strength: the greater the electric field, the smaller the output intensity. Consequently, the clamped saturation intensity can be controlled by the electric field. The principal advantage of electrical modulation is that the TPA coefficient can be varied continuously over an extremely wide range, thus adjusting the output intensity accordingly. This large tuning range provides a means of controlling the steady output intensity of TPA by adjusting the electric field. PMID:24198721
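
The saturation behavior described above follows from the TPA propagation equation dI/dz = -βI², whose transmitted intensity I₀/(1 + βLI₀) clamps at 1/(βL) for large inputs. The sketch below integrates that equation numerically with arbitrary example values of β and L, not the HgCdTe device parameters from the paper.

```python
import numpy as np

# Illustrative numbers only: beta (TPA coefficient) and L (device thickness)
# are arbitrary, chosen so the clamp level 1/(beta*L) = 1e12 in these units.
def transmitted(I0, beta=1e-9, L=1e-3, steps=1000):
    """Euler integration of dI/dz = -beta * I**2 through thickness L."""
    I = I0
    dz = L / steps
    for _ in range(steps):
        I -= beta * I * I * dz
    return I

incident = np.logspace(10, 14, 5)                  # incident intensities
out = [transmitted(I0) for I0 in incident]
clamp = 1.0 / (1e-9 * 1e-3)                        # analytic saturation level
```

Raising β (here, via the electric field, per the abstract) lowers the clamp, which is exactly the optical-limiting action: weak signals pass almost unattenuated while intense ones are held near the saturation level.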

  13. Spatial, Temporal, and Density-Dependent Components of Habitat Quality for a Desert Owl

    PubMed Central

    Flesch, Aaron D.; Hutto, Richard L.; van Leeuwen, Willem J. D.; Hartfield, Kyle; Jacobs, Sky

    2015-01-01

    Spatial variation in resources is a fundamental driver of habitat quality but the realized value of resources at any point in space may depend on the effects of conspecifics and stochastic factors, such as weather, which vary through time. We evaluated the relative and combined effects of habitat resources, weather, and conspecifics on habitat quality for ferruginous pygmy-owls (Glaucidium brasilianum) in the Sonoran Desert of northwest Mexico by monitoring reproductive output and conspecific abundance over 10 years in and around 107 territory patches. Variation in reproductive output was much greater across space than time, and although habitat resources explained a much greater proportion of that variation (0.70) than weather (0.17) or conspecifics (0.13), evidence for interactions among each of these components of the environment was strong. Relative to habitat that was persistently low in quality, high-quality habitat buffered the negative effects of conspecifics and amplified the benefits of favorable weather, but did not buffer the disadvantages of harsh weather. Moreover, the positive effects of favorable weather at low conspecific densities were offset by intraspecific competition at high densities. Although realized habitat quality declined with increasing conspecific density suggesting interference mechanisms associated with an Ideal Free Distribution, broad spatial heterogeneity in habitat quality persisted. Factors linked to food resources had positive effects on reproductive output but only where nest cavities were sufficiently abundant to mitigate the negative effects of heterospecific enemies. Annual precipitation and brooding-season temperature had strong multiplicative effects on reproductive output, which declined at increasing rates as drought and temperature increased, reflecting conditions predicted to become more frequent with climate change. Because the collective environment influences habitat quality in complex ways, integrated approaches that consider habitat resources, stochastic factors, and conspecifics are necessary to accurately assess habitat quality. PMID:25786257

  14. Spatial, temporal, and density-dependent components of habitat quality for a desert owl.

    PubMed

    Flesch, Aaron D; Hutto, Richard L; van Leeuwen, Willem J D; Hartfield, Kyle; Jacobs, Sky

    2015-01-01

    Spatial variation in resources is a fundamental driver of habitat quality but the realized value of resources at any point in space may depend on the effects of conspecifics and stochastic factors, such as weather, which vary through time. We evaluated the relative and combined effects of habitat resources, weather, and conspecifics on habitat quality for ferruginous pygmy-owls (Glaucidium brasilianum) in the Sonoran Desert of northwest Mexico by monitoring reproductive output and conspecific abundance over 10 years in and around 107 territory patches. Variation in reproductive output was much greater across space than time, and although habitat resources explained a much greater proportion of that variation (0.70) than weather (0.17) or conspecifics (0.13), evidence for interactions among each of these components of the environment was strong. Relative to habitat that was persistently low in quality, high-quality habitat buffered the negative effects of conspecifics and amplified the benefits of favorable weather, but did not buffer the disadvantages of harsh weather. Moreover, the positive effects of favorable weather at low conspecific densities were offset by intraspecific competition at high densities. Although realized habitat quality declined with increasing conspecific density suggesting interference mechanisms associated with an Ideal Free Distribution, broad spatial heterogeneity in habitat quality persisted. Factors linked to food resources had positive effects on reproductive output but only where nest cavities were sufficiently abundant to mitigate the negative effects of heterospecific enemies. Annual precipitation and brooding-season temperature had strong multiplicative effects on reproductive output, which declined at increasing rates as drought and temperature increased, reflecting conditions predicted to become more frequent with climate change. Because the collective environment influences habitat quality in complex ways, integrated approaches that consider habitat resources, stochastic factors, and conspecifics are necessary to accurately assess habitat quality.

  15. Narrow-linewidth Q-switched random distributed feedback fiber laser.

    PubMed

    Xu, Jiangming; Ye, Jun; Xiao, Hu; Leng, Jinyong; Wu, Jian; Zhang, Hanwei; Zhou, Pu

    2016-08-22

    A narrow-linewidth Q-switched random fiber laser (RFL) based on a half-opened cavity, realized with a narrow-linewidth fiber Bragg grating (FBG) and a 3 km section of passive fiber, has been proposed and experimentally investigated. The narrow-linewidth lasing is generated by the spectral filtering of three FBGs with linewidths of 1.21 nm, 0.56 nm, and 0.12 nm, respectively. Q switching of the distributed cavity is achieved by placing an acousto-optical modulator (AOM) between the FBG and the passive fiber. The maximal output powers of the narrow-linewidth RFLs with the three different FBGs are 0.54 W, 0.27 W, and 0.08 W, respectively. Furthermore, the repetition rate of the output pulses is 500 kHz, and the pulse durations are about 500 ns. The corresponding pulse energies are about 1.08 μJ, 0.54 μJ, and 0.16 μJ, accordingly. The FBG linewidth influences all of the output characteristics: the narrower the FBG, the higher the pump threshold, the lower the output power at the same pump level, the more serious the linewidth broadening, and thus the higher the proportion of the CW-ground component in the output pulse trains. Thanks to the assistance of a band-pass filter (BPF), the proportion of the CW-ground of the narrow-linewidth Q-switched RFL under the relatively high-pump, low-output condition can be reduced effectively. The experimental results indicate that it is challenging to demonstrate a narrow-linewidth Q-switched RFL with high-quality output, but further power scaling and linewidth narrowing are possible with optimized operating parameters and a more powerful pump source. To the best of our knowledge, this is the first demonstration of narrow-linewidth generation in a Q-switched RFL.

  16. Does an L-glutamine-containing, Glucose-free, Oral Rehydration Solution Reduce Stool Output and Time to Rehydrate in Children with Acute Diarrhoea? A Double-blind Randomized Clinical Trial

    PubMed Central

    Gutiérrez, Claudia; Villa, Sofía; Mota, Felipe R.; Calva, Juan J.

    2007-01-01

    This study assessed whether an oral rehydration solution (ORS) in which glucose is replaced by L-glutamine (L-glutamine ORS) is more effective than the standard glucose-based rehydration solution recommended by the World Health Organization (WHO-ORS) in reducing the stool volume and time to rehydrate in acute diarrhoea. In a double-blind, randomized controlled trial in a Mexican hospital, 147 dehydrated children, aged 1–60 month(s), were assigned either to the WHO-ORS (74 children), or to the L-glutamine ORS (73 children) and followed until successful rehydration. There were no significant differences between the groups in stool output during the first four hours, time to successful rehydration, volume of ORS required for rehydration, urinary output, and vomiting. This was independent of rotavirus-associated infection. An L-glutamine-containing glucose-free ORS seems not to offer greater clinical benefit than the standard WHO-ORS in mildly-to-moderately-dehydrated children with acute non-cholera diarrhoea. PMID:18330060

  17. Labor Supply and Consumption of Food in a Closed Economy under a Range of Fixed- and Random-Ratio Schedules: Tests of Unit Price

    ERIC Educational Resources Information Center

    Madden, Gregory J.; Dake, Jamie M.; Mauel, Ellie C.; Rowe, Ryan R.

    2005-01-01

    The behavioral economic concept of unit price predicts that consumption and response output (labor supply) are determined by the unit price at which a good is available regardless of the value of the cost and benefit components of the unit price ratio. Experiment 1 assessed 4 pigeons' consumption and response output at a range of unit prices. In…

  18. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  19. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
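
A software caricature of the jitter-harvesting idea above: simulate a ring oscillator whose half-period carries Gaussian jitter, sample its state on an unrelated slower clock, and check the bit bias. This is an idealized model for intuition only; the paper's generator samples physical oscillators in an FPGA with carry-chain phase shifting, which no software simulation reproduces.

```python
import random

# Idealized model (assumption): each half-period of the ring oscillator is
# Gaussian-jittered; a slower free-running clock samples the oscillator state.
def sampled_oscillator_bits(n_bits, half_period=0.5, jitter=0.05,
                            sample_interval=7.3):
    bits = []
    t_edge = 0.0       # time of the next oscillator toggle
    state = 0
    t_sample = 0.0
    for _ in range(n_bits):
        t_sample += sample_interval
        while t_edge < t_sample:
            t_edge += max(1e-6, random.gauss(half_period, jitter))
            state ^= 1                  # oscillator toggles each half-period
        bits.append(state)
    return bits

random.seed(1)
bits = sampled_oscillator_bits(10000)
bias = sum(bits) / len(bits)            # should hover near 0.5
```

The jitter accumulated between samples makes the sampled phase diffuse, so the bit stream decorrelates over a few samples; the paper's multi-phase sampling raises throughput by taking many such decorrelated samples per clock.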

  20. ELECTRONIC TRIGGER CIRCUIT

    DOEpatents

    Russell, J.A.G.

    1958-01-01

    An electronic trigger circuit is described of the type in which an output pulse is obtained only after an input voltage has equaled or exceeded a selected reference voltage. In general, the invention comprises a source of direct current reference voltage in series with an impedance and a diode rectifying element. An input pulse of preselected amplitude causes the diode to conduct and develop a signal across the impedance. The signal is delivered to an amplifier where an output pulse is produced, and part of the output is fed back in a positive manner to the diode so that the amplifier produces a steep wave front trigger pulse at the output. The trigger point of the described circuit is not subject to variation due to the aging, etc., of multi-electrode tubes, since the diode circuit essentially determines the trigger point.

  1. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep-submicron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy with very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1% and 3%) due to self-compensating RC effects associated with dense layouts and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model with focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation.
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entirely new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path.
By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
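
The nine-term variability-aware compact model idea can be illustrated with a least-squares fit of delay against focus and exposure. Everything numeric below is synthetic: the "simulated" delays stand in for joint process-and-circuit simulation, and the {1, f, f²} × {1, e, e²} basis is an assumed Bossung-style form, not the dissertation's exact terms.

```python
import numpy as np

# Synthetic stand-in for process/circuit simulation: delay with a Bossung-like
# even dependence on focus (f), a dose (e) slope, and a cross term, plus noise.
rng = np.random.default_rng(0)
f = rng.uniform(-0.1, 0.1, 200)   # focus offset (um)
e = rng.uniform(0.9, 1.1, 200)    # relative exposure dose
delay = (20 + 50 * f**2 + 8 * (e - 1) + 30 * f**2 * (e - 1)
         + rng.normal(0, 0.01, 200))

# Nine basis terms: {1, f, f^2} x {1, e, e^2}
A = np.column_stack([f**i * e**j for i in range(3) for j in range(3)])
coeffs, *_ = np.linalg.lstsq(A, delay, rcond=None)
pred = A @ coeffs
rms_error = np.sqrt(np.mean((pred - delay) ** 2))
```

Once fitted, evaluating the nine-term polynomial replaces a full lithography-plus-SPICE run, which is the source of the large speedups the abstract reports for path-level analysis.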

  2. Beam Output Audit results within the EORTC Radiation Oncology Group network.

    PubMed

    Hurkmans, Coen W; Christiaens, Melissa; Collette, Sandra; Weber, Damien Charles

    2016-12-15

    Beam Output Auditing (BOA) is one key process of the EORTC radiation therapy quality assurance program. Here the results obtained between 2005 and 2014 are presented and compared to previous results. For all BOA reports the following parameters were scored: centre, country, date of audit, beam energies and treatment machines audited, auditing organisation, and percentage of agreement between stated and measured dose. Four hundred sixty-one BOA reports were analyzed, containing the results of 1790 photon and 1366 electron beams delivered by 755 different treatment machines. The majority of beams (91.1%) were within the optimal limit of ≤ 3%. Only 13 beams (0.4%; n = 9 electrons; n = 4 photons) were outside the acceptance range of ≤ 5%. Previous reviews reported a much higher percentage, with 2.5% or more of BOAs showing >5% deviation. The majority of EORTC centres show beam output variations within the 3% tolerance cutoff value, and only 0.4% of audited beams presented variations of more than 5%. This is an important improvement compared with previous BOA results.

  3. An agent-based model of dialect evolution in killer whales.

    PubMed

    Filatova, Olga A; Miller, Patrick J O

    2015-05-21

    The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either mother alone or the entire matriline with calls changing by random errors produced a graded distribution of the call phenotype, without the discrete call types observed in nature. Introducing occasional innovation or random error proportional to matriline variance yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines provided fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combinations and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects. Copyright © 2015 Elsevier Ltd. All rights reserved.
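
One learning rule from the family described above (template from the matriline, variation by random copying error plus occasional innovation) can be sketched minimally. The scalar call representation and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import random

# Minimal sketch: each matriline carries a scalar call phenotype; daughters
# copy it with small Gaussian error and, rarely, a larger innovation jump.
def simulate(generations=200, n_matrilines=10, err=0.01,
             p_innov=0.02, innov=0.5, seed=42):
    random.seed(seed)
    calls = [0.0] * n_matrilines
    for _ in range(generations):
        for i in range(n_matrilines):
            calls[i] += random.gauss(0.0, err)              # copying error
            if random.random() < p_innov:
                calls[i] += random.choice((-1, 1)) * innov  # rare innovation
    return calls

calls = simulate()
spread = max(calls) - min(calls)   # divergence among matriline repertoires
```

With `p_innov=0` the small copying errors alone leave the calls in one graded cluster, whereas occasional large innovations pull matrilines apart into more separated values, a qualitative analogue of the graded-versus-discrete contrast the abstract describes.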

  4. Power output measurement during treadmill cycling.

    PubMed

    Coleman, D A; Wiles, J D; Davison, R C R; Smith, M F; Swaine, I L

    2007-06-01

    The study aim was to consider the use of a motorised treadmill as a cycling ergometry system by assessing predicted and recorded power output values during treadmill cycling. Fourteen male cyclists completed repeated cycling trials on a motorised treadmill whilst riding their own bicycle fitted with a mobile ergometer. The speed, gradient and loading via an external pulley system were recorded during 20-s constant speed trials and used to estimate power output with an assumption about the contribution of rolling resistance. These values were then compared with mobile ergometer measurements. To assess the reliability of measured power output values, four repeated trials were conducted on each cyclist. During level cycling, the recorded power output was 257.2 +/- 99.3 W compared to the predicted power output of 258.2 +/- 99.9 W (p > 0.05). For graded cycling, there was no significant difference between measured and predicted power output, 268.8 +/- 109.8 W vs. 270.1 +/- 111.7 W, p > 0.05, SEE 1.2 %. The coefficient of variation for mobile ergometer power output measurements during repeated trials ranged from 1.5 % (95 % CI 1.2 - 2.0 %) to 1.8 % (95 % CI 1.5 - 2.4 %). These results indicate that treadmill cycling can be used as an ergometry system to assess power output in cyclists with acceptable accuracy.
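
The power prediction described above can be sketched from first principles: at constant speed on a treadmill there is no aerodynamic drag, so power is speed times the sum of the gravity component from the gradient, an assumed rolling resistance, and any external pulley load. The rolling-resistance coefficient below is an illustrative assumption, not the study's value.

```python
import math

# Sketch of constant-speed treadmill cycling power (illustrative, not the
# study's exact model). gradient is rise/run; crr is an assumed rolling
# resistance coefficient.

def predicted_power(mass_kg, speed_ms, gradient, pulley_force_n=0.0, crr=0.004):
    g = 9.81
    theta = math.atan(gradient)
    f_gravity = mass_kg * g * math.sin(theta)        # climbing force
    f_rolling = crr * mass_kg * g * math.cos(theta)  # assumed rolling resistance
    return (f_gravity + f_rolling + pulley_force_n) * speed_ms

# 80 kg rider plus bike at 8 m/s on a 4% grade:
print(round(predicted_power(80.0, 8.0, 0.04), 1))
```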

  5. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with an accuracy of several meters. However, the errors are driven by random physical processes whose magnitudes prevent further improvement of the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
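
The two stochastic models named above behave quite differently, which is why one is used for slowly varying force parameters and the other for tropospheric refraction. A minimal sketch with illustrative parameters (not the paper's values):

```python
import math
import random

# Sketch: a first-order Gauss-Markov process stays bounded, decaying toward
# zero with correlation time tau, while a random walk accumulates noise
# without bound. All parameter values are illustrative.

def gauss_markov(n, dt, tau, sigma, seed=0):
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                 # one-step state transition
    q = sigma * math.sqrt(1.0 - phi * phi)    # keeps steady-state std at sigma
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, q)
        out.append(x)
    return out

def random_walk(n, step_sigma, seed=0):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, step_sigma)
        out.append(x)
    return out

gm = gauss_markov(n=1000, dt=60.0, tau=1800.0, sigma=1.0)
rw = random_walk(n=1000, step_sigma=0.1)
print(max(abs(v) for v in gm), max(abs(v) for v in rw))
```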

  6. Seafloor Age-Stacking Reveals No Evidence for Milankovitch Cycle Influence on Abyssal Hills at Intermediate, Fast and Super-Fast Spreading Rates

    NASA Astrophysics Data System (ADS)

    Goff, J.; Zahirovic, S.; Müller, D.

    2017-12-01

    Recently published spectral analyses of seafloor bathymetry concluded that abyssal hills, highly linear ridges that are formed along seafloor spreading centers, exhibit periodicities that correspond to Milankovitch cycles - variations in Earth's orbit that affect climate on periods of 23, 41 and 100 thousand years. These studies argue that this correspondence could be explained by modulation of volcanic output at the mid-ocean ridge due to lithostatic pressure variations associated with rising and falling sea level. If true, then the implications are substantial: mapping the topography of the seafloor with sonar could be used as a way to investigate past climate change. This "Milankovitch cycle" hypothesis predicts that the rise and fall of abyssal hills will be correlated to crustal age, which can be tested by stacking, or averaging, bathymetry as a function of age; stacking will enhance any age-dependent signal while suppressing random components, such as fault-generated topography. We apply age-stacking to data flanking the Southeast Indian Ridge (~3.6 cm/yr half rate), northern East Pacific Rise (~5.4 cm/yr half rate) and southern East Pacific Rise (~7.8 cm/yr half rate), where multibeam bathymetric coverage is extensive on the ridge flanks. At the greatest precision possible given magnetic anomaly data coverage, we have revised digital crustal age models in these regions with updated axis and magnetic anomaly traces. We also utilize known 2nd-order spatial statistical properties of abyssal hills to predict the variability of the age-stack under the null hypothesis that abyssal hills are entirely random with respect to crustal age; the age-stacked profile is significantly different from zero only if it exceeds this expected variability by a large margin. Our results indicate, however, that the null hypothesis satisfactorily explains the age-stacking results in all three regions of study, thus providing no support for the Milankovitch cycle hypothesis.
    The random nature of abyssal hills is consistent with a primarily faulted origin.
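
The age-stacking operation itself is simple: bin bathymetry samples by crustal age and average within each bin, so that any age-locked signal survives while random (e.g. fault-generated) topography averages toward zero. A minimal sketch with illustrative names and values:

```python
# Sketch of age-stacking (illustrative, not the authors' processing chain):
# average depth as a function of binned crustal age.

def age_stack(ages_kyr, depths_m, bin_width_kyr=10.0):
    """Return {bin_start_age: mean_depth} for samples grouped by age bin."""
    bins = {}
    for age, depth in zip(ages_kyr, depths_m):
        key = int(age // bin_width_kyr) * bin_width_kyr
        bins.setdefault(key, []).append(depth)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

stacked = age_stack([3.0, 7.0, 12.0, 18.0], [-2500.0, -2510.0, -2490.0, -2502.0])
print(stacked)  # {0.0: -2505.0, 10.0: -2496.0}
```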

  7. Effects of closed-loop stimulation vs. DDD pacing on haemodynamic variations and occurrence of syncope induced by head-up tilt test in older patients with refractory cardioinhibitory vasovagal syncope: the Tilt test-Induced REsponse in Closed-loop Stimulation multicentre, prospective, single blind, randomized study.

    PubMed

    Palmisano, Pietro; Dell'Era, Gabriele; Russo, Vincenzo; Zaccaria, Maria; Mangia, Rolando; Bortnik, Miriam; De Vecchi, Federica; Giubertoni, Ailia; Patti, Fabiana; Magnani, Andrea; Nigro, Gerardo; Rago, Anna; Occhetta, Eraldo; Accogli, Michele

    2018-05-01

    Closed-loop stimulation (CLS) has seemed promising in preventing the recurrence of vasovagal syncope (VVS) in patients with a cardioinhibitory response to head-up tilt test (HUTT), compared with conventional pacing. We hypothesized that the better results of this algorithm are due to its quick reaction in delivering high-rate pacing in the early phase of the vasovagal reflex, which increases cardiac output and blood pressure, preventing loss of consciousness. This prospective, randomized, single-blind, multicentre study was designed as an intra-patient comparison and enrolled 30 patients (age 62.2 ± 13.5 years, males 60.0%) with cardioinhibitory VVS carrying a dual-chamber pacemaker incorporating the CLS algorithm. Two HUTTs were performed one week apart: one during DDD-CLS 60-130/min pacing and the other during DDD 60/min pacing. Patients were randomly and blindly assigned to two groups: in one, the first HUTT was performed in DDD-CLS (n = 15); in the other, in DDD (n = 15). Occurrence of syncope and haemodynamic variations induced by HUTT were recorded during the tests. Compared with DDD, DDD-CLS significantly reduced the occurrence of syncope induced by HUTT (30.0% vs. 76.7%; P < 0.001). In the patients who had syncope in both DDD and DDD-CLS mode, DDD-CLS significantly delayed the onset of syncope during HUTT (from 20.8 ± 3.9 to 24.8 ± 0.9 min; P = 0.032). The maximum fall in systolic blood pressure recorded during HUTT was significantly smaller in DDD-CLS than in DDD (43.2 ± 30.3 vs. 65.1 ± 25.8 mmHg; P = 0.004). In patients with cardioinhibitory VVS, CLS reduces the occurrence of syncope induced by HUTT compared with DDD pacing. When CLS is not able to abort the vasovagal reflex, it appears to delay the onset of syncope.

  8. Factors affecting the output pulse flatness of the linear transformer driver cavity systems with 5th harmonics

    DOE PAGES

    Alexeenko, V. M.; Mazarakis, M. G.; Kim, A. A.; ...

    2016-09-19

    Here, we describe the study we have undertaken to evaluate the effect of component tolerances in obtaining a voltage output flat top for a linear transformer driver (LTD) cavity containing 3rd and 5th harmonic bricks [A. A. Kim et al., in Proc. IEEE Pulsed Power and Plasma Science PPPS2013 (San Francisco, California, USA, 2013), pp. 1354–1356] and for a 30-cavity voltage adder. Our goal was to define the necessary component value precision in order to obtain a voltage output flat top with no more than ±0.5% amplitude variation.
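
To see why 3rd and 5th harmonic bricks can flatten an output pulse, consider the textbook maximally-flat choice below (these coefficients are illustrative, not the paper's circuit values): setting a3 = 1/6 and a5 = 0.02 zeroes the 2nd and 4th derivatives of f(t) = sin t + a3·sin 3t + a5·sin 5t at the pulse centre.

```python
import math

# Sketch: ripple of a pulse synthesized from a fundamental plus 3rd and 5th
# harmonics, evaluated around the pulse centre t = pi/2. Coefficient values
# are the standard maximally-flat solution, used here for illustration only.

def pulse(t, a3, a5):
    return math.sin(t) + a3 * math.sin(3 * t) + a5 * math.sin(5 * t)

def flat_top_ripple(a3, a5, half_window=0.5):
    """Peak-to-peak variation (% of centre value) over t = pi/2 +/- half_window."""
    centre = math.pi / 2
    ts = [centre + half_window * (i / 50 - 1) for i in range(101)]
    vals = [pulse(t, a3, a5) for t in ts]
    return (max(vals) - min(vals)) / pulse(centre, a3, a5) * 100.0

print(flat_top_ripple(0.0, 0.0))       # fundamental only: ~12% variation
print(flat_top_ripple(1 / 6, 0.02))    # with harmonics: sub-percent ripple
```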

  9. Factors affecting the output pulse flatness of the linear transformer driver cavity systems with 5th harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeenko, V. M.; Mazarakis, M. G.; Kim, A. A.

    Here, we describe the study we have undertaken to evaluate the effect of component tolerances in obtaining a voltage output flat top for a linear transformer driver (LTD) cavity containing 3rd and 5th harmonic bricks [A. A. Kim et al., in Proc. IEEE Pulsed Power and Plasma Science PPPS2013 (San Francisco, California, USA, 2013), pp. 1354–1356] and for a 30-cavity voltage adder. Our goal was to define the necessary component value precision in order to obtain a voltage output flat top with no more than ±0.5% amplitude variation.

  10. Effect of a perioperative, cardiac output-guided hemodynamic therapy algorithm on outcomes following major gastrointestinal surgery: a randomized clinical trial and systematic review.

    PubMed

    Pearse, Rupert M; Harrison, David A; MacDonald, Neil; Gillies, Michael A; Blunt, Mark; Ackland, Gareth; Grocott, Michael P W; Ahern, Aoife; Griggs, Kathryn; Scott, Rachael; Hinds, Charles; Rowan, Kathryn

    2014-06-04

    Small trials suggest that postoperative outcomes may be improved by the use of cardiac output monitoring to guide administration of intravenous fluid and inotropic drugs as part of a hemodynamic therapy algorithm. To evaluate the clinical effectiveness of a perioperative, cardiac output-guided hemodynamic therapy algorithm. OPTIMISE was a pragmatic, multicenter, randomized, observer-blinded trial of 734 high-risk patients aged 50 years or older undergoing major gastrointestinal surgery at 17 acute care hospitals in the United Kingdom. An updated systematic review and meta-analysis were also conducted including randomized trials published from 1966 to February 2014. Patients were randomly assigned to a cardiac output-guided hemodynamic therapy algorithm for intravenous fluid and inotrope (dopexamine) infusion during and 6 hours following surgery (n=368) or to usual care (n=366). The primary outcome was a composite of predefined 30-day moderate or major complications and mortality. Secondary outcomes were morbidity on day 7; infection, critical care-free days, and all-cause mortality at 30 days; all-cause mortality at 180 days; and length of hospital stay. Baseline patient characteristics, clinical care, and volumes of intravenous fluid were similar between groups. Care was nonadherent to the allocated treatment for less than 10% of patients in each group. The primary outcome occurred in 36.6% of intervention and 43.4% of usual care participants (relative risk [RR], 0.84 [95% CI, 0.71-1.01]; absolute risk reduction, 6.8% [95% CI, -0.3% to 13.9%]; P = .07). There was no significant difference between groups for any secondary outcomes. Five intervention patients (1.4%) experienced cardiovascular serious adverse events within 24 hours compared with none in the usual care group. 
Findings of the meta-analysis of 38 trials, including data from this study, suggest that the intervention is associated with fewer complications (intervention, 488/1548 [31.5%] vs control, 614/1476 [41.6%]; RR, 0.77 [95% CI, 0.71-0.83]) and a nonsignificant reduction in hospital, 28-day, or 30-day mortality (intervention, 159/3215 deaths [4.9%] vs control, 206/3160 deaths [6.5%]; RR, 0.82 [95% CI, 0.67-1.01]) and mortality at longest follow-up (intervention, 267/3215 deaths [8.3%] vs control, 327/3160 deaths [10.3%]; RR, 0.86 [95% CI, 0.74-1.00]). In a randomized trial of high-risk patients undergoing major gastrointestinal surgery, use of a cardiac output-guided hemodynamic therapy algorithm compared with usual care did not reduce a composite outcome of complications and 30-day mortality. However, inclusion of these data in an updated meta-analysis indicates that the intervention was associated with a reduction in complication rates. isrctn.org Identifier: ISRCTN04386758.

  11. Systems for controlling the intensity variations in a laser beam and for frequency conversion thereof

    DOEpatents

    Skupsky, S.; Craxton, R.S.; Soures, J.

    1990-10-02

    In order to control the intensity of a laser beam so that its intensity varies uniformly and provides uniform illumination of a target, such as a laser fusion target, a broad bandwidth laser pulse is spectrally dispersed spatially so that the frequency components thereof are spread apart. A disperser (grating) provides an output beam which varies spatially in wavelength in at least one direction transverse to the direction of propagation of the beam. Temporal spread (time delay) across the beam is corrected by using a phase delay device (a time delay compensation echelon). The dispersed beam may be amplified with laser amplifiers and frequency converted (doubled, tripled or quadrupled in frequency) with nonlinear optical elements (birefringent crystals). The spectral variation across the beam is compensated by varying the angle of incidence on one of the crystals with respect to the crystal optical axis utilizing a lens which diverges the beam. Another lens after the frequency converter may be used to recollimate the beam. The frequency converted beam is recombined so that portions of different frequency interfere and, unlike interference between waves of the same wavelength, there results an intensity pattern with rapid temporal oscillations which average out rapidly in time thereby producing uniform illumination on target. A distributed phase plate (also known as a random phase mask), through which the spectrally dispersed beam is passed and then focused on a target, is used to provide the interference pattern which becomes nearly modulation free and uniform in intensity in the direction of the spectral variation. 16 figs.

  12. Systems for controlling the intensity variations in a laser beam and for frequency conversion thereof

    DOEpatents

    Skupsky, Stanley; Craxton, R. Stephen; Soures, John

    1990-01-01

    In order to control the intensity of a laser beam so that its intensity varies uniformly and provides uniform illumination of a target, such as a laser fusion target, a broad bandwidth laser pulse is spectrally dispersed spatially so that the frequency components thereof are spread apart. A disperser (grating) provides an output beam which varies spatially in wavelength in at least one direction transverse to the direction of propagation of the beam. Temporal spread (time delay) across the beam is corrected by using a phase delay device (a time delay compensation echelon). The dispersed beam may be amplified with laser amplifiers and frequency converted (doubled, tripled or quadrupled in frequency) with nonlinear optical elements (birefringent crystals). The spectral variation across the beam is compensated by varying the angle of incidence on one of the crystals with respect to the crystal optical axis utilizing a lens which diverges the beam. Another lens after the frequency converter may be used to recollimate the beam. The frequency converted beam is recombined so that portions of different frequency interfere and, unlike interference between waves of the same wavelength, there results an intensity pattern with rapid temporal oscillations which average out rapidly in time thereby producing uniform illumination on target. A distributed phase plate (also known as a random phase mask), through which the spectrally dispersed beam is passed and then focused on a target, is used to provide the interference pattern which becomes nearly modulation free and uniform in intensity in the direction of the spectral variation.

  13. Temperature compensated and self-calibrated current sensor using reference current

    DOEpatents

    Yakymyshyn, Christopher Paul [Seminole, FL; Brubaker, Michael Allen [Loveland, CO; Yakymyshyn, Pamela Jane [Seminole, FL

    2008-01-22

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference electrical current carried by a conductor positioned within the sensing window of the current sensor is used to correct variations in the output signal due to temperature variations and aging.
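
The self-calibration idea in the abstract can be sketched as a live gain correction: a known reference current inside the sensing window reveals the sensor's current gain (including temperature drift), and the measured output is rescaled accordingly. All names and values below are illustrative, not from the patent.

```python
# Sketch (illustrative) of reference-current self-calibration: infer the
# sensor gain from a known reference current, then correct the main reading.

def calibrated_current(raw_output, raw_reference_output, reference_current_a):
    """Scale the raw sensor output by the gain inferred from the reference."""
    gain = raw_reference_output / reference_current_a   # counts per ampere, now
    return raw_output / gain

# Gain has drifted with temperature: the 10 A reference reads 10.4 "counts",
# so the main reading of 520 counts corresponds to 500 A.
print(round(calibrated_current(520.0, 10.4, 10.0), 6))
```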

  14. Isotopic evidence for variations in the marine calcium cycle over the Cenozoic.

    PubMed

    De La Rocha, C L; DePaolo, D J

    2000-08-18

    Significant variations in the isotopic composition of marine calcium have occurred over the last 80 million years. These variations reflect deviations in the balance between inputs of calcium to the ocean from weathering and outputs due to carbonate sedimentation, processes that are important in controlling the concentration of carbon dioxide in the atmosphere and, hence, global climate. The calcium isotopic ratio of paleo-seawater is an indicator of past changes in atmospheric carbon dioxide when coupled with determinations of paleo-pH.

  15. Comparison of Urine Output among Patients Treated with More Intensive Versus Less Intensive RRT: Results from the Acute Renal Failure Trial Network Study

    PubMed Central

    Asafu-Adjei, Josephine; Betensky, Rebecca A.; Palevsky, Paul M.; Waikar, Sushrut S.

    2016-01-01

    Background and objectives Intensive RRT may have adverse effects that account for the absence of benefit observed in randomized trials of more intensive versus less intensive RRT. We wished to determine the association of more intensive RRT with changes in urine output as a marker of worsening residual renal function in critically ill patients with severe AKI. Design, setting, participants, & measurements The Acute Renal Failure Trial Network Study (n=1124) was a multicenter trial that randomized critically ill patients requiring initiation of RRT to more intensive (hemodialysis or sustained low-efficiency dialysis six times per week or continuous venovenous hemodiafiltration at 35 ml/kg per hour) versus less intensive (hemodialysis or sustained low-efficiency dialysis three times per week or continuous venovenous hemodiafiltration at 20 ml/kg per hour) RRT. Mixed linear regression models were fit to estimate the association of RRT intensity with change in daily urine output in survivors through day 7 (n=871); Cox regression models were fit to determine the association of RRT intensity with time to ≥50% decline in urine output in all patients through day 28. Results Mean age of participants was 60±15 years old, 72% were men, and 30% were diabetic. In unadjusted models, among patients who survived ≥7 days, mean urine output was, on average, 31.7 ml/d higher (95% confidence interval, 8.2 to 55.2 ml/d) for the less intensive group compared with the more intensive group (P=0.01). More intensive RRT was associated with 29% greater unadjusted risk of decline in urine output of ≥50% (hazard ratio, 1.29; 95% confidence interval, 1.10 to 1.51). Conclusions More intensive versus less intensive RRT is associated with a greater reduction in urine output during the first 7 days of therapy and a greater risk of developing a decline in urine output of ≥50% in critically ill patients with severe AKI. PMID:27449661

  16. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander

    2015-11-01

    Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data is assimilated to different parts of the split model. This design has an efficient implementation due to the direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, assimilating in situ concentration measurements in a real-data scenario, are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.
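
The core of a direct (non-iterative) variational step can be sketched in scalar form: at each splitting stage, a background (model) value is blended with the observation by minimizing J(x) = (x - xb)²/σb² + (x - y)²/σo², whose minimizer is the precision-weighted mean. This is a generic illustration, not the paper's scheme.

```python
# Sketch of one scalar variational assimilation step (illustrative):
# minimize J(x) = (x - background)^2/sigma_b^2 + (x - obs)^2/sigma_o^2.

def assimilate(background, obs, sigma_b, sigma_o):
    wb, wo = 1.0 / sigma_b ** 2, 1.0 / sigma_o ** 2
    return (wb * background + wo * obs) / (wb + wo)   # precision-weighted mean

# The same measurement assimilated into two parts of a split model state,
# as in "fine-grained" assimilation on separate splitting stages:
state = [50.0, 52.0]     # e.g. two split-stage concentrations (illustrative)
obs = 54.0
state = [assimilate(x, obs, sigma_b=2.0, sigma_o=1.0) for x in state]
print(state)
```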

  17. Optical fiber voltage sensor based on Michelson interferometer using Fabry-Perot demodulation interferometer

    NASA Astrophysics Data System (ADS)

    Chen, Xinwei; He, Shengnan; Li, Dandan; Wang, Kai; Fan, Yan'en; Wu, Shuai

    2014-11-01

    We present an optical fiber voltage sensor based on a Michelson interferometer (MI) employing a Fabry-Perot (F-P) interferometer and the DC phase tracking (DCPT) signal processing method. By mounting an MI fabricated from an optical fiber coupler on a piezoelectric (PZT) transducer bar, a dynamic strain is generated that changes the optical path difference (OPD) of the interferometer when the measured voltage is applied to the PZT. By applying an F-P interferometer to demodulate the optical intensity variation output of the MI, the voltage can be obtained. The experimental results show that the relationship between the optical intensity variation and the voltage applied to the PZT is approximately linear. Furthermore, the phase generated carrier (PGC) algorithm was also applied to demodulate the output of the sensor.

  18. A randomized trial of the effect of automated ventricular capture on device longevity and threshold measurement in pacemaker patients.

    PubMed

    Koplan, Bruce A; Gilligan, David M; Nguyen, Luc S; Lau, Theodore K; Thackeray, Lisa M; Berg, Kellie Chase

    2008-11-01

    An automatic capture (AC) algorithm adjusts ventricular pacing output to capture the ventricle while optimizing output to 0.5 V above threshold. AC maintains this output and confirms capture on a beat-to-beat basis in bipolar and unipolar pacing and sensing. To assess the AC algorithm and its impact on device longevity, patients implanted with a pacemaker were randomized 1:1 to have the AC feature on or off for 12 months. Two threshold tests were conducted at each visit: automatic threshold and manual threshold. Average ventricular voltage output and projected device longevity were compared between AC on and off using nonparametric tests. Nine hundred ten patients were enrolled and underwent device implantation. Average ventricular voltage output was 1.6 V for the AC on arm (n = 444) and 3.1 V for the AC off arm (n = 446) (P < 0.001). Projected device longevity was 10.3 years for AC on and 8.9 years for AC off (P < 0.0001), or a 16% increase in longevity for AC on. The proportion of patients in whom there was a difference between automatic threshold and manual threshold of

  19. Improved protein hydrogen/deuterium exchange mass spectrometry platform with fully automated data processing.

    PubMed

    Zhang, Zhongqi; Zhang, Aming; Xiao, Gang

    2012-06-05

    Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust the run-to-run variations in the extent of back exchange. Second, a designed unique peptide (PPPI) with a slow intrinsic HDX rate is employed as another internal standard to reflect possible differences in protein intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model to simulate the deuterium labeling and back exchange process. The HDX model is implemented in the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set, starting from ion detection and peptide identification through to the final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most probable protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody induced by increasing concentrations of guanidine.
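
The first correction described above can be sketched as a simple rescaling: a fully exchanged internal-standard peptide reports how much deuterium each run lost to back exchange, and sample uptake values are divided by that retention fraction so runs become comparable. This is an illustrative simplification, not the paper's exact procedure.

```python
# Sketch (illustrative) of back-exchange correction via an internal standard.

def back_exchange_factor(standard_measured_d, standard_expected_d):
    """Fraction of deuterium retained by the internal standard in this run."""
    return standard_measured_d / standard_expected_d

def correct_uptake(measured_d, retention):
    """Rescale a sample peptide's measured uptake by the run's retention."""
    return measured_d / retention

run_retention = back_exchange_factor(standard_measured_d=3.6, standard_expected_d=4.0)
print(round(correct_uptake(2.7, run_retention), 3))  # 2.7 Da observed -> 3.0 Da corrected
```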

  20. Measurements of output factors with different detector types and Monte Carlo calculations of stopping-power ratios for degraded electron beams.

    PubMed

    Björk, Peter; Knöös, Tommy; Nilsson, Per

    2004-10-07

    The aim of the present study was to investigate three different detector types (a parallel-plate ionization chamber, a p-type silicon diode and a diamond detector) with regard to output factor measurements in degraded electron beams, such as those encountered in small-electron-field radiotherapy and intraoperative radiation therapy (IORT). The Monte Carlo method was used to calculate mass collision stopping-power ratios between water and the different detector materials for these complex electron beams (nominal energies of 6, 12 and 20 MeV). The diamond detector was shown to exhibit excellent properties for output factor measurements in degraded beams and was therefore used as a reference. The diode detector was found to be well suited for practical measurements of output factors, although the water-to-silicon stopping-power ratio was shown to vary slightly with treatment set-up and irradiation depth (especially for lower electron energies). Application of ionization-chamber-based dosimetry, according to international dosimetry protocols, will introduce uncertainties smaller than 0.3% into the output factor determination for conventional IORT beams if the variation of the water-to-air stopping-power ratio is not taken into account. The IORT system at our department includes a 0.3 cm thin plastic scatterer inside the therapeutic beam, which furthermore increases the energy degradation of the electrons. By ignoring the change in the water-to-air stopping-power ratio due to this scatterer, the output factor could be underestimated by up to 1.3%. This was verified by the measurements. In small-electron-beam dosimetry, the water-to-air stopping-power ratio variation with field size could mostly be ignored. For fields with flat lateral dose profiles (>3 x 3 cm2), output factors determined with the ionization chamber were found to be in close agreement with the results of the diamond detector. 
For smaller field sizes the lateral extension of the ionization chamber hampers its use. We therefore recommend that the readily available silicon diode detector should be used for output factor measurements in complex electron fields.

  1. Extended observability of linear time-invariant systems under recurrent loss of output data

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok; Halevi, Yoram

    1989-01-01

    Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
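
A simplified, assumed form of the extended-observability test described above: with outputs available only at a set of nonconsecutive sample times k, stack the rows C·Aᵏ and check that the stacked matrix has full column rank. The system below (a discrete double integrator) is illustrative, not from the paper.

```python
# Sketch: observability of x(k+1) = A x(k), y(k) = C x(k) from a randomly
# selected set of nonconsecutive output samples (pure-Python rank test).

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_pow(A, k):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = mat_mul(R, A)
    return R

def rank(M, eps=1e-9):
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def observable_from_samples(A, C, sample_times):
    rows = [row for k in sample_times for row in mat_mul(C, mat_pow(A, k))]
    return rank(rows) == len(A)

A = [[1.0, 1.0], [0.0, 1.0]]   # discrete double integrator
C = [[1.0, 0.0]]               # only position is measured
print(observable_from_samples(A, C, [0, 3]))  # two nonconsecutive samples suffice
print(observable_from_samples(A, C, [2]))     # a single sample does not
```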

  2. SU-E-P-32: Adapting An MMLC to a Conventional Linac to Perform Stereotactic Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emam, I; Hosini, M

    2015-06-15

    Purpose: Micro-MLCs minimize the beam scalloping effects caused by conventional MLCs and facilitate conformal dynamic treatment delivery, but their effect on dosimetric parameters requires careful investigation. Physical and dosimetric parameters and Linac mechanical stability with the mMLC (net weight 30 kg) attached to the gantry are investigated, along with an output study and recommended jaw offsets. This work investigates the adaptation of an mMLC to our 16-year-old conventional Linac. Methods: A BrainLab mMLC (m3; 30 kg) was mounted in a detachable chassis on the Philips SL-15 Linac. Gantry and collimator spoke-shot measurements were made using a calibrated film in a solid phantom and compared with pin-point measurements. Leaf penumbra, transmission, leakage between the leaves, and percentage depth dose (PDD) were measured using an IBA pin-point ion chamber at 6 and 10 MV. For output measurements (using a brass build-up cap), the jaws were modified continuously to match the m3 fields and output factors were compared with the fixed-jaws situation, while the mMLC leaf configuration was modified for different m3 fields. Results: Mean transmission through the leaves is 1.9±0.1% and mean leakage between leaves is 2.8±0.15%. Between opposing leaves abutting along the central beam axis, mean transmission is 15±3%, but it is reduced to 4.5±0.6% by moving the abutment position 4.5 cm off-axis. The penumbra was sharper for m3 fields than for jaw-defined fields (maximum difference 1.51±0.2%). m3-field PDDs show ∼3% variation from those of jaw-defined fields. m3-field output factors show large variations (<4%) from jaw-defined fields. Output for m3 rectangular fields shows slight variation with leaf-end/leaf-side as well as X-jaw/Y-jaw exchange. Circular m3-field output factors show close agreement with their corresponding square jaw-defined fields using 2 mm jaw offsets; if the jaws are retracted to the m3 limits, differences become <5%.
    Conclusion: The BrainLab m3 was successfully adapted to our 16-year-old Philips SL-15 Linac. Dosimetric properties should be taken into account for treatment planning considerations.

  3. Differences in Pedaling Technique in Cycling: A Cluster Analysis.

    PubMed

    Lanferdini, Fábio J; Bini, Rodrigo R; Figueiredo, Pedro; Diefenthaeler, Fernando; Mota, Carlos B; Arndt, Anton; Vaz, Marco A

    2016-10-01

    To employ cluster analysis to assess if cyclists would opt for different strategies in terms of neuromuscular patterns when pedaling at the power output of their second ventilatory threshold (PO_VT2) compared with cycling at their maximal power output (PO_MAX). Twenty athletes performed an incremental cycling test to determine their power output (PO_MAX and PO_VT2; first session), and pedal forces, muscle activation, muscle-tendon unit length, and vastus lateralis architecture (fascicle length, pennation angle, and muscle thickness) were recorded (second session) at PO_MAX and PO_VT2. Athletes were assigned to 2 clusters based on the behavior of outcome variables at PO_VT2 and PO_MAX using cluster analysis. Clusters 1 (n = 14) and 2 (n = 6) showed similar power output and oxygen uptake. Cluster 1 presented larger increases in pedal force and knee power than cluster 2, without differences for the index of effectiveness. Cluster 1 presented less variation in knee angle, muscle-tendon unit length, pennation angle, and tendon length than cluster 2. However, clusters 1 and 2 showed similar muscle thickness, fascicle length, and muscle activation. When cycling at PO_VT2 vs PO_MAX, cyclists could opt for keeping a constant knee power and pedal-force production, associated with an increase in tendon excursion and a constant fascicle length. Increases in power output lead to greater variations in knee angle, muscle-tendon unit length, tendon length, and pennation angle of vastus lateralis for a similar knee-extensor activation and smaller pedal-force changes in cyclists from cluster 2 than in cluster 1.
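
The clustering step can be sketched with plain k-means on a single illustrative feature (the authors' exact method and variables are not specified here); athletes whose outcome variables change similarly between the two intensities end up in the same cluster.

```python
import random

# Toy 1-D k-means sketch (assumed method, illustrative data): group athletes
# by one feature, e.g. % change in pedal force between the two intensities.

def kmeans_1d(values, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        centres = [sum(g) / len(g) if g else centres[i] for i, g in enumerate(groups)]
    return sorted(centres)

pedal_force_change = [18, 20, 22, 19, 21, 6, 5, 7, 8, 6]   # two obvious groups
print(kmeans_1d(pedal_force_change))
```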

  4. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with average value E(ε²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling component var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared with the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than the fixed-effects MLE for the Si.
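The shrinkage idea above can be illustrated with a minimal method-of-moments sketch (not program MARK's actual algorithm; the estimator form and example numbers are assumptions):

```python
# Minimal random-effects shrinkage sketch (hypothetical numbers; not
# program MARK's algorithm). S_hat are annual survival MLEs, v their
# known sampling variances.
def shrink(S_hat, v):
    n = len(S_hat)
    mean = sum(S_hat) / n                      # crude estimate of E(S)
    # Method-of-moments process variance: total spread minus the
    # average sampling variance, floored at zero.
    total_var = sum((s - mean) ** 2 for s in S_hat) / (n - 1)
    sigma2 = max(total_var - sum(v) / n, 0.0)
    shrunk = []
    for s, vi in zip(S_hat, v):
        w = sigma2 / (sigma2 + vi)             # little shrinkage when process variation dominates
        shrunk.append(mean + w * (s - mean))   # pull each MLE toward E(S)
    return mean, sigma2, shrunk

mean, sigma2, S_tilde = shrink([0.55, 0.70, 0.60, 0.80], [0.01, 0.01, 0.01, 0.01])
```

Each shrunk estimate lies between its MLE and the overall mean; when sampling noise dominates the process variation, the weight w shrinks estimates strongly toward E(S).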

  5. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of random variables. It is observed that Rf for the friction angle of backfill soil (φ1) increases and that for the cohesion of foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of foundation soil (φ2) remains almost constant over the variation of soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5 %, a significant reduction in cross-sectional area can be achieved; but if the variation is more than 7-8 %, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
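Monte Carlo estimation of a failure probability Pf can be sketched for a single hypothetical sliding mode; the limit state and all parameter values below are illustrative assumptions, not the paper's:

```python
import math
import random

# Hypothetical sliding limit state for a gravity wall: failure when
# FS = resisting / driving force < 1. Distributions and numbers are
# illustrative, not taken from the paper.
def sliding_pf(n=100_000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        phi = rng.gauss(30.0, 2.0)       # backfill friction angle, degrees
        weight = rng.gauss(500.0, 25.0)  # wall weight, kN/m
        push = rng.gauss(180.0, 20.0)    # lateral earth thrust, kN/m
        fs = weight * math.tan(math.radians(phi)) / push
        if fs < 1.0:
            fails += 1
    return fails / n                     # Monte Carlo estimate of Pf

pf = sliding_pf()
```

Repeating such runs while perturbing one variable at a time gives the raw material for the sensitivity ranking the paper performs with F-tests.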

  6. Phenotypic variation and fitness in a metapopulation of tubeworms (Ridgeia piscesae Jones) at hydrothermal vents.

    PubMed

    Tunnicliffe, Verena; St Germain, Candice; Hilário, Ana

    2014-01-01

We examine the nature of variation in a hot vent tubeworm, Ridgeia piscesae, to determine how phenotypes are maintained and how reproductive potential is dictated by habitat. This foundation species at northeast Pacific hydrothermal sites occupies a wide habitat range in a highly heterogeneous environment. Where fluids supply high levels of dissolved sulphide for symbionts, the worm grows rapidly in a "short-fat" phenotype characterized by lush gill plumes; when plumes are healthy, sperm package capture is higher. This form can mature within months and has a high fecundity with continuous gamete output and a lifespan of about three years in unstable conditions. Other phenotypes occupy low-fluid-flux habitats that are more stable and individuals grow very slowly; however, they have low reproductive readiness that is hampered further by small, predator-cropped branchiae, thus reducing fertilization and metabolite uptake. Although only the largest worms were measured, only 17% of low-flux worms were reproductively competent compared to 91% of high-flux worms. A model of reproductive readiness illustrates that tube diameter is a good predictor of reproductive output and that few low-flux worms reached critical reproductive size. We postulate that most of the propagules for the vent fields originate from the larger tubeworms that live in small, unstable habitat patches. The large expanses of worms in more stable low-flux habitat sustain a small, but long-term, reproductive output. Phenotypic variation is an adaptation that fosters both morphological and physiological responses to differences in chemical milieu and predator pressure. This foundation species forms a metapopulation with variable growth characteristics in a heterogeneous environment where a strategy of phenotypic variation bestows an advantage over specialization.

  7. Phenotypic Variation and Fitness in a Metapopulation of Tubeworms (Ridgeia piscesae Jones) at Hydrothermal Vents

    PubMed Central

    Tunnicliffe, Verena; St. Germain, Candice; Hilário, Ana

    2014-01-01

We examine the nature of variation in a hot vent tubeworm, Ridgeia piscesae, to determine how phenotypes are maintained and how reproductive potential is dictated by habitat. This foundation species at northeast Pacific hydrothermal sites occupies a wide habitat range in a highly heterogeneous environment. Where fluids supply high levels of dissolved sulphide for symbionts, the worm grows rapidly in a “short-fat” phenotype characterized by lush gill plumes; when plumes are healthy, sperm package capture is higher. This form can mature within months and has a high fecundity with continuous gamete output and a lifespan of about three years in unstable conditions. Other phenotypes occupy low-fluid-flux habitats that are more stable and individuals grow very slowly; however, they have low reproductive readiness that is hampered further by small, predator-cropped branchiae, thus reducing fertilization and metabolite uptake. Although only the largest worms were measured, only 17% of low-flux worms were reproductively competent compared to 91% of high-flux worms. A model of reproductive readiness illustrates that tube diameter is a good predictor of reproductive output and that few low-flux worms reached critical reproductive size. We postulate that most of the propagules for the vent fields originate from the larger tubeworms that live in small, unstable habitat patches. The large expanses of worms in more stable low-flux habitat sustain a small, but long-term, reproductive output. Phenotypic variation is an adaptation that fosters both morphological and physiological responses to differences in chemical milieu and predator pressure. This foundation species forms a metapopulation with variable growth characteristics in a heterogeneous environment where a strategy of phenotypic variation bestows an advantage over specialization. PMID:25337895

  8. Variation among genotypes in responses to increasing temperature in a marine parasite: evolutionary potential in the face of global warming?

    PubMed

    Berkhout, Boris W; Lloyd, Melanie M; Poulin, Robert; Studer, Anja

    2014-11-01

    Climates are changing worldwide, and populations are under selection to adapt to these changes. Changing temperature, in particular, can directly impact ectotherms and their parasites, with potential consequences for whole ecosystems. The potential of parasite populations to adapt to climate change largely depends on the amount of genetic variation they possess in their responses to environmental fluctuations. This study is, to our knowledge, the first to look at differences among parasite genotypes in response to temperature, with the goal of quantifying the extent of variation among conspecifics in their responses to increasing temperature. Snails infected with single genotypes of the trematode Maritrema novaezealandensis were sequentially acclimatised to two different temperatures, 'current' (15°C) and 'elevated' (20°C), over long periods. These temperatures are based on current average field conditions in the natural habitat and those predicted to occur during the next few decades. The output and activity of cercariae (free-swimming infective stages emerging from snails) were assessed for each genotype at each temperature. The results indicate that, on average, both cercarial output and activity are higher at the elevated acclimation temperature. More importantly, the output and activity of cercariae are strongly influenced by a genotype-by-temperature interaction, such that different genotypes show different responses to increasing temperature. Both the magnitude and direction (increase or decrease) of responses to temperature varied widely among genotypes. Therefore, there is much potential for natural selection to act on this variation, and predicting how the trematode M. novaezealandensis will respond to the climate changes predicted for the next century will prove challenging. Copyright © 2014 Australian Society for Parasitology Inc. Published by Elsevier Ltd. All rights reserved.

  9. Particle-in-cell simulations for virtual cathode oscillator including foil ablation effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gursharn; Chaturvedi, S.

    2011-06-15

We have performed two- and three-dimensional, relativistic, electromagnetic, particle-in-cell simulations of an axially extracted virtual cathode oscillator (vircator). The simulations include, for the first time, self-consistent dynamics of the anode foil under the influence of the intense electron beam. This yields the variation of microwave output power as a function of time, including the role of anode ablation and anode-cathode gap closure. These simulations have been done using locally developed particle-in-cell (PIC) codes. The codes have been validated using two vircator designs available from the literature. The simulations reported in the present paper take account of foil ablation due to the intense electron flux, the resulting plasma expansion and shorting of the anode-cathode gap. The variation in anode transparency due to plasma formation is automatically taken into account. We find that damage is generally higher near the axis. Also, at all radial positions, there is little damage in the early stages, followed by a period of rapid erosion, followed in turn by low damage rates. A physical explanation has been given for these trends. As a result of gap closure due to plasma formation from the foil, the output microwave power initially increases, reaches a near-flat-top and then decreases steadily, reaching a minimum around 230 ns. This is consistent with a typical plasma expansion velocity of ~2 cm/μs reported in the literature. We also find a significant variation in the dominant output frequency, from 6.3 to 7.6 GHz. This variation is small as long as the plasma density is small, up to ~40 ns. As the AK gap starts filling with plasma, there is a steady increase in this frequency.

  10. Theoretical principles for biology: Variation.

    PubMed

    Montévil, Maël; Mossio, Matteo; Pocheville, Arnaud; Longo, Giuseppe

    2016-10-01

    Darwin introduced the concept that random variation generates new living forms. In this paper, we elaborate on Darwin's notion of random variation to propose that biological variation should be given the status of a fundamental theoretical principle in biology. We state that biological objects such as organisms are specific objects. Specific objects are special in that they are qualitatively different from each other. They can undergo unpredictable qualitative changes, some of which are not defined before they happen. We express the principle of variation in terms of symmetry changes, where symmetries underlie the theoretical determination of the object. We contrast the biological situation with the physical situation, where objects are generic (that is, different objects can be assumed to be identical) and evolve in well-defined state spaces. We derive several implications of the principle of variation, in particular, biological objects show randomness, historicity and contextuality. We elaborate on the articulation between this principle and the two other principles proposed in this special issue: the principle of default state and the principle of organization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Local randomness: Examples and application

    NASA Astrophysics Data System (ADS)

    Fu, Honghao; Miller, Carl A.

    2018-03-01

    When two players achieve a superclassical score at a nonlocal game, their outputs must contain intrinsic randomness. This fact has many useful implications for quantum cryptography. Recently it has been observed [C. Miller and Y. Shi, Quantum Inf. Computat. 17, 0595 (2017)] that such scores also imply the existence of local randomness—that is, randomness known to one player but not to the other. This has potential implications for cryptographic tasks between two cooperating but mistrustful players. In the current paper we bring this notion toward practical realization, by offering near-optimal bounds on local randomness for the CHSH game, and also proving the security of a cryptographic application of local randomness (single-bit certified deletion).
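The classical ceiling that a superclassical score must exceed is easy to verify by brute force; a small sketch enumerating all deterministic classical strategies for the CHSH game:

```python
from itertools import product

# CHSH game: the referee sends bits x, y; the players answer a, b; they
# win iff a XOR b == x AND y. A deterministic classical strategy is a
# pair of functions {0,1} -> {0,1}, one per player, encoded as the
# tuples (a0, a1) and (b0, b1).
def best_classical_chsh():
    best = 0.0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        wins = sum(
            ((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
            for x, y in product((0, 1), repeat=2)
        )
        best = max(best, wins / 4)   # uniform inputs: average over 4 pairs
    return best
```

Any score above 3/4 (quantum strategies reach up to cos²(π/8) ≈ 0.854) is superclassical and therefore certifies intrinsic randomness in the outputs.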

  12. Geophysical, archaeological and historical evidence support a solar-output model for climate change

    USGS Publications Warehouse

    Perry, C.A.; Hsu, K.J.

    2000-01-01

Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2^N (where N equals a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide, called 'little ice ages,' similar to the period anno Domini (A.D.) 1280-1860 and recurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate-sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum.
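A toy version of the additive model can be written directly from the description; the amplitude growth, phases, and number of cycles below are illustrative assumptions, not the paper's calibration:

```python
import math

# Toy additive solar-output model: superimpose cosine cycles of period
# 11 * 2**N years, with amplitudes growing slightly with N. The growth
# factor, zero phases, and cycle count are assumed for illustration.
def solar_output(year, n_max=8, amp_growth=1.2):
    total = 0.0
    for n in range(n_max + 1):
        period = 11.0 * 2 ** n
        total += amp_growth ** n * math.cos(2 * math.pi * year / period)
    return total

# Sample the model every decade over three millennia.
series = [solar_output(y) for y in range(0, 3000, 10)]
```

With zero phases, all cycles align at year 0, so the model peaks there at the sum of the amplitudes; longer-period terms then dominate the slow envelope that the paper compares against little-ice-age evidence.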

  13. Fiber-optic temperature sensors based on differential spectral transmittance/reflectivity and multiplexed sensing systems.

    PubMed

    Wang, A; Wang, G Z; Murphy, K A; Claus, R O

    1995-05-01

A concept for optical temperature sensing based on the differential spectral reflectivity/transmittance of a multilayer dielectric edge filter is described and demonstrated. Two wavelengths, λ1 and λ2, from the spectrum of a broadband light source are selected so that they are located on the sloped and flat regions of the reflection or transmission spectrum of the filter, respectively. As temperature variations shift the reflection or transmission spectrum of the filter, they change the output power of the light at λ1, but the output power of the light at λ2 is insensitive to the shift and therefore to the temperature variation. The temperature information can be extracted from the ratio of the light power at λ1 to that at λ2. This ratio is immune to changes in the output power of the light source, to fiber losses induced by microbending, and hence to modal-power distribution fluctuations. The best resolution of 0.2 °C has been obtained over a range of 30-120 °C. Based on this temperature-sensing concept, a wavelength-division-multiplexed temperature-sensing system is constructed by cascading three sensing edge filters with different cutoff wavelengths along a multimode fiber. The signals from the three sensors are resolved by detecting the corresponding outputs at different wavelengths.
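The ratiometric principle can be sketched numerically; the linear edge-filter response below is a hypothetical stand-in for a real filter curve:

```python
# Ratiometric sensing sketch: lambda_1 sits on the sloped filter edge
# (temperature-dependent transmission), lambda_2 on the flat region.
# Any common multiplicative loss (source drift, microbending) scales
# both channels equally and cancels in the ratio. The linear filter
# response is a hypothetical stand-in for the real spectrum.
def ratio(temp_c, loss=1.0):
    p_source = 1.0
    t1 = 0.2 + 0.005 * temp_c   # edge transmission at lambda_1 (assumed)
    t2 = 0.9                    # flat transmission at lambda_2 (assumed)
    p1 = loss * p_source * t1
    p2 = loss * p_source * t2
    return p1 / p2              # loss and source power cancel here
```

The ratio rises monotonically with temperature while staying unchanged under any common attenuation, which is exactly why it makes a robust readout.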

  14. The constant current loop: A new paradigm for resistance signal conditioning

    NASA Astrophysics Data System (ADS)

    Anderson, Karl F.

    1994-02-01

    A practical single constant current loop circuit for the signal conditioning of variable-resistance transducers has been synthesized, analyzed, and demonstrated. The strain gage and the resistance temperature detector are examples of variable-resistance sensors. Lead wires connect variable-resistance sensors to remotely located signal-conditioning hardware. The presence of lead wires in the conventional Wheatstone bridge signal-conditioning circuit introduces undesired effects that reduce the quality of the data from the remote sensors. A practical approach is presented for suppressing essentially all lead wire resistance effects while indicating only the change in resistance value. Theoretical predictions supported by laboratory testing confirm the following features of the approach: (1) dc response; (2) the electrical output is unaffected by extremely large variation in the resistance of any or all lead wires; (3) the electrical output remains zero for no change in gage resistance; (4) the electrical output is inherently linear with respect to gage resistance change; (5) the sensitivity is double that of a Wheatstone bridge circuit; and (6) the same excitation wires can serve multiple independent gages. An adaptation of current loop circuit is presented that simultaneously provides an output signal voltage directly proportional to transducer resistance change and provides temperature information that is unaffected by transducer and lead wire resistance variations. These innovations are the subject of NASA patent applications.
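The lead-wire insensitivity follows from Ohm's law alone; a minimal numeric sketch contrasting the current loop with a quarter-bridge arm (all resistance and excitation values are illustrative):

```python
# With constant-current excitation, the voltage sensed across the gage
# (via high-impedance sense leads carrying ~no current) is I * R_gage,
# independent of lead resistance. In a quarter bridge, lead resistance
# adds to the active arm and skews the output. Values are illustrative.
def loop_voltage(i_amps, r_gage, r_lead):
    return i_amps * r_gage       # r_lead drops out entirely

def quarter_bridge_output(v_ex, r_gage, r_ref, r_lead):
    r_arm = r_gage + 2 * r_lead  # lead resistance enters the arm
    return v_ex * (r_arm / (r_arm + r_ref) - 0.5)

v_short = loop_voltage(0.010, 350.0, 1.0)    # short leads
v_long = loop_voltage(0.010, 350.0, 25.0)    # long leads, same reading
```

The loop reading is identical for 1 Ω and 25 Ω leads, whereas the bridge output drifts with lead resistance even when the gage itself is unchanged.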

  15. The constant current loop: A new paradigm for resistance signal conditioning

    NASA Astrophysics Data System (ADS)

    Anderson, Karl F.

    1992-10-01

    A practical single constant current loop circuit for the signal conditioning of variable resistance transducers has been synthesized, analyzed, and demonstrated. The strain gage and the resistance temperature device are examples of variable resistance sensors. Lead wires connect variable resistance sensors to remotely located signal conditioning hardware. The presence of lead wires in the conventional Wheatstone bridge signal conditioning circuit introduces undesired effects that reduce the quality of the data from the remote sensors. A practical approach is presented for suppressing essentially all lead wire resistance effects while indicating only the change in resistance value. Theoretical predictions supported by laboratory testing confirm the following features of the approach: (1) dc response; (2) the electrical output is unaffected by extremely large variations in the resistance of any or all lead wires; (3) the electrical output remains zero for no change in gage resistance; (4) the electrical output is inherently linear with respect to gage resistance change; (5) the sensitivity is double that of a Wheatstone bridge circuit; and (6) the same excitation wires can serve multiple independent gages. An adaptation of current loop circuit is presented that simultaneously provides an output signal voltage directly proportional to transducer resistance change and provides temperature information that is unaffected by transducer and lead wire resistance variations. These innovations are the subject of NASA patent applications.

  16. The constant current loop: A new paradigm for resistance signal conditioning

    NASA Astrophysics Data System (ADS)

    Anderson, Karl F.

    A practical, single, constant-current loop circuit for the signal conditioning of variable-resistance transducers was synthesized, analyzed, and demonstrated. The strain gage and the resistance temperature device are examples of variable-resistance sensors. Lead wires connect variable-resistance sensors to remotely located signal-conditioning hardware. The presence of lead wires in the conventional Wheatstone bridge signal-conditioning circuit introduces undesired effects that reduce the quality of the data from the remote sensors. A practical approach is presented for suppressing essentially all lead wire resistance effects while indicating only the change in resistance value. Theoretical predictions supported by laboratory testing confirm the following features of the approach: (1) the dc response; (2) the electrical output is unaffected by extremely large variations in the resistance of any or all lead wires; (3) the electrical output remains zero for no change in gage resistance; (4) the electrical output is inherently linear with respect to gage resistance change; (5) the sensitivity is double that of a Wheatstone bridge circuit; and (6) the same excitation and sense wires can serve multiple independent gages. An adaptation of the current loop circuit is presented that simultaneously provides an output signal voltage directly proportional to transducer resistance change and provides temperature information that is unaffected by transducer and lead wire resistance variations. These innovations are the subject of NASA patent applications.

  17. The constant current loop: A new paradigm for resistance signal conditioning

    NASA Technical Reports Server (NTRS)

    Anderson, Karl F.

    1994-01-01

    A practical single constant current loop circuit for the signal conditioning of variable-resistance transducers has been synthesized, analyzed, and demonstrated. The strain gage and the resistance temperature detector are examples of variable-resistance sensors. Lead wires connect variable-resistance sensors to remotely located signal-conditioning hardware. The presence of lead wires in the conventional Wheatstone bridge signal-conditioning circuit introduces undesired effects that reduce the quality of the data from the remote sensors. A practical approach is presented for suppressing essentially all lead wire resistance effects while indicating only the change in resistance value. Theoretical predictions supported by laboratory testing confirm the following features of the approach: (1) dc response; (2) the electrical output is unaffected by extremely large variation in the resistance of any or all lead wires; (3) the electrical output remains zero for no change in gage resistance; (4) the electrical output is inherently linear with respect to gage resistance change; (5) the sensitivity is double that of a Wheatstone bridge circuit; and (6) the same excitation wires can serve multiple independent gages. An adaptation of current loop circuit is presented that simultaneously provides an output signal voltage directly proportional to transducer resistance change and provides temperature information that is unaffected by transducer and lead wire resistance variations. These innovations are the subject of NASA patent applications.

  18. The constant current loop: A new paradigm for resistance signal conditioning

    NASA Technical Reports Server (NTRS)

    Anderson, Karl F.

    1993-01-01

    A practical, single, constant-current loop circuit for the signal conditioning of variable-resistance transducers was synthesized, analyzed, and demonstrated. The strain gage and the resistance temperature device are examples of variable-resistance sensors. Lead wires connect variable-resistance sensors to remotely located signal-conditioning hardware. The presence of lead wires in the conventional Wheatstone bridge signal-conditioning circuit introduces undesired effects that reduce the quality of the data from the remote sensors. A practical approach is presented for suppressing essentially all lead wire resistance effects while indicating only the change in resistance value. Theoretical predictions supported by laboratory testing confirm the following features of the approach: (1) the dc response; (2) the electrical output is unaffected by extremely large variations in the resistance of any or all lead wires; (3) the electrical output remains zero for no change in gage resistance; (4) the electrical output is inherently linear with respect to gage resistance change; (5) the sensitivity is double that of a Wheatstone bridge circuit; and (6) the same excitation and sense wires can serve multiple independent gages. An adaptation of the current loop circuit is presented that simultaneously provides an output signal voltage directly proportional to transducer resistance change and provides temperature information that is unaffected by transducer and lead wire resistance variations. These innovations are the subject of NASA patent applications.

  19. High-order random Raman lasing in a PM fiber with ultimate efficiency and narrow bandwidth

    PubMed Central

    Babin, Sergey A.; Zlobina, Ekaterina A.; Kablukov, Sergey I.; Podivilov, Evgeniy V.

    2016-01-01

Random Raman lasers are now attracting a great deal of attention, as they operate in non-active turbid or transparent scattering media. In the latter case, single-mode fibers with feedback via Rayleigh backscattering generate a high-quality unidirectional laser beam. However, such fiber lasers have rather poor spectral and polarization properties, worsening with increasing power and Stokes order. Here we demonstrate linearly-polarized cascaded random Raman lasing in a polarization-maintaining fiber. The quantum efficiency of converting the pump (1.05 μm) into the output radiation is almost independent of the Stokes order, amounting to 79%, 83%, and 77% for the 1st (1.11 μm), 2nd (1.17 μm) and 3rd (1.23 μm) orders, respectively, at a polarization extinction ratio >22 dB for all orders. The laser bandwidth grows with increasing order, but it is almost independent of power in the 1–10 W range, amounting to ~1, ~2 and ~3 nm for orders 1–3, respectively. Thus, the random Raman laser exhibits no degradation of output characteristics with increasing Stokes order. A theory adequately describing these unique laser features has been developed, giving a full picture of cascaded random Raman lasing in fibers. PMID:26940082

  20. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

We have assessed the potential of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate, and the resulting distributions pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, the data can instead serve as an efficient seed generator feeding a faster algorithmic random number generator, or be accumulated in a buffer.
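The described pipeline can be sketched on synthetic data, with a moving average standing in for the paper's spline fit (the window size and count model are assumptions):

```python
import math
import random
import statistics

# Pipeline sketch on synthetic counts: detrend (a moving average stands
# in for the paper's spline fit), standardize the residuals to a
# standard-normal variate, then map to uniforms via the normal CDF
# (the inverse-transform idea).
rng = random.Random(42)
counts = [1000.0 + 0.05 * t + rng.gauss(0.0, 10.0) for t in range(500)]

w = 25  # moving-average half-window (assumed)
trend = [statistics.fmean(counts[max(0, i - w):i + w + 1]) for i in range(len(counts))]
resid = [c - t for c, t in zip(counts, trend)]

mu = statistics.fmean(resid)
sd = statistics.stdev(resid)
z = [(r - mu) / sd for r in resid]                                 # ~N(0, 1)
uniform = [0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z]  # Phi(z) in (0, 1)
```

Applying the standard normal CDF Φ to an approximately standard-normal variate yields an approximately uniform variate on (0, 1), which is the uniform source the abstract suggests.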

  1. How MAP kinase modules function as robust, yet adaptable, circuits.

    PubMed

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution.
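One intuition for tier stacking can be sketched with a toy model (not the authors' simulations): if each tier is treated as a Hill-type activation (n = 2 and K = 0.5 are assumed values), composing tiers steepens the overall dose-response curve:

```python
# Toy kinase-cascade model (illustrative, not the paper's simulations):
# each tier's active fraction is a Hill function of its input, and the
# cascade output is the composition of the tiers.
def tier(x, K=0.5, n=2):
    return x ** n / (K ** n + x ** n)

def cascade(x, tiers):
    for _ in range(tiers):
        x = tier(x)
    return x

def steepness(tiers):
    # Ratio of inputs giving 90% vs 10% of the maximal response; a
    # smaller ratio means a more switch-like curve. Solved by bisection
    # on [0, 10], where cascade() is monotone increasing.
    def solve(target):
        lo, hi = 0.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if cascade(mid, tiers) < target:
                lo = mid
            else:
                hi = mid
        return lo
    top = cascade(10.0, tiers)
    return solve(0.9 * top) / solve(0.1 * top)
```

In this toy setting the 3-tier cascade needs a much narrower input range to swing from 10% to 90% of its maximum than a single tier does, one way stacked tiers can sharpen signal outputs.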

  2. How MAP kinase modules function as robust, yet adaptable, circuits

    PubMed Central

    Tian, Tianhai; Harding, Angus

    2014-01-01

    Genetic and biochemical studies have revealed that the diversity of cell types and developmental patterns evident within the animal kingdom is generated by a handful of conserved, core modules. Core biological modules must be robust, able to maintain functionality despite perturbations, and yet sufficiently adaptable for random mutations to generate phenotypic variation during evolution. Understanding how robust, adaptable modules have influenced the evolution of eukaryotes will inform both evolutionary and synthetic biology. One such system is the MAP kinase module, which consists of a 3-tiered kinase circuit configuration that has been evolutionarily conserved from yeast to man. MAP kinase signal transduction pathways are used across eukaryotic phyla to drive biological functions that are crucial for life. Here we ask the fundamental question, why do MAPK modules follow a conserved 3-tiered topology rather than some other number? Using computational simulations, we identify a fundamental 2-tiered circuit topology that can be readily reconfigured by feedback loops and scaffolds to generate diverse signal outputs. When this 2-kinase circuit is connected to proximal input kinases, a 3-tiered modular configuration is created that is both robust and adaptable, providing a biological circuit that can regulate multiple phenotypes and maintain functionality in an uncertain world. We propose that the 3-tiered signal transduction module has been conserved through positive selection, because it facilitated the generation of phenotypic variation during eukaryotic evolution. PMID:25483189

  3. Ecological and evolutionary consequences of tri-trophic interactions: Spatial variation and effects of plant density.

    PubMed

    Abdala-Roberts, Luis; Parra-Tabla, Víctor; Moreira, Xoaquín; Ramos-Zapata, José

    2017-02-01

    The factors driving variation in species interactions are often unknown, and few studies have made a link between changes in interactions and the strength of selection. We report on spatial variation in functional responses by a seed predator (SP) and its parasitic wasps associated with the herb Ruellia nudiflora. We assessed the influence of plant density on consumer responses and determined whether density effects and spatial variation in functional responses altered natural selection by these consumers on the plant. We established common gardens at two sites in Yucatan, Mexico, and planted R. nudiflora at two densities in each garden. We recorded fruit output and SP and parasitoid attack; calculated relative fitness (seed number) under scenarios of three trophic levels (accounting for SP and parasitoid effects), two trophic levels (accounting for SP but not parasitoid effects), and one trophic level (no consumer effects); and compared selection strength on fruit number under these scenarios across sites and densities. There was spatial variation in SP recruitment, whereby the SP functional response was negatively density-dependent at one site but density-independent at the other; parasitoid responses were density-independent and invariant across sites. Site variation in SP attack led, in turn, to differences in SP selection on fruit output, and parasitoids did not alter SP selection. There were no significant effects of density at either site. Our results provide a link between consumer functional responses and consumer selection on plants, which deepens our understanding of geographic variation in the evolutionary outcomes of multitrophic interactions. © 2017 Botanical Society of America.

  4. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.

  5. A Scheme for Obtaining Secure S-Boxes Based on Chaotic Baker's Map

    NASA Astrophysics Data System (ADS)

    Gondal, Muhammad Asif; Abdul Raheem; Hussain, Iqtadar

    2014-09-01

    In this paper, a method for obtaining cryptographically strong 8 × 8 substitution boxes (S-boxes) is presented. The method is based on chaotic baker's map and a "mini version" of a new block cipher with block size 8 bits and can be easily and efficiently performed on a computer. The cryptographic strength of some 8 × 8 S-boxes randomly produced by the method is analyzed. The results show (1) all of them are bijective; (2) the nonlinearity of each output bit of them is usually about 100; (3) all of them approximately satisfy the strict avalanche criterion and output bits independence criterion; (4) they all have an almost equiprobable input/output XOR distribution.
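
Two of the checks reported above, bijectivity and a near-equiprobable input/output XOR distribution, are mechanical to verify for any 8 × 8 S-box. A minimal Python sketch, using a hypothetical affine S-box as a stand-in (not the chaotic baker's-map construction of the paper):

```python
def make_sbox():
    # Hypothetical bijective 8-bit S-box: multiplication by an odd constant
    # modulo 256 followed by an XOR mask (any odd multiplier is invertible).
    return [((x * 167) ^ 0x5C) & 0xFF for x in range(256)]

def is_bijective(sbox):
    # An 8x8 S-box is bijective iff all 256 outputs are distinct.
    return len(set(sbox)) == 256

def max_xor_count(sbox):
    # Difference distribution: for each nonzero input XOR 'a' and output
    # XOR 'b', count inputs x with S(x) ^ S(x ^ a) == b. A flat
    # ("equiprobable") distribution means no count dominates the rest.
    worst = 0
    for a in range(1, 256):
        counts = [0] * 256
        for x in range(256):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        worst = max(worst, max(counts))
    return worst

sbox = make_sbox()
```

For a cryptographically strong S-box one also wants the maximum XOR count to be small; the affine stand-in here only illustrates the bookkeeping, not the paper's strength results.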

  6. Temperature compensated and self-calibrated current sensor using reference magnetic field

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-10-09

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by the magnetic field sensors and is used to correct variations in the output signal due to temperature variations and aging.

  7. Temperature compensated current sensor using reference magnetic field

    DOEpatents

    Yakymyshyn, Christopher Paul; Brubaker, Michael Allen; Yakymyshyn, Pamela Jane

    2007-10-09

    A method is described to provide temperature compensation and self-calibration of a current sensor based on a plurality of magnetic field sensors positioned around a current carrying conductor. A reference magnetic field generated within the current sensor housing is detected by a separate but identical magnetic field sensor and is used to correct variations in the output signal due to temperature variations and aging.

  8. General stochastic variational formulation for the oligopolistic market equilibrium problem with excesses

    NASA Astrophysics Data System (ADS)

    Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo

    2017-07-01

    The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in the presence of both production and demand excesses and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.

  9. Comparison for 1030nm DBR-tapered diode lasers with 10W central lobe output power and different grating layouts for wavelength stabilization and lateral spatial mode filtering

    NASA Astrophysics Data System (ADS)

    Müller, André; Zink, Christof; Fricke, Jörg; Bugge, Frank; Erbert, Götz; Sumpf, Bernd; Tränkle, Günther

    2018-02-01

    1030 nm DBR tapered diode lasers with different lateral layouts are presented. The layout comparison includes lasers with straight waveguide and grating, tapered waveguide and straight grating, and straight waveguide and tapered grating. The lasers provide narrowband emission and optical output powers up to 15 W. The highest diffraction-limited central lobe output power of 10.5 W is obtained for lasers with tapered gratings only. Small variations in central lobe output power with RW injection current density also indicate the robustness of that layout. For lasers with tapered waveguides, high RW injection current densities up to 150 A/mm2 have to be applied in order to obtain high central lobe output powers. Lasers with straight waveguide and grating operate best at low RW injection current densities, 50 A/mm2 applied in this study. Using the layout optimizations discussed in this study may help to increase the application potential of DBR tapered diode lasers.

  10. Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares

    NASA Technical Reports Server (NTRS)

    Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel

    2014-01-01

    The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.

  11. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
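
The subsampling approach described above is straightforward to sketch outside SAS and ARC/INFO. A hedged Python illustration, assuming a circular Gaussian location-error model and a toy one-boundary habitat map (both are assumptions for illustration, not part of the original software):

```python
import random

def habitat_at(x, y):
    # Hypothetical habitat map: a single wetland/forest boundary at x = 0.
    return "wetland" if x < 0 else "forest"

def habitat_use(est_x, est_y, sd, n=1000, seed=1):
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        # Subsample a point from the location-error distribution
        # of the estimated animal location.
        px = rng.gauss(est_x, sd)
        py = rng.gauss(est_y, sd)
        h = habitat_at(px, py)
        counts[h] = counts.get(h, 0) + 1
    # Proportion of subsample points falling in each habitat type.
    return {h: c / n for h, c in counts.items()}

# A location estimated right on the boundary splits roughly 50/50:
use = habitat_use(0.0, 0.0, sd=50.0)
```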

  12. Cognitive Jointly Optimal Code-Division Channelization and Routing Over Cooperative Links

    DTIC Science & Technology

    2014-04-01

    i List of Figures Fig. 1: Comparison between code-division channelization and FDM. Fig. 2: Secondary receiver SINR as a function of the iteration step...transmission percentage as a function of the number of active links under Cases rank(X′′) = 1 and > 1 (the study also includes the random code assignment...scheme); (b) Instantaneous output SINR of a primary signal against primary SINR-QoS threshold SINRthPU (thick line) and instantaneous output SINR of

  13. High-Resolution Radar Waveforms Based on Randomized Latin Square Sequences

    DTIC Science & Technology

    2017-04-18

    familiar Costas sequence [17]. The ambiguity function first introduced by Woodward in [13] is used to evaluate the matched filter output of a Radar waveform...the zero-delay cut that the result takes the shape of a sinc function which shows, even for significant Doppler shifts, the matched filter output...bad feature as the high ridge of the LFM waveform will still result in a large matched filter response from the target, just not at the correct delay

  14. Multiple channel programmable coincidence counter

    DOEpatents

    Arnone, Gaetano J.

    1990-01-01

    A programmable digital coincidence counter having multiple channels and featuring minimal dead time. Neutron detectors supply electrical pulses to a synchronizing circuit which in turn inputs derandomized pulses to an adding circuit. A random access memory circuit connected as a programmable-length shift register receives and shifts the sum of the pulses, and outputs to a serializer. A counter receives input from the adding circuit and is downcounted by the serializer, one pulse at a time. The decoded contents of the counter after each decrement are output to scalers.

  15. Alkali Halide Microstructured Optical Fiber for X-Ray Detection

    NASA Technical Reports Server (NTRS)

    DeHaven, S. L.; Wincheski, R. A.; Albin, S.

    2014-01-01

    Microstructured optical fibers containing alkali halide scintillation materials of CsI(Na), CsI(Tl), and NaI(Tl) are presented. The scintillation materials are grown inside the microstructured fibers using a modified Bridgman-Stockbarger technique. The x-ray photon counts of these fibers, with and without an aluminum film coating are compared to the output of a collimated CdTe solid state detector over an energy range from 10 to 40 keV. The photon count results show significant variations in the fiber output based on the materials. The alkali halide fiber output can exceed that of the CdTe detector, dependent upon photon counter efficiency and fiber configuration. The results and associated materials difference are discussed.

  16. Quantum random number generator based on quantum nature of vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    A quantum random number generator (QRNG) allows obtaining true random bit sequences. In a QRNG based on the quantum nature of the vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare the mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions proved identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.
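
The bit-extraction step of such a homodyne QRNG can be sketched simply: the differential current is zero-mean, and the sign of each sample yields one raw bit. The Gaussian samples below merely stand in for real detector output, and real devices also apply randomness extraction, which this sketch omits:

```python
import random

def bits_from_samples(samples):
    # Sign of the (zero-mean) differential current gives one raw bit.
    return [1 if s >= 0.0 else 0 for s in samples]

# Stand-in for digitized homodyne detector output (assumption: unit-variance
# Gaussian noise, which models the vacuum-fluctuation quadrature).
rng = random.Random(7)
samples = [rng.gauss(0.0, 1.0) for _ in range(10000)]
bits = bits_from_samples(samples)
ones = sum(bits) / len(bits)   # should sit near 0.5 for unbiased bits
```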

  17. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures (the Implicit Association Test, the affect misattribution procedure, and the evaluative priming task) and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60%) inflated relative to the more appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.

  18. Precise measurement of the performance of thermoelectric modules

    NASA Astrophysics Data System (ADS)

    Díaz-Chao, Pablo; Muñiz-Piniella, Andrés; Selezneva, Ekaterina; Cuenat, Alexandre

    2016-08-01

    The potential exploitation of thermoelectric modules in mass-market applications such as exhaust gas heat recovery in combustion engines requires an accurate knowledge of their performance. Further expansion of the market will also require confidence in the results provided by suppliers to end-users. However, large variation in performance and maximum operating point is observed for identical modules when tested by different laboratories. Here, we present the first metrological study of the impact of mounting and testing procedures on the precision of thermoelectric module measurements. Variability in the electrical output due to mechanical pressure or the type of thermal interface material is quantified for the first time. The respective contributions of the temperature difference and the mean temperature to the variation in output performance are quantified. The contribution of these factors to the total uncertainty in module characterisation is detailed.

  19. Logarithmic circuit with wide dynamic range

    NASA Technical Reports Server (NTRS)

    Wiley, P. H.; Manus, E. A. (Inventor)

    1978-01-01

    A circuit deriving an output voltage that is proportional to the logarithm of a dc input voltage susceptible to wide variations in amplitude includes a constant current source which forward biases a diode so that the diode operates in the exponential portion of its voltage versus current characteristic, above its saturation current. The constant current source includes first and second, cascaded feedback, dc operational amplifiers connected in negative feedback circuit. An input terminal of the first amplifier is responsive to the input voltage. A circuit shunting the first amplifier output terminal includes a resistor in series with the diode. The voltage across the resistor is sensed at the input of the second dc operational feedback amplifier. The current flowing through the resistor is proportional to the input voltage over the wide range of variations in amplitude of the input voltage.
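
The log-conformance described above follows from the diode's exponential V-I characteristic: with the feedback loop forcing a diode current proportional to the input voltage, the diode drop varies as the logarithm of the input. A numeric sketch under assumed component values (the ideality factor, thermal voltage, saturation current, and sense resistor are all illustrative; the record gives none of them):

```python
import math

def diode_drop(i, n=1.0, vt=0.025, i_s=1e-12):
    # Shockley equation solved for voltage, valid well above the
    # saturation current i_s (the regime the circuit is biased into).
    return n * vt * math.log(i / i_s)

def log_circuit_output(v_in, r=1e3):
    # The op-amp loop forces a diode current I = V_in / R, so the
    # diode drop (the output) tracks log(V_in) plus a fixed offset.
    return diode_drop(v_in / r)

# Each decade of input changes the output by n*Vt*ln(10), about 57.6 mV
# for the assumed values:
decade_step = log_circuit_output(1.0) - log_circuit_output(0.1)
```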

  20. Serial-position effects for items and relations in short-term memory.

    PubMed

    Jones, Tim; Oberauer, Klaus

    2013-04-01

    Two experiments used immediate probed recall of words to investigate serial-position effects. Item memory was tested through probing with a semantic category. Relation memory was tested through probing with the word's spatial location of presentation. Input order and output order were deconfounded by presenting and probing items in different orders. Primacy and recency effects over input position were found for both item memory and relation memory. Both item and relation memory declined over output position. The finding of a U-shaped input position function for item memory rules out an explanation purely in terms of positional confusions (e.g., edge effects). Either these serial-position effects arise from variations in the intrinsic memory strength of the items, or they arise from variations in the strength of item-position bindings, together with retrieval by scanning.

  1. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic representations of the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until consistent, convergent results are achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
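
The Monte Carlo procedure described in the abstract can be sketched generically: sample the uncertain inputs from their assumed distributions, run the deterministic model once per draw, and summarize the spread of the output. The toy depth-limited-breaking "model" below stands in for Delft3D, and all distributions and parameter values are illustrative:

```python
import random

def model(offshore_height, depth):
    # Hypothetical deterministic model: nearshore wave height capped by
    # depth-limited breaking (illustrative formula, not Delft3D physics).
    return min(offshore_height, 0.78 * depth)

def monte_carlo(n=5000, seed=3):
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        h = rng.gauss(2.0, 0.3)   # uncertain offshore wave height (m)
        d = rng.gauss(2.0, 0.2)   # uncertain local depth (m)
        outputs.append(model(h, max(d, 0.1)))
    mean = sum(outputs) / n
    var = sum((o - mean) ** 2 for o in outputs) / (n - 1)
    return mean, var ** 0.5

# Mean and standard deviation of the output quantify its uncertainty:
mean_h, sd_h = monte_carlo()
```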

  2. Facilitation of learning induced by both random and gradual visuomotor task variation

    PubMed Central

    Braun, Daniel A.; Wolpert, Daniel M.

    2012-01-01

    Motor task variation has been shown to be a key ingredient in skill transfer, retention, and structural learning. However, many studies only compare training of randomly varying tasks to either blocked or null training, and it is not clear how experiencing different nonrandom temporal orderings of tasks might affect the learning process. Here we study learning in human subjects who experience the same set of visuomotor rotations, evenly spaced between −60° and +60°, either in a random order or in an order in which the rotation angle changed gradually. We compared subsequent learning of three test blocks of +30°→−30°→+30° rotations. The groups that underwent either random or gradual training showed significant (P < 0.01) facilitation of learning in the test blocks compared with a control group who had not experienced any visuomotor rotations before. We also found that movement initiation times in the random group during the test blocks were significantly (P < 0.05) lower than for the gradual or the control group. When we fit a state-space model with fast and slow learning processes to our data, we found that the differences in performance in the test block were consistent with the gradual or random task variation changing the learning and retention rates of only the fast learning process. Such adaptation of learning rates may be a key feature of ongoing meta-learning processes. Our results therefore suggest that both gradual and random task variation can induce meta-learning and that random learning has an advantage in terms of shorter initiation times, suggesting less reliance on cognitive processes. PMID:22131385
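
The fast/slow state-space model referred to above can be sketched in a few lines (a two-rate formulation in the style of Smith et al.; the retention and learning rates below are illustrative choices, not fitted values from this study):

```python
def two_rate(perturbations, a_f=0.6, b_f=0.3, a_s=0.99, b_s=0.05):
    # xf: fast process (learns and forgets quickly)
    # xs: slow process (learns and forgets slowly)
    xf = xs = 0.0
    net = []
    for p in perturbations:
        error = p - (xf + xs)         # motor error on this trial
        xf = a_f * xf + b_f * error   # fast-state update
        xs = a_s * xs + b_s * error   # slow-state update
        net.append(xf + xs)           # net adaptation is their sum
    return net

# 60 trials of a constant +30 degree visuomotor rotation:
adaptation = two_rate([30.0] * 60)
```

The fast state saturates within a few trials while the slow state keeps accumulating, which is why changes in the fast process's rates show up early in a test block.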

  3. Conditional random fields for pattern recognition applied to structured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Skurikhin, Alexei

    In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  4. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms.

    PubMed

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
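
The moving-average output filter mentioned above can be sketched independently of the classifier: average each class's predicted probability over a sliding window of adjacent 1-min windows, then take the argmax. The window size and the toy probabilities below are illustrative assumptions:

```python
def smooth_predictions(probs, half_width=1):
    # probs: list of per-window class-probability vectors, in time order.
    smoothed = []
    for i in range(len(probs)):
        lo = max(0, i - half_width)
        hi = min(len(probs), i + half_width + 1)
        window = probs[lo:hi]
        # Average each class's probability over the sliding window.
        avg = [sum(p[k] for p in window) / len(window)
               for k in range(len(probs[0]))]
        smoothed.append(avg.index(max(avg)))  # argmax -> smoothed label
    return smoothed

# Toy sequence: class 1 ("vehicle") with one spurious flip toward class 0
# that the filter should suppress.
probs = [[0.1, 0.9], [0.2, 0.8], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]]
labels = smooth_predictions(probs)
```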

  5. Conditional random fields for pattern recognition applied to structured data

    DOE PAGES

    Burr, Tom; Skurikhin, Alexei

    2015-07-14

    In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  6. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some fantastic features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has the important feature that the magnitude and phase of its output are both random. As an important application of the RDLCT, it can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.
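
The randomization idea can be illustrated with a simpler stand-in: applying random phases to the coefficients of a unitary transform produces an output whose magnitude and phase are both random, while signal energy is preserved. The phase-randomized DFT below is an analogy, not the paper's randomized DLCT kernel:

```python
import cmath
import random

def dft(x, sign=-1):
    # Unitary discrete Fourier transform (sign=-1) or its inverse (sign=+1).
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t in range(n))
        out.append(s / (n ** 0.5))  # 1/sqrt(n) keeps the transform unitary
    return out

def random_phase_transform(x, seed=5):
    # Multiply each frequency coefficient by a random unit-modulus phase,
    # then invert: a random unitary transform of the input.
    rng = random.Random(seed)
    spec = dft(x, sign=-1)
    spec = [c * cmath.exp(2j * cmath.pi * rng.random()) for c in spec]
    return dft(spec, sign=+1)

x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, 1.0]
y = random_phase_transform(x)
energy_in = sum(abs(v) ** 2 for v in x)
energy_out = sum(abs(v) ** 2 for v in y)  # unchanged, since unitary
```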

  7. [A proposal for a new definition of excess mortality associated with influenza-epidemics and its estimation].

    PubMed

    Takahashi, M; Tango, T

    2001-05-01

    As methods for estimating excess mortality associated with influenza epidemics, the Serfling cyclical regression model and the Kawai and Fukutomi model with seasonal indices have been proposed. Excess mortality under the old definition (i.e., the number of deaths actually recorded in excess of the number expected on the basis of past seasonal experience) includes the random error for that portion of variation regarded as due to chance. In addition, it disregards the range of random variation of mortality by season. In this paper, we propose a new definition of excess mortality associated with influenza epidemics and a new estimation method, addressing these questions within the Kawai and Fukutomi framework. The new definition and estimation method are as follows. Factors producing variation in mortality in months with influenza epidemics can be divided into two groups: (1) influenza itself, and (2) other factors (practically, random variation). The range of variation of mortality due to the latter (the normal range) can be estimated from months without influenza epidemics. Excess mortality is then defined as the number of deaths above the normal range. The new method accounts for the variation in mortality in months without influenza epidemics and consequently provides reasonable estimates of excess mortality by separating out the portion attributable to random variation. A further feature is that the proposed estimate can serve as a criterion for statistical significance testing.
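
The proposed definition lends itself to a small numeric sketch: estimate the normal range from influenza-free months and count as excess only the deaths above its upper limit. The z = 1.96 limit and the toy monthly counts below are illustrative assumptions, not the paper's data:

```python
def excess_mortality(epidemic_months, baseline_months, z=1.96):
    # Normal range estimated from months without influenza epidemics.
    n = len(baseline_months)
    mean = sum(baseline_months) / n
    sd = (sum((m - mean) ** 2 for m in baseline_months) / (n - 1)) ** 0.5
    upper = mean + z * sd  # upper limit of the normal range
    # Excess = deaths above the normal range, summed over epidemic months.
    return sum(max(0.0, m - upper) for m in epidemic_months)

baseline = [980, 1010, 995, 1005, 990, 1020]  # influenza-free months
epidemic = [1150, 1300, 1040]                 # months with epidemics
excess = excess_mortality(epidemic, baseline)
```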

  8. Random-order fractional bistable system and its stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia

    2017-01-01

    In this paper, the diffusive motion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled by a random-order fractional bistable equation, and the stochastic resonance phenomena in this system, a typical nonlinear dynamic behavior, are investigated. First, the derivation of the random-order fractional bistable system is given. In particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation. Notably, frequency-shifting phenomena of the periodic output are observed in the stochastic resonance induced by the random-order excitation. Finally, the stochastic resonance of the system under the double stochastic excitation of the random order and internal color noise is also investigated.
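
As context, the classical (integer-order) bistable system underlying this model is easy to simulate with the Euler-Maruyama scheme; the random-order fractional derivative itself is beyond this sketch, and all parameter values are illustrative:

```python
import math
import random

def euler_maruyama(a=1.0, b=1.0, amp=0.3, omega=0.1, sigma=0.7,
                   dt=0.01, steps=20000, seed=11):
    # dx = (a*x - b*x^3 + amp*cos(omega*t)) dt + sigma dW:
    # a double-well potential driven by a weak periodic force plus noise.
    rng = random.Random(seed)
    x, xs = -1.0, []
    for k in range(steps):
        t = k * dt
        drift = a * x - b * x ** 3 + amp * math.cos(omega * t)
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = euler_maruyama()
# Zero crossings mark noise-induced hops between the two wells.
crossings = sum(1 for u, v in zip(xs, xs[1:]) if u * v < 0)
```

With weak periodic forcing, these noise-induced hops between the wells are the mechanism behind stochastic resonance: at an optimal noise level the hopping synchronizes with the forcing period.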

  9. Randomness in nonlocal games between mistrustful players

    PubMed Central

    Miller, Carl A.; Shi, Yaoyun

    2017-01-01

    If two quantum players at a nonlocal game G achieve a superclassical score, then their measurement outcomes must be at least partially random from the perspective of any third player. This is the basis for device-independent quantum cryptography. In this paper we address a related question: does a superclassical score at G guarantee that one player has created randomness from the perspective of the other player? We show that for complete-support games, the answer is yes: even if the second player is given the first player’s input at the conclusion of the game, he cannot perfectly recover her output. Thus some amount of local randomness (i.e., randomness possessed by only one player) is always obtained when randomness is certified from nonlocal games with quantum strategies. This is in contrast to non-signaling game strategies, which may produce global randomness without any local randomness. We discuss potential implications for cryptographic protocols between mistrustful parties. PMID:29643748

  10. Randomness in nonlocal games between mistrustful players.

    PubMed

    Miller, Carl A; Shi, Yaoyun

    2017-06-01

    If two quantum players at a nonlocal game G achieve a superclassical score, then their measurement outcomes must be at least partially random from the perspective of any third player. This is the basis for device-independent quantum cryptography. In this paper we address a related question: does a superclassical score at G guarantee that one player has created randomness from the perspective of the other player? We show that for complete-support games, the answer is yes: even if the second player is given the first player's input at the conclusion of the game, he cannot perfectly recover her output. Thus some amount of local randomness (i.e., randomness possessed by only one player) is always obtained when randomness is certified from nonlocal games with quantum strategies. This is in contrast to non-signaling game strategies, which may produce global randomness without any local randomness. We discuss potential implications for cryptographic protocols between mistrustful parties.

  11. GUI for Computational Simulation of a Propellant Mixer

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Richter, Hanz; Barbieri, Enrique; Granger, Jamie

    2005-01-01

Control Panel is a computer program that generates a graphical user interface (GUI) for computational simulation of a rocket-test-stand propellant mixer in which gaseous hydrogen (GH2) is injected into flowing liquid hydrogen (LH2) to obtain a combined flow having desired thermodynamic properties. The GUI is used in conjunction with software that models the mixer as a system having three inputs (the positions of the GH2 and LH2 inlet valves and an outlet valve) and three outputs (the pressure inside the mixer and the outlet flow temperature and flow rate). The user can specify valve characteristics and thermodynamic properties of the input fluids via user-friendly dialog boxes. The user can enter temporally varying input values or temporally varying desired output values. The GUI provides (1) a set-point calculator function for determining fixed valve positions that yield desired output values and (2) simulation functions that predict the response of the mixer to variations in the properties of the LH2 and GH2 and manual- or feedback-control variations in valve positions. The GUI enables scheduling of a sequence of operations that includes switching from manual to feedback control when a certain event occurs.

  12. Does the diurnal increase in central temperature interact with pre-cooling or passive warm-up of the leg?

    PubMed

    Racinais, Sébastien; Blonc, Stephen; Oksa, Juha; Hue, Olivier

    2009-01-01

Seven male subjects volunteered to participate in an investigation of whether the diurnal increase in core temperature influences the effects of pre-cooling or passive warm-up on muscular power. Morning (07:00-09:00 h) and afternoon (17:00-19:00 h) evaluations of maximal power output during a cycling sprint were performed on different days in a control condition (room at 21.8 degrees C, 69% relative humidity), after 30 min of pre-cooling in a cold bath (16 degrees C), or after 30 min of passive warm-up in a hot bath (38 degrees C). Despite an equivalent increase from morning to afternoon in core temperature in all conditions (+0.4 degrees C, P<0.05), power output displayed a diurnal increase in the control condition only. Local cooling or heating of the leg in a neutral environment blunted the diurnal variation in muscular power. Because pre-cooling decreases muscle power, force and velocity irrespective of time of day, athletes should strictly avoid any cooling before a sprint exercise. In summary, diurnal variation in muscle power output seems to be influenced more by muscle temperature than by core temperature.

  13. Development of a Tri-Axial Cutting Force Sensor for the Milling Process

    PubMed Central

    Li, Yingxue; Zhao, Yulong; Fei, Jiyou; Zhao, You; Li, Xiuyuan; Gao, Yunxiang

    2016-01-01

This paper presents a three-component fixed dynamometer based on a strain gauge, which reduces output errors produced by the cutting force imposed on different milling positions of the workpiece. A reformative structure of tri-layer cross beams is proposed, sensitive areas were selected, and corresponding measuring circuits were arranged to decrease the inaccuracy brought about by positional variation. To simulate the situation with a milling cutter moving on the workpiece and validate the function of reducing the output errors when the milling position changes, both static calibration and dynamic milling tests were implemented on different parts of the workpiece. Static experiment results indicate that with standard loads imposed, the maximal deviation between the measured forces and the standard inputs is 4.87%. The results of the dynamic milling test illustrate that with identical machining parameters, the differences in output variation between the developed sensor and a standard dynamometer are no larger than 6.61%. Both static and dynamic experimental results demonstrate that the developed dynamometer is suitable for measuring milling force imposed on different positions of the workpiece, which shows potential applicability in machining monitoring systems.

  14. Undesirable Choice Biases with Small Differences in the Spatial Structure of Chance Stimulus Sequences.

    PubMed

    Herrera, David; Treviño, Mario

    2015-01-01

In two-alternative discrimination tasks, experimenters usually randomize the location of the rewarded stimulus so that systematic behavior with respect to irrelevant stimuli can only produce chance performance on the learning curves. One way to achieve this is to use random numbers derived from a discrete binomial distribution to create a 'full random training schedule' (FRS). When using FRS, however, sporadic but long laterally-biased training sequences occur by chance, and such 'input biases' are thought to promote the generation of laterally-biased choices (i.e., 'output biases'). As an alternative, a 'Gellerman-like training schedule' (GLS) can be used. It removes most input biases by prohibiting the reward from appearing on the same location for more than three consecutive trials. The sequence of past rewards obtained from choosing a particular discriminative stimulus influences the probability of choosing that same stimulus on subsequent trials. Assuming that the long-term average ratio of choices matches the long-term average ratio of reinforcers, we hypothesized that the reduced amount of input biases in GLS compared to FRS should lead to a reduced production of output biases. We compared the choice patterns produced by a 'Rational Decision Maker' (RDM) in response to computer-generated FRS and GLS training sequences. To create a virtual RDM, we implemented an algorithm that generated choices based on past rewards. Our simulations revealed that, although the GLS presented fewer input biases than the FRS, the virtual RDM produced more output biases with GLS than with FRS under a variety of test conditions. Our results reveal that the statistical and temporal properties of training sequences interacted with the RDM to influence the production of output biases. Thus, discrete changes in the training paradigms did not translate linearly into modifications in the pattern of choices generated by an RDM. Virtual RDMs could be further employed to guide the selection of proper training schedules for perceptual decision-making studies.
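    The simulation setup described in this record can be sketched in a few lines. The choice rule below is a generic probability-matching rule assumed for illustration (not the authors' exact algorithm), and all parameters (`memory`, sequence lengths) are hypothetical:

```python
import random

def frs_sequence(n, rng):
    """Full random schedule: i.i.d. fair coin for the rewarded side."""
    return [rng.randint(0, 1) for _ in range(n)]

def gls_sequence(n, rng, max_run=3):
    """Gellerman-like schedule: the same side is never rewarded more than
    max_run trials in a row (forced switch removes long input biases)."""
    seq = []
    for _ in range(n):
        if len(seq) >= max_run and all(s == seq[-1] for s in seq[-max_run:]):
            seq.append(1 - seq[-1])
        else:
            seq.append(rng.randint(0, 1))
    return seq

def rdm_choices(schedule, rng, memory=10):
    """Virtual rational decision maker: probability-matches the side that
    paid off most often over the last `memory` rewarded trials."""
    choices, rewards = [], []
    for rewarded_side in schedule:
        recent = rewards[-memory:]
        if not recent:
            choice = rng.randint(0, 1)
        else:
            p_right = sum(recent) / len(recent)  # fraction of recent rewards on side 1
            choice = 1 if rng.random() < p_right else 0
        choices.append(choice)
        if choice == rewarded_side:              # only rewarded trials enter memory
            rewards.append(rewarded_side)
    return choices

def output_bias(choices):
    """Absolute deviation of the overall choice ratio from 50/50."""
    return abs(sum(choices) / len(choices) - 0.5)
```

    Comparing `output_bias` over many seeded runs of `frs_sequence` versus `gls_sequence` reproduces the kind of schedule-by-decision-rule interaction the study investigates.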

  15. Random Variation in Student Performance by Class Size: Implications of NCLB in Rural Pennsylvania

    ERIC Educational Resources Information Center

    Goetz, Stephan J.

    2005-01-01

    Schools that fail to make "adequate yearly progress" under NCLB face sanctions and may lose students to other schools. In smaller schools, random yearly variation in innate student ability and behavior can cause changes in scores that are beyond the influence of teachers. This study examines changes in reading and math scores across…

  16. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.
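    The alternating read scheme can be illustrated numerically. This is a toy model, not the authors' circuit: off-state resistances are drawn from a hypothetical log-normal distribution, and `mismatch` stands in for a fixed offset between the two devices:

```python
import math
import random

def read_off_resistance(rng, mean_ohms=1e6, sigma=0.3):
    """Cycle-to-cycle off-state resistance of one memristor, modeled as
    log-normal (hypothetical parameters, not measured device data)."""
    return rng.lognormvariate(math.log(mean_ohms), sigma)

def trng_bits(n, rng, mismatch=0.05):
    """Produce n bits by comparing the off-state resistances of a memristor
    pair after each switching cycle. `mismatch` models a fixed device offset
    that would bias a naive comparator; inverting the comparison sense on
    every other cycle (the alternating read scheme) cancels that bias in
    expectation."""
    bits = []
    for i in range(n):
        r_a = read_off_resistance(rng) * (1 + mismatch)  # device A reads systematically high
        r_b = read_off_resistance(rng)
        bit = int(r_a > r_b)
        if i % 2 == 1:
            bit ^= 1                                     # alternating read: flip sense
        bits.append(bit)
    return bits
```

    Without the sense flip, the fixed mismatch skews the bit frequency away from 0.5; with it, pairs of cycles see equal and opposite bias.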

  17. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.

  18. A Primary Care Workload Production Model for Estimating Relative Value Unit Output

    DTIC Science & Technology

    2011-03-01

    for Medicare and Medicaid Services, Office of the Actuary , National Health Statistics Group; and U.S. Department of Commerce, Bureau of Economic...The systematic variation in a relationship can be represented by a mathematical expression, whereas stochastic variation cannot. Further, stochastic...expressed mathematically as an equation, whereby a response variable Y is fitted to a function of “regressor variables and parameters” (SAS©, 2010). A

  19. Effects of the magnetic field variation on the spin wave interference in a magnetic cross junction

    NASA Astrophysics Data System (ADS)

    Balynskiy, M.; Chiang, H.; Kozhevnikov, A.; Dudko, G.; Filimonov, Y.; Balandin, A. A.; Khitun, A.

    2018-05-01

    This article reports results of the investigation of the effect of the external magnetic field variation on the spin wave interference in a magnetic cross junction. The experiments were performed using a micrometer scale Y3Fe5O12 cross structure with a set of micro-antennas fabricated on the edges of the cross arms. Two of the antennas were used for the spin wave excitation while a third antenna was used for detecting the inductive voltage produced by the interfering spin waves. It was found that a small variation of the bias magnetic field may result in a significant change of the output inductive voltage. The effect is most prominent under the destructive interference condition. The maximum response exceeds 30 dB per 0.1 Oe at room temperature. It takes a relatively small bias magnetic field variation of about 1 Oe to drive the system from the destructive to the constructive interference conditions. The switching is accompanied by a significant, up to 50 dB, change in the output voltage. The obtained results demonstrate a feasibility of the efficient spin wave interference control by an external magnetic field, which may be utilized for engineering novel type of magnetometers and magnonic logic devices.

  20. Quantitative Resistance: More Than Just Perception of a Pathogen

    PubMed Central

    2017-01-01

    Molecular plant pathology has focused on studying large-effect qualitative resistance loci that predominantly function in detecting pathogens and/or transmitting signals resulting from pathogen detection. By contrast, less is known about quantitative resistance loci, particularly the molecular mechanisms controlling variation in quantitative resistance. Recent studies have provided insight into these mechanisms, showing that genetic variation at hundreds of causal genes may underpin quantitative resistance. Loci controlling quantitative resistance contain some of the same causal genes that mediate qualitative resistance, but the predominant mechanisms of quantitative resistance extend beyond pathogen recognition. Indeed, most causal genes for quantitative resistance encode specific defense-related outputs such as strengthening of the cell wall or defense compound biosynthesis. Extending previous work on qualitative resistance to focus on the mechanisms of quantitative resistance, such as the link between perception of microbe-associated molecular patterns and growth, has shown that the mechanisms underlying these defense outputs are also highly polygenic. Studies that include genetic variation in the pathogen have begun to highlight a potential need to rethink how the field considers broad-spectrum resistance and how it is affected by genetic variation within pathogen species and between pathogen species. These studies are broadening our understanding of quantitative resistance and highlighting the potentially vast scale of the genetic basis of quantitative resistance. PMID:28302676

  1. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
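    The search-curve sampling discussed in this record can be sketched concretely: all parameters are sampled along one periodic curve x_i(s) = 1/2 + arcsin(sin(ω_i s))/π, and each parameter's first-order partial variance is read off the Fourier spectrum of the output at harmonics of its frequency. This is a minimal stdlib-only sketch, not the authors' implementation; the frequencies and harmonic count are illustrative and must be chosen interference-free:

```python
import math

def fast_first_order(f, omegas, n=2049, harmonics=4):
    """Classic search-curve FAST: estimate first-order sensitivity indices
    (partial variance / total variance) for each parameter frequency."""
    s_grid = [math.pi * (2 * k + 1 - n) / n for k in range(n)]  # evenly spaced in (-pi, pi)
    xs = [[0.5 + math.asin(math.sin(w * s)) / math.pi for w in omegas] for s in s_grid]
    y = [f(x) for x in xs]
    mean = sum(y) / n
    total_var = sum((v - mean) ** 2 for v in y) / n
    indices = []
    for w in omegas:
        partial = 0.0
        for p in range(1, harmonics + 1):
            # Fourier coefficients of y(s) at the p-th harmonic of this frequency
            a = 2 * sum(v * math.cos(p * w * s) for v, s in zip(y, s_grid)) / n
            b = 2 * sum(v * math.sin(p * w * s) for v, s in zip(y, s_grid)) / n
            partial += (a * a + b * b) / 2
        indices.append(partial / total_var)
    return indices
```

    For y = x1 + 2*x2 with uniform inputs, the exact indices are 0.2 and 0.8; truncating the spectrum at a finite number of harmonics makes the estimates come out slightly low, the same underestimation of search-curve sampling analyzed in the paper.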

  2. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  3. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper represented an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm on actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
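    The bias-accommodation idea in this record can be illustrated with a toy scalar plant (an assumption of this sketch, not the F100 engine model): augmenting the state vector with an unknown constant measurement bias makes the bias itself estimable by an ordinary Kalman filter, so off-nominal sensor behavior is tracked rather than corrupting the state estimate.

```python
import random

def kalman_bias_demo(n_steps=2000, true_bias=1.5, seed=1):
    """Scalar plant x[k+1] = 0.95*x[k] + w, measured as z = x + b + v with
    unknown constant sensor bias b. The filter runs on the augmented state
    [x, b] with F = [[0.95, 0], [0, 1]] and H = [1, 1]; the pair is
    observable, so both components converge."""
    rng = random.Random(seed)
    q, r = 0.01, 0.04                      # process / measurement noise variances
    x, b = 0.0, true_bias                  # true plant state and bias
    xh = [0.0, 0.0]                        # augmented state estimate [x_hat, b_hat]
    P = [[1.0, 0.0], [0.0, 1.0]]          # estimate covariance
    for _ in range(n_steps):
        x = 0.95 * x + rng.gauss(0.0, q ** 0.5)   # true plant
        z = x + b + rng.gauss(0.0, r ** 0.5)      # biased measurement
        # predict: xh = F xh,  P = F P F^T + Q (bias modeled as constant)
        xh = [0.95 * xh[0], xh[1]]
        P = [[0.95 * 0.95 * P[0][0] + q, 0.95 * P[0][1]],
             [0.95 * P[1][0], P[1][1]]]
        # update with scalar measurement, H = [1, 1]
        s = P[0][0] + P[0][1] + P[1][0] + P[1][1] + r
        k0 = (P[0][0] + P[0][1]) / s
        k1 = (P[1][0] + P[1][1]) / s
        innov = z - (xh[0] + xh[1])
        xh = [xh[0] + k0 * innov, xh[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0] - k0 * P[1][0], (1 - k0) * P[0][1] - k0 * P[1][1]],
             [P[1][0] - k1 * (P[0][0] + P[1][0]), P[1][1] - k1 * (P[0][1] + P[1][1])]]
    return xh  # [plant state estimate, bias estimate]
```

    The same structure scales to the multivariable engine case: the bias states absorb off-nominal behavior, and unmeasured outputs are then reconstructed from the debiased state.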

  4. Room-Temperature Spin Polariton Diode Laser

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Aniruddha; Baten, Md Zunaid; Iorsh, Ivan; Frost, Thomas; Kavokin, Alexey; Bhattacharya, Pallab

    2017-08-01

A spin-polarized laser offers inherent control of the output circular polarization. We have investigated the output polarization characteristics of a bulk GaN-based microcavity polariton diode laser at room temperature with electrical injection of spin-polarized electrons via a FeCo/MgO spin injector. Polariton laser operation with a spin-polarized current is characterized by a threshold of ~69 A/cm2 in the light-current characteristics, a significant reduction of the electroluminescence linewidth and blueshift of the emission peak. A degree of output circular polarization of ~25% is recorded under remanent magnetization. A second threshold, due to conventional photon lasing, is observed at an injection of ~7.2 kA/cm2. The variation of output circular and linear polarization with spin-polarized injection current has been analyzed with the carrier and exciton rate equations and the Gross-Pitaevskii equations for the condensate and there is good agreement between measured and calculated data.

  5. Fiber laser refractometer based on tunable bandpass filter tailored FBG reflection

    NASA Astrophysics Data System (ADS)

    Zhao, Junfa; Wang, Juan; Zhang, Cheng; Xu, Wei; Sun, Xiaodong; Bai, Hua; Chen, Liying

    2018-02-01

A fiber laser refractometer based on a single-mode-no-core-single-mode (SNS) structure cascaded with an FBG is proposed and experimentally demonstrated. The output wavelength of the fiber laser remains constant because the oscillating wavelength is determined only by the central wavelength of the FBG, which is insensitive to the surrounding refractive index (SRI). However, the output power is sensitive to the SRI because the intracavity loss of the fiber laser varies with the SRI. A cost-effective power-detection refractometer with reflective operation can be realized by measuring the variation of the fiber laser's output power. The refractometer has a sensitivity of 195.52 dB/RIU and 365.52 dB/RIU in the RI ranges of 1.3330-1.3687 and 1.3687-1.4135, respectively. Moreover, the refractometer can also be used for temperature measurement by discriminating the output wavelength of the fiber laser.

  6. Flight performance of the Pioneer Venus Orbiter solar array

    NASA Technical Reports Server (NTRS)

    Goldhammer, L. J.; Powe, J. S.; Smith, Marcie

    1987-01-01

    The Pioneer Venus Orbiter (PVO) solar panel power output capability has degraded much more severely than has the power output capability of solar panels that have operated in earth-orbiting spacecraft for comparable periods of time. The incidence of solar proton events recorded by the spacecraft's scientific instruments accounts for this phenomenon only in part. It cannot explain two specific forms of anomalous behavior observed: 1) a variation of output per spin with roll angle, and 2) a gradual degradation of the maximum output. Analysis indicates that the most probable cause of the first anomaly is that the solar cells underneath the spacecraft's magnetometer boom have been damaged by a reverse biasing of the cells that occurs during pulsed shadowing of the cells by the boom as the spacecraft rotates. The second anomaly might be caused by the effects on the solar array of substances from the upper atmosphere of Venus.

  7. Variational Solutions and Random Dynamical Systems to SPDEs Perturbed by Fractional Gaussian Noise

    PubMed Central

    Zeng, Caibin; Yang, Qigui; Cao, Junfei

    2014-01-01

This paper deals with the following type of stochastic partial differential equations (SPDEs) perturbed by an infinite dimensional fractional Brownian motion with a suitable volatility coefficient Φ: dX(t) = A(X(t))dt + Φ(t)dB^H(t), where A is a nonlinear operator satisfying some monotonicity conditions. Using the variational approach, we prove the existence and uniqueness of variational solutions to such a system. Moreover, we prove that this variational solution generates a random dynamical system. The main results are applied to a general type of nonlinear SPDEs and the stochastic generalized p-Laplacian equation. PMID:24574903

  8. Ephedrine fails to accelerate the onset of neuromuscular block by vecuronium.

    PubMed

    Komatsu, Ryu; Nagata, Osamu; Ozaki, Makoto; Sessler, Daniel I

    2003-08-01

    The onset time of neuromuscular blocking drugs is partially determined by circulatory factors, including muscle blood flow and cardiac output. We thus tested the hypothesis that a bolus of ephedrine accelerates the onset of vecuronium neuromuscular block by increasing cardiac output. A prospective, randomized study was conducted in 53 patients scheduled for elective surgery. After the induction of anesthesia, the ulnar nerve was stimulated supramaximally every 10 s, and the evoked twitch response of the adductor pollicis was recorded with accelerometry. Patients were maintained under anesthesia with continuous infusion of propofol for 10 min and then randomly assigned to ephedrine 210 microg/kg (n = 27) or an equivalent volume of saline (n = 26). The test solution was given 1 min before the administration of 0.1 mg/kg of vecuronium. Cardiac output was monitored with impedance cardiography. Ephedrine, but not saline, increased cardiac index (17%; P = 0.003). Nonetheless, the onset of 90% neuromuscular block was virtually identical in the patients given ephedrine (183 +/- 41 s) and saline (181 +/- 47 s). There was no correlation between cardiac index and onset of the blockade. We conclude that the onset of the vecuronium-induced neuromuscular block is primarily determined by factors other than cardiac output. The combination of ephedrine and vecuronium thus cannot be substituted for rapid-acting nondepolarizing muscle relaxants. Ephedrine increased cardiac index but failed to speed onset of neuromuscular block with vecuronium. We conclude that ephedrine administration does not shorten the onset time of vecuronium.

  9. [Toward exploration of morphological diversity of measurable traits of mammalian skull. 2. Scalar and vector parameters of the forms of group variation].

    PubMed

    Lisovskiĭ, A A; Pavlinov, I Ia

    2008-01-01

Any morphospace is partitioned by the forms of group variation, and its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. These are analyzed quantitatively for sex and age variation in a sample of 200 skulls of the pine marten described by 14 measurable traits. Standard dispersion and variance components analyses are employed, accompanied by several resampling methods (randomization and bootstrap); effects of changes in the analysis design on the results of these methods are also considered. The maximum likelihood algorithm of variance components analysis is shown to give adequate estimates of the portions of particular forms of group variation within the overall disparity. It is quite stable with respect to changes of the analysis design and therefore can be used in explorations of real data with variously unbalanced designs. A new algorithm for estimating the co-directionality of particular forms of group variation within the overall disparity is elaborated, based on angle measures between eigenvectors of covariance matrices of group-variation effects calculated by dispersion analysis. A null hypothesis of a random portion of a given group variation can be tested by randomization of the respective grouping variable. A null hypothesis of equality of both portions and directionalities of different forms of group variation can be tested by means of the bootstrap procedure.

  10. Comparison of three controllers applied to helicopter vibration

    NASA Technical Reports Server (NTRS)

    Leyland, Jane A.

    1992-01-01

    A comparison was made of the applicability and suitability of the deterministic controller, the cautious controller, and the dual controller for the reduction of helicopter vibration by using higher harmonic blade pitch control. A randomly generated linear plant model was assumed and the performance index was defined to be a quadratic output metric of this linear plant. A computer code, designed to check out and evaluate these controllers, was implemented and used to accomplish this comparison. The effects of random measurement noise, the initial estimate of the plant matrix, and the plant matrix propagation rate were determined for each of the controllers. With few exceptions, the deterministic controller yielded the greatest vibration reduction (as characterized by the quadratic output metric) and operated with the greatest reliability. Theoretical limitations of these controllers were defined and appropriate candidate alternative methods, including one method particularly suitable to the cockpit, were identified.

  11. Significance testing testate amoeba water table reconstructions

    NASA Astrophysics Data System (ADS)

    Payne, Richard J.; Babeshko, Kirill V.; van Bellen, Simon; Blackford, Jeffrey J.; Booth, Robert K.; Charman, Dan J.; Ellershaw, Megan R.; Gilbert, Daniel; Hughes, Paul D. M.; Jassey, Vincent E. J.; Lamentowicz, Łukasz; Lamentowicz, Mariusz; Malysheva, Elena A.; Mauquoy, Dmitri; Mazei, Yuri; Mitchell, Edward A. D.; Swindles, Graeme T.; Tsyganov, Andrey N.; Turner, T. Edward; Telford, Richard J.

    2016-04-01

    Transfer functions are valuable tools in palaeoecology, but their output may not always be meaningful. A recently-developed statistical test ('randomTF') offers the potential to distinguish among reconstructions which are more likely to be useful, and those less so. We applied this test to a large number of reconstructions of peatland water table depth based on testate amoebae. Contrary to our expectations, a substantial majority (25 of 30) of these reconstructions gave non-significant results (P > 0.05). The underlying reasons for this outcome are unclear. We found no significant correlation between randomTF P-value and transfer function performance, the properties of the training set and reconstruction, or measures of transfer function fit. These results give cause for concern but we believe it would be extremely premature to discount the results of non-significant reconstructions. We stress the need for more critical assessment of transfer function output, replication of results and ecologically-informed interpretation of palaeoecological data.
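    The logic of 'randomTF' can be conveyed with a heavily simplified permutation analogue. The published test compares the variance explained in the fossil data by the real reconstruction against reconstructions from transfer functions trained on random environmental variables; for brevity, the sketch below instead compares in-sample skill of a weighted-averaging transfer function against transfer functions trained on shuffled environmental data. All data shapes and parameters are hypothetical:

```python
import random

def wa_optima(train_species, train_env):
    """Weighted-averaging optima: each taxon's optimum is the abundance-
    weighted mean of the environmental variable over the training set."""
    n_taxa = len(train_species[0])
    optima = []
    for j in range(n_taxa):
        w = sum(row[j] for row in train_species)
        optima.append(sum(row[j] * e for row, e in zip(train_species, train_env)) / w)
    return optima

def wa_reconstruct(species_row, optima):
    """Inferred environment at one site: abundance-weighted mean of optima."""
    return sum(a * o for a, o in zip(species_row, optima)) / sum(species_row)

def corr(u, v):
    n = len(u); mu = sum(u) / n; mv = sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((b - mv) ** 2 for b in v) ** 0.5
    return num / (du * dv)

def random_tf_pvalue(train_species, train_env, n_perm=199, seed=7):
    """Permutation analogue of the randomTF idea: is the real transfer
    function's skill (r^2 between inferred and observed values) better than
    that of transfer functions trained on shuffled environments?"""
    rng = random.Random(seed)
    optima = wa_optima(train_species, train_env)
    recon = [wa_reconstruct(r, optima) for r in train_species]
    real_r2 = corr(recon, train_env) ** 2
    exceed = 0
    for _ in range(n_perm):
        env = train_env[:]
        rng.shuffle(env)
        opt = wa_optima(train_species, env)
        rec = [wa_reconstruct(r, opt) for r in train_species]
        if corr(rec, env) ** 2 >= real_r2:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

    A reconstruction whose skill is indistinguishable from that of randomly trained transfer functions (large p-value) is exactly the kind of non-significant result the study reports for 25 of 30 water table reconstructions.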

  12. Statistical evaluation of PACSTAT random number generation capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, G.F.; Toland, M.R.; Harty, H.

    1988-05-01

This report summarizes the work performed in verifying the general purpose Monte Carlo driver-program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.

  13. A novel method for predicting the power outputs of wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, Yingguang

    2018-03-01

This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model in this article can be utilized to generate irregular waves with realistic crest-trough asymmetries, and consequently the generated power can be predicted more accurately by solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can be utilized as a robust tool by ocean engineers in their design, analysis and optimization of wave energy converters.

  14. Vertex centralities in input-output networks reveal the structure of modern economies

    NASA Astrophysics Data System (ADS)

    Blöchl, Florian; Theis, Fabian J.; Vega-Redondo, Fernando; Fisher, Eric O.'N.

    2011-04-01

    Input-output tables describe the flows of goods and services between the sectors of an economy. These tables can be interpreted as weighted directed networks. At the usual level of aggregation, they contain nodes with strong self-loops and are almost completely connected. We derive two measures of node centrality that are well suited for such networks. Both are based on random walks and have interpretations as the propagation of supply shocks through the economy. Random walk centrality reveals the vertices most immediately affected by a shock. Counting betweenness identifies the nodes where a shock lingers longest. The two measures differ in how they treat self-loops. We apply both to data from a wide set of countries and uncover salient characteristics of the structures of these national economies. We further validate our indices by clustering according to sectors’ centralities. This analysis reveals geographical proximity and similar developmental status.
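    The first of the two measures can be sketched in the Noh-Rieger sense of random walk centrality assumed here: nodes that a random walker, i.e. a propagating supply shock, reaches quickly on average from everywhere else score high. Self-loops stay in the transition matrix, since the record stresses their importance in input-output tables. The network and all weights below are illustrative:

```python
def mfpt_to(target, P, tol=1e-12, max_iter=100000):
    """Mean first passage times h[j] = expected steps from node j to
    `target`, from h[target] = 0, h[j] = 1 + sum_k P[j][k]*h[k],
    solved by fixed-point iteration (converges for irreducible chains)."""
    n = len(P)
    h = [0.0] * n
    for _ in range(max_iter):
        new = [0.0 if j == target else
               1.0 + sum(P[j][k] * h[k] for k in range(n))
               for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, h)) < tol:
            return new
        h = new
    return h

def random_walk_centrality(W):
    """Random-walk centrality of each node of a weighted directed network
    (rows of W = outgoing weights, diagonal = self-loops): the inverse of
    the mean first passage time to the node, averaged over all sources."""
    n = len(W)
    P = [[w / sum(row) for w in row] for row in W]  # row-stochastic transition matrix
    cent = []
    for i in range(n):
        h = mfpt_to(i, P)
        cent.append((n - 1) / sum(h[j] for j in range(n) if j != i))
    return cent
```

    On a three-node example where node 0 trades heavily with both others while nodes 1 and 2 barely trade with each other, node 0 comes out most central, matching the intuition that a shock reaches a hub sector fastest.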

  15. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
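The resampling idea can be sketched with the standard conjugate construction (an illustration under stated assumptions, not the authors' implementation; the Wishart draw is built directly from its definition as a sum of outer products of normal vectors):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_population_moments(data, rng):
    """Draw one plausible (mean, covariance) pair for the population
    that the limited sample `data` (n rows, d columns) may have come
    from.  Sketch only: Wishart(df=n-1, scale=S/(n-1)) is sampled via
    its definition, then the mean is drawn conditionally on it."""
    n, d = data.shape
    xbar = data.mean(axis=0)
    S = np.cov(data, rowvar=False)          # sample covariance, df = n-1
    # Wishart draw: sum of n-1 outer products of N(0, S/(n-1)) vectors.
    X = rng.multivariate_normal(np.zeros(d), S / (n - 1), size=n - 1)
    cov_draw = X.T @ X
    mean_draw = rng.multivariate_normal(xbar, cov_draw / n)
    return mean_draw, cov_draw

data = rng.normal(size=(30, 3))             # 30 observations, 3 variables
mean_draw, cov_draw = sample_population_moments(data, rng)
```

Repeating such draws and propagating each through the output function yields the distribution of the output moments whose variance the optimization then minimizes.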

  16. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    NASA Astrophysics Data System (ADS)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

    Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate that AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, RMSE for AirSWOT measurements of WSE is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. AirSWOT can also accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, whereas model outputs of these variations are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.

  17. Method and apparatus for free-space quantum key distribution in daylight

    DOEpatents

    Hughes, Richard J.; Buttler, William T.; Lamoreaux, Steve K.; Morgan, George L.; Nordholt, Jane E.; Peterson, C. Glen; Kwiat, Paul G.

    2004-06-08

    A quantum cryptography apparatus securely generates a key to be used for secure transmission between a sender and a receiver connected by an atmospheric transmission link. A first laser outputs a bright timing pulse; other lasers output polarized optical data pulses after having been enabled by a random bit generator. Output optics transmit output light from the lasers that is received by receiving optics. A first beam splitter receives light from the receiving optics, where a received bright timing pulse is directed to a delay circuit for establishing a timing window for receiving light from the lasers and where an optical data pulse from one of the lasers has a probability of being either transmitted by the beam splitter or reflected by the beam splitter. A first polarizer receives transmitted optical data pulses to output one data bit value and a second polarizer receives reflected optical data pulses to output a second data bit value. A computer receives pulses representing receipt of a bright timing pulse and the first and second data bit values, where receipt of the first and second data bit values is indexed by the bright timing pulse.

  18. High level white noise generator

    DOEpatents

    Borkowski, Casimer J.; Blalock, Theron V.

    1979-01-01

    A wide band, stable, random noise source with a high and well-defined output power spectral density is provided which may be used for accurate calibration of Johnson Noise Power Thermometers (JNPT) and other applications requiring a stable, wide band, well-defined noise power spectral density. The noise source is based on the fact that the open-circuit thermal noise voltage of a feedback resistor, connecting the output to the input of a special inverting amplifier, is available at the amplifier output from an equivalent low output impedance caused by the feedback mechanism. The noise power spectral density level at the noise source output is equivalent to the density of the open-circuit thermal noise of a 100 ohm resistor at a temperature of approximately 64,000 Kelvins. The noise source has an output power spectral density that is flat to within 0.1% (0.0043 dB) in the frequency range from 1 kHz to 100 kHz, which brackets typical passbands of the signal-processing channels of JNPTs. Two embodiments, one of higher accuracy that is suitable for use as a standards instrument and another that is particularly adapted for ambient temperature operation, are illustrated in this application.
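The quoted equivalent temperature can be checked against the Johnson noise formula S_v = 4kTR, using the resistance and temperature stated in the abstract:

```python
# Johnson (thermal) noise voltage spectral density of a resistor:
# S_v = 4 * k_B * T * R  [V^2/Hz].
k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 64_000.0         # equivalent noise temperature [K] (from the abstract)
R = 100.0            # resistance [ohm]                 (from the abstract)

S_v = 4.0 * k_B * T * R        # one-sided power spectral density [V^2/Hz]
density = S_v ** 0.5           # voltage noise density [V/sqrt(Hz)]
print(f"{S_v:.3e} V^2/Hz  ({density * 1e9:.1f} nV/rtHz)")
```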

  19. Reduction of dissipation in a thermal engine by means of periodic changes of external constraints

    NASA Astrophysics Data System (ADS)

    Escher, Claus; Ross, John

    1985-03-01

    We consider a thermal engine driven by chemical reactions, which take place in a continuous flow, stirred tank reactor fitted with a movable piston. Work can be produced by means of a heat engine coupled to the products and to an external heat bath, and by the piston. Two modes of operation are compared, each with fixed input rate of chemicals: one with periodic variation of an external constraint [mode (b)], in which we vary the external pressure, and one without such variation [mode (a)]. We derive equations for the total power output in each of the two modes. The power output in mode (b) can be larger than that of mode (a) for the same chemical throughput and for the same average value of the external pressure. For a particularly simple case it is shown that the total power output in mode (b) is larger than that in (a) if work is done by the piston. At the same time the entropy production is decreased and the efficiency is increased. The possibility of an increased power output is due to the proper control of the relative phase of the externally varied constraint and its conjugate variable, the external pressure and the volume. This control is achieved by the coupling of nonlinear kinetics to the externally varied constraint. Details of specific mechanisms and the occurrence of resonance phenomena are presented in the following article.

  20. Geophysical, archaeological, and historical evidence support a solar-output model for climate change

    PubMed Central

    Perry, Charles A.; Hsu, Kenneth J.

    2000-01-01

    Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2N (where N equals a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide called “little ice ages,” similar to the period anno Domini (A.D.) 1280–1860 and recurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate-sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum. PMID:11050181
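The additive harmonic model can be sketched in a few lines (the amplitudes, amplitude growth, and number of harmonics below are assumptions for illustration, not the paper's calibrated values):

```python
import numpy as np

# Superimpose cycles of length 11 * 2**N years, with amplitude growing
# slightly at each longer period, over the Holocene plus 500 years of
# extrapolation.  Coefficients are illustrative only.
years = np.arange(-9000, 501)                  # years relative to present
model = np.zeros_like(years, dtype=float)
for N in range(0, 9):                          # 11, 22, ..., 2816 yr cycles
    period = 11.0 * 2.0 ** N
    amplitude = 1.0 + 0.2 * N                  # assumed slight growth
    model += amplitude * np.sin(2.0 * np.pi * years / period)
print(f"modeled range: {model.min():.1f} to {model.max():.1f} (arb. units)")
```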

  1. A shifting mutational landscape in 6 nutritional states: Stress-induced mutagenesis as a series of distinct stress input-mutation output relationships.

    PubMed

    Maharjan, Ram P; Ferenci, Thomas

    2017-06-01

    Environmental stresses increase genetic variation in bacteria, plants, and human cancer cells. The linkage between various environments and mutational outcomes has not been systematically investigated, however. Here, we established the influence of nutritional stresses commonly found in the biosphere (carbon, phosphate, nitrogen, oxygen, or iron limitation) on both the rate and spectrum of mutations in Escherichia coli. We found that each limitation was associated with a remarkably distinct mutational profile. Overall mutation rates were not always elevated, and nitrogen, iron, and oxygen limitation resulted in major spectral changes but no net increase in rate. Our results thus suggest that stress-induced mutagenesis is a diverse series of stress input-mutation output linkages that is distinct in every condition. Environment-specific spectra resulted in the differential emergence of traits needing particular mutations in these settings. Mutations requiring transpositions were highest under iron and oxygen limitation, whereas base-pair substitutions and indels were highest under phosphate limitation. The unexpected diversity of input-output effects explains some important phenomena in the mutational biases of evolving genomes. The prevalence of bacterial insertion sequence transpositions in the mammalian gut or in anaerobically stored cultures is due to environmentally determined mutation availability. Likewise, the much-discussed genomic bias towards transition base substitutions in evolving genomes can now be explained as an environment-specific output. Altogether, our conclusion is that environments influence genetic variation as well as selection.

  2. A shifting mutational landscape in 6 nutritional states: Stress-induced mutagenesis as a series of distinct stress input–mutation output relationships

    PubMed Central

    Maharjan, Ram P.

    2017-01-01

    Environmental stresses increase genetic variation in bacteria, plants, and human cancer cells. The linkage between various environments and mutational outcomes has not been systematically investigated, however. Here, we established the influence of nutritional stresses commonly found in the biosphere (carbon, phosphate, nitrogen, oxygen, or iron limitation) on both the rate and spectrum of mutations in Escherichia coli. We found that each limitation was associated with a remarkably distinct mutational profile. Overall mutation rates were not always elevated, and nitrogen, iron, and oxygen limitation resulted in major spectral changes but no net increase in rate. Our results thus suggest that stress-induced mutagenesis is a diverse series of stress input–mutation output linkages that is distinct in every condition. Environment-specific spectra resulted in the differential emergence of traits needing particular mutations in these settings. Mutations requiring transpositions were highest under iron and oxygen limitation, whereas base-pair substitutions and indels were highest under phosphate limitation. The unexpected diversity of input–output effects explains some important phenomena in the mutational biases of evolving genomes. The prevalence of bacterial insertion sequence transpositions in the mammalian gut or in anaerobically stored cultures is due to environmentally determined mutation availability. Likewise, the much-discussed genomic bias towards transition base substitutions in evolving genomes can now be explained as an environment-specific output. Altogether, our conclusion is that environments influence genetic variation as well as selection. PMID:28594817

  3. Automatic generation and analysis of solar cell IV curves

    DOEpatents

    Kraft, Steven M.; Jones, Jason C.

    2014-06-03

    A photovoltaic system includes multiple strings of solar panels and a device presenting a DC load to the strings of solar panels. Output currents of the strings of solar panels may be sensed and provided to a computer that generates current-voltage (IV) curves of the strings of solar panels. Output voltages of the string of solar panels may be sensed at the string or at the device presenting the DC load. The DC load may be varied. Output currents of the strings of solar panels responsive to the variation of the DC load are sensed to generate IV curves of the strings of solar panels. IV curves may be compared and analyzed to evaluate performance of and detect problems with a string of solar panels.
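What sweeping the DC load and sensing string current yields can be sketched with a hypothetical ideal single-diode string model (all parameter values are assumptions for illustration; a real string adds series and shunt resistance):

```python
import numpy as np

# Ideal single-diode model of a string (no series/shunt resistance):
# I(V) = I_sc - I_0 * (exp(V / (n * N_cells * V_t)) - 1)
I_sc = 8.0          # short-circuit current [A]       (assumed)
I_0 = 1e-9          # diode saturation current [A]    (assumed)
V_t = 0.02585       # thermal voltage at 25 C [V]
n, N_cells = 1.3, 60 * 12   # ideality factor, cells per string (assumed)

V = np.linspace(0.0, 650.0, 1000)          # swept load voltage [V]
I = I_sc - I_0 * (np.exp(V / (n * N_cells * V_t)) - 1.0)
I = np.clip(I, 0.0, None)                  # past V_oc the string gives 0 A

P = V * I
print(f"V_mp = {V[P.argmax()]:.0f} V, P_mp = {P.max():.0f} W")
```

Comparing IV curves traced this way across strings is what allows an underperforming string to stand out.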

  4. Alkali halide microstructured optical fiber for X-ray detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHaven, S. L., E-mail: stanton.l.dehaven@nasa.gov; Wincheski, R. A., E-mail: russel.a.wincheski@nasa.gov; Albin, S., E-mail: salbin@nsu.edu

    Microstructured optical fibers containing alkali halide scintillation materials of CsI(Na), CsI(Tl), and NaI(Tl) are presented. The scintillation materials are grown inside the microstructured fibers using a modified Bridgman-Stockbarger technique. The x-ray photon counts of these fibers, with and without an aluminum film coating, are compared to the output of a collimated CdTe solid state detector over an energy range from 10 to 40 keV. The photon count results show significant variations in the fiber output based on the materials. The alkali halide fiber output can exceed that of the CdTe detector, dependent upon photon counter efficiency and fiber configuration. The results and the associated materials differences are discussed.

  5. [Technical efficiency in primary care for patients with diabetes].

    PubMed

    Salinas-Martínez, Ana María; Amaya-Alemán, María Agustina; Arteaga-García, Julio César; Núñez-Rocha, Georgina Mayela; Garza-Elizondo, María Eugenia

    2009-01-01

    To quantify the technical efficiency of diabetes care in family practice settings, characterize the provision of services and health results, and recognize potential sources of variation. We used data envelopment analysis with inputs and outputs for diabetes care from 47 family units within a social security agency in Nuevo Leon. Tobit regression models were also used. Seven units were technically efficient in providing services and nine in achieving health goals. Only two achieved both outcomes. The metropolitan location and the total number of consultations favored efficiency in the provision of services regardless of patient attributes, while the age of the doctor favored efficiency in health results. Performance varied within and among family units; some were efficient at providing services while others at accomplishing health goals. Sources of variation also differed. It is necessary to include both outputs in the study of efficiency of diabetes care in family practice settings.

  6. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

  7. Lotka-Volterra system in a random environment.

    PubMed

    Dimentberg, Mikhail F

    2002-03-01

    Classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for preys' reproduction containing the following additional terms: quadratic "damping" term that accounts for interspecies competition, and term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent gamma-distributed stationary random processes. Increasing level of the environmental variations does not lead to extinction of the populations. However it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying theory of excursions of random functions and by predicting PDF of peaks in the predators' population size.

  8. Lotka-Volterra system in a random environment

    NASA Astrophysics Data System (ADS)

    Dimentberg, Mikhail F.

    2002-03-01

    Classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered with the equation for preys' reproduction containing the following additional terms: quadratic "damping" term that accounts for interspecies competition, and term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for stationary probability densities (PDF's) of the population sizes. It shows that both population sizes are independent γ-distributed stationary random processes. Increasing level of the environmental variations does not lead to extinction of the populations. However it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDF's as well as by applying theory of excursions of random functions and by predicting PDF of peaks in the predators' population size.
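The generalized model in these two records can be sketched with an Euler-Maruyama integration (scaled, illustrative coefficients; integrating log-populations keeps both sizes positive by construction, consistent with the gamma-distributed stationary solution's support):

```python
import numpy as np

rng = np.random.default_rng(42)

# Generalized Lotka-Volterra: prey u, predators v, quadratic "damping"
# eps*u**2 for interspecies competition, and white-noise forcing of the
# prey reproduction rate.  All coefficients are illustrative.
a, c, eps, sigma = 1.0, 1.0, 0.1, 0.3
dt, steps = 2e-3, 50_000

ln_u, ln_v = 0.0, 0.0                      # start at u = v = 1
noise = rng.standard_normal(steps) * (sigma * np.sqrt(dt))
u_path = np.empty(steps)
for i in range(steps):
    u, v = np.exp(ln_u), np.exp(ln_v)
    ln_u += (a - v - eps * u) * dt + noise[i]   # noisy prey reproduction
    ln_v += (u - c) * dt                        # predator dynamics
    u_path[i] = u
print(f"mean prey size = {u_path.mean():.2f}")
```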

  9. Hundred-watt-level high power random distributed feedback Raman fiber laser at 1150 nm and its application in mid-infrared laser generation.

    PubMed

    Zhang, Hanwei; Zhou, Pu; Wang, Xiong; Du, Xueyuan; Xiao, Hu; Xu, Xiaojun

    2015-06-29

    Two kinds of hundred-watt-level random distributed feedback Raman fiber lasers have been demonstrated. The optical efficiency reaches as high as 84.8%; to our knowledge, the reported power and efficiency are the highest for a random laser. We have also demonstrated that the developed random laser can be used to pump a Ho-doped fiber laser for mid-infrared laser generation, achieving a 23 W laser output at 2050 nm. The presented laser can deliver high output power efficiently and conveniently and opens a new direction for high power laser sources at designed wavelengths.

  10. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-sections predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172 groups DRAGLIB library format, are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrixes among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations infers to cover 95% of the output population with at least a 95% of confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state of the art resonant self-shielding calculations such as DRAGONv4. (authors)
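The LHS step can be sketched independently of the lattice code (unit hypercube only; mapping to the study's normal cross-section distributions would go through each marginal's inverse CDF):

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin hypercube sample on [0, 1)^d: each dimension is cut
    into n_samples equal strata, one point is drawn uniformly inside
    each stratum, and the strata are shuffled independently per
    dimension.  This guarantees dense coverage of every marginal."""
    u = rng.uniform(size=(n_samples, n_dims))           # within-stratum
    strata = np.array([rng.permutation(n_samples)
                       for _ in range(n_dims)]).T       # stratum indices
    return (strata + u) / n_samples

sample = latin_hypercube(500, 2, rng)    # e.g. 500 runs over 2 inputs
```

Simple random sampling, by contrast, can leave whole strata of an input distribution unvisited at the same sample size.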

  11. Goal-directed fluid optimization based on stroke volume variation and cardiac index during one-lung ventilation in patients undergoing thoracoscopy lobectomy operations: a pilot study.

    PubMed

    Zhang, Jian; Chen, Chao Qin; Lei, Xiu Zhen; Feng, Zhi Ying; Zhu, Sheng Mei

    2013-07-01

    This pilot study was designed to utilize stroke volume variation and cardiac index to ensure fluid optimization during one-lung ventilation in patients undergoing thoracoscopic lobectomies. Eighty patients undergoing thoracoscopic lobectomy were randomized into either a goal-directed therapy group or a control group. In the goal-directed therapy group, the stroke volume variation was controlled at 10%±1%, and the cardiac index was controlled at a minimum of 2.5 L·min⁻¹·m⁻². In the control group, the MAP was maintained between 65 mm Hg and 90 mm Hg, heart rate was maintained between 60 BPM and 100 BPM, and urinary output was greater than 0.5 mL·kg⁻¹·h⁻¹. The hemodynamic variables, arterial blood gas analyses, total administered fluid volume and side effects were recorded. The PaO2/FiO2 ratio before the end of one-lung ventilation in the goal-directed therapy group was significantly higher than that of the control group, but there were no differences between the goal-directed therapy group and the control group for the PaO2/FiO2 ratio or other arterial blood gas analysis indices prior to anesthesia. The extubation time was significantly earlier in the goal-directed therapy group, but there was no difference in the length of hospital stay. Patients in the control group had greater urine volumes, and they were given greater colloid and overall fluid volumes. Nausea and vomiting were significantly reduced in the goal-directed therapy group. The results of this study demonstrated that an optimization protocol, based on stroke volume variation and cardiac index obtained with a FloTrac/Vigileo device, increased the PaO2/FiO2 ratio and reduced the overall fluid volume, intubation time and postoperative complications (nausea and vomiting) in thoracic surgery patients requiring one-lung ventilation.

  12. Implementation of a quantum random number generator based on the optimal clustering of photocounts

    NASA Astrophysics Data System (ADS)

    Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.

    2017-10-01

    To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurements of a system from which an initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach the Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extraction of a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach a yield rate of 64 Mbit/s for the output provably random sequence.

  13. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to the diamond classification and to the prediction of a sun-spot process are demonstrated.

  14. A neural network approach for enhancing information extraction from multispectral image data

    USGS Publications Warehouse

    Liu, J.; Shao, G.; Zhu, H.; Liu, S.

    2005-01-01

    A back-propagation artificial neural network (ANN) was applied to classify multispectral remote sensing imagery data. The classification procedure included four steps: (i) noisy training that adds minor random variations to the sampling data to make the data more representative and to reduce the training sample size; (ii) iterative or multi-tier classification that reclassifies the unclassified pixels by making a subset of training samples from the original training set, which means the neural model can focus on fewer classes; (iii) spectral channel selection based on neural network weights that can distinguish the relative importance of each channel in the classification process to simplify the ANN model; and (iv) voting rules that adjust the accuracy of classification and produce outputs of different confidence levels. The Purdue Forest, located west of Purdue University, West Lafayette, Indiana, was chosen as the test site. The 1992 Landsat thematic mapper imagery was used as the input data. High-quality airborne photographs of the same time period were used for the ground truth. A total of 11 land use and land cover classes were defined, including water, broadleaved forest, coniferous forest, young forest, urban and road, and six types of cropland-grassland. The experiment indicated that the back-propagation neural network application was satisfactory in distinguishing different land cover types at US Geological Survey levels II-III. The single-tier classification reached an overall accuracy of 85%, and the multi-tier classification an overall accuracy of 95%. For the whole test region, the final output of this study reached an overall accuracy of 87%. © 2005 CASI.
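Step (i), noisy training, can be sketched as simple noise-injection augmentation (my reading of the step, with assumed function names and a 2% noise fraction; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_training_set(samples, n_copies=5, noise_frac=0.02, rng=rng):
    """Replicate each training sample with minor random variations so a
    small sample better covers within-class spectral variability.
    noise_frac scales the perturbation relative to each band's standard
    deviation across the training set.  Sketch under stated assumptions."""
    scale = noise_frac * samples.std(axis=0, keepdims=True)
    copies = [samples + rng.normal(0.0, 1.0, samples.shape) * scale
              for _ in range(n_copies)]
    return np.vstack([samples, *copies])

bands = rng.uniform(0, 255, size=(40, 6))   # 40 pixels, 6 spectral bands
augmented = noisy_training_set(bands)
print(augmented.shape)
```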

  15. Storage filters upland suspended sediment signals delivered from watersheds

    USGS Publications Warehouse

    Pizzuto, James E.; Keeler, Jeremy; Skalak, Katherine; Karwan, Diana

    2017-01-01

    Climate change, tectonics, and humans create long- and short-term temporal variations in the supply of suspended sediment to rivers. These signals, generated in upland erosional areas, are filtered by alluvial storage before reaching the basin outlet. We quantified this filter using a random walk model driven by sediment budget data, a power-law distributed probability density function (PDF) to determine how long sediment remains stored, and a constant downstream drift velocity during transport of 157 km/yr. For 25 km of transport, few particles are stored, and the median travel time is 0.2 yr. For 1000 km of transport, nearly all particles are stored, and the median travel time is 2.5 m.y. Both travel-time distributions are power laws. The 1000 km travel-time distribution was then used to filter sinusoidal input signals with periods of 10 yr and 104 yr. The 10 yr signal is delayed by 12.5 times its input period, damped by a factor of 380, and is output as a power law. The 104 yr signal is delayed by 0.15 times its input period, damped by a factor of 3, and the output signal retains its sinusoidal input form (but with a power-law “tail”). Delivery time scales for these two signals are controlled by storage; in-channel transport time is insignificant, and low-frequency signals are transmitted with greater fidelity than high-frequency signals. These signal modifications are essential to consider when evaluating watershed restoration schemes designed to control sediment loading, and where source-area geomorphic processes are inferred from the geologic record.
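The random walk model can be sketched as follows. Only the 157 km/yr drift velocity comes from the abstract; the hop length, storage probability, and Pareto rest-time parameters are made-up illustrative values, so the resulting travel times will not reproduce the paper's numbers:

```python
import numpy as np

rng = np.random.default_rng(11)

# Particles alternate short transport hops at a constant drift velocity
# with storage episodes whose durations follow a heavy-tailed (Pareto)
# distribution, mimicking power-law residence times in alluvial storage.
velocity = 157.0          # km/yr in-channel drift (from the abstract)
hop = 1.0                 # km travelled between storage checks (assumed)
p_store = 0.9             # probability a hop ends in storage (assumed)
alpha, t_min = 1.5, 0.01  # Pareto tail exponent, minimum rest [yr] (assumed)

def travel_time(distance_km, rng):
    t = distance_km / velocity                  # total in-channel time
    n_hops = int(distance_km / hop)
    stored = rng.uniform(size=n_hops) < p_store
    # Pareto draws via inverse CDF: t_min * (1 - U)**(-1/alpha)
    rests = t_min * (1.0 - rng.uniform(size=n_hops)) ** (-1.0 / alpha)
    return t + rests[stored].sum()

times = np.array([travel_time(1000.0, rng) for _ in range(500)])
print(f"median travel time = {np.median(times):.1f} yr")
```

The heavy tail of the rest-time distribution is what makes the delivered travel-time distribution itself a power law, dominated by storage rather than in-channel transport.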

  16. Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.

    PubMed

    Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W

    2014-01-27

    We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.
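A toy model of the measurement chain described above (ideal unit visibility and fully uniform phase diffusion are assumptions here; the paper models the semiconductor laser field dynamics in detail):

```python
import numpy as np

rng = np.random.default_rng(5)

# Successive pulses carry independent uniform random phases; interfering
# pulse i with pulse i-1 in an unbalanced interferometer gives an
# intensity proportional to cos^2(delta_phi / 2), which is digitized.
n_pulses = 100_000
phases = rng.uniform(0.0, 2.0 * np.pi, n_pulses)
delta = np.diff(phases)                     # phase between adjacent pulses
intensity = np.cos(delta / 2.0) ** 2        # ideal, unit-visibility case

# Simplest possible 1-bit digitization: compare to the median intensity
# (the real system digitizes at 14 bits and post-processes).
bits = (intensity > np.median(intensity)).astype(int)
print(f"fraction of ones = {bits.mean():.3f}")
```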

  17. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters, which enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components: the deterministic mean value and the stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect of random variation in individual fitness on the expected selection differential depends on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
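    For orientation, the decomposition can be summarised schematically. The first identity is the standard Robertson-Price equation (with $z$ a quantitative character, $w$ fitness, and $\bar{w}$ mean fitness); the three-term split merely names the components listed in the abstract, in shorthand introduced here rather than the paper's notation:

```latex
\Delta \bar{z} \;=\; \frac{\operatorname{cov}(w, z)}{\bar{w}},
\qquad
S \;=\; S_{\mathrm{det}} \;+\; S_{\mathrm{dem}} \;+\; S_{\mathrm{env}},
```

where $S_{\mathrm{dem}}$ and $S_{\mathrm{env}}$ denote the contributions of demographic and environmental stochasticity to the selection differential $S$.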

  18. General practice performance in referral for suspected cancer: influence of number of cases and case-mix on publicly reported data.

    PubMed

    Murchie, P; Chowdhury, A; Smith, S; Campbell, N C; Lee, A J; Linden, D; Burton, C D

    2015-05-26

    Publicly available data show variation in GPs' use of urgent suspected cancer (USC) referral pathways. We investigated whether this could be due to small numbers of cancer cases and random case-mix, rather than due to true variation in performance. We analysed individual GP practice USC referral detection rates (proportion of the practice's cancer cases that are detected via USC) and conversion rates (proportion of the practice's USC referrals that prove to be cancer) in routinely collected data from GP practices in all of England (over 4 years) and northeast Scotland (over 7 years). We explored the effect of pooling data. We then modelled the effects of adding random case-mix to practice variation. Correlations between practice detection rate and conversion rate became less positive when data were aggregated over several years. Adding random case-mix to between-practice variation indicated that the median proportion of poorly performing practices correctly identified after 25 cancer cases were examined was 20% (IQR 17 to 24) and after 100 cases was 44% (IQR 40 to 47). Much apparent variation in GPs' use of suspected cancer referral pathways can be attributed to random case-mix. The methods currently used to assess the quality of GP-suspected cancer referral performance, and to compare individual practices, are misleading. These should no longer be used, and more appropriate and robust methods should be developed.
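    The case-mix argument can be illustrated with a toy binomial simulation (all numbers invented): practices sharing an identical true detection rate still show wide apparent variation when each is judged on a small number of cancer cases.

```python
import numpy as np

rng = np.random.default_rng(1)

def apparent_detection_rates(true_rate=0.5, n_cases=25, n_practices=10_000):
    """Observed detection rates for practices that all share the same true
    rate; any spread is pure binomial case-mix noise (numbers invented)."""
    return rng.binomial(n_cases, true_rate, size=n_practices) / n_cases

spread = {n: apparent_detection_rates(n_cases=n).std() for n in (25, 100)}
print(spread)  # apparent between-practice variation shrinks with case count
```

With 25 cases per practice the purely random spread is twice that seen with 100 cases, so small-sample noise can masquerade as performance differences.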

  19. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
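    A minimal sketch of the polynomial chaos idea behind the SRSM, assuming a single standard normal input and an invented placeholder model; the actual method generalises to many (possibly correlated) uncertain inputs.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)

def model(x):
    """Placeholder for an expensive environmental/biological model."""
    return np.exp(0.3 * x)

# Fit a 3rd-order expansion in probabilists' Hermite polynomials of a
# standard normal input by least squares on random model evaluations.
xi = rng.standard_normal(2000)
coef, *_ = np.linalg.lstsq(hermevander(xi, 3), model(xi), rcond=None)

# The cheap surrogate then propagates uncertainty in place of the model.
x_new = rng.standard_normal(100_000)
surrogate = hermevander(x_new, 3) @ coef
print(surrogate.mean())  # close to E[exp(0.3 Z)] = exp(0.045)
```

Once fitted, output statistics come from evaluating the polynomial surrogate, which is the computational saving the SRSM targets.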

  20. Rice solution and World Health Organization solution by gastric infusion for high stool output diarrhea.

    PubMed

    Mota-Hernández, F; Bross-Soriano, D; Pérez-Ricardez, M L; Velásquez-Jones, L

    1991-08-01

    We sought to determine the efficacy of three different types of treatment in children with acute diarrhea who, during the oral rehydration period, had high stool output (greater than 10 mL/kg per hour). Sixty-six children, aged 1 to 18 months, with an average stool output of 22.6 mL/kg per hour were randomly distributed into three groups: group 1 received a rice flour solution, group 2 received the World Health Organization rehydration solution by gastric infusion, and group 3 continued to receive this solution orally. In all three groups, a decrease in stool output was observed, with the higher decrease observed in group 1 patients. Such a decrease facilitated rehydration of all 22 patients in group 1 (100%) in 3.3 +/- 1.5 hours, 16 (73%) in group 2 in 4.3 +/- 2.1 hours, and 15 (69%) in group 3 in 4.9 +/- 2.0 hours. No complications were observed. These data indicate that the rice flour solution is effective in children with high stool output diarrhea.

  1. Least-squares analysis of the Mueller matrix.

    PubMed

    Reimer, Michael; Yevick, David

    2006-08-15

    In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
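    The least-squares estimation step can be sketched in a few lines, assuming an ensemble of noisy output Stokes vectors for random input states (synthetic data, not physically calibrated polarization measurements).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative "true" 4x4 Mueller matrix (a rotation acting on two Stokes
# components); invented, not measured fiber data.
M_true = np.eye(4)
M_true[1:3, 1:3] = [[np.cos(0.4), -np.sin(0.4)],
                    [np.sin(0.4),  np.cos(0.4)]]

# Repeated measurements: random input Stokes vectors, noisy outputs.
S_in = rng.standard_normal((200, 4))
S_out = S_in @ M_true.T + 0.01 * rng.standard_normal((200, 4))

# Least squares: solve S_in @ M.T ~= S_out for the matrix M.
M_hat = np.linalg.lstsq(S_in, S_out, rcond=None)[0].T
print(np.abs(M_hat - M_true).max())  # small residual estimation error
```

Averaging over many random input states is what makes the simple least-squares estimate robust to measurement noise.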

  2. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  4. Post-Exercise Hypotension and Its Mechanisms Differ after Morning and Evening Exercise: A Randomized Crossover Study

    PubMed Central

    da Silva Junior, Natan D.; Tinucci, Tais; Casarini, Dulce E.; Cipolla-Neto, José

    2015-01-01

    Post-exercise hypotension (PEH), calculated as the difference between post- and pre-exercise values, is greater after exercise performed in the evening than in the morning. However, the hypotensive effect of morning exercise may be masked by the morning circadian increase in blood pressure. This study investigated PEH and its hemodynamic and autonomic mechanisms after sessions of aerobic exercise performed in the morning and evening, controlling for responses observed after control sessions performed at the same times of day. Sixteen pre-hypertensive men underwent four sessions (random order): two conducted in the morning (7:30 am) and two in the evening (5 pm). At each time of day, subjects underwent an exercise (cycling, 45 min, 50%VO2peak) and a control (sitting rest) session. Measurements were taken pre- and post-interventions in all the sessions. The net effects of exercise were calculated for each time of day by [(post-pre exercise)-(post-pre control)] and were compared by paired t-test (P<0.05). Hypotensive net effects of exercise (i.e., decreases in systolic, diastolic and mean blood pressure) occurred at both times of day, but systolic blood pressure reductions were greater after morning exercise (-7±3 vs. -3±4 mmHg, P<0.05). Exercise decreased cardiac output only in the morning (-460±771 ml/min, P<0.05), while it decreased stroke volume similarly at both times of day and increased heart rate less in the morning than in the evening (+7±5 vs. +10±5 bpm, P<0.05). Only evening exercise increased sympathovagal balance (+1.5±1.6, P<0.05) and calf blood flow responses to reactive hyperemia (+120±179 vs. -70±188 U, P<0.05). In conclusion, PEH occurs after exercise conducted at both times of day, but the systolic hypotensive effect is greater after morning exercise when circadian variations are considered. This greater effect is accompanied by a reduction of cardiac output due to a smaller increase in heart rate and cardiac sympathovagal balance. PMID:26186444

  5. Effects of ion channel noise on neural circuits: an application to the respiratory pattern generator to investigate breathing variability.

    PubMed

    Yu, Haitao; Dhingra, Rishi R; Dick, Thomas E; Galán, Roberto F

    2017-01-01

    Neural activity generally displays irregular firing patterns even in circuits with apparently regular outputs, such as motor pattern generators, in which the output frequency fluctuates randomly around a mean value. This "circuit noise" is inherited from the random firing of single neurons, which emerges from stochastic ion channel gating (channel noise), spontaneous neurotransmitter release, and its diffusion and binding to synaptic receptors. Here we demonstrate how to expand conductance-based network models that are originally deterministic to include realistic, physiological noise, focusing on stochastic ion channel gating. We illustrate this procedure with a well-established conductance-based model of the respiratory pattern generator, which allows us to investigate how channel noise affects neural dynamics at the circuit level and, in particular, to understand the relationship between the respiratory pattern and its breath-to-breath variability. We show that as the channel number increases, the duration of inspiration and expiration varies, and so does the coefficient of variation of the breath-to-breath interval, which attains a minimum when the mean duration of expiration slightly exceeds that of inspiration. For small channel numbers, the variability of the expiratory phase dominates over that of the inspiratory phase, and vice versa for large channel numbers. Among the four different cell types in the respiratory pattern generator, pacemaker cells exhibit the highest sensitivity to channel noise. The model shows that suppressing input from the pons leads to longer inspiratory phases, a reduction in breathing frequency, and larger breath-to-breath variability, whereas enhanced input from the raphe nucleus increases breathing frequency without changing its pattern. A major source of noise in neuronal circuits is the "flickering" of ion currents passing through the neurons' membranes (channel noise), which cannot be suppressed experimentally. 
Computational simulations are therefore the best way to investigate the effects of this physiological noise by manipulating its level at will. We investigate the role of noise in the respiratory pattern generator and show that endogenous, breath-to-breath variability is tightly linked to the respiratory pattern. Copyright © 2017 the American Physiological Society.
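    The core mechanism, random single-channel gating producing population-level noise that shrinks with channel number, can be sketched generically. The rates and two-state channel below are invented for illustration and are not taken from the respiratory CPG model.

```python
import numpy as np

rng = np.random.default_rng(4)

def open_fraction(n_channels, alpha=0.5, beta=0.5, dt=0.01, steps=5000):
    """Fraction of open channels in a population of two-state ion channels,
    with binomial open/close transitions each time step (invented rates)."""
    n_open = n_channels // 2
    trace = np.empty(steps)
    for i in range(steps):
        n_open += rng.binomial(n_channels - n_open, alpha * dt)  # openings
        n_open -= rng.binomial(n_open, beta * dt)                # closings
        trace[i] = n_open / n_channels
    return trace

fluct = {n: open_fraction(n).std() for n in (100, 10_000)}
print(fluct)  # channel noise shrinks roughly as 1/sqrt(N)
```

This is the knob the study turns: scaling the channel number up or down sets the noise level, which in the full model reshapes breath-to-breath variability.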

  6. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
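    The gamma-mixed Poisson (negative binomial) construction described above can be checked numerically; the parameter values are illustrative, not from the epilepsy data set.

```python
import numpy as np

rng = np.random.default_rng(5)

n, mu = 200_000, 3.0

pure = rng.poisson(mu, n)                  # plain Poisson: variance ~= mean

# Mixture: give each count its own gamma-distributed rate (between-patient
# variation); marginally this is the negative binomial distribution.
mixed = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0, size=n))

print(pure.var() / pure.mean())    # dispersion index ~ 1
print(mixed.var() / mixed.mean())  # > 1: overdispersion from the random rate
```

Replacing the gamma draw with an inverse Gaussian one gives the alternative mixture family the paper argues fits the seizure counts better.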

  7. Parasite fitness traits under environmental variation: disentangling the roles of a chytrid's immediate host and external environment.

    PubMed

    Van den Wyngaert, Silke; Vanholsbeeck, Olivier; Spaak, Piet; Ibelings, Bas W

    2014-10-01

    Parasite environments are heterogeneous at different levels. The first level of variability is the host itself. The second level represents the external environment for the hosts, to which parasites may be exposed during part of their life cycle. Both levels are expected to affect parasite fitness traits. We disentangle the main and interaction effects of variation in the immediate host environment, here the diatom Asterionella formosa (variables: host cell volume and host condition through herbicide pre-exposure), and variation in the external environment (variables: host density and acute herbicide exposure) on three fitness traits (infection success, development time and reproductive output) of a chytrid parasite. Herbicide exposure only decreased infection success in a low host density environment. This result reinforces the hypothesis that chytrid zoospores use photosynthesis-dependent chemical cues to locate their host. At high host densities, chemotaxis becomes less relevant due to increasing chance contact rates between host and parasite, thereby following the mass-action principle in epidemiology. Theoretical support for this finding is provided by an agent-based simulation model. The immediate host environment (cell volume) substantially affected parasite reproductive output and also interacted with the external herbicide-exposed environment. In contrast, changes in the immediate host environment through herbicide pre-exposure did not increase infection success, though they had subtle effects on zoospore development time and reproductive output. This study shows that both the immediate host and the external environment, as well as their interaction, have significant effects on parasite fitness. Disentangling these effects improves our understanding of the processes underlying parasite spread and disease dynamics.

  8. Productive efficiency and its determinants in the Community Dental Service in the north-west of England.

    PubMed

    Hill, H; Birch, S; Tickle, M; McDonald, R; Brocklehurst, P

    2017-06-01

    To assess the efficiency of service provision in the Community Dental Services and its determinants in the North-West of England. 40 Community Dental Services sites operating across the North-West of England. A data envelopment analysis was undertaken of inputs (number of surgeries; hours worked by dental officers, therapists, hygienists and others) and outputs (treatments delivered, number of courses of treatment and patients seen) of the Community Dental Services to produce relative efficiency ratings by health authority. These were further analyzed in order to identify which inputs (determined within the Community Dental Services) or external factors outside the control of the Community Dental Services are associated with efficiency. Relative efficiency rankings in Community Dental Services production of dental healthcare. Using the quantity of treatments delivered as the measure of output, on average the Community Dental Services in England is operating at a relative efficiency of 85% (95% confidence interval 77%–99%) compared to the best performing services. Average efficiency is lower when courses of treatment and unique patients seen are used as output measures, 82% and 68%, respectively. Neither the input mix nor the patient case mix explained variations in the efficiency across Community Dental Services. Although large variations in performance exist across Community Dental Services, the data available were not able to explain these variations. A useful next step would be to undertake detailed case studies of several best and under-performing services to explore the factors that influence relative performance levels. Copyright © 2017 Dennis Barber Ltd.
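    An input-oriented, constant-returns data envelopment analysis of the kind described reduces to one linear program per site. The sketch below uses made-up data for four hypothetical clinics, not the Community Dental Services figures.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y):
    """Input-oriented, constant-returns DEA scores for each decision-making
    unit (DMU). X: (n_dmu, n_inputs); Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Variables: [theta, lambda_1 .. lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.c_[-X[o].reshape(m, 1), X.T]   # sum_j lam_j x_j <= theta*x_o
        A_out = np.c_[np.zeros((s, 1)), -Y.T]    # sum_j lam_j y_j >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Four hypothetical clinics: one input (staff hours), one output (courses).
X = np.array([[10.0], [20.0], [30.0], [20.0]])
Y = np.array([[100.0], [200.0], [240.0], [150.0]])
scores = dea_efficiency(X, Y)
print(scores)  # clinics matching the best output/input ratio score 1.0
```

Each score answers: by what factor could this clinic shrink its inputs, holding outputs fixed, if it performed like the best peers on the efficiency frontier.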

  9. Variation in motor output and motor performance in a centrally generated motor pattern

    PubMed Central

    Norris, Brian J.; Doloc-Mihu, Anca; Calabrese, Ronald L.

    2014-01-01

    Central pattern generators (CPGs) produce motor patterns that ultimately drive motor outputs. We studied how functional motor performance is achieved, specifically, whether the variation seen in motor patterns is reflected in motor performance and whether fictive motor patterns differ from those in vivo. We used the leech heartbeat system in which a bilaterally symmetrical CPG coordinates segmental heart motor neurons and two segmented heart tubes into two mutually exclusive coordination modes: rear-to-front peristaltic on one side and nearly synchronous on the other, with regular side-to-side switches. We assessed individual variability of the motor pattern and the beat pattern in vivo. To quantify the beat pattern we imaged intact adults. To quantify the phase relations between motor neurons and heart constrictions we recorded extracellularly from two heart motor neurons and movement from the corresponding heart segments in minimally dissected leeches. Variation in the motor pattern was reflected in motor performance only in the peristaltic mode, where larger intersegmental phase differences in the motor neurons resulted in larger phase differences between heart constrictions. Fictive motor patterns differed from those in vivo only in the synchronous mode, where intersegmental phase differences in vivo had a larger front-to-rear bias and were more constrained. Additionally, load-influenced constriction timing might explain the amplification of the phase differences between heart segments in the peristaltic mode and the higher variability in motor output due to body shape assumed in this soft-bodied animal. The motor pattern determines the beat pattern, peristaltic or synchronous, but heart mechanics influence the phase relations achieved. PMID:24717348

  10. A random Q-switched fiber laser

    PubMed Central

    Tang, Yulong; Xu, Jianqiu

    2015-01-01

    Extensive studies have been performed on random lasers in which multiple-scattering feedback is used to generate coherent emission. Q-switching and mode-locking are well-known routes for achieving high peak power output in conventional lasers. However, in random lasers, the ubiquitous random cavities that are formed by multiple scattering inhibit energy storage, making Q-switching impossible. In this paper, widespread Rayleigh scattering arising from the intrinsic micro-scale refractive-index irregularities of fiber cores is used to form random cavities along the fiber. The Q-factor of the cavity is rapidly increased by stimulated Brillouin scattering just after the spontaneous emission is enhanced by random cavity resonances, resulting in random Q-switched pulses with high brightness and high peak power. This report is the first observation of high-brightness random Q-switched laser emission and is expected to stimulate new areas of scientific research and applications, including encryption, remote three-dimensional random imaging and the simulation of stellar lasing. PMID:25797520

  11. Ultra-low output impedance RF power amplifier for parallel excitation.

    PubMed

    Chu, Xu; Yang, Xing; Liu, Yunfeng; Sabate, Juan; Zhu, Yudong

    2009-04-01

    Inductive coupling between coil elements of a transmit array is one of the key challenges faced by parallel RF transmission. An ultra-low output impedance RF power amplifier (PA) concept was introduced to address this challenge. In an example implementation, an output-matching network was designed to transform the drain-source impedance of the metal-oxide-semiconductor field-effect transistor (MOSFET) into a very low value to suppress interelement coupling effects and, meanwhile, to match the input impedance of the coil to the optimum load of the MOSFET to maximize the available output power. Two prototype amplifiers with 500-W output rating were developed accordingly, and were further evaluated with a transmit array in phantom experiments. Compared to conventional 50-Ω sources, the new approach exhibited considerable effectiveness in suppressing the effects of interelement coupling. The experiments further indicated that the isolation performance was comparable to that achieved by optimized overlap decoupling. The new approach, benefiting from a distinctive current-source characteristic, also exhibited superior robustness against load variation. Feasibility of the new approach in high-field MR was demonstrated on a 3T clinical scanner.

  12. Free Vibration of Uncertain Unsymmetrically Laminated Beams

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Goyal, Vijay K.

    2001-01-01

    Monte Carlo simulation and stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set containing possible random ply orientations is generated; this data set is assumed to be symmetric. The stochastic finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.
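    The comparison the abstract draws, sensitivity-derivative propagation versus Monte Carlo sampling, can be illustrated generically. The cosine stand-in for the squared natural frequency below is invented for the sketch; the paper's actual relation comes from its finite element model.

```python
import numpy as np

rng = np.random.default_rng(6)

def freq_sq(theta_deg):
    """Invented stand-in for squared fundamental frequency vs. ply angle."""
    return 1.0 + 0.2 * np.cos(2 * np.radians(theta_deg))

nominal = 30.0                                     # nominal ply angle (deg)
samples = nominal + rng.uniform(-5, 5, 100_000)    # +/- 5 deg uncertainty
mc = freq_sq(samples)                              # Monte Carlo propagation

# First-order (sensitivity-derivative) estimate of the output variance.
h = 1e-4
dfdt = (freq_sq(nominal + h) - freq_sq(nominal - h)) / (2 * h)
var_lin = dfdt**2 * (10.0**2 / 12.0)               # variance of U(-5, 5)

print(mc.var(), var_lin)  # linearisation agrees closely with Monte Carlo
```

For small angle perturbations the two estimates agree, which is the kind of cross-check the paper performs between its stochastic FE model and Monte Carlo simulation.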

  13. An Investigation Into the Performance of a Miniature Diesel Engine

    ERIC Educational Resources Information Center

    Stevenson, P. W.

    1970-01-01

    Reports the procedures and results of a student investigation of the performance of a miniature diesel engine. The experiments include (1) torque measurement, (2) power measurement, and (3) variation of power output with applied load. Bibliography. (LC)

  14. Co Laser.

    DTIC Science & Technology

    1976-01-01

    Recovered list-of-figures fragments from a degraded scan: Comparison of the Normalized Experimental and Predicted Temporal Behavior of the Laser Output Pulse for a 20% CO and 80% N2 Mixture; Comparison of the Normalized Experimental and Predicted Temporal Behavior of the Laser Output Pulse for a 20% CO and 80% Ar Mixture; Predictions of the Temporal Variation of Small... [The remainder of the record is unrecoverable figure residue, retaining only CO partial pressures, pco (Torr): 700, 350, 200, 100.]

  15. Fiber-optic photoelastic pressure sensor with fiber-loss compensation

    NASA Technical Reports Server (NTRS)

    Beheim, G.; Anthan, D. J.

    1987-01-01

    A new fiber-optic pressure sensor is described that has high immunity to the effects of fiber-loss variations. This device uses the photoelastic effect to modulate the proportion of the light from each of two input fibers that is coupled into each of two output fibers. This four-fiber link permits two detectors to be used to measure the sensor's responses to the light from each of two independently controlled sources. These four detector outputs are processed to yield a loss-compensated signal that is a stable and sensitive pressure indicator.

  16. Delta modulation

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1971-01-01

    Conclusions from design research on the Song adaptive delta modulator for source encoding of voice signals are presented, including the variation of output SNR versus input signal power when 8-, 9-, and 10-bit internal arithmetic is employed. Voice intelligibility tapes are used to test the 10-bit system. An analysis is also presented of a delta modulator designed to minimize the in-band rms error. This is accomplished by frequency-shaping the error signal in the modulator prior to hard limiting. The result is a significant increase in the output SNR measured after low-pass filtering.
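    For readers unfamiliar with the technique, a textbook fixed-step delta modulator can be sketched in a few lines. This shows the basic 1-bit encode/decode loop only, not the Song adaptive scheme analysed in the report; the step size and test signal are arbitrary.

```python
import numpy as np

def delta_modulate(x, step=0.05):
    """1-bit delta modulation: transmit the sign of the error between the
    input and an integrator that tracks it. Fixed step (non-adaptive)."""
    estimate = 0.0
    bits, recon = [], []
    for sample in x:
        bit = 1 if sample >= estimate else 0
        estimate += step if bit else -step  # decoder runs the same integrator
        bits.append(bit)
        recon.append(estimate)
    return np.array(bits), np.array(recon)

t = np.linspace(0, 1, 1000)
x = 0.5 * np.sin(2 * np.pi * 3 * t)
bits, recon = delta_modulate(x)
print(np.abs(recon - x).max())  # tracking error stays near the step size
```

Adaptive variants such as Song's vary the step size with the bit history, trading granular noise against slope-overload error, which is what the SNR curves in the report quantify.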

  17. Neural control and transient analysis of the LCL-type resonant converter

    NASA Astrophysics Data System (ADS)

    Zouggar, S.; Nait Charif, H.; Azizi, M.

    2000-07-01

    This paper proposes a generalised inverse learning structure to control the LCL converter. A feedforward neural network is trained to act as an inverse model of the LCL converter; the two are then cascaded so that the composed system realises an identity mapping between the desired response and the LCL output voltage. Using the large-signal model, we analyse the transient output response of the controlled LCL converter under large load variations. The simulation results show the efficiency of using neural networks to regulate the LCL converter.

  18. What if the Hubbard Brook weirs had been built somewhere else? Spatial uncertainty in the application of catchment budgets

    NASA Astrophysics Data System (ADS)

    Bailey, S. W.

    2016-12-01

    Nine catchments are gaged at Hubbard Brook Experimental Forest, Woodstock, NH, USA, with weirs installed on adjacent first-order streams. These catchments have been used as unit ecosystems for analysis of chemical budgets, including evaluation of long term trends and response to disturbance. This study examines uncertainty in the representativeness of these budgets to other nearby catchments, or as representatives of the broader northern hardwood ecosystem, depending on choice of location of the stream gaging station. Within forested northern hardwood catchments across the Hubbard Brook region, there is relatively little spatial variation in amount or chemistry of precipitation inputs or in amount of streamwater outputs. For example, runoff per unit catchment area varies by less than 10% at gaging stations on first to sixth order streams. In contrast, concentrations of major solutes vary by an order of magnitude or more across stream sampling sites, with a similar range in concentrations seen within individual first order catchments as seen across the third order Hubbard Brook valley or across the White Mountain region. These spatial variations in stream chemistry are temporally persistent across a range of flow conditions. Thus first order catchment budgets vary greatly depending on very local variations in stream chemistry driven by choice of the site to develop a stream gage. For example, carbon output in dissolved organic matter varies by a factor of five depending on where the catchment output is defined at Watershed 3. I hypothesize that catchment outputs from first order streams are driven by spatially variable chemistry of shallow groundwater, reflecting local variations in the distribution of soils and vegetation. In contrast, spatial variability in stream chemistry decreases with stream order, hypothesized to reflect deeper groundwater inputs on larger streams, which are more regionally uniform. 
Thus, choice of a gaging site and definition of an ecosystem as a unit of analysis at a larger scale, such as the Hubbard Brook valley, would have less impact on calculated budgets than at the headwater scale. Monitoring of a larger catchment is more likely to be representative of other similar sized catchments. However, particular research questions may be better studied at the smaller headwater scale.

  19. Allowable Trajectory Variations for Space Shuttle Orbiter Entry-Aeroheating CFD

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Alter, Stephen J.

    2008-01-01

    Reynolds-number criteria are developed for acceptable variations in Space Shuttle Orbiter entry trajectories for use in computational aeroheating analyses. The criteria determine if an existing computational fluid dynamics solution for a particular trajectory can be extrapolated to a different trajectory. The criteria development begins by estimating uncertainties for seventeen types of computational aeroheating data, such as boundary layer thickness, at exact trajectory conditions. For each type of datum, the allowable uncertainty contribution due to trajectory variation is set to be half of the value of the estimated exact-trajectory uncertainty. Then, for the twelve highest-priority datum types, Reynolds-number relations between trajectory variation and output uncertainty are determined. From these relations the criteria are established for the maximum allowable trajectory variations. The most restrictive criterion allows a 25% variation in Reynolds number at constant Mach number between trajectories.

  20. Computer methods for sampling from the gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.E.; Tadikamalla, P.R.

    1978-01-01

    Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
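    The rejection-based idea shared by the surveyed generators can be illustrated with a later algorithm in the same family. A minimal sketch using the Marsaglia-Tsang squeeze method, which postdates this 1978 survey and stands in here for the named algorithms (Ahrens-Dieter, Cheng, etc.):

    ```python
    import math
    import random

    def gamma_variate(shape, rng=random):
        """Marsaglia-Tsang squeeze method for a unit-scale gamma variate.

        Illustrative only; the surveyed algorithms use the same
        acceptance-rejection principle with different envelopes.
        """
        if shape < 1.0:
            # Boosting identity: Gamma(a) = Gamma(a + 1) * U^(1/a)
            return gamma_variate(shape + 1.0, rng) * rng.random() ** (1.0 / shape)
        d = shape - 1.0 / 3.0
        c = 1.0 / math.sqrt(9.0 * d)
        while True:
            x = rng.gauss(0.0, 1.0)
            v = (1.0 + c * x) ** 3
            if v <= 0.0:
                continue                     # candidate outside the support
            u = rng.random()
            if u < 1.0 - 0.0331 * x ** 4:    # cheap squeeze test
                return d * v
            if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
                return d * v                 # full acceptance test

    random.seed(1)
    samples = [gamma_variate(2.5) for _ in range(20000)]
    mean = sum(samples) / len(samples)       # E[Gamma(2.5, 1)] = 2.5
    ```

    The squeeze test accepts most candidates without evaluating a logarithm, which is the main source of the method's speed.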

  1. Time Course of Visual Extrapolation Accuracy

    DTIC Science & Technology

    1995-09-01

    The pond and duckweed problem: Three experiments on the misperception of exponential growth . Acta Psychologica 43, 239-251. Wiener, E.L., 1962...random variation in tracker velocity. Both models predicted changes in hit and false alarm rates well, except in a condition where response asymmetries...systematic velocity error in tracking, only random variation in tracker velocity. Both models predicted changes in hit and false alarm rates well

  2. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.

  3. Randomised controlled trial of whether erotic material is required for semen collection: impact of informed consent on outcome.

    PubMed

    Handelsman, D J; Sivananathan, T; Andres, L; Bathur, F; Jayadev, V; Conway, A J

    2013-11-01

    Semen is collected to evaluate male fertility or to cryostore sperm, preferentially in laboratories, but such collection facilities have no standard fit-out. It is widely believed, but untested, that providing erotic material (EM) is required to collect semen by masturbation in the unfamiliar environment. To test this assumption, 1520 men (1046 undergoing fertility evaluation, 474 sperm cryostorage, providing 1932 semen collection episodes) consecutively attending the semen laboratory of a major metropolitan teaching hospital for semen analysis were eligible for randomization to be provided, or not, with printed EM (X-rated, soft-core magazines) during semen collection. Randomization was performed by providing magazines in the collection rooms (as a variation on the non-standard fit-out) on alternate weeks using a schedule concealed from participants. In the pilot study, men were randomized without seeking consent. In the second part of the study, which continued on from the first without interruption, an approved informed consent procedure was added. The primary outcome, the time to collect semen, defined as the time from receiving to returning the sample receptacle, was significantly longer (by ~6%, 14.9 ± 0.3 [mean ± standard error of mean] vs. 14.0 ± 0.2 minutes, p = 0.02) among men provided with EM than among those randomized to not being provided with it. There was no significant increase in the failure to collect semen samples (2.6% overall), nor any difference in age, semen volume, or sperm concentration, output or motility according to whether EM was provided or not. The significantly longer time to collect was evident in the pilot study and the study overall, but not in the main study where the informed consent procedure was used. This study provides evidence that refutes the assumption that EM needs to be provided for semen collection in a laboratory. 
It also provides an example of a usually unobservable participation bias influencing the outcome of a randomized controlled trial. © 2013 American Society of Andrology and European Academy of Andrology.

  4. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms

    PubMed Central

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
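    The output-filtering step described above can be sketched as a sliding-window majority vote over per-minute mode labels. A minimal stand-in for the paper's moving-average filter; the window length is an assumption, not taken from the study:

    ```python
    from collections import Counter

    def smooth_predictions(labels, window=5):
        """Majority-vote smoothing of per-window classifier outputs.

        Removes short spurious label flips, analogous to the moving
        average filter applied to the random forest's predictions.
        """
        half = window // 2
        smoothed = []
        for i in range(len(labels)):
            lo, hi = max(0, i - half), min(len(labels), i + half + 1)
            smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
        return smoothed

    # A lone "bike" misclassification inside a walking bout is voted away.
    raw = ["walk", "walk", "bike", "walk", "walk",
           "vehicle", "vehicle", "walk", "vehicle"]
    clean = smooth_predictions(raw)
    ```

    Smoothing exploits the fact that travel modes persist over minutes, so isolated disagreements with the neighbors are almost always classifier noise.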

  5. Supplementation with a Polyphenol-Rich Extract, PerfLoad®, Improves Physical Performance during High-Intensity Exercise: A Randomized, Double Blind, Crossover Trial

    PubMed Central

    Cases, Julien; Romain, Cindy; Marín-Pagán, Cristian; Chung, Linda H.; Rubio-Pérez, José Miguel; Laurent, Caroline; Gaillet, Sylvie; Prost-Camus, Emmanuelle; Prost, Michel; Alcaraz, Pedro E.

    2017-01-01

    Workout capacity is driven by energy production. To produce peak metabolic power outputs, the organism relies predominantly on anaerobic metabolism, but this undoubtedly has a negative and limiting impact on muscle function and performance. The aim of the study was to evaluate whether an innovative polyphenol-based food supplement, PerfLoad®, was able to improve metabolic homeostasis and physical performance during high-intensity exercise under anaerobic conditions. The effect of supplementation was investigated in fifteen recreationally active male athletes during a randomized, double-blind, crossover clinical investigation. The Wingate test, an inducer of an unbalanced metabolism associated with oxidative stress, was used to assess maximum anaerobic power during a high-intensity exercise on a cycle ergometer. Supplementation with PerfLoad® correlated with a significant increase in total power output (5%), maximal peak power output (3.7%), and average power developed (5%), without inducing more fatigue or a greater heart rate. Instead, oxidative homeostasis was stabilized in supplemented subjects. These results demonstrate that PerfLoad® is a natural and efficient solution capable of helping athletes improve their physical performance, similarly to training benefits, while balancing their metabolism and reducing exercise-induced oxidative stress. PMID:28441760

  6. Design and implementation of a random neural network routing engine.

    PubMed

    Kocak, T; Seeber, J; Terzioglu, H

    2003-01-01

    The random neural network (RNN) is an analytically tractable spiked neural network model that has been implemented in software for a wide range of applications for over a decade. This paper presents a hardware implementation of the RNN model. Recently, the cognitive packet network (CPN) has been proposed as an alternative packet-network architecture in which there is no routing table; instead, RNN-based reinforcement learning is used to route packets. In particular, we describe implementation details for the RNN-based routing engine of a CPN network processor chip: the smart packet processor (SPP). The SPP is a dual-port device that stores, modifies, and interprets the defining characteristics of multiple RNN models. In addition to hardware design improvements over the software implementation, such as the dual-access memory, output calculation step, and reduced output calculation module, this paper introduces a major modification to the reinforcement learning algorithm used in the original CPN specification such that the number of weight terms is reduced from 2n² to 2n. This not only yields significant memory savings, but also simplifies the calculations for the steady-state probabilities (neuron outputs in the RNN). Simulations have been conducted to confirm proper functionality for the isolated SPP design as well as for multiple SPPs in a networked environment.

  7. Modelling the minislump spread of superplasticized PPC paste using RLS with the application of Random Kitchen sink

    NASA Astrophysics Data System (ADS)

    Sathyan, Dhanya; Anand, K. B.; Jose, Chinnu; Aravind, N. R.

    2018-02-01

    Superplasticizers (SPs) are added to concrete to improve its workability without changing the water-cement ratio. The properties of fresh concrete are mainly governed by the cement paste, which depends on the dispersion of cement particles. The cement-dispersing ability of an SP depends on its dosage and family. The mini-slump spread diameter at different dosages and families of SP is taken as the measure of the workability of the cement paste and is used to characterize its rheological properties. The main purposes of this study are to measure the dispersive ability of different families of SP by conducting the mini-slump test, and to model the mini-slump spread diameter of superplasticized Portland Pozzolona Cement (PPC) paste using a regularized least squares (RLS) approach together with the random kitchen sink (RKS) algorithm. To prepare test and training data for the model, 287 different mixes were prepared in the laboratory at a water-cement ratio of 0.37, using four locally available brands of PPC and SPs belonging to four different families. Water content, cement weight, and amount of SP (treated as seven separate inputs based on family and brand) were the input parameters, and mini-slump spread diameter was the output parameter. The predicted and measured spread diameters were compared and validated. The study shows that the model can effectively predict the mini-slump spread of cement paste.
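    The RLS-with-RKS modelling step can be sketched as random Fourier features followed by ridge regression. A toy 1-D input stands in for the paper's mix-design inputs; all numbers are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a smooth nonlinear response plus noise (the real model
    # mapped water content, cement weight, and SP dosages to spread diameter).
    x = rng.uniform(0.0, 3.0, size=(120, 1))
    y = np.sin(2.0 * x[:, 0]) + 0.05 * rng.standard_normal(120)

    def rks_features(X, W, b):
        """Random kitchen sink map: z(x) = sqrt(2/D) * cos(Xw + b)."""
        return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

    D = 300                                  # number of random features
    W = rng.standard_normal((1, D)) * 2.0    # random projection directions
    b = rng.uniform(0.0, 2.0 * np.pi, D)     # random phases

    # Regularized least squares on the randomized feature map.
    Z = rks_features(x, W, b)
    lam = 1e-3
    theta = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

    rmse = float(np.sqrt(np.mean((Z @ theta - y) ** 2)))
    ```

    The appeal of the combination is that the nonlinearity lives entirely in the fixed random features, so fitting reduces to solving one linear system.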

  8. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  9. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  10. Model Sensitivity and Use of the Comparative Finite Element Method in Mammalian Jaw Mechanics: Mandible Performance in the Gray Wolf

    PubMed Central

    Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes

    2011-01-01

    Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, masticatory muscle ratios and muscle subgroups present, and numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variation that arises during the model-building process. 
Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights on form and function. PMID:21559475

  11. Decomposing variation in dairy profitability: the impact of output, inputs, prices, labour and management.

    PubMed

    Wilson, P

    2011-08-01

    The UK dairy sector has undergone considerable structural change in recent years, with a decrease in the number of producers accompanied by an increased average herd size and increased concentrate use and milk yields. One of the key drivers to producers remaining in the industry is the profitability of their herds. The current paper adopts a holistic approach to decomposing the variation in dairy profitability through an analysis of net margin data explained by physical input-output measures, milk price variation, labour utilization and managerial behaviours and characteristics. Data are drawn from the Farm Business Survey (FBS) for England in 2007/08 for 228 dairy enterprises. Average yields are 7100 litres/cow/yr, from a herd size of 110 cows that use 0·56 forage ha/cow/yr and 43·2 labour h/cow/yr. An average milk price of 22·57 pence per litre (ppl) produced milk output of £1602/cow/yr, which after accounting for calf sales, herd replacements and quota leasing costs, gave an average dairy output of £1516/cow/yr. After total costs of £1464/cow/yr this left an economic return of £52/cow/yr (0·73 ppl) net margin profit. There is wide variation in performance, with the most profitable (as measured by net margin per cow) quartile of producers achieving 2000 litres/cow/yr more than the least profitable quartile, returning a net margin of £335/cow/yr compared to a loss of £361/cow/yr for the least profitable. The most profitable producers operate larger, higher yielding herds and achieve a greater milk price for their output. In addition, a significantly greater number of the most profitable producers undertake financial benchmarking within their businesses and operate specialist dairy farms. When examining the full data set, the most profitable enterprises included significantly greater numbers of organic producers. The most profitable tend to have a greater reliance on independent technical advice, but this finding is not statistically significant. 
Decomposing the variation in net margin performance between the most and least profitable groups, an approximate ratio of 65:23:12 is observed for higher yields: lower costs: higher milk price. This result indicates that yield differentials are the key performance driver in dairy profitability. Lower costs per cow are dominated by the significantly lower cost of farmer and spouse labour per cow of the most profitable group, flowing directly from the upper quartile expending 37·7 labour h/cow/yr in comparison with 58·8 h/cow/yr for the lower quartile. The upper quartile's greater milk price is argued to be achieved through contract negotiations and higher milk quality, and this accounts for 0·12 of the variation in net margin performance. The average economic return to the sample of dairy enterprises in this survey year was less than £6000/farm/yr. However, the most profitable quartile returned an average economic return of approximately £50 000 per farm/yr. Structural change in the UK dairy sector is likely to continue with the least profitable and typically smaller dairy enterprises being replaced by a smaller number of expanding dairy production units.

  12. Decomposing variation in dairy profitability: the impact of output, inputs, prices, labour and management

    PubMed Central

    WILSON, P.

    2011-01-01

    SUMMARY The UK dairy sector has undergone considerable structural change in recent years, with a decrease in the number of producers accompanied by an increased average herd size and increased concentrate use and milk yields. One of the key drivers to producers remaining in the industry is the profitability of their herds. The current paper adopts a holistic approach to decomposing the variation in dairy profitability through an analysis of net margin data explained by physical input–output measures, milk price variation, labour utilization and managerial behaviours and characteristics. Data are drawn from the Farm Business Survey (FBS) for England in 2007/08 for 228 dairy enterprises. Average yields are 7100 litres/cow/yr, from a herd size of 110 cows that use 0·56 forage ha/cow/yr and 43·2 labour h/cow/yr. An average milk price of 22·57 pence per litre (ppl) produced milk output of £1602/cow/yr, which after accounting for calf sales, herd replacements and quota leasing costs, gave an average dairy output of £1516/cow/yr. After total costs of £1464/cow/yr this left an economic return of £52/cow/yr (0·73 ppl) net margin profit. There is wide variation in performance, with the most profitable (as measured by net margin per cow) quartile of producers achieving 2000 litres/cow/yr more than the least profitable quartile, returning a net margin of £335/cow/yr compared to a loss of £361/cow/yr for the least profitable. The most profitable producers operate larger, higher yielding herds and achieve a greater milk price for their output. In addition, a significantly greater number of the most profitable producers undertake financial benchmarking within their businesses and operate specialist dairy farms. When examining the full data set, the most profitable enterprises included significantly greater numbers of organic producers. The most profitable tend to have a greater reliance on independent technical advice, but this finding is not statistically significant. 
Decomposing the variation in net margin performance between the most and least profitable groups, an approximate ratio of 65:23:12 is observed for higher yields: lower costs: higher milk price. This result indicates that yield differentials are the key performance driver in dairy profitability. Lower costs per cow are dominated by the significantly lower cost of farmer and spouse labour per cow of the most profitable group, flowing directly from the upper quartile expending 37·7 labour h/cow/yr in comparison with 58·8 h/cow/yr for the lower quartile. The upper quartile's greater milk price is argued to be achieved through contract negotiations and higher milk quality, and this accounts for 0·12 of the variation in net margin performance. The average economic return to the sample of dairy enterprises in this survey year was less than £6000/farm/yr. However, the most profitable quartile returned an average economic return of approximately £50 000 per farm/yr. Structural change in the UK dairy sector is likely to continue with the least profitable and typically smaller dairy enterprises being replaced by a smaller number of expanding dairy production units. PMID:22505774

  13. Pulmonary blood flow redistribution by increased gravitational force

    NASA Technical Reports Server (NTRS)

    Hlastala, M. P.; Chornuk, M. A.; Self, D. A.; Kallas, H. J.; Burns, J. W.; Bernard, S.; Polissar, N. L.; Glenny, R. W.

    1998-01-01

    This study was undertaken to assess the influence of gravity on the distribution of pulmonary blood flow (PBF) using increased inertial force as a perturbation. PBF was studied in unanesthetized swine exposed to -Gx (dorsal-to-ventral direction, prone position), where G is the magnitude of the force of gravity at the surface of the Earth, on the Armstrong Laboratory Centrifuge at Brooks Air Force Base. PBF was measured using 15-micron fluorescent microspheres, a method with markedly enhanced spatial resolution. Each animal was exposed randomly to -1, -2, and -3 Gx. Pulmonary vascular pressures, cardiac output, heart rate, arterial blood gases, and PBF distribution were measured at each G level. Heterogeneity of PBF distribution as measured by the coefficient of variation of PBF distribution increased from 0.38 +/- 0.05 to 0.55 +/- 0.11 to 0.72 +/- 0.16 at -1, -2, and -3 Gx, respectively. At -1 Gx, PBF was greatest in the ventral and cranial and lowest in the dorsal and caudal regions of the lung. With increased -Gx, this gradient was augmented in both directions. Extrapolation of these values to 0 G predicts a slight dorsal (nondependent) region dominance of PBF and a coefficient of variation of 0.22 in microgravity. Analysis of variance revealed that a fixed component (vascular structure) accounted for 81% and nonstructure components (including gravity) accounted for the remaining 19% of the PBF variance across the entire experiment (all 3 gravitational levels). The results are inconsistent with the predictions of the zone model.
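    The heterogeneity index used above, the coefficient of variation of regional flows, is simply the standard deviation divided by the mean. A toy illustration with hypothetical flow values, not the study's data:

    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV = population standard deviation / mean, the paper's
        index of pulmonary blood flow heterogeneity."""
        return statistics.pstdev(values) / statistics.fmean(values)

    # Hypothetical regional flows (arbitrary units); the second set is
    # more spread out, mimicking the reported rise in CV from -1 to -3 Gx.
    flow_1g = [0.9, 1.0, 1.1, 1.0]
    flow_3g = [0.4, 0.8, 1.2, 1.6]
    cv1 = coefficient_of_variation(flow_1g)
    cv3 = coefficient_of_variation(flow_3g)
    ```

    Because the CV is normalized by the mean, it compares the spread of flow across lung regions even when total cardiac output differs between G levels.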

  14. Temporal changes in randomness of bird communities across Central Europe.

    PubMed

    Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric

    2014-01-01

    Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.
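    The nugget estimate described above reduces to reading the y-intercept of a dissimilarity-versus-distance regression. A sketch on synthetic patch pairs; the numbers are chosen to echo, not reproduce, the reported mean of 0.63:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic pairs of forest patches: compositional dissimilarity rises
    # with distance; the intercept at zero distance is the "nugget", read
    # as the random component of community composition.
    dist = rng.uniform(1.0, 50.0, 200)                 # km between patches
    dissim = 0.63 + 0.004 * dist + 0.02 * rng.standard_normal(200)

    slope, nugget = np.polyfit(dist, dissim, 1)        # intercept = nugget
    ```

    Two patches at zero distance share an identical environment, so any remaining dissimilarity (the nugget) is attributed to stochastic assembly rather than environmental filtering.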

  15. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. 
Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
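    The model-selection step (comparing candidate distributions by AIC) can be sketched without the full DPLN likelihood. Here a normal and a lognormal fit are compared on synthetic lognormal data; the DPLN itself would need a custom four-parameter likelihood:

    ```python
    import math
    import random

    def normal_loglik(xs):
        """Maximized log-likelihood of a normal fit (MLE mean and variance)."""
        n = len(xs)
        mu = sum(xs) / n
        var = sum((x - mu) ** 2 for x in xs) / n
        return -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)

    def aic(loglik, k):
        """Akaike Information Criterion: 2k - 2*loglik (lower is better)."""
        return 2.0 * k - 2.0 * loglik

    random.seed(7)
    data = [random.lognormvariate(0.0, 0.5) for _ in range(2000)]

    ll_norm = normal_loglik(data)
    # Lognormal loglik = normal loglik of log-data minus the Jacobian term.
    ll_lognorm = (normal_loglik([math.log(x) for x in data])
                  - sum(math.log(x) for x in data))

    aic_norm = aic(ll_norm, k=2)
    aic_lognorm = aic(ll_lognorm, k=2)
    ```

    With skewed data the lognormal fit attains the lower AIC, the same criterion by which the paper ranks the DPLN against its six alternatives.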

  16. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; White, John A.

    2015-01-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971
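    The exponential leaky integrate-and-fire model referenced above can be sketched in a few lines; the parameter values here are illustrative and not fit to the stellate-cell recordings:

    ```python
    import math

    def simulate_eif(i_drive, delta_t, t_max=500.0, dt=0.1):
        """Euler simulation of an exponential integrate-and-fire neuron.

        dV/dt = (EL - V + delta_t * exp((V - VT)/delta_t) + i_drive) / tau
        delta_t sets the sharpness of spike onset; a shallow (large)
        delta_t corresponds to the paper's stellate-cell-like variant.
        """
        tau, EL, VT = 10.0, -70.0, -50.0       # ms, mV, mV
        v, spikes = EL, 0
        for _ in range(int(t_max / dt)):
            dv = (EL - v + delta_t * math.exp((v - VT) / delta_t) + i_drive) / tau
            v += dt * dv
            if v >= 0.0:                       # spike detected: count and reset
                spikes += 1
                v = EL
        return spikes

    # Suprathreshold constant drive produces regular firing; a weaker
    # drive leaves the membrane resting below threshold.
    n_spikes = simulate_eif(i_drive=25.0, delta_t=2.0)
    ```

    Adding a noise term to `dv` would reproduce the fluctuation-driven regime the study manipulates; the deterministic version above shows only the spike-generation mechanism.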

  17. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations.

    PubMed

    Fernandez, Fernando R; Malerba, Paola; White, John A

    2015-04-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances.

  18. Listening to music affects diurnal variation in muscle power output.

    PubMed

    Chtourou, H; Chaouachi, A; Hammouda, O; Chamari, K; Souissi, N

    2012-01-01

    The purpose of this investigation was to assess the effects of listening to music while warming up on the diurnal variation of power output during the Wingate test. 12 physical education students underwent four Wingate tests at 07:00 and 17:00 h, after 10 min of warm-up with and without listening to music. The warm-up consisted of 10 min of pedalling at a constant pace of 60 rpm against a light load of 1 kg. During the Wingate test, peak and mean power were measured. The main finding was that peak and mean power improved from morning to afternoon after the no-music warm-up (p<0.001 and p<0.01, respectively). These diurnal variations disappeared for mean power and persisted with an attenuated morning-evening difference (p<0.05) for peak power after the music warm-up. Moreover, peak and mean power were significantly higher after the music than the no-music warm-up at both times of testing. Thus, as a legal method and an additional aid, music should be used during warm-up before performing activities requiring powerful lower-limb muscle contractions, especially in morning competitive events. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Improving Pyroelectric Energy Harvesting Using a Sandblast Etching Technique

    PubMed Central

    Hsiao, Chun-Ching; Siao, An-Shen

    2013-01-01

    Large amounts of low-grade heat are emitted by various industries and exhausted into the environment. This heat energy can be used as a free source for pyroelectric power generation. A three-dimensional pattern helps to improve the temperature variation rates in pyroelectric elements by means of lateral temperature gradients induced on the sidewalls of the responsive elements. A novel method using sandblast etching is successfully applied in fabricating the complex pattern of a vortex-like electrode. Both experiment and simulation show that the proposed design of the vortex-like electrode improved the electrical output of the pyroelectric cells and enhanced the efficiency of pyroelectric harvesting converters. A three-dimensional finite element model is generated by commercial software for solving the transient temperature fields and exploring the temperature variation rate in the PZT pyroelectric cells with various designs. The vortex-like type has a larger temperature variation rate than the fully covered type, by about 53.9%. The measured electrical output of the vortex-like electrode exhibits an obvious increase in the generated charge and the measured current, as compared to the fully covered electrode, by about 47.1% and 53.1%, respectively. PMID:24025557

  20. A Strip Cell in Pyroelectric Devices

    PubMed Central

    Siao, An-Shen; Chao, Ching-Kong; Hsiao, Chun-Ching

    2016-01-01

    The pyroelectric effect affords the opportunity to convert temporal temperature fluctuations into usable electrical energy in order to harvest abundantly available waste heat. A strip pyroelectric cell, used to enhance temperature variation rates by lateral temperature gradients and to reduce cell capacitance to further promote the induced voltage, is described as a means of improving pyroelectric energy transformation. A precision dicing saw was successfully applied in fabricating the pyroelectric cell with a strip form. The strip pyroelectric cell with a high-narrow cross section is able to greatly absorb thermal energy via the side walls of the strips, thereby inducing lateral temperature gradients and increasing temperature variation rates in a thicker pyroelectric cell. Both simulation and experimentation show that the strip pyroelectric cell improves the electrical outputs of pyroelectric cells and enhances the efficiency of pyroelectric harvesters. The strip-type pyroelectric cell has a larger temperature variation when compared to the trenched electrode and the original type, by about 1.9 and 2.4 times, respectively. The measured electrical output of the strip type demonstrates a conspicuous increase in stored energy as compared to the trenched electrode and the original type, by about 15.6 and 19.8 times, respectively. PMID:26999134

  1. Evaluation of selected strapdown inertial instruments and pulse torque loops, volume 1

    NASA Technical Reports Server (NTRS)

    Sinkiewicz, J. S.; Feldman, J.; Lory, C. B.

    1974-01-01

    Design, operational and performance variations between ternary, binary and forced-binary pulse torque loops are presented. A fill-in binary loop which combines the constant power advantage of binary with the low sampling error of ternary is also discussed. The effects of different output-axis supports on the performance of a single-degree-of-freedom, floated gyroscope under a strapdown environment are illustrated. Three types of output-axis supports are discussed: pivot-dithered jewel, ball bearing and electromagnetic. A test evaluation on a Kearfott 2544 single-degree-of-freedom, strapdown gyroscope operating with a pulse torque loop, under constant rates and angular oscillatory inputs is described and the results presented. Contributions of the gyroscope's torque generator and the torque-to-balance electronics on scale factor variation with rate are illustrated for a SDF 18 IRIG Mod-B strapdown gyroscope operating with various pulse rebalance loops. Also discussed are methods of reducing this scale factor variation with rate by adjusting the tuning network which shunts the torque coil. A simplified analysis illustrating the principles of operation of the Teledyne two-degree-of-freedom, elastically-supported, tuned gyroscope and the results of a static and constant rate test evaluation of that instrument are presented.

  2. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
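
    The saliency-variation selection step can be sketched in a few lines. The per-pixel deviation-from-mean measure below is a crude, hypothetical stand-in for the paper's three-channel (intensity, color, temporal) attention model; only the consecutive-frame variation and ranking logic mirror the described procedure.

```python
def intensity_saliency(frame):
    """Crude saliency proxy: each pixel's absolute deviation from the
    frame mean (a stand-in for a real bottom-up saliency map)."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [[abs(p - mean) for p in row] for row in frame]

def saliency_variation(map_a, map_b):
    """Global variation of salient information between two frames."""
    return sum(abs(a - b) for ra, rb in zip(map_a, map_b)
               for a, b in zip(ra, rb))

def select_keyframes(frames, top_k=3):
    """Rank frame transitions by saliency variation and return, in
    order, the indices of the top_k frames following the largest changes."""
    maps = [intensity_saliency(f) for f in frames]
    variation = [saliency_variation(maps[i - 1], maps[i])
                 for i in range(1, len(maps))]
    ranked = sorted(range(len(variation)),
                    key=lambda i: variation[i], reverse=True)
    return sorted(i + 1 for i in ranked[:top_k])
```

    For example, a sequence of identical frames with a bright patch appearing midway yields exactly one large saliency variation, and the frame after that transition is selected as the key-frame.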

  3. Automated array assembly task, phase 1

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1977-01-01

    State-of-the-art technologies applicable to silicon solar cell and solar cell module fabrication were assessed. The assessment consisted of a technical feasibility evaluation and a cost projection for high volume production of solar cell modules. Design equations based on minimum power loss were used as a tool in the evaluation of metallization technologies. A solar cell process sensitivity study using models, computer calculations, and experimental data was used to identify process step variation and cell output variation correlations.

  4. Microclimatic temperatures at Danish cattle farms, 2000-2016: quantifying the temporal and spatial variation in the transmission potential of Schmallenberg virus.

    PubMed

    Haider, Najmul; Cuellar, Ana Carolina; Kjær, Lene Jung; Sørensen, Jens Havskov; Bødker, Rene

    2018-03-05

    Microclimatic temperatures provide better estimates of vector-borne disease transmission parameters than standard meteorological temperatures, as the microclimate represents the actual temperatures to which the vectors are exposed. The objectives of this study were to quantify farm-level geographic variations and temporal patterns in the extrinsic incubation period (EIP) of Schmallenberg virus transmitted by Culicoides in Denmark through generation of microclimatic temperatures surrounding all Danish cattle farms. We calculated the hourly microclimatic temperatures at potential vector-resting sites within a 500 m radius of 22,004 Danish cattle farms for the months April to November from 2000 to 2016. We then modeled the daily EIP of Schmallenberg virus at each farm, assuming vectors choose resting sites either randomly or based on temperatures (warmest or coolest available) every hour. The results of the model output are presented as 17-year averages. The difference between the warmest and coolest microhabitats at the same farm was on average 3.7 °C (5th and 95th percentiles: 1.0 °C to 7.8 °C). The mean EIP of Schmallenberg virus (5th and 95th percentiles) for all cattle farms during spring, summer, and autumn was: 23 (18-33), 14 (12-18) and 51 (48-55) days, respectively, assuming Culicoides select resting sites randomly. These estimated EIP values were considerably shorter than those estimated using standard meteorological temperatures obtained from a numerical weather prediction model for the same periods: 43 (39-52), 21 (17-24) and 57 (55-58) days, respectively. When assuming that vectors actively select the coolest resting sites at a farm, the EIP was 2.3 (range: 1.1 to 4.1) times longer compared to that of the warmest sites at the same farm. We estimated a wide range of EIP in different microclimatic habitats surrounding Danish cattle farms, stressing the importance of identifying the specific resting sites of vectors when modeling vector-borne disease transmission. We found a large variation in the EIP among different farms, suggesting disease transmission may vary substantially between regions, even within a small country. Our findings could be useful for designing risk-based surveillance, and in the control and prevention of emerging and re-emerging vector-borne diseases.
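
    One standard way to turn hourly microclimatic temperatures into an EIP estimate is rate summation: accumulate a temperature-dependent development rate hour by hour until the total reaches 1. The sketch below takes the thermal response as a caller-supplied function; it is a generic illustration, not the specific Schmallenberg virus model used in the study.

```python
def eip_days(hourly_temps, rate):
    """Rate-summation estimate of the extrinsic incubation period:
    accumulate the per-day development rate(T), scaled to hours, until
    cumulative development reaches 1. Returns elapsed time in days, or
    None if the EIP is not completed within the temperature series.
    The rate function is a user-supplied, hypothetical thermal response."""
    total = 0.0
    for hour, temp in enumerate(hourly_temps, start=1):
        total += rate(temp) / 24.0
        if total >= 1.0:
            return hour / 24.0
    return None
```

    Because the rate is accumulated hour by hour, warm and cool microhabitats at the same farm yield different completion times from the same model, which is exactly the farm-level variation the study quantifies.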

  5. Quantitative Resistance: More Than Just Perception of a Pathogen.

    PubMed

    Corwin, Jason A; Kliebenstein, Daniel J

    2017-04-01

    Molecular plant pathology has focused on studying large-effect qualitative resistance loci that predominantly function in detecting pathogens and/or transmitting signals resulting from pathogen detection. By contrast, less is known about quantitative resistance loci, particularly the molecular mechanisms controlling variation in quantitative resistance. Recent studies have provided insight into these mechanisms, showing that genetic variation at hundreds of causal genes may underpin quantitative resistance. Loci controlling quantitative resistance contain some of the same causal genes that mediate qualitative resistance, but the predominant mechanisms of quantitative resistance extend beyond pathogen recognition. Indeed, most causal genes for quantitative resistance encode specific defense-related outputs such as strengthening of the cell wall or defense compound biosynthesis. Extending previous work on qualitative resistance to focus on the mechanisms of quantitative resistance, such as the link between perception of microbe-associated molecular patterns and growth, has shown that the mechanisms underlying these defense outputs are also highly polygenic. Studies that include genetic variation in the pathogen have begun to highlight a potential need to rethink how the field considers broad-spectrum resistance and how it is affected by genetic variation within pathogen species and between pathogen species. These studies are broadening our understanding of quantitative resistance and highlighting the potentially vast scale of the genetic basis of quantitative resistance. © 2017 American Society of Plant Biologists. All rights reserved.

  6. Cure fraction model with random effects for regional variation in cancer survival.

    PubMed

    Seppä, Karri; Hakulinen, Timo; Kim, Hyon-Jung; Läärä, Esa

    2010-11-30

    Assessing regional differences in the survival of cancer patients is important but difficult when separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain, when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in Metropolis steps. The random effects allowed the fitting of the cure fraction model to the sparse regional data and the estimation of the regional variation in 10-year cause-specific breast cancer survival with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all the 21 hospital districts have achieved approximately the same level of survival. Copyright © 2010 John Wiley & Sons, Ltd.
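
    The mixture cure fraction model at the core of this analysis writes population survival as S(t) = π + (1 − π)·S_u(t), where π is the cure fraction and S_u the survival function of the non-cured patients. A minimal sketch, assuming a Weibull S_u and a logit-scale random effect for region; both distributional choices here are illustrative, not the paper's fitted model.

```python
import math

def mixture_cure_survival(t, cure_frac, shape=1.2, scale=5.0):
    """Population survival under a mixture cure model,
    S(t) = pi + (1 - pi) * S_u(t), with a Weibull survival
    function for the non-cured patients (shape/scale illustrative)."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return cure_frac + (1.0 - cure_frac) * s_uncured

def regional_cure_fraction(baseline_logit, region_effect):
    """Region-specific cure fraction: a random effect added on the
    logit scale, mirroring the hierarchical structure described."""
    x = baseline_logit + region_effect
    return 1.0 / (1.0 + math.exp(-x))
```

    As t grows, S(t) flattens at the cure fraction π, which is what lets long-term survivors be separated from the survival of the non-cured; regional random effects shift π on the logit scale.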

  7. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design.

    PubMed

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances, tiny perturbations of these could potentially cause considerable variations in flight characteristics. This research considers the impact of varying these parameters altogether. This is a generalization of examining the effects of particular parameters on selected modes present in existing literature. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for each channel, which are utilized in parallel in actual flight. It is demonstrated that an attitude controller built like this can function flawlessly on separate nominal cases, but can become unstable with a perturbation of no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, specifically loop-shaping and μ-synthesis, are outlined as potential substitutes and are observed to handle large parametric changes of 30% while preserving decent performance. Duplicating the loop-shaping procedure for the outer loop, a complete flight control system is formed. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing when exposed to high parametric variations and strong winds.

  8. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design

    PubMed Central

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances, tiny perturbations of these could potentially cause considerable variations in flight characteristics. This research considers the impact of varying these parameters altogether. This is a generalization of examining the effects of particular parameters on selected modes present in existing literature. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for each channel, which are utilized in parallel in actual flight. It is demonstrated that an attitude controller built like this can function flawlessly on separate nominal cases, but can become unstable with a perturbation of no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, specifically loop-shaping and μ-synthesis, are outlined as potential substitutes and are observed to handle large parametric changes of 30% while preserving decent performance. Duplicating the loop-shaping procedure for the outer loop, a complete flight control system is formed. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing when exposed to high parametric variations and strong winds. PMID:27783706

  9. Gas-laser behavior in a low-gravity environment

    NASA Technical Reports Server (NTRS)

    Owen, R. B.

    1981-01-01

    In connection with several experiments proposed for flight on the Space Shuttle, which involve the use of gas lasers, the behavior of a He-Ne laser in a low-gravity environment has been studied theoretically and experimentally in a series of flight tests using a low-gravity-simulation aircraft. No fluctuation in laser output above the noise level of the meter (1 part in 1000 for 1 hr) was observed during the low-gravity portion of the flight tests. The laser output gradually increased by 1.4% during a 1.5-hr test; at no time were rapid variations observed in the laser output. A maximum laser instability of 1 part in 100 was observed during forty low-gravity parabolic maneuvers. The beam remained Gaussian throughout the tests and no lobe patterns were observed.

  10. Practical applications of current loop signal conditioning

    NASA Astrophysics Data System (ADS)

    Anderson, Karl F.

    1994-10-01

    This paper describes a variety of practical application circuits based on the current loop signal conditioning paradigm. Equations defining the circuit response are also provided. The constant current loop is a fundamental signal conditioning circuit concept that can be implemented in a variety of configurations for resistance-based transducers, such as strain gages and resistance temperature devices. The circuit features signal conditioning outputs which are unaffected by extremely large variations in lead wire resistance, direct current frequency response, and inherent linearity with respect to resistance change. Sensitivity of this circuit is double that of a Wheatstone bridge circuit. Electrical output is zero for resistance change equals zero. The same excitation and output sense wires can serve multiple transducers. More application arrangements are possible with constant current loop signal conditioning than with the Wheatstone bridge.
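
    The claimed doubling of sensitivity relative to a Wheatstone bridge can be checked numerically. The sketch below compares an ideal current-loop output V = I·ΔR with a quarter-bridge output, under the idealized assumption that the sensing gauge carries the same current in both circuits; component values in the usage note are illustrative.

```python
def loop_output(i_exc, delta_r):
    """Constant-current-loop output: V = I * dR. Exactly linear in dR,
    zero for dR = 0, and independent of lead-wire resistance."""
    return i_exc * delta_r

def quarter_bridge_output(v_exc, r, delta_r):
    """Quarter Wheatstone bridge output for one active gauge R + dR:
    V = V_exc * dR / (2 * (2R + dR))."""
    return v_exc * delta_r / (2.0 * (2.0 * r + delta_r))
```

    For a 350-ohm gauge with dR = 0.35 ohm and matched gauge current (I = V_exc / 2R), the ratio of the two outputs comes out very close to 2; the loop is also exactly linear, while the bridge carries a small dR-dependent nonlinearity in its denominator.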

  11. Closed Loop solar array-ion thruster system with power control circuitry

    NASA Technical Reports Server (NTRS)

    Gruber, R. P. (Inventor)

    1979-01-01

    A power control circuit connected between a solar array and an ion thruster receives voltage and current signals from the solar array. The control circuit multiplies the voltage and current signals together to produce a power signal which is differentiated with respect to time. The differentiator output is detected by a zero crossing detector and, after suitable shaping, the detector output is phase compared with a clock in a phase demodulator. An integrator receives no output from the phase demodulator when the operating point is at the maximum power but is driven toward the maximum power point for non-optimum operation. A ramp generator provides minor variations in the beam current reference signal produced by the integrator in order to obtain the first derivative of power.
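
    The control law described, which drives the operating point until the derivative of power crosses zero, can be illustrated with a software hill-climbing analogue. The power curve, step size and iteration count below are hypothetical; the patent implements the same idea in analog hardware with a multiplier, differentiator, zero-crossing detector and integrator.

```python
def track_mpp(power_curve, v0, step=0.1, iters=200):
    """Hill-climbing analogue of the described control loop: step the
    operating point and reverse direction whenever the power change
    (a discrete dP/dt) goes negative. At the maximum, the derivative
    crosses zero and the tracker dithers around the peak."""
    v, dv = v0, step
    p_prev = power_curve(v)
    for _ in range(iters):
        v += dv
        p = power_curve(v)
        if p < p_prev:      # sign change of dP: passed the peak, reverse
            dv = -dv
        p_prev = p
    return v
```

    On a concave power curve the tracker climbs to the peak and then oscillates within one step of it, which corresponds to the patent's ramp-generator dither around the maximum power point.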

  12. Current loop signal conditioning: Practical applications

    NASA Technical Reports Server (NTRS)

    Anderson, Karl F.

    1995-01-01

    This paper describes a variety of practical application circuits based on the current loop signal conditioning paradigm. Equations defining the circuit response are also provided. The constant current loop is a fundamental signal conditioning circuit concept that can be implemented in a variety of configurations for resistance-based transducers, such as strain gages and resistance temperature detectors. The circuit features signal conditioning outputs which are unaffected by extremely large variations in lead wire resistance, direct current frequency response, and inherent linearity with respect to resistance change. Sensitivity of this circuit is double that of a Wheatstone bridge circuit. Electrical output is zero for resistance change equals zero. The same excitation and output sense wires can serve multiple transducers. More application arrangements are possible with constant current loop signal conditioning than with the Wheatstone bridge.

  13. Phase-locking of an axisymmetric-fold combination cavity CO2 laser using the back surface of the output-mirror

    NASA Astrophysics Data System (ADS)

    Xu, Yonggen; Li, Yude; Feng, Ting; Qiu, Yi

    2009-12-01

    The principle of phase-locking of an axisymmetric fold combination cavity CO2 laser, fulfilled by the reflection-injection of the back surface of the output-mirror, has been studied in detail. Variation of the equiphase surface and the influence of some characteristic parameters on phase-locking are analyzed—for example, phase error, changes in the cavity length and curvature radius, line-width and temperature. It is shown that the injected beam can excite a stable mode in the cavities, and the value of the energy coupling coefficient directly reflects the degree of phase-locking. Therefore, the output beams have a fixed phase relation between each other, and good coherent beams can be obtained by using the phase-locking method.

  14. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
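
    The voting step described above reduces to a majority function over the versions' outputs. A minimal sketch, assuming outputs are exactly comparable; real voters for floating-point results need tolerance-based comparison, which the exact-equality check here sidesteps.

```python
from collections import Counter

def majority_vote(outputs):
    """N-version voter: return the value produced by more than half of
    the versions, or None when no majority exists (a detected but
    unresolvable disagreement)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None
```

    With 27 versions as in the experiment, a fault in a minority of versions is masked, while correlated faults across a majority, which the study was designed to probe, defeat the voter.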

  15. [Rice water with and without electrolytes in diarrhea with a high stool output].

    PubMed

    Mota-Hernández, F; Posadas-Tello, N M; Rodríguez-Leyva, G

    1993-12-01

    The objective of the study was to determine the efficacy and safety of two rice-based oral rehydration solutions, with and without added electrolytes, in children presenting acute diarrheal dehydration with high stool output (> 10 mL/kg/h) during a two-hour rehydration period. Twenty-two patients of one to 18 months old were recruited and randomly distributed into two groups: group A received the rice-based solution without electrolytes, and group B received the rice-based solution with electrolytes. A decrease in stool output was observed in both groups, and rehydration was achieved in 4.0 +/- 0.9 hours in 21 patients from group A and in 4.6 +/- 0.9 hours in 13 patients from group B. There was no statistically significant difference between the groups regarding the laboratory results. The rice-based oral rehydration solution without added electrolytes was useful for rehydration of children presenting high stool output, after administering the WHO/ORS recommended formula during a two-hour period.

  16. Analysis of trait mean and variability versus temperature in trematode cercariae: is there scope for adaptation to global warming?

    PubMed

    Studer, A; Poulin, R

    2014-05-01

    The potential of species for evolutionary adaptation in the context of global climate change has recently come under scrutiny. Estimates of phenotypic variation in biological traits may prove valuable for identifying species, or groups of species, with greater or lower potential for evolutionary adaptation, as this variation, when heritable, represents the basis for natural selection. Assuming that measures of trait variability reflect the evolutionary potential of these traits, we conducted an analysis across trematode species to determine the potential of these parasites as a group to adapt to increasing temperatures. Firstly, we assessed how the mean number of infective stages (cercariae) emerging from infected snail hosts as well as the survival and infectivity of cercariae are related to temperature. Secondly and importantly in the context of evolutionary potential, we assessed how coefficients of variation for these traits are related to temperature, in both cases controlling for other factors such as habitat, acclimatisation, latitude and type of target host. With increasing temperature, an optimum curve was found for mean output and mean infectivity, and a linear decrease for survival of cercariae. For coefficients of variation, temperature was only an important predictor in the case of cercarial output, where results indicated that there is, however, no evidence for limited trait variation at the higher temperature range. No directional trend was found for either variation of survival or infectivity. These results, characterising general patterns among trematodes, suggest that all three traits considered may have potential to change through adaptive evolution. Copyright © 2014 Australian Society for Parasitology Inc. Published by Elsevier Ltd. All rights reserved.
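
    The variability measure analysed throughout is the coefficient of variation. As a quick reference (using the sample standard deviation):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean: the dimensionless
    variability measure compared across temperatures in the analysis."""
    return statistics.stdev(values) / statistics.fmean(values)
```

    Because the CV normalises spread by the mean, it allows variability in cercarial output, survival and infectivity to be compared across temperatures even when the trait means differ widely.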

  17. Evolutionary Perspective on Collective Decision Making

    NASA Astrophysics Data System (ADS)

    Farrell, Dene; Sayama, Hiroki; Dionne, Shelley D.; Yammarino, Francis J.; Wilson, David Sloan

    Team decision making dynamics are investigated from a novel perspective by shifting agency from decision makers to representations of potential solutions. We provide a new way to navigate social dynamics of collective decision making by interpreting decision makers as constituents of an evolutionary environment of an ecology of evolving solutions. We demonstrate distinct patterns of evolution with respect to three forms of variation: (1) Results with random variations in utility functions of individuals indicate that groups demonstrating minimal internal variation produce higher true utility values of group solutions and display better convergence; (2) analysis of variations in behavioral patterns within a group shows that a proper balance between selective and creative evolutionary forces is crucial to producing adaptive solutions; and (3) biased variations of the utility functions diminish the range of variation for potential solution utility, leaving only the differential of convergence performance static. We generally find that group cohesion (low random variation within a group) and composition (appropriate variation of behavioral patterns within a group) are necessary for a successful navigation of the solution space, but performance in both cases is susceptible to group level biases.

  18. A blackbody-pumped CO2-N2 transfer laser

    NASA Astrophysics Data System (ADS)

    Deyoung, R. J.; Higdon, N. S.

    1984-08-01

    A compact blackbody-pumped CO2-N2 transfer laser was constructed and the significant operating parameters were investigated. Lasing was achieved at 10.6 microns by passing preheated N2 through a 1.5-mm-diameter nozzle to a laser cavity where the N2 was mixed with CO2 and He. An intrinsic efficiency of 0.7 percent was achieved for an oven temperature of 1473 K and N2 oven pressure of 440 torr. The optimum laser cavity consisted of a back mirror with maximum reflectivity and an output mirror with 97.5-percent reflectivity. The optimum gas mixture was 1CO2/0.5He/6N2. The variation of laser output was measured as a function of oven temperature, nozzle diameter, N2 oven pressure, He and CO2 partial pressures, nozzle-to-oven separation, laser cell temperature, and output laser mirror reflectivity. With these parameters optimized, outputs approaching 1.4 watts were achieved.

  19. Experimental study on high-power all-fiber superfluorescent source operating near 980 nm

    NASA Astrophysics Data System (ADS)

    Ren, Yankun; Cao, Jianqiu; Ying, Hanyuan; Chen, Heng; Pan, Zhiyong; Du, Shaojun; Chen, Jinbao

    2018-07-01

    A high-power all-fiber superfluorescent source operating near 980 nm is experimentally studied with the help of a large-core distributed side-coupled cladding-pumped Yb-doped fiber. By optimizing the active fiber length and the angle cleaving of the output fiber facet, a 10 W all-fiber superfluorescent source operating near 980 nm is demonstrated for the first time, to the best of our knowledge. An 11.4 W combined 980 nm ASE power is obtained with a 9.3% slope efficiency and an 18 dB suppression of the ASE around 1030 nm. The output spectrum spans 973 nm to 982 nm with the 3 dB bandwidth around 3.5 nm. A 10.5 W output power with 13.1% slope efficiency is also obtained by changing the length of the active fiber. The variations of the output power and spectrum with the active fiber length and pump power are also investigated in the experiment.

  20. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer

    PubMed Central

    Miao, Zhidong; Liu, Dake

    2017-01-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To address these issues, this paper presents a novel adaptive impedance matching network (IMN) for IWPT systems. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to stay matched with the coils and to adjust the output power as the coil distance varies. A closed-loop control algorithm changes the capacitors continually, which compensates for mismatches and adjusts the output power simultaneously. The proposed adaptive IMN IWPT system operates at 125 kHz with 2 W delivered to the load. Compared with the series-resonant IWPT system and the fixed-IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies over a large range from 0.05 to 0.8 for 2 W output power. PMID:28763011

  1. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer.

    PubMed

    Miao, Zhidong; Liu, Dake; Gong, Chen

    2017-08-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To address these issues, this paper presents a novel adaptive impedance matching network (IMN) for IWPT systems. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to stay matched with the coils and to adjust the output power as the coil distance varies. A closed-loop control algorithm changes the capacitors continually, which compensates for mismatches and adjusts the output power simultaneously. The proposed adaptive IMN IWPT system operates at 125 kHz with 2 W delivered to the load. Compared with the series-resonant IWPT system and the fixed-IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies over a large range from 0.05 to 0.8 for 2 W output power.
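
The closed-loop capacitor adjustment described above can be illustrated with a perturb-and-observe loop; the quadratic reflected-power model, the 100 pF optimum, and the step size below are hypothetical stand-ins, not the paper's actual algorithm or component values.

```python
# Hedged sketch of closed-loop impedance matching: step a matching capacitor
# until a (toy) reflected-power measurement stops decreasing.

def reflected_power(c, c_opt=100e-12):
    """Hypothetical reflected-power curve with its minimum at c_opt (100 pF)."""
    return (c - c_opt) ** 2

def tune(c=50e-12, step=1e-12, iters=200):
    """Perturb-and-observe: move c in whichever direction lowers reflection."""
    for _ in range(iters):
        if reflected_power(c + step) < reflected_power(c):
            c += step
        elif reflected_power(c - step) < reflected_power(c):
            c -= step
        else:
            break
    return c

c_matched = tune()  # settles within one step of the assumed optimum
```

The same loop structure applies whether the measured quantity is reflected power, input impedance mismatch, or output-power error.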

  2. Development of a Bolometer Detector System for the NIST High Accuracy Infrared Spectrophotometer

    PubMed Central

    Zong, Y.; Datla, R. U.

    1998-01-01

    A bolometer detector system was developed for the high accuracy infrared spectrophotometer at the National Institute of Standards and Technology to provide maximum sensitivity, spatial uniformity, and linearity of response covering the entire infrared spectral range. The spatial response variation was measured to be within 0.1 %. The linearity of the detector output was measured over three decades of input power. After applying a simple correction procedure, the detector output was found to deviate less than 0.2 % from linear behavior over this range. The noise equivalent power (NEP) of the bolometer system was 6 × 10−12 W/√Hz at a frequency of 80 Hz. The detector output 3 dB roll-off frequency was 200 Hz. The detector output was stable to within ± 0.05 % over a 15 min period. These results demonstrate that the bolometer detector system will serve as an excellent detector for the high accuracy infrared spectrophotometer. PMID:28009364

  3. A blackbody-pumped CO2-N2 transfer laser

    NASA Technical Reports Server (NTRS)

    Deyoung, R. J.; Higdon, N. S.

    1984-01-01

    A compact blackbody-pumped CO2-N2 transfer laser was constructed and the significant operating parameters were investigated. Lasing was achieved at 10.6 microns by passing preheated N2 through a 1.5-mm-diameter nozzle to a laser cavity where the N2 was mixed with CO2 and He. An intrinsic efficiency of 0.7 percent was achieved for an oven temperature of 1473 K and N2 oven pressure of 440 torr. The optimum laser cavity consisted of a back mirror with maximum reflectivity and an output mirror with 97.5-percent reflectivity. The optimum gas mixture was 1CO2/.5He/6N2. The variation of laser output was measured as a function of oven temperature, nozzle diameter, N2 oven pressure, He and CO2 partial pressures, nozzle-to-oven separation, laser cell temperature, and output laser mirror reflectivity. With these parameters optimized, outputs approaching 1.4 watts were achieved.

  4. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    classification algorithms: simple random resampling (RRS), equal-instance random resampling (ERS), and network cross-validation (NCV). The first two... NCV procedure that eliminates overlap between test sets altogether. The procedure samples k disjoint test sets that will be used for evaluation... (propLabeled ∗ S) nodes from trainPool; inferenceSet = network − trainSet; F = F ∪ <trainSet, testSet, inferenceSet>; end for; output: F. NCV addresses

  5. FPGA and USB based control board for quantum random number generator

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Wan, Xu; Zhang, Hong-Fei; Gao, Yuan; Chen, Teng-Yun; Liang, Hao

    2009-09-01

    The design and implementation of an FPGA- and USB-based control board for quantum experiments are discussed. The use of a quantum true random number generator, the control logic in the FPGA, and communication with a computer through the USB protocol are described in this paper. Programmable controlled signal input and output ports are implemented. Error detection for the data frame header and frame length is designed. This board has been used successfully in our decoy-state based quantum key distribution (QKD) system.
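
The frame header and frame-length checks mentioned above can be illustrated with a small parser; the two-byte header value and the length-field layout here are assumptions for illustration, not the board's actual frame format.

```python
# Hedged sketch of data-frame error detection: validate a fixed header and a
# declared payload length before accepting the payload. Layout is assumed.

HEADER = b"\xaa\x55"  # hypothetical frame-start marker

def make_frame(payload: bytes) -> bytes:
    """Header + 2-byte big-endian length field + payload."""
    return HEADER + len(payload).to_bytes(2, "big") + payload

def parse_frame(frame: bytes) -> bytes:
    """Return the payload, rejecting frames with a bad header or length."""
    if frame[:2] != HEADER:
        raise ValueError("bad frame header")
    length = int.from_bytes(frame[2:4], "big")
    if len(frame) != 4 + length:
        raise ValueError("bad frame length")
    return frame[4:]

payload = parse_frame(make_frame(b"\x01\x02\x03"))
```

A truncated or corrupted frame fails one of the two checks and is rejected instead of silently delivering bad random bytes.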

  6. Prolonged Reduction in Shoulder Strength after Transcutaneous Electrical Nerve Stimulation Treatment of Exercise-Induced Acute Muscle Pain.

    PubMed

    Butera, Katie A; George, Steven Z; Borsa, Paul A; Dover, Geoffrey C

    2018-03-05

    Transcutaneous electrical nerve stimulation (TENS) is commonly used for reducing musculoskeletal pain to improve function. However, peripheral nerve stimulation using TENS can alter muscle motor output. Few studies examine motor outcomes following TENS in a human pain model. Therefore, this study investigated the influence of TENS sensory stimulation primarily on motor output (strength) and secondarily on pain and disability following exercise-induced delayed-onset muscle soreness (DOMS). Thirty-six participants were randomized to a TENS treatment, TENS placebo, or control group after completing a standardized DOMS protocol. Measures included shoulder strength, pain, mechanical pain sensitivity, and disability. TENS treatment and TENS placebo groups received 90 minutes of active or sham treatment 24, 48, and 72 hours post-DOMS. All participants were assessed daily. A repeated measures analysis of variance and post-hoc analysis indicated that, compared to the control group, strength remained reduced in the TENS treatment group (48 hours post-DOMS, P < 0.05) and TENS placebo group (48 hours post-DOMS, P < 0.05; 72 hours post-DOMS, P < 0.05). A mixed-linear modeling analysis was conducted to examine the strength (motor) change. Randomization group explained 5.6% of between-subject strength variance (P < 0.05). Independent of randomization group, pain explained 8.9% of within-subject strength variance and disability explained 3.3% of between-subject strength variance (both P < 0.05). While active and placebo TENS resulted in prolonged strength inhibition, the results were nonsignificant for pain. Results indicated that higher pain and higher disability were independently related to decreased strength. Regardless of the impact on pain, TENS, or even the perception of TENS, may act as a nocebo for motor output. © 2018 World Institute of Pain.

  7. A time-domain digitally controlled oscillator composed of a free running ring oscillator and flying-adder

    NASA Astrophysics Data System (ADS)

    Wei, Liu; Wei, Li; Peng, Ren; Qinglong, Lin; Shengdong, Zhang; Yangyuan, Wang

    2009-09-01

    A time-domain digitally controlled oscillator (DCO) is proposed. The DCO is composed of a free-running ring oscillator (FRO) and a flying-adder (FA) with two integrated lap selectors. With a coiled cell array that allows uniform loading capacitances of the delay cells, the FRO produces 32 outputs with consistent tap spacing for the FA as reference clocks. The FA uses the outputs of the FRO to generate the DCO output according to the control number, resulting in a linear dependence of the output period, instead of the frequency, on the digital control word input. The proposed DCO thus ensures good conversion linearity in the time domain and is suitable for time-domain all-digital phase-locked loop applications. The DCO was implemented in a standard 0.13 μm digital logic CMOS process. The measurement results show that the DCO has a linear and monotonic tuning curve with a gain variation of less than 10%, and a very low root-mean-square period jitter of 9.3 ps in the output clocks. The DCO works well at supply voltages ranging from 0.6 to 1.2 V, and consumes 4 mW of power with a 500 MHz frequency output at a 1.2 V supply voltage.
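
The period-linear behavior described above can be illustrated numerically; the 2 ns ring period is an assumed value, and the model below captures only the linear control-word-to-period mapping, not the circuit details.

```python
# Hedged sketch: with 32 evenly spaced ring-oscillator taps, a flying-adder
# advances by `control_word` taps per output edge, so the output PERIOD (not
# the frequency) is linear in the control word.

TAPS = 32
RING_PERIOD = 2e-9               # assumed free-running ring period (2 ns)
TAP_SPACING = RING_PERIOD / TAPS

def dco_period(control_word: int) -> float:
    """Output period grows linearly with the digital control word."""
    return control_word * TAP_SPACING

# Equal control-word steps give equal period steps (time-domain linearity).
period_steps = [dco_period(n + 1) - dco_period(n) for n in range(8, 16)]
```

Because the period rather than the frequency is linear in the code, a time-domain all-digital PLL can treat the DCO gain as constant across the tuning range.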

  8. Variability of pulsed energy outputs from three dermatology lasers during multiple simulated treatments.

    PubMed

    Britton, Jason

    2018-01-20

    Dermatology laser treatments are undertaken at regional departments using lasers of different powers and wavelengths. In order to achieve good outcomes, there needs to be good consistency of laser output across different weeks as it is custom and practice to break down the treatments into individual fractions. Departments will also collect information from test patches to help decide on the most appropriate treatment parameters for individual patients. The objective of these experiments is to assess the variability of the energy outputs from a small number of lasers across multiple weeks at realistic parameters. The energy outputs from 3 lasers were measured at realistic treatment parameters using a thermopile detector across a period of 6 weeks. All lasers fired in single-pulse mode demonstrated good repeatability of energy output. In spite of one of the lasers being scheduled for a dye canister change in the next 2 weeks, there was good energy matching between the two devices with only a 4%-5% variation in measured energies. Based on the results presented, clinical outcomes should not be influenced by variability in the energy outputs of the dermatology lasers used as part of the treatment procedure. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Indicator system provides complete data of engine cylinder pressure variation

    NASA Technical Reports Server (NTRS)

    Mc Jones, R. W.; Morgan, N. E.

    1966-01-01

    A varying reference pressure is used together with a balanced pressure pickup (a diaphragm switch) to switch the electric output of the pressure transducer in a reference pressure line, obtaining precise engine cylinder pressure data from a high-speed internal combustion engine.

  10. Ultra-Low-Energy Sub-Threshold Circuits: Program Overview

    DTIC Science & Technology

    2007-04-10

    with global > 0.1 corner, but so does VUL, VIH... Global Variation... Output Swing Metrics... Need a... VIH. lines plot the VTCs when random local VT mismatch is applied to the inverter. In Figure 1(b), a NAND gate has sufficient output swing... since the VTC is input-dependent, all inputs are varied simultaneously to obtain the worst-case VIH and VIL... SNM side of largest inscribed square

  11. Mapping an operator's perception of a parameter space

    NASA Technical Reports Server (NTRS)

    Pew, R. W.; Jagacinski, R. J.

    1972-01-01

    Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.

  12. Calibration of a universal indicated turbulence system

    NASA Technical Reports Server (NTRS)

    Chapin, W. G.

    1977-01-01

    Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.

  13. Microwave power transmission system wherein level of transmitted power is controlled by reflections from receiver

    NASA Technical Reports Server (NTRS)

    Robinson, W. J., Jr. (Inventor)

    1974-01-01

    A microwave, wireless, power transmission system is described in which the transmitted power level is adjusted to correspond with power required at a remote receiving station. Deviations in power load produce an antenna impedance mismatch causing variations in energy reflected by the power receiving antenna employed by the receiving station. The variations in reflected energy are sensed by a receiving antenna at the transmitting station and used to control the output power of a power transmitter.

  14. Model of an axially strained weakly guiding optical fiber modal pattern

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1992-01-01

    Axial strain can be determined by monitoring the modal pattern variation of an optical fiber. The results of a numerical model developed to calculate the modal pattern variation at the end of a weakly guiding optical fiber under axial strain are presented. Whenever an optical fiber is under stress, the optical path length, the index of refraction, and the propagation constants of each fiber mode change. In consequence, the modal phase term for the fields and the fiber output pattern are also modified. For multimode fibers, very complicated patterns result. The predicted patterns are presented, and an expression for the phase variation with strain is derived.

  15. Uncertainty analysis of absorbed dose calculations from thermoluminescence dosimeters.

    PubMed

    Kirby, T H; Hanson, W F; Johnston, D A

    1992-01-01

    Thermoluminescence dosimeters (TLD) are widely used to verify absorbed doses delivered from radiation therapy beams. Specifically, they are used by the Radiological Physics Center for mailed dosimetry for verification of therapy machine output. The effects of the random experimental uncertainties of various factors on dose calculations from TLD signals are examined, including fading, dose response nonlinearity, and energy response corrections, as well as reproducibility of TL signal measurements and TLD reader calibration. Individual uncertainties are combined to estimate the total uncertainty due to random fluctuations. The Radiological Physics Center's (RPC) mail-out TLD system, which uses throwaway LiF powder to monitor high-energy photon and electron beam outputs, is analyzed in detail. The technique may also be applicable to other TLD systems. It is shown that statements of +/- 2% dose uncertainty and a +/- 5% action criterion for TLD dosimetry are reasonable when related to uncertainties in the dose calculations, provided the standard deviation (s.d.) of TL readings is 1.5% or better.
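
Combining individual random uncertainties into a total, as described above, is conventionally done in quadrature; the component values below are illustrative placeholders, not the RPC's actual figures.

```python
# Hedged sketch: independent 1-s.d. uncertainty components (in percent) are
# combined in quadrature to estimate the total random uncertainty.
import math

components = {            # illustrative values only, not the RPC's
    "fading": 0.8,
    "nonlinearity": 0.6,
    "energy_response": 0.5,
    "tl_reading": 1.5,
    "reader_calibration": 0.9,
}

total = math.sqrt(sum(u ** 2 for u in components.values()))
```

Quadrature addition is appropriate only when the components are independent; any correlated contributions would have to be combined linearly instead.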

  16. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to computer model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated under Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
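
The randomized thermal energies and emission angles can be sketched as below; the exponential energy model and the 0.1 eV temperature scale are simplifying assumptions for illustration, not NEDLab's actual sampling scheme.

```python
# Hedged sketch: draw emitted-ion energies from a kT-scaled exponential
# (thermal) distribution and launch angles uniformly about the surface normal.
import math, random

KT_EV = 0.1                      # assumed thermal energy scale in eV
rng = random.Random(42)

def emit():
    energy = rng.expovariate(1.0 / KT_EV)            # mean energy = KT_EV
    angle = rng.uniform(-math.pi / 2, math.pi / 2)   # randomized launch angle
    return energy, angle

samples = [emit() for _ in range(1000)]
mean_energy = sum(e for e, _ in samples) / len(samples)
```

Randomizing both quantities, rather than launching every ion normally with a single energy, is what lets the trajectory model reproduce realistic beam emittance.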

  17. Full-custom design of split-set data weighted averaging with output register for jitter suppression

    NASA Astrophysics Data System (ADS)

    Jubay, M. C.; Gerasta, O. J.

    2015-06-01

    A full-custom design of an element selection algorithm, named Split-set Data Weighted Averaging (SDWA), is implemented in a 90 nm CMOS Technology Synopsys Library. SDWA is applied to seven unit elements (3-bit) using a thermometer-coded input. Split-set DWA is an improved DWA algorithm that meets the requirement for randomization along with long-term equal element usage. Randomization and equal element usage improve the spectral response of the unit elements, yielding a higher spurious-free dynamic range (SFDR) without significantly degrading the signal-to-noise ratio (SNR). As a full-custom design, it is carried down to the transistor level and a custom chip layout is also provided, with a total area of 0.3 mm2, a power consumption of 0.566 mW, simulated at a 50 MHz clock frequency. In this implementation, SDWA is successfully derived and improved by introducing a register at the output that suppresses the jitter introduced at the final stage by switching loops and successive delays.
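
Plain data weighted averaging, which the split-set variant builds on, can be sketched with a rotating pointer over the unit elements; the split-set randomization itself is not reproduced here.

```python
# Hedged sketch of base DWA: a rotating pointer selects the next `code`
# elements for each sample, so every element is used equally often over time.

def dwa_select(codes, n_elements=7):
    """Return the per-sample element selections and cumulative usage counts."""
    pointer = 0
    usage = [0] * n_elements
    selections = []
    for code in codes:
        chosen = [(pointer + i) % n_elements for i in range(code)]
        for idx in chosen:
            usage[idx] += 1
        pointer = (pointer + code) % n_elements
        selections.append(chosen)
    return selections, usage

# Codes summing to 28 = 4 x 7 rotate through all seven elements exactly 4 times.
selections, usage = dwa_select([3, 5, 4, 2, 7, 6, 1])
```

Equal long-term usage is what first-order shapes the element mismatch, pushing its energy out of the signal band and raising SFDR.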

  18. Estimation of hysteretic damping of structures by stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Bajrić, Anela; Høgsberg, Jan

    2018-05-01

    Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of the techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the estimated system parameters by the presented identification method at low and high-levels of excitation amplitudes.
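
The Bouc-Wen restoring-force model named above can be sketched with explicit Euler integration; the parameter values and the sinusoidal displacement history are illustrative choices, not the paper's.

```python
# Hedged sketch of the Bouc-Wen hysteretic variable z, integrated with explicit
# Euler: dz = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n.
import math

def bouc_wen_z(x, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Evolve the hysteretic variable z along a displacement history x."""
    z, out = 0.0, [0.0]
    for k in range(1, len(x)):
        dx = x[k] - x[k - 1]
        z += A * dx - beta * abs(dx) * abs(z) ** (n - 1) * z - gamma * dx * abs(z) ** n
        out.append(z)
    return out

# A cyclic displacement drives z around a bounded hysteresis loop:
# |z| stays below (A / (beta + gamma)) ** (1 / n) = 1 for these parameters.
xs = [0.01 * math.sin(0.05 * k) for k in range(500)]
zs = bouc_wen_z(xs)
```

The restoring force in the full model is a weighted sum of the elastic term and this z, which is what the equivalent linear relaxation model in the paper approximates for SSI.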

  19. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
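
The greedy sampling step described above can be sketched in one dimension; here the distance to the nearest already-chosen sample stands in for the true error indicator, which in the paper is problem-dependent.

```python
# Hedged sketch of greedy sample selection: repeatedly add the training sample
# whose surrogate error indicator (distance to the nearest chosen sample) is
# largest, yielding a small, well-spread set of snapshot parameters.

def greedy_samples(training, k):
    """Pick k well-spread samples from a 1-D training set."""
    chosen = [training[0]]                    # seed with an arbitrary sample
    while len(chosen) < k:
        worst = max(training, key=lambda p: min(abs(p - c) for c in chosen))
        chosen.append(worst)
    return chosen

picked = greedy_samples([i / 10 for i in range(11)], 3)  # -> [0.0, 1.0, 0.5]
```

The reduced basis is then built only from snapshots at the picked parameters, which is what removes the parameter dependence from the multiscale basis functions.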

  20. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  1. Multistate Lempel-Ziv (MLZ) index interpretation as a measure of amplitude and complexity changes.

    PubMed

    Sarlabous, Leonardo; Torres, Abel; Fiz, Jose A; Gea, Joaquim; Galdiz, Juan B; Jane, Raimon

    2009-01-01

    The Lempel-Ziv complexity (LZ) has been widely used to evaluate the randomness of finite sequences. In general, the LZ complexity has been used to determine the degree of complexity present in biomedical signals. The LZ complexity is not able to discern between signals with different amplitude variations and similar random components. On the other hand, amplitude parameters, such as the root mean square (RMS), are not able to discern between signals with similar power distributions and different random components. In this work, we present a novel method to quantify amplitude and complexity variations in biomedical signals by computing the LZ coefficient using more than two quantification states, with thresholds that are fixed and independent of the dynamic range or standard deviation of the analyzed signal: the Multistate Lempel-Ziv (MLZ) index. Our results indicate that the MLZ index with few quantification levels evaluates only the complexity changes of the signal; with a high number of levels, the amplitude variations; and with an intermediate number of levels, both amplitude and complexity variations. The study performed on diaphragmatic mechanomyographic signals shows that the amplitude variations of this signal are more correlated with the respiratory effort than the complexity variations. Furthermore, the MLZ index with a high number of levels is practically unaffected by impulsive, sinusoidal, constant, and Gaussian noises, compared with the RMS amplitude parameter.
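
The multistate quantification plus complexity count can be sketched as follows; fixed thresholds map samples to states, and an LZ78-style phrase count stands in here for the paper's LZ-based coefficient.

```python
# Hedged sketch of the multistate idea: quantize a signal into fixed-threshold
# states, then count new phrases in an LZ78-style parse as a complexity proxy.

def quantize(signal, thresholds):
    """Map each sample to a state index: how many thresholds it exceeds."""
    return "".join(str(sum(x > t for t in thresholds)) for x in signal)

def lz78_phrases(seq):
    """Count distinct phrases in a simple LZ78-style incremental parse."""
    phrases, current = set(), ""
    for ch in seq:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

constant = quantize([0.0] * 64, thresholds=[-0.5, 0.5])
noisy = quantize([(-1) ** k * (k % 7) for k in range(64)], thresholds=[-0.5, 0.5])
# The irregular sequence parses into more phrases than the constant one.
```

Because the thresholds are fixed rather than scaled to the signal's standard deviation, amplitude changes move samples across state boundaries and so become visible to the count, which is the point of the multistate construction.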

  2. Biological monitoring of environmental quality: The use of developmental instability

    USGS Publications Warehouse

    Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.

    1994-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.
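
A double Pareto-lognormal draw can be sketched via its standard construction, the exponential of a normal plus the difference of two exponentials, giving the lognormal body and power-law tails described above; all parameter values below are illustrative.

```python
# Hedged sketch: a DPLN sample is exp(N(mu, sigma) + Exp(1)/alpha - Exp(1)/beta),
# giving a lognormal body with power-law upper (alpha) and lower (beta) tails.
import math, random

rng = random.Random(1)

def dpln_sample(mu=0.0, sigma=0.5, alpha=3.0, beta=2.0):
    y = rng.gauss(mu, sigma) + rng.expovariate(alpha) - rng.expovariate(beta)
    return math.exp(y)

draws = [dpln_sample() for _ in range(2000)]  # strictly positive, fat upper tail
```

The strictly positive support matches the biological constraint noted above: zero phenotypic variation is impossible, yet small values remain common.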

  3. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
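
The iteratively reweighted least-squares idea can be sketched in one dimension; the Jacobi-style solver, the quadratic data term, and all parameter values below are simplifications for illustration, not the paper's randomized-GSVD formulation.

```python
# Hedged sketch of IRLS for total variation: |Dm| is approximated by a weighted
# L2 term with weights 1/sqrt((Dm)^2 + eps), refreshed every iteration, so
# smoothing is strong where the model is flat and weak across sharp jumps.

def tv_denoise(d, lam=1.0, eps=1e-6, iters=30):
    m = list(d)
    for _ in range(iters):
        w = [1.0 / ((m[i + 1] - m[i]) ** 2 + eps) ** 0.5 for i in range(len(m) - 1)]
        new = []
        for i in range(len(m)):
            # Jacobi-style update of the weighted normal equations.
            num, den = d[i], 1.0
            if i > 0:
                num += lam * w[i - 1] * m[i - 1]; den += lam * w[i - 1]
            if i < len(m) - 1:
                num += lam * w[i] * m[i + 1]; den += lam * w[i]
            new.append(num / den)
        m = new
    return m

step = [0.0] * 10 + [1.0] * 10
noisy = [v + (0.1 if i % 2 else -0.1) for i, v in enumerate(step)]
smooth = tv_denoise(noisy)
```

The reweighting is what lets the otherwise nonlinear TV penalty be handled as a sequence of least-squares problems, which is where the (randomized) GSVD enters in the paper.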

  4. Dynamics and Instabilities of the Shastry-Sutherland Model

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Batista, Cristian D.

    2018-06-01

    We study the excitation spectrum in the dimer phase of the Shastry-Sutherland model by using an unbiased variational method that works in the thermodynamic limit. The method outputs dynamical correlation functions in all possible channels. This output is exploited to identify the order parameters with the highest susceptibility (single or multitriplon condensation in a specific channel) upon approaching a quantum phase transition in the magnetic field versus the J'/J phase diagram. We find four different instabilities: antiferro spin nematic, plaquette spin nematic, stripe magnetic order, and plaquette order, two of which have been reported in previous studies.

  5. The Production of Biologically Active Substances by Plant Cell Cultures in Space

    NASA Astrophysics Data System (ADS)

    Strogov, S. E.; Zaitseva, G. V.; Konstantinova, N. A.; Fetisova, E. M.; Mikhailova, O. M.; Belousova, I. M.; Turkin, V. V.; Ukraintsev, A. D.

    2001-07-01

    The impact of the conditions of space flight on the productivity of cultures of the plant cells with respect to the biomass and the metabolites is investigated. The experiments were performed with the callus cultures of the cells of ginseng (Panax ginseng), red root puccoon (Lithospermum arythrorhizon), and macrotomia coloring (Macrotomia euchroma) onboard the orbital station Mir and the American Space Shuttle. A more pronounced variation of the output of the metabolites is noted with respect to the ground control. This output depends upon the properties of the strain and conditions of the experiment.

  6. Federal Logistics Information Systems. FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Variable Length). Volume 9.

    DTIC Science & Technology

    1997-04-01

    DATA COLLABORATORS 0001N B NQ 8380 NUMBER OF DATA RECEIVERS 0001N B NQ 2533 AUTHORIZED ITEM IDENTIFICATION DATA COLLABORATOR CODE 0002X B 03 18 TD... 01 NC 8268 DATA ELEMENT TERMINATOR CODE 0001X VT 9505 TYPE OF SCREENING CODE 0001A 01 NC 8268 DATA ELEMENT TERMINATOR CODE 0001X VT 4690 OUTPUT DATA... 9505 TYPE OF SCREENING CODE 0001A 2 89 2910 REFERENCE NUMBER CATEGORY CODE (RNCC) 0001X 2 89 4780 REFERENCE NUMBER VARIATION CODE (RNVC) 0001N 2 89

  7. Principles of cell-free genetic circuit assembly.

    PubMed

    Noireaux, Vincent; Bar-Ziv, Roy; Libchaber, Albert

    2003-10-28

    Cell-free genetic circuit elements were constructed in a transcription-translation extract. We engineered transcriptional activation and repression cascades, in which the protein product of each stage is the input required to drive or block the following stage. Although we can find regions of linear response for single stages, cascading to subsequent stages requires working in nonlinear regimes. Substantial time delays and dramatic decreases in output production are incurred with each additional stage because of a bottleneck at the translation machinery. Faster turnover of RNA message can relieve competition between genes and stabilize output against variations in input and parameters.

  8. Air-sea interaction over the Indian Ocean due to variations in the Indonesian throughflow

    NASA Astrophysics Data System (ADS)

    Wajsowicz, R. C.

    The effects of the Indonesian throughflow on the upper thermocline circulation and surface heat flux over the Indian Ocean are presented for a 3-D ocean model forced by two different monthly wind-stress climatologies, as they show interesting differences, which could have implications for long-term variability in the Indian and Australasian monsoons. The effects are determined by contrasting a control run with a run in which the throughflow is blocked by an artificial land-bridge across the exit channels into the Indian Ocean. In the model forced by ECMWF wind stresses, there is little impact on the annual mean surface heat flux in the region surrounding the throughflow exit straits, whereas in the model forced by SSM/I-based wind stresses, a modest throughflow of less than 5 × 10^6 m^3 s^-1 over the upper 300 m induces an extra 10-50 W m^-2 output. In the SSM/I-forced model, there is insignificant penetration of the throughflow into the northern Indian Ocean. However, in the ECMWF-forced model, the throughflow induces a 5-10 W m^-2 reduction in heat input into the ocean, i.e., an effective output, over the Somali Current in the annual mean. These differences are attributed to differences in the strength and direction of the Ekman transport of the ambient flow, and to the vertical structure of the transport and temperature anomalies associated with the throughflow. In both models, the throughflow induces a 5-30 W m^-2 increase in net output over a broad swathe of the southern Indian Ocean, and a reduction in heat output of 10-60 W m^-2 in a large L-shaped band around Tasmania. Effective increases in throughflow-induced net output reach up to 40 (60) W m^-2 over the Agulhas Current retroflection in the ECMWF (SSM/I)-forced model.
Seasonal variations in the throughflow's effect on the net surface heat flux are attributed to seasonal variations in the ambient circulation of the Indian Ocean, specifically in coastal upwelling along the south Javan, west Australian, and Somalian coasts, and in the depth of convective overturning between 40°S and 50°S, and its sensing of the mean throughflow's thermal anomaly. The seasonal anomalies plus the annual mean yield maximum values for the throughflow-induced net surface heat output in boreal summer. Values may exceed 40 W m^-2 in the southern Indian Ocean interior in both models, exceed 60 W m^-2 over the Agulhas retroflection and the immediate vicinity of the exit channels in the SSM/I-forced model, and reach 30 W m^-2 over the Somali jet in the ECMWF-forced model.

  9. Hints of correlation between broad-line and radio variations for 3C 120

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H. T.; Bai, J. M.; Li, S. K.

    2014-01-01

    In this paper, we investigate the correlation between broad-line and radio variations for the broad-line radio galaxy 3C 120. Using the z-transformed discrete correlation function method and the model-independent flux randomization/random subset selection (FR/RSS) Monte Carlo method, we find that broad Hβ line variations lead the 15 GHz variations. The FR/RSS method shows that the Hβ line variations lead the radio variations by τ_ob = 0.34 ± 0.01 yr. This time lag can be used to locate the position of the emitting region of radio outbursts in the jet, on the order of ∼5 lt-yr from the central engine. This distance is much larger than the size of the broad-line region. The large separation of the radio outburst emitting region from the broad-line region will observably influence the gamma-ray emission in 3C 120.
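    The FR/RSS bootstrap described above can be sketched as a small Monte Carlo loop. The code below is a minimal illustration on synthetic light curves, not the authors' implementation; the helper name `frrss_lag`, the interpolation grid, and the lag search range are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def frrss_lag(t, a, b, err_a, err_b, lags, n_trials=200):
    # FR/RSS Monte Carlo: bootstrap the cross-correlation peak lag
    # between two sampled light curves a(t) and b(t).
    n = len(t)
    peaks = []
    grid = np.arange(t[0], t[-1] + 1.0)          # regular time grid (step 1)
    for _ in range(n_trials):
        idx = np.unique(rng.integers(0, n, n))   # RSS: resample epochs
        aa = a[idx] + rng.normal(0.0, err_a, idx.size)  # FR: perturb fluxes
        bb = b[idx] + rng.normal(0.0, err_b, idx.size)
        ga = np.interp(grid, t[idx], aa)
        gb = np.interp(grid, t[idx], bb)
        cc = []
        for k in lags:                           # correlate a(t) with b(t + k)
            if k >= 0:
                x, y = ga[: ga.size - k], gb[k:]
            else:
                x, y = ga[-k:], gb[: gb.size + k]
            cc.append(np.corrcoef(x, y)[0, 1])
        peaks.append(lags[int(np.argmax(cc))])
    return float(np.median(peaks))
```

The median of the per-trial peak lags estimates the lag, and the scatter of `peaks` gives its uncertainty, which is how an error bar such as 0.34 ± 0.01 yr is typically quoted.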

  10. On problems of analyzing aerodynamic properties of blunted rotary bodies with small random surface distortions under supersonic and hypersonic flows

    NASA Astrophysics Data System (ADS)

    Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.

    2017-10-01

    The paper considers problems of analyzing the aerodynamic properties (ADPs) of reentry vehicles (RVs) treated as blunted rotary bodies with small random surface distortions. The interrelated issues of mathematical simulation of surface distortions, selection of tools for predicting the ADPs of shaped bodies, evaluation of different types of ADP variations, and their adaptation for dynamics problems are analyzed. The possibilities of deterministic and probabilistic approaches to the evaluation of ADP variations are considered, and the practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.

  11. Effective seat-to-head transmissibility in whole-body vibration: Effects of posture and arm position

    NASA Astrophysics Data System (ADS)

    Rahmatalla, Salam; DeShaw, Jonathan

    2011-12-01

    Seat-to-head transmissibility is a biomechanical measure that has been widely used for many decades to evaluate seat dynamics and human response to vibration. Traditionally, transmissibility has been used to correlate single-input or multiple-input with single-output motion; it has not been effectively used for multiple-input and multiple-output scenarios due to the complexity of dealing with the coupled motions caused by the cross-axis effect. This work presents a novel approach to use transmissibility effectively for single- and multiple-input and multiple-output whole-body vibrations. In this regard, the full transmissibility matrix is transformed into a single graph, such as those for single-input and single-output motions. Singular value decomposition and maximum distortion energy theory were used to achieve the latter goal. Seat-to-head transmissibility matrices for single-input/multiple-output in the fore-aft direction, single-input/multiple-output in the vertical direction, and multiple-input/multiple-output directions are investigated in this work. A total of ten subjects participated in this study. Discrete frequencies of 0.5-16 Hz were used for the fore-aft direction using supported and unsupported back postures. Random ride files from a dozer machine were used for the vertical and multiple-axis scenarios considering two arm postures: using the armrests or grasping the steering wheel. For single-input/multiple-output, the results showed that the proposed method was very effective in showing the frequencies where the transmissibility is mostly sensitive for the two sitting postures and two arm positions. For multiple-input/multiple-output, the results showed that the proposed effective transmissibility indicated higher values for the armrest-supported posture than for the steering-wheel-supported posture.
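    The collapse of a full transmissibility matrix into a single curve can be illustrated with singular value decomposition, as the study describes. The root-sum-square combination of the singular values below is an assumed stand-in for the paper's distortion-energy combination rule, not its exact formulation:

```python
import numpy as np

def effective_transmissibility(H):
    # H: complex transmissibility matrix at one frequency (outputs x inputs).
    # The singular values are the principal input-to-output gains; combine
    # them into one scalar with a root-sum-square (distortion-energy style).
    s = np.linalg.svd(H, compute_uv=False)
    return float(np.sqrt(np.sum(s ** 2)))
```

Because it is built from singular values, this scalar is insensitive to rotations of the input axes, which is what lets coupled cross-axis motion be summarized in one graph per posture.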

  12. Surface-acoustic-wave (SAW) flow sensor

    NASA Astrophysics Data System (ADS)

    Joshi, Shrinivas G.

    1991-03-01

    The use of a surface-acoustic-wave (SAW) device to measure the rate of gas flow is described. A SAW oscillator heated to a suitable temperature above ambient is placed in the path of a flowing gas. Convective cooling caused by the gas flow results in a change in the oscillator frequency. A 73-MHz oscillator fabricated on 128 deg rotated Y-cut lithium niobate substrate and heated to 55 C above ambient shows a frequency variation greater than 142 kHz for flow-rate variation from 0 to 1000 cu cm/min. The output of the sensor can be calibrated to provide a measurement of volume flow rate, pressure differential across channel ports, or mass flow rate. High sensitivity, wide dynamic range, and direct digital output are among the attractive features of this sensor. Theoretical expressions for the sensitivity and response time of the sensor are derived. It is shown that by using ultrasonic Lamb waves propagating in thin membranes, a flow sensor with faster response than a SAW sensor can be realized.

  13. Surface-acoustic-wave (SAW) flow sensor.

    PubMed

    Joshi, S G

    1991-01-01

    The use of a surface-acoustic-wave (SAW) device to measure the rate of gas flow is described. A SAW oscillator heated to a suitable temperature above ambient is placed in the path of a flowing gas. Convective cooling caused by the gas flow results in a change in the oscillator frequency. A 73-MHz oscillator fabricated on 128 degrees rotated Y-cut lithium niobate substrate and heated to 55 degrees C above ambient shows a frequency variation greater than 142 kHz for flow-rate variation from 0 to 1000 cm(3)/min. The output of the sensor can be calibrated to provide a measurement of volume flow rate, pressure differential across channel ports, or mass flow rate. High sensitivity, wide dynamic range, and direct digital output are among the attractive features of this sensor. Theoretical expressions for the sensitivity and response time of the sensor are derived. It is shown that by using ultrasonic Lamb waves, propagating in thin membranes, a flow sensor with faster response than a SAW sensor can be realized.

  14. Strong intensity variations of laser feedback interferometer caused by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Sun, Yiyi; Li, Zhiping

    2003-05-01

    A significant variation of the laser output can be caused by feedback of a small part of the laser beam, reflected or backscattered by a target at a long distance from the laser source, into the laser cavity. This paper describes and analyzes, theoretically and experimentally, the influence of atmospheric turbulence on interference caused by laser feedback. The influence depends upon both the energy fed back into the laser cavity and the strength of turbulence over the laser propagation path in the atmosphere. In the case of stronger feedback energy and weak turbulence, the variance of the fluctuation of the laser output can be enhanced by hundreds to thousands of times. Our measurements and theoretical analysis show that these significant enhancements can result from changes in the laser cavity modes, which can be stimulated simultaneously, and from beat oscillations between the various frequencies of the laser modes. They can also result from optical chaos inside the laser resonator, because a non-separable distorted external cavity can become a prerequisite for optical chaos.

  15. A post-processing system for automated rectification and registration of spaceborne SAR imagery

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Kwok, Ronald; Pang, Shirley S.

    1987-01-01

    An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.

  16. Effect of Random Thermal Spikes on Stirling Convertor Heater Head Reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Halford, Gary R.

    2004-01-01

    Onboard radioisotope power systems being developed to support future NASA exploration missions require reliable design lifetimes of up to 14 yr and beyond. The structurally critical heater head of the high-efficiency developmental Stirling power convertor has undergone extensive computational analysis of operating temperatures (up to 650 C), stresses, and creep resistance of the thin-walled Inconel 718 bill of material. Additionally, assessment of the effect of uncertainties in the creep behavior of the thin-walled heater head, the variation in the manufactured thickness, variation in control temperature, and variation in pressure on the durability and reliability were performed. However, it is possible for the heater head to experience rare incidences of random temperature spikes (excursions) of short duration. These incidences could occur randomly with random magnitude and duration during the desired mission life. These rare incidences could affect the creep strain rate and therefore the life. The paper accounts for these uncertainties and includes the effect of such rare incidences, random in nature, on the reliability. The sensitivities of variables affecting the reliability are quantified and guidelines developed to improve the reliability are outlined. Furthermore, the quantified reliability is being verified with test data from the accelerated benchmark tests being conducted at the NASA Glenn Research Center.

  17. Random field assessment of nanoscopic inhomogeneity of bone

    PubMed Central

    Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu

    2010-01-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to present the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. PMID:20817128
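    A random field with an exponential covariance function can be sampled directly from its covariance matrix. The sketch below is a generic 1-D illustration of the approach; the Cholesky route and the jitter term are implementation choices, not details taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def exponential_field(x, sigma, corr_len):
    # Sample a 1-D zero-mean Gaussian random field whose covariance is
    # C(r) = sigma^2 * exp(-|r| / corr_len), via Cholesky factorization.
    r = np.abs(x[:, None] - x[None, :])
    cov = sigma ** 2 * np.exp(-r / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(x.size))  # jitter for stability
    return L @ rng.standard_normal(x.size)
```

The correlation length `corr_len` plays the role described in the abstract: larger values produce smoother modulus maps, smaller values produce faster spatial fluctuation.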

  18. Analyzing wildfire exposure on Sardinia, Italy

    NASA Astrophysics Data System (ADS)

    Salis, Michele; Ager, Alan A.; Arca, Bachisio; Finney, Mark A.; Alcasena, Fermin; Bacciu, Valentina; Duce, Pierpaolo; Munoz Lozano, Olga; Spano, Donatella

    2014-05-01

    We used simulation modeling based on the minimum travel time (MTT) algorithm to analyze the wildfire exposure of key ecological, social, and economic features on Sardinia, Italy. Sardinia is the second largest island of the Mediterranean Basin and in the last fifty years has experienced large and dramatic wildfires, which caused losses and threatened urban interfaces, forests, natural areas, and agricultural production. Historical fire and environmental data for the period 1995-2009 were used as input to estimate fine-scale burn probability, conditional flame length, and potential fire size in the study area. For this purpose, we simulated 100,000 wildfire events within the study area, randomly drawing from the observed frequency distributions of burn periods and wind directions for each fire. Estimates of burn probability, excluding non-burnable fuels, ranged from 0 to 1.92 × 10^-3, with a mean value of 6.48 × 10^-5. Overall, the outputs provided a quantitative assessment of wildfire exposure at the landscape scale and captured landscape properties of wildfire exposure. We then examined how the exposure profiles varied among and within selected features and assets located on the island. Spatial variation in the modeled outputs showed a strong effect of fuel models, coupled with slope and weather. In particular, the combined effect of Mediterranean maquis, woodland areas, and complex topography on flame length was relevant, mainly in north-east Sardinia, whereas areas with herbaceous fuels and flat areas were in general characterized by lower fire intensity but higher burn probability. The simulation modeling proposed in this work provides a quantitative approach to inform wildfire risk management activities, and represents one of the first applications of burn probability modeling to capture fire risk and exposure profiles in the Mediterranean Basin.
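    The scenario-sampling step described above, drawing each fire's burn period and wind direction from the observed frequency distributions and accumulating per-cell burn counts, can be sketched as follows. The `spread` callable is a hypothetical stand-in for the MTT fire-growth model, and all inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def burn_probability(n_fires, wind_dirs, wind_p, periods, period_p, spread):
    # Draw each fire's wind direction and burn period from the observed
    # frequency distributions, then accumulate how often each landscape
    # cell burns. `spread` is any callable returning a boolean burn mask.
    counts = None
    for _ in range(n_fires):
        d = rng.choice(wind_dirs, p=wind_p)
        dur = rng.choice(periods, p=period_p)
        mask = spread(d, dur)
        counts = mask.astype(int) if counts is None else counts + mask
    return counts / n_fires
```

Dividing the burn counts by the number of simulated fires yields the per-cell burn probability surface from which exposure profiles are extracted.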

  19. ARG-walker: inference of individual specific strengths of meiotic recombination hotspots by population genomics analysis.

    PubMed

    Chen, Hao; Yang, Peng; Guo, Jing; Kwoh, Chee Keong; Przytycka, Teresa M; Zheng, Jie

    2015-01-01

    Meiotic recombination hotspots play important roles in various aspects of genomics, but the underlying mechanisms regulating the locations and strengths of recombination hotspots are not yet fully revealed. Most existing algorithms for estimating recombination rates from sequence polymorphism data can only output average recombination rates of a population, although there is evidence of heterogeneity in recombination rates among individuals. For genome-wide association studies (GWAS) of recombination hotspots, an efficient algorithm that estimates the individualized strengths of recombination hotspots is highly desirable. In this work, we propose a novel graph mining algorithm named ARG-walker, based on random walks on ancestral recombination graphs (ARGs), to estimate individual-specific recombination hotspot strengths. Extensive simulations demonstrate that ARG-walker is able to distinguish the hot allele of a recombination hotspot from the cold allele. Integrated with the output of ARG-walker, we performed GWAS on the phased haplotype data of the 22 autosomes of the HapMap Asian population samples of Chinese and Japanese (JPT+CHB). Significant cis-regulatory signals were detected, corroborated by the enrichment of the well-known 13-mer motif CCNCCNTNNCCNC of the PRDM9 protein. Moreover, two new DNA motifs were identified in the flanking regions of the significantly associated SNPs (single nucleotide polymorphisms), which are likely to be new cis-regulatory elements of meiotic recombination hotspots in the human genome. Our results on both simulated and real data suggest that ARG-walker is a promising new method for estimating individual recombination variations. In the future, it could be used to uncover the mechanisms of recombination regulation and human diseases related to recombination hotspots.

  20. The effect of the oxygen uptake-power output relationship on the prediction of supramaximal oxygen demands.

    PubMed

    Muniz-Pumares, Daniel; Pedlar, Charles; Godfrey, Richard; Glaister, Mark

    2017-01-01

    The aim of this study was to investigate the relationship between oxygen uptake (V̇O2) and power output at intensities below and above the lactate threshold (LT) in cyclists, and to determine the reliability of supramaximal power outputs linearly projected from these relationships. Nine male cyclists (mean ± standard deviation age: 41 ± 8 years; mass: 77 ± 6 kg; height: 1.79 ± 0.05 m; V̇O2max: 54 ± 7 mL·kg^-1·min^-1) completed two cycling trials, each consisting of a step test (10 × 3 min stages at submaximal incremental intensities) followed by a maximal test to exhaustion. The lines of best fit for V̇O2 and power output were determined for: the entire step test; stages below and above the LT; and rolling clusters of five consecutive stages. Lines were projected to determine a power output predicted to elicit 110% of peak V̇O2. There were strong linear correlations (r ≥ 0.953; P < 0.01) between V̇O2 and power output using the three approaches, with the slope, intercept, and projected values of these lines unaffected (P ≥ 0.05) by intensity. The coefficient of variation of the predicted power output at 110% V̇O2max was 6.7% when using all ten submaximal stages. Cyclists exhibit a linear V̇O2-power output relationship when determined using 3 min stages, which allows prediction of a supramaximal intensity with acceptable reliability.
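    The linear projection to a supramaximal intensity described above takes only a few lines; the data in the usage below are synthetic, not the study's measurements:

```python
import numpy as np

def supramax_power(power_w, vo2, vo2max, frac=1.10):
    # Fit the linear VO2-power relationship over the submaximal stages,
    # then project it to the power output predicted to elicit frac * VO2max.
    slope, intercept = np.polyfit(power_w, vo2, 1)
    return (frac * vo2max - intercept) / slope
```

For example, with ten stages following vo2 = 0.01 × power + 0.5 (L/min) and a V̇O2max of 4.0 L/min, the projected power at 110% V̇O2max is (4.4 − 0.5)/0.01 = 390 W.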

  1. NASTRAN computer system level 12.1

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1971-01-01

    Program uses finite element displacement method for solving linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.

  2. CONTEXTUAL INTERFERENCE AND INTROVERSION/EXTRAVERSION IN MOTOR LEARNING.

    PubMed

    Meira, Cassio M; Fairbrother, Jeffrey T; Perez, Carlos R

    2015-10-01

    The Introversion/Extraversion dimension may interact with contextual interference, as random and blocked practice schedules imply distinct levels of variation. This study investigated the effect of different practice schedules on the acquisition of a motor skill in extraverts and introverts. Forty male undergraduate students (M = 24.3 yr., SD = 5.6) were classified as extraverts (n = 20) or introverts (n = 20) by the Eysenck Personality Questionnaire; half of each group was assigned to a blocked practice schedule (low contextual interference) and the other half to a random practice schedule (high contextual interference). The design had two phases: acquisition and transfer (5 min. and 24 hr.). The participants learned variations of a sequential timing keypressing task; each variation required the same sequence but different timing. Three variations were used in acquisition, and one variation of intermediate length was used in transfer. Results for absolute error and overall timing error (root mean square error) indicated that the contextual interference effect was more pronounced for introverts. In addition, introverts who practiced according to the blocked schedule committed more errors during the 24-hr. transfer, suggesting that introverts did not appear to be challenged by a low contextual interference practice schedule.

  3. Fluid therapy LiDCO controlled trial-optimization of volume resuscitation of extensively burned patients through noninvasive continuous real-time hemodynamic monitoring LiDCO.

    PubMed

    Tokarik, Monika; Sjöberg, Folke; Balik, Martin; Pafcuga, Igor; Broz, Ludomir

    2013-01-01

    This pilot trial aims at gaining support for the optimization of acute burn resuscitation through noninvasive continuous real-time hemodynamic monitoring using arterial pulse contour analysis. A group of 21 burned patients meeting preliminary criteria (age range 18-75 years, with second- to third-degree burns and TBSA ≥10-75%) was randomized during 2010. Hemodynamic monitoring through lithium dilution cardiac output was used in 10 randomized patients (LiDCO group), whereas those without LiDCO monitoring were defined as the control group. The modified Brooke/Parkland formula as a starting resuscitative formula, balanced crystalloids as the initial solutions, and urine output of 0.5 ml/kg/hr as the crucial value of adequate intravascular filling were used in both groups. Additionally, in the LiDCO group, volume and vasopressor/inotropic support were based on dynamic preload parameters in cases of circulatory instability and oliguria. Statistical analysis was done using t-tests. Within the first 24 hours postburn, a significantly lower consumption of crystalloids was registered in the LiDCO group (P = .04). The fluid balance under LiDCO control, in combination with hourly diuresis, reduced the cumulative fluid balance by approximately 10% compared with fluid management based on standard monitoring parameters. The amount of applied solutions in the LiDCO group approached the Brooke formula, whereas the urine output was at the same level in both groups (0.8 ml/kg/hr). The new finding in this study is that when fluid resuscitation was based on arterial waveform analysis, the initial fluid volume provided was significantly lower than that delivered on the basis of physician-directed fluid resuscitation (by urine output and mean arterial pressure).

  4. Measures of rowing performance.

    PubMed

    Smith, T Brett; Hopkins, Will G

    2012-04-01

    Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance.
Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.
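    The typical error cited throughout this review (e.g., ∼0.5% between repeated ergometer trials) is conventionally computed from pairs of trials as the standard deviation of the within-athlete differences divided by √2. A minimal sketch, with made-up trial times in seconds:

```python
import numpy as np

def typical_error(trial1, trial2):
    # Typical (standard) error of measurement from two repeated trials:
    # the SD of the within-athlete differences divided by sqrt(2).
    d = np.asarray(trial1, float) - np.asarray(trial2, float)
    return d.std(ddof=1) / np.sqrt(2.0)

def typical_error_cv(trial1, trial2):
    # Typical error expressed as a coefficient of variation (% of grand mean).
    grand_mean = np.mean([np.mean(trial1), np.mean(trial2)])
    return 100.0 * typical_error(trial1, trial2) / grand_mean
```

Expressing the typical error as a CV is what allows it to be compared directly with the ∼1.0% race-to-race variation used as the yardstick above.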

  5. Design, Intervention Fidelity, and Behavioral Outcomes of a School-Based Water, Sanitation, and Hygiene Cluster-Randomized Trial in Laos

    PubMed Central

    2018-01-01

    Evidence of the impact of water, sanitation, and hygiene (WASH) in schools (WinS) interventions on pupil absence and health is mixed. Few WinS evaluations rigorously report on output and outcome measures that allow for comparisons of effectiveness between interventions to be made, or for an understanding of why programs succeed. The Water, Sanitation, and Hygiene for Health and Education in Laotian Primary Schools (WASH HELPS) study was a randomized controlled trial designed to measure the impact of the United Nations Children’s Fund (UNICEF) Laos WinS project on child health and education. We also measured the sustainability of intervention outputs and outcomes, and analyzed the effectiveness of group hygiene activities on behavior change and habit formation. Here, we present the design and intermediate results from this study. We found the WinS project improved the WASH environment in intervention schools; 87.8% of schools received the intervention per design. School-level adherence to outputs was lower; on average, schools met 61.4% of adherence-related criteria. The WinS project produced positive changes in pupils’ school WASH behaviors, specifically increasing toilet use and daily group handwashing. Daily group hygiene activities are effective strategies to improve school WASH behaviors, but a complementary strategy needs to be concurrently promoted for effective and sustained individual handwashing practice at critical times. PMID:29565302

  6. The Diuretic Action of Weak and Strong Alcoholic Beverages in Elderly Men: A Randomized Diet-Controlled Crossover Trial.

    PubMed

    Polhuis, Kristel C M M; Wijnen, Annemarthe H C; Sierksma, Aafje; Calame, Wim; Tieland, Michael

    2017-06-28

    With ageing, there is a greater risk of dehydration. This study investigated the diuretic effect of alcoholic beverages varying in alcohol concentration in elderly men. Three alcoholic beverages (beer (AB), wine (AW), and spirits (S)) and their non-alcoholic counterparts (non-alcoholic beer (NAB), non-alcoholic wine (NAW), and water (W)) were tested in a diet-controlled randomized crossover trial. For the alcoholic beverages, alcohol intake equaled a moderate amount of 30 g; an equal volume of beverage was given for each non-alcoholic counterpart. After consumption, urine output was collected every hour for 4 h, and the total 24 h urine output was measured. AW and S resulted in a higher cumulative urine output compared to NAW and W during the first 4 h (effect size: 0.25 mL, p < 0.003; effect size: 0.18 mL, p < 0.001, respectively), but not after the 24 h urine collection (p > 0.40, p > 0.10). AB and NAB did not differ at any time point (effect size: -0.02 mL, p > 0.70). The findings for urine osmolality and the sodium and potassium concentrations were in line with these results. In conclusion, only moderate amounts of the stronger alcoholic beverages, wine and spirits, resulted in a short and small diuretic effect in elderly men.

  7. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    PubMed

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.

  8. Global industrial impact coefficient based on random walk process and inter-country input-output table

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Dong, Xianlei; Guan, Jun

    2017-04-01

    An input-output table describes the national economic system in comprehensive detail, containing supply and demand information among industrial sectors. Complex network theory, which provides methods for measuring the structure of complex systems, can describe the structural characteristics of such a system's internal organization, revealing the complex relationships between its inner hierarchy and its external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes that a nation's competitive advantage equals the overall impact of its domestic sectors on the GVC. From an econophysics perspective, a Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of an open macroeconomy in the process of globalization; and (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact reveals its international competitive advantage.
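    The random-walk reading of an input-output table can be made concrete with a toy example. The Python sketch below ranks the sectors of an invented 3-sector flow matrix by the stationary distribution of a random walk on the row-normalized table; this is a deliberate simplification for illustration, not the paper's definition of Random Walk Centrality or GIIC.

```python
def stationary_distribution(flows, iterations=500):
    """Power-iterate pi <- pi * P, where P row-normalizes the flow
    matrix into random-walk transition probabilities."""
    n = len(flows)
    P = [[v / sum(row) for v in row] for row in flows]
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Invented inter-sector flow table (row i: value supplied by sector i).
flows = [[1.0, 4.0, 5.0],
         [6.0, 1.0, 3.0],
         [2.0, 2.0, 1.0]]
pi = stationary_distribution(flows)  # higher pi -> more central sector
```

    Sectors the walk visits most often are the ones most involved in transmitting value streams, which is the intuition behind insight (1) above.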

  9. Minimal complexity control law synthesis

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.

    1989-01-01

    A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.

  10. Random multispace quantization as an analytic mechanism for BioHashing of biometric and random identity inputs.

    PubMed

    Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L

    2006-12-01

    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
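    The RMQ idea of mixing token-derived randomness with a biometric feature vector can be sketched in a few lines. The Python below is a hedged illustration of the general BioHashing recipe (random projection followed by thresholding), not the authors' exact construction; the feature vector and seed values are invented.

```python
import random

def biohash(feature_vec, token_seed, n_bits=32):
    """Project a biometric feature vector onto token-seeded random
    directions and threshold at zero, yielding a revocable bit string."""
    rng = random.Random(token_seed)  # external, token-derived randomness
    bits = []
    for _ in range(n_bits):
        direction = [rng.gauss(0.0, 1.0) for _ in feature_vec]
        dot = sum(f * d for f, d in zip(feature_vec, direction))
        bits.append(1 if dot >= 0 else 0)
    return bits
```

    Reissuing the token (a new seed) yields an entirely different bit string from the same biometric, which is what makes the hash cancellable.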

  11. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focusing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.

  12. Megahertz-Rate Semi-Device-Independent Quantum Random Number Generators Based on Unambiguous State Discrimination

    NASA Astrophysics Data System (ADS)

    Brask, Jonatan Bohr; Martin, Anthony; Esposito, William; Houlmann, Raphael; Bowles, Joseph; Zbinden, Hugo; Brunner, Nicolas

    2017-05-01

    An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbit/s. Combining ease of implementation, a high rate, and real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.

  13. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that, in a book of prices, changes occur at the jump points of a Poisson process with random intensity, i.e., the moments of change follow a random process of the Cox process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.
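    The simplest instance of such a Cox process, a Poisson process whose intensity is itself a random variable drawn once per realization, can be simulated directly. The Python sketch below uses an invented Gamma law for the intensity; it illustrates the mechanism only, not the authors' estimation procedure.

```python
import random

def cox_event_times(horizon, seed):
    """One realization of a Cox process on (0, horizon]: the intensity
    is itself random (here one Gamma-distributed level per realization,
    the simplest 'mixed Poisson' case), and the jump times are then a
    Poisson process at that level."""
    rng = random.Random(seed)
    lam = rng.gammavariate(2.0, 1.5)   # random intensity (invented law)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)      # i.i.d. exponential gaps
        if t > horizon:
            return lam, times
        times.append(t)
```

    Across realizations the event counts are overdispersed relative to a plain Poisson process: Var(N) = E[λ]h + Var(λ)h² instead of E[λ]h, which is the kind of signature one would test for in price-change data.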

  14. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, E; Phillips, M; Bojechko, C

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3 mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78-0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84-92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52-0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
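    The ROC-area figure of merit used here has a direct probabilistic reading: the chance that a randomly chosen error case shows a lower gamma pass rate than a randomly chosen error-free case. The minimal Python sketch below computes it via the Mann-Whitney statistic; the pass-rate scores are invented for illustration.

```python
def auc(scores_error, scores_baseline):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen error case scores lower (a
    worse gamma pass rate) than a randomly chosen error-free case,
    with ties counted as one half."""
    wins = 0.0
    for e in scores_error:
        for b in scores_baseline:
            if e < b:
                wins += 1.0
            elif e == b:
                wins += 0.5
    return wins / (len(scores_error) * len(scores_baseline))

# Invented gamma pass rates: simulated errors pull the pass rate down.
detectability = auc([0.82, 0.86, 0.90], [0.88, 0.93, 0.95])
```

    An AUC near 1.0 means the gamma pass rate separates the two groups well; 0.5 means no diagnostic power, as reported above for random MLC leaf errors and patient shifts.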

  15. Uniquely identifiable tamper-evident device using coupling between subwavelength gratings

    NASA Astrophysics Data System (ADS)

    Fievre, Ange Marie Patricia

    Reliability and sensitive information protection are critical aspects of integrated circuits. A novel technique using near-field evanescent wave coupling from two subwavelength gratings (SWGs), with the input laser source delivered through an optical fiber, is presented for tamper evidence of electronic components. The first grating of the pair of coupled subwavelength gratings (CSWGs) was milled directly on the output facet of the silica fiber using focused ion beam (FIB) etching. The second grating was patterned using e-beam lithography and etched into a glass substrate using reactive ion etching (RIE). The slightest intrusion attempt would separate the CSWGs and eliminate near-field coupling between the gratings. Tampering, therefore, would become evident. Computer simulations guided the design for optimal operation of the security solution. The physical dimensions of the SWGs, i.e. period and thickness, were optimized for a 650 nm illuminating wavelength. The optimal dimensions resulted in a 560 nm grating period for the first grating etched in the silica optical fiber and 420 nm for the second grating etched in borosilicate glass. The incident light beam had a half-width at half-maximum (HWHM) of at least 7 µm to allow discernible higher transmission orders, and a HWHM of 28 µm for minimum noise. The minimum number of individual grating lines present on the optical fiber facet was identified as 15 lines. Grating rotation due to the cylindrical geometry of the fiber resulted in a rotation of the far-field pattern, corresponding to the rotation angle of moiré fringes. With the goal of later adding authentication to tamper evidence, the concept of a CSWGs signature was also modeled by introducing random and planned variations in the glass grating. The fiber was placed on a stage supported by a nanomanipulator, which permitted three-dimensional displacement while maintaining the fiber tip normal to the surface of the glass substrate. A 650 nm diode laser was fixed to a translation mount that transmitted the light source through the optical fiber, and the output intensity was measured using a silicon photodiode. The evanescent wave coupling output results for the CSWGs were measured and compared to the simulation results.

  16. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from the classification performance point of view. In general, multiclass datasets with unbalanced sample-size distributions are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset: it has multiple classes with small sample sizes and is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to evaluate the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite a high diagnosis performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
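    The resampling step can be illustrated independently of the classifier. The Python sketch below implements plain random oversampling with replacement, one common reading of "simple random sampling" for unbalanced classes; it is a stand-in for the authors' training strategy under that assumption, and the Random Forests stage itself is omitted.

```python
import random
from collections import defaultdict

def balance_by_oversampling(samples, labels, seed=0):
    """Simple random oversampling with replacement: grow every class
    to the size of the largest one, so that a subsequently trained
    ensemble (e.g. Random Forests) is not dominated by the majority
    classes."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y
```

    For arrhythmia classes with fewer than 15 samples, this kind of rebalancing keeps the minority classes visible to every bootstrap draw the forest makes.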

  17. Space Handbook,

    DTIC Science & Technology

    1985-01-01

    Figure 4-2 shows the variation in power output early in the life of the system for polonium-210 (Po-210), with a 138-day half-life, and for curium-242 (Cm-242). The remaining extract lists contents entries only: Third Body Effects; Effects of Oblate Earth; Drag Effects; Equations Pertaining to Bodies.

  18. Contrasting effects of ocean acidification on reproduction in reef fishes

    NASA Astrophysics Data System (ADS)

    Welch, Megan J.; Munday, Philip L.

    2016-06-01

    Differences in the sensitivity of marine species to ocean acidification will influence the structure of marine communities in the future. Reproduction is critical for individual and population success, yet is energetically expensive and could be adversely affected by rising CO2 levels in the ocean. We investigated the effects of projected future CO2 levels on reproductive output of two species of coral reef damselfish, Amphiprion percula and Acanthochromis polyacanthus. Adult breeding pairs were maintained at current-day control (446 μatm), moderate (652 μatm) or high CO2 (912 μatm) for a 9-month period that included the summer breeding season. The elevated CO2 treatments were consistent with CO2 levels projected by 2100 under moderate (RCP6) and high (RCP8) emission scenarios. Reproductive output increased in A. percula, with 45-75 % more egg clutches produced and a 47-56 % increase in the number of eggs per clutch in the two elevated CO2 treatments. In contrast, reproductive output decreased at high CO2 in Ac. polyacanthus, with approximately one-third as many clutches produced compared with controls. Egg survival was not affected by CO2 for A. percula, but was greater in elevated CO2 for Ac. polyacanthus. Hatching success was also greater for Ac. polyacanthus at elevated CO2, but there was no effect of CO2 treatments on offspring size. Despite the variation in reproductive output, body condition of adults did not differ between control and CO2 treatments in either species. Our results demonstrate different effects of high CO2 on fish reproduction, even among species within the same family. A greater understanding of the variation in effects of ocean acidification on reproductive performance is required to predict the consequences for future populations of marine organisms.

  19. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper resolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  20. Random Positional Variation Among the Skull, Mandible, and Cervical Spine With Treatment Progression During Head-and-Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Peter H.; Ahn, Andrew I.; Lee, C. Joe

    2009-02-01

    Purpose: With 54 degrees of freedom from the skull to mandible to C7, ensuring adequate immobilization for head-and-neck radiotherapy (RT) is complex. We quantify variations in skull, mandible, and cervical spine movement between RT sessions. Methods and Materials: Twenty-three sequential head-and-neck RT patients underwent serial computed tomography. Patients underwent planned rescanning at 11, 22, and 33 fractions for a total of 93 scans. Coordinates of multiple bony elements of the skull, mandible, and cervical spine were used to calculate rotational and translational changes of bony anatomy compared with the original planning scan. Results: Mean translational and rotational variations on rescanning were negligible, but showed a wide range. Changes in scoliosis and lordosis of the cervical spine between fractions showed similar variability. There was no correlation between positional variation and fraction number and no strong correlation with weight loss or skin separation. Semi-independent rotational and translational movement of the skull in relation to the lower cervical spine was shown. Positioning variability measured by means of vector displacement was largest in the mandible and lower cervical spine. Conclusions: Although only small overall variations in position between head-and-neck RT sessions exist on average, there is significant random variation in patient positioning of the skull, mandible, and cervical spine elements. Such variation is accentuated in the mandible and lower cervical spine. These random semirigid variations in positioning of the skull and spine point to a need for improved immobilization and/or confirmation of patient positioning in RT of the head and neck.

  1. Estimating Cross-Site Impact Variation in the Presence of Heteroscedasticity

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen

    2013-01-01

    To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…

  2. Tailpulse signal generator

    DOEpatents

    Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA

    2009-06-23

    A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate the tail pulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses and the random Poisson-distributed pulse timing of radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tail pulses. Pulse amplitude values are exponentially decayed while the digital value is output to a digital-to-analog converter (DAC), and the amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
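    The generation scheme described here (Poisson-timed arrivals, random amplitudes, exponential tails, additive pile-up) can be sketched in software. The Python below is an illustrative reimplementation with invented parameter values, not the patented DSP/DAC design.

```python
import math
import random

def tailpulse_waveform(n_samples, dt, rate, tau, seed=0):
    """Sampled tail-pulse train: arrival gaps are exponential (Poisson
    timing), amplitudes are pseudo-random, each pulse decays as
    exp(-t/tau), and overlapping tails simply add (pile-up)."""
    rng = random.Random(seed)
    horizon = n_samples * dt
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(rate)          # Poisson-distributed timing
        if t >= horizon:
            break
        arrivals.append((t, rng.uniform(0.2, 1.0)))  # invented amplitude range
    # Superpose the exponentially decaying tails of all earlier pulses.
    return [sum(a * math.exp(-(i * dt - t0) / tau)
                for t0, a in arrivals if t0 <= i * dt)
            for i in range(n_samples)]
```

    Because each sample sums every tail still decaying at that instant, a pulse arriving before an earlier one has died away rides on top of it, which is exactly the pile-up behavior the generator is meant to reproduce.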

  3. Analysis of seasonal ozone budget and spring ozone latitudinal gradient variation in the boundary layer of the Asia-Pacific region

    NASA Astrophysics Data System (ADS)

    Hou, Xuewei; Zhu, Bin; Kang, Hanqing; Gao, Jinhui

    2014-09-01

    The ozone (O3) budget in the boundary layer of the Asia-Pacific region (AP) was studied from 2001 to 2007 using the output of Model of Ozone and Related chemical Tracers, version 4 (MOZART-4). The model-simulated O3 data agree well with observed values. O3 budget analysis using the model output confirms that the dominant factor controlling seasonal variation of O3 differs by region. Photochemistry was found to play a critical role over Japan, the Korean Peninsula and Eastern China. Over the northwestern Pacific Ocean, advective flux was found to drive the seasonal variation of O3 concentrations. The large latitudinal gradient in O3 with a maximum of 52 ppbv over the marine boundary layer around 35°N during the spring was mainly due to chemistry; meanwhile, advection was found to weaken the gradient. The contribution of stratospheric O3 was ranked second (20%) to the local contribution (25%) in Japan and the Korean Peninsula near 35°N. The rate of O3 export from China's boundary layer was the highest (approximately 30%) in low latitudes and decreased with increasing latitude, while the contribution of North America and Europe increased with increasing latitude, from 10% in lower latitudes to 24% in higher latitudes.

  4. Ultra-low-power and robust digital-signal-processing hardware for implantable neural interface microsystems.

    PubMed

    Narasimhan, S; Chiel, H J; Bhunia, S

    2011-04-01

    Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches.

  5. Variational analysis of drifter positions and model outputs for the reconstruction of surface currents in the central Adriatic during fall 2002

    USGS Publications Warehouse

    Taillandier, V.; Griffa, A.; Poulain, P.-M.; Signell, R.; Chiggiato, J.; Carniel, S.

    2008-01-01

    In this paper we present an application of a variational method for the reconstruction of the velocity field in a coastal flow in the central Adriatic Sea, using in situ data from surface drifters and outputs from the ROMS circulation model. The variational approach, previously developed and tested for mesoscale open ocean flows, has been improved and adapted to account for inhomogeneities on boundary current dynamics over complex bathymetry and coastline and for weak Lagrangian persistence in coastal flows. The velocity reconstruction is performed using nine drifter trajectories over 45 d, and a hierarchy of indirect tests is introduced to evaluate the results as the real ocean state is not known. For internal consistency and impact of the analysis, three diagnostics characterizing the particle prediction and transport, in terms of residence times in various zones and export rates from the boundary current toward the interior, show that the reconstruction is quite effective. A qualitative comparison with sea color data from the MODIS satellite images shows that the reconstruction significantly improves the description of the boundary current with respect to the ROMS model first guess, capturing its main features and its exchanges with the interior when sampled by the drifters. Copyright 2008 by the American Geophysical Union.

  6. Laser cutting of various materials: Kerf width size analysis and life cycle assessment of cutting process

    NASA Astrophysics Data System (ADS)

    Yilbas, Bekir Sami; Shaukat, Mian Mobeen; Ashraf, Farhan

    2017-08-01

    Laser cutting of various materials, including Ti-6Al-4V alloy, steel 304, Inconel 625, and alumina, is carried out to assess the kerf width size variation along the cut section. A life cycle assessment is carried out to determine the environmental impact of laser cutting in terms of the material wasted during the cutting process. The kerf width size is formulated and predicted using lumped parameter analysis and is measured in the experiments. The influence of laser output power and laser cutting speed on the kerf width variation is analyzed using analytical tools including scanning electron and optical microscopes. In the experiments, high-pressure nitrogen assisting gas is used to prevent oxidation reactions in the cutting section. It is found that the kerf width size predicted from the lumped parameter analysis agrees well with the experimental data. The kerf width variation increases with increasing laser output power; this behavior reverses with increasing laser cutting speed. The life cycle assessment reveals that material selection for laser cutting is critical from an environmental protection point of view. Inconel 625 contributes the most to environmental damage; however, recycling the laser cutting waste reduces this contribution.

  7. Temperature field analysis for PZT pyroelectric cells for thermal energy harvesting.

    PubMed

    Hsiao, Chun-Ching; Ciou, Jing-Chih; Siao, An-Shen; Lee, Chi-Yuan

    2011-01-01

    This paper proposes the idea of etching PZT to improve the temperature variation rate of a thicker PZT sheet in order to enhance the energy conversion efficiency when used as pyroelectric cells. A partially covered electrode was proven to display a higher output response than a fully covered electrode did. A mesh top electrode monitored the temperature variation rate and the electrode area. The mesh electrode width affected the distribution of the temperature variation rate in a thinner pyroelectric material. However, a pyroelectric cell with a thicker pyroelectric material was beneficial in generating electricity pyroelectrically. The PZT sheet was further etched to produce deeper cavities and a smaller electrode width to induce lateral temperature gradients on the sidewalls of cavities under homogeneous heat irradiation, enhancing the temperature variation rate.

  8. Temperature Field Analysis for PZT Pyroelectric Cells for Thermal Energy Harvesting

    PubMed Central

    Hsiao, Chun-Ching; Ciou, Jing-Chih; Siao, An-Shen; Lee, Chi-Yuan

    2011-01-01

    This paper proposes the idea of etching PZT to improve the temperature variation rate of a thicker PZT sheet in order to enhance the energy conversion efficiency when used as pyroelectric cells. A partially covered electrode was proven to display a higher output response than a fully covered electrode did. A mesh top electrode monitored the temperature variation rate and the electrode area. The mesh electrode width affected the distribution of the temperature variation rate in a thinner pyroelectric material. However, a pyroelectric cell with a thicker pyroelectric material was beneficial in generating electricity pyroelectrically. The PZT sheet was further etched to produce deeper cavities and a smaller electrode width to induce lateral temperature gradients on the sidewalls of cavities under homogeneous heat irradiation, enhancing the temperature variation rate. PMID:22346652

  9. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
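    The brute-force version of this test is easy to sketch. The snippet below (an illustrative Monte Carlo check, not the finite-point algorithm of the paper) samples random positive rate constants for a hypothetical three-variable network with fixed interaction signs, solves for the steady-state response to a constant unit input, and collects the resulting sign patterns; a single surviving pattern indicates a structurally determinate influence.

```python
import numpy as np

rng = np.random.default_rng(0)

def influence_signs(n_trials=1000):
    """Sample random rate constants for a fixed 3-node interaction structure
    (u -> x1, x1 -| x2, x2 -> x3, all self-degrading) and collect the sign
    pattern of the steady-state response to a constant unit input."""
    patterns = set()
    for _ in range(n_trials):
        k = rng.uniform(0.1, 10.0, size=5)        # random positive rates
        A = np.array([[-k[0],   0.0,   0.0],
                      [-k[1], -k[2],   0.0],
                      [  0.0,  k[3], -k[4]]])
        b = np.array([1.0, 0.0, 0.0])             # input enters at x1
        x_star = np.linalg.solve(-A, b)           # steady state of dx/dt = Ax + bu
        patterns.add(tuple(np.sign(x_star).astype(int)))
    return patterns
```

    For this toy structure only one pattern, (+1, -1, -1), ever appears, so the influence of the input on each variable is structurally determinate; an indeterminate entry would show up as multiple sign patterns across trials.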

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or coupled linearly to an external environment.

  11. Exchange Rates and Fundamentals.

    ERIC Educational Resources Information Center

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  12. Impact of number of realizations on the suitability of simulated weather data for hydrologic and environmental applications

    USDA-ARS?s Scientific Manuscript database

    Stochastic weather generators are widely used in hydrological, environmental, and agricultural applications to simulate and forecast weather time series. However, such stochastic processes usually produce random outputs hence the question on how representative the generated data are if obtained fro...
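    The underlying question can be illustrated with a toy generator. The sketch below (a hypothetical AR(1) daily-temperature model, not the generator studied in the record) shows how the spread of an ensemble-mean statistic shrinks as the number of realizations grows, which is why the realization count matters for downstream applications.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_realizations(n_real, n_days=365, mean=15.0, phi=0.7, sigma=3.0):
    """Draw n_real synthetic daily-temperature series from an AR(1) process."""
    noise = rng.normal(0.0, sigma, size=(n_real, n_days))
    t = np.empty_like(noise)
    t[:, 0] = mean + noise[:, 0]
    for i in range(1, n_days):   # AR(1) recursion, vectorized across realizations
        t[:, i] = mean + phi * (t[:, i - 1] - mean) + noise[:, i]
    return t

def ensemble_mean_spread(n_real, n_repeats=300):
    """Std. dev. of the ensemble-mean statistic across repeated ensembles."""
    means = [generate_realizations(n_real).mean() for _ in range(n_repeats)]
    return float(np.std(means))
```

    With 50 realizations the ensemble mean is roughly three times more stable than with 5, consistent with the usual 1/sqrt(N) scaling.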

  13. Sensitivity of the Eocene climate to CO2 and orbital variability

    NASA Astrophysics Data System (ADS)

    Keery, John S.; Holden, Philip B.; Edwards, Neil R.

    2018-02-01

    The early Eocene, from about 56 Ma, with high atmospheric CO2 levels, offers an analogue for the response of the Earth's climate system to anthropogenic fossil fuel burning. In this study, we present an ensemble of 50 Earth system model runs with an early Eocene palaeogeography and variation in the forcing values of atmospheric CO2 and the Earth's orbital parameters. Relationships between simple summary metrics of model outputs and the forcing parameters are identified by linear modelling, providing estimates of the relative magnitudes of the effects of atmospheric CO2 and each of the orbital parameters on important climatic features, including tropical-polar temperature difference, ocean-land temperature contrast, Asian, African and South (S.) American monsoon rains, and climate sensitivity. Our results indicate that although CO2 exerts a dominant control on most of the climatic features examined in this study, the orbital parameters also strongly influence important components of the ocean-atmosphere system in a greenhouse Earth. In our ensemble, atmospheric CO2 spans the range 280-3000 ppm, and this variation accounts for over 90 % of the effects on mean air temperature, southern winter high-latitude ocean-land temperature contrast and northern winter tropical-polar temperature difference. However, the variation of precession accounts for over 80 % of the influence of the forcing parameters on the Asian and African monsoon rainfall, and obliquity variation accounts for over 65 % of the effects on winter ocean-land temperature contrast in high northern latitudes and northern summer tropical-polar temperature difference. Our results indicate a bimodal climate sensitivity, with values of 4.36 and 2.54 °C, dependent on low or high states of atmospheric CO2 concentration, respectively, with a threshold at approximately 1000 ppm in this model, due to a saturated vegetation-albedo feedback. Our method gives a quantitative ranking of the influence of each of the forcing parameters on key climatic model outputs, with additional spatial information from singular value decomposition providing insights into likely physical mechanisms. The results demonstrate the importance of orbital variation as an agent of change in climates of the past, and we demonstrate that emulators derived from our modelling output can be used as rapid and efficient surrogates of the full-complexity model to provide estimates of climate conditions from any set of forcing parameters.
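    The variance-ranking step described above can be mimicked on synthetic data. The sketch below (toy numbers, not the ensemble of the study) fits a linear model to a hypothetical 50-member pseudo-ensemble and apportions explained variance among near-uncorrelated standardized forcings via squared regression coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 50-member ensemble: independent, standardized forcings
n = 50
co2 = rng.uniform(-1.0, 1.0, n)         # stand-in for scaled atmospheric CO2
obliquity = rng.uniform(-1.0, 1.0, n)
precession = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([co2, obliquity, precession])

# Toy response: CO2-dominated mean temperature with weak orbital terms + noise
y = 3.0 * co2 + 0.5 * obliquity + 0.2 * precession + rng.normal(0.0, 0.2, n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Approximate variance share of each forcing; valid when predictors are
# nearly uncorrelated: beta_i^2 * var(x_i) / var(y)
contrib = coef[1:] ** 2 * X.var(axis=0) / y.var()
ranking = np.argsort(contrib)[::-1]     # forcings ordered by influence
```

    In this toy setup the CO2 term dominates the variance budget, mirroring the kind of quantitative ranking the study derives from its emulator.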

  14. Technical efficiency in milk production in underdeveloped production environment of India*.

    PubMed

    Bardhan, Dwaipayan; Sharma, Murari Lal

    2013-12-01

The study was undertaken in the Kumaon division of Uttarakhand state of India with the objective of estimating technical efficiency in milk production across different herd-size category households and the factors influencing it. A total of 60 farm households, representing different herd-size categories and drawn from six randomly selected villages of the plain and hilly regions of the division, constituted the ultimate sampling units of the study. Stochastic frontier production function analysis was used to estimate technical efficiency in milk production. Multivariate regression equations were fitted, taking the technical efficiency index as the regressand, to identify the factors significantly influencing technical efficiency in milk production. The study revealed that variation in output across farms in the study area was due to differences in their technical efficiency levels. However, it was interesting to note that smallholder producers were more technically efficient in milk production than their larger counterparts, especially in the plains. Apart from herd size, intensity of market participation had a significant and positive impact on technical efficiency in the plains. This provides a definite indication that increasing the level of commercialization of dairy farms would have a beneficial impact on their production efficiency.

  15. The 1979 X-ray outburst of Centaurus X-4

    NASA Technical Reports Server (NTRS)

    Kaluzienski, L. J.; Holt, S. S.; Swank, J. H.

    1980-01-01

X-ray observations of the first major outburst of the classical transient X-ray source Centaurus X-4 since its discovery in 1969 are presented. The observations were obtained in May 1979 with the all-sky monitor on board Ariel 5. The flare light curve is shown to exhibit many of the characteristics of other transients, including a double-peaked maximum, as well as significant, apparently random, variations and a lower peak flux and shorter duration than the 1969 event. Application of a standard epoch-folding technique to data corrected for linear decay trends indicates a possible source modulation at 0.3415 days (8.2 hours). Comparison of the results with previous data on Cen X-4 and the characteristics of the soft X-ray transients allows a total X-ray output of approximately 3 x 10 to the 43rd ergs to be estimated, and reveals the duration and decay time of the 1979 Cen X-4 outburst to be the shortest yet observed from soft X-ray transients. The observations are explained in terms of episodic mass exchange from a late-type dwarf onto a neutron star companion in a relatively close binary system.
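    Epoch folding itself is simple to sketch. The snippet below (a synthetic light curve with an assumed 0.3415-day modulation; bin counts and the trial-period grid are arbitrary) folds the data at each trial period and scores how far the binned profile departs from a constant source, so the true period stands out as the maximum.

```python
import numpy as np

rng = np.random.default_rng(3)

def epoch_fold_chi2(t, flux, period, n_bins=10):
    """Fold a light curve at a trial period and score the deviation of the
    binned profile from a constant source (larger = stronger modulation)."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    mean_all = flux.mean()
    score = 0.0
    for b in range(n_bins):
        f = flux[bins == b]
        if f.size > 1:
            score += f.size * (f.mean() - mean_all) ** 2 / flux.var()
    return score

# Synthetic light curve: an assumed 0.3415-day modulation plus noise
t = np.sort(rng.uniform(0.0, 10.0, 2000))
flux = 1.0 + 0.3 * np.sin(2.0 * np.pi * t / 0.3415) + rng.normal(0.0, 0.1, 2000)

trial_periods = np.linspace(0.2, 0.5, 301)
scores = [epoch_fold_chi2(t, flux, p) for p in trial_periods]
best = float(trial_periods[int(np.argmax(scores))])
```

    Removing a linear decay trend before folding, as the record describes, keeps the secular fading from masquerading as modulation in the binned profile.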

  16. Performance analysis of two-degree of freedom fractional order PID controllers for robotic manipulator with payload.

    PubMed

    Sharma, Richa; Gaur, Prerna; Mittal, A P

    2015-09-01

    The robotic manipulators are multi-input multi-output (MIMO), coupled and highly nonlinear systems. The presence of external disturbances and time-varying parameters adversely affects the performance of these systems. Therefore, the controller designed for these systems should effectively deal with such complexities, and it is an intriguing task for control engineers. This paper presents two-degree of freedom fractional order proportional-integral-derivative (2-DOF FOPID) controller scheme for a two-link planar rigid robotic manipulator with payload for trajectory tracking task. The tuning of all controller parameters is done using cuckoo search algorithm (CSA). The performance of proposed 2-DOF FOPID controllers is compared with those of their integer order designs, i.e., 2-DOF PID controllers, and with the traditional PID controllers. In order to show effectiveness of proposed scheme, the robustness testing is carried out for model uncertainties, payload variations with time, external disturbance and random noise. Numerical simulation results indicate that the 2-DOF FOPID controllers are superior to their integer order counterparts and the traditional PID controllers. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
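    The core comparison loop can be sketched in a few lines. Below, a hypothetical analytic "model" of distance travelled is checked against a deliberately naive time-stepping "implementation" over random test cases, accepting agreement up to a tolerance; the PVS/PVSio machinery of the paper is replaced by plain functions purely for illustration.

```python
import math
import random

def model_distance(t, v):
    """'Formal model': closed-form kinematics, distance = v * t."""
    return v * t

def impl_distance(t, v, dt=1e-3):
    """'Implementation': naive time-stepping, subject to float round-off."""
    d, elapsed = 0.0, 0.0
    while elapsed + dt <= t:
        d += v * dt
        elapsed += dt
    return d + v * (t - elapsed)   # final partial step

def validate(n_cases=200, tol=1e-6, seed=4):
    """Compare model and implementation on random inputs, up to a tolerance."""
    rng = random.Random(seed)
    for _ in range(n_cases):
        t = rng.uniform(0.0, 10.0)
        v = rng.uniform(-50.0, 50.0)
        if not math.isclose(model_distance(t, v), impl_distance(t, v),
                            rel_tol=tol, abs_tol=tol):
            return False
    return True
```

    The tolerance is the key design choice: it must be loose enough to absorb accumulated floating point error yet tight enough that a genuine translation bug would still fail the suite.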

  18. Passive states as optimal inputs for single-jump lossy quantum channels

    NASA Astrophysics Data System (ADS)

    De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio

    2016-06-01

    The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum of ρ . Then, the output generated by ρ can be obtained applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that for finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
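    Majorization, the order underlying this result, is easy to test numerically: sort both spectra in descending order and compare partial sums. A minimal sketch (the example spectra are arbitrary):

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if spectrum p majorizes spectrum q: every partial sum of the
    descending-sorted entries of p dominates that of q (totals equal)."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    if abs(p.sum() - q.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

# A pure state majorizes any spectrum; the maximally mixed state is
# majorized by every spectrum of the same dimension.
pure = [1.0, 0.0, 0.0, 0.0]
mixed = [0.4, 0.3, 0.2, 0.1]
uniform = [0.25, 0.25, 0.25, 0.25]
```

    In the paper's statement, the channel output for any input majorizes the output for the passive input with the same spectrum, i.e. the passive-state output is the "most mixed" one attainable from that spectrum.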

  19. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

Model Observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling Observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
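    The channel-output computation is just a matrix of dot products. The sketch below uses crude annular frequency-band templates as a stand-in for a real anthropomorphic channel set and applies them to noise-only images; for iid Gaussian images the outputs are exactly normal, which is the baseline against which the signal-variability effects studied in the paper are measured.

```python
import numpy as np

rng = np.random.default_rng(5)

def band_channels(size=64, edges=(2, 4, 8, 16)):
    """Annular frequency-band templates: a crude stand-in for the
    anthropomorphic channel sets used in practice."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        t = ((r >= lo) & (r < hi)).astype(float).ravel()
        channels.append(t / np.linalg.norm(t))   # unit-norm template
    return np.array(channels)                    # (n_channels, size*size)

U = band_channels()
images = rng.normal(0.0, 1.0, (500, U.shape[1]))   # 500 noise-only images
outputs = images @ U.T                             # channel outputs, (500, 3)
```

    With unit-norm templates and unit-variance white noise, each channel output is standard normal; structured backgrounds or signal variability can break this and push the output histograms away from normality.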

  20. To what extent is joint and muscle mechanics predicted by musculoskeletal models sensitive to soft tissue artefacts?

    PubMed

    Lamberto, Giuliano; Martelli, Saulo; Cappozzo, Aurelio; Mazzà, Claudia

    2017-09-06

Musculoskeletal models are widely used to estimate joint kinematics, intersegmental loads, and muscle and joint contact forces during movement. These estimates can be heavily affected by the soft tissue artefact (STA) when input positional data are obtained using stereophotogrammetry, but this aspect has not yet been fully characterised for muscle and joint forces. This study aims to assess the sensitivity to the STA of three open-source musculoskeletal models, implemented in OpenSim. A baseline dataset of marker trajectories was created for each model from experimental data of one healthy volunteer. Five hundred STA realizations were then statistically generated using a marker-dependent model of the pelvis and lower limb artefact and added to the baseline data. The STA's impact on the musculoskeletal model estimates was finally quantified using a Monte Carlo analysis. The modelled STA distributions were in line with the literature. Observed output variations were comparable across the three models, and sensitivity to the STA was evident for most investigated quantities. Shape, magnitude and timing of the joint angle and moment time histories were not significantly affected throughout the entire gait cycle, whereas magnitude variations were observed for muscle and joint forces. Ranges of contact force variations differed between joints, with hip variations up to 1.8 times body weight observed. Variations of more than 30% were observed for some of the muscle forces. In conclusion, musculoskeletal simulations using stereophotogrammetry may be safely run when only interested in overall output patterns. Caution should be paid when more accurate estimated values are needed. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
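    For a discrete channel, the capacity being maximized here can be computed with the standard Blahut-Arimoto iteration, sketched below and checked against the closed-form capacity of a binary symmetric channel (the regulatory channels of the paper are continuous, so this is only an analogy to illustrate what "tuning the input distribution for maximal mutual information" means).

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10000):
    """Capacity (bits) of a discrete memoryless channel with transition
    matrix P[x, y] = p(y|x), via the Blahut-Arimoto iteration."""
    n_in = P.shape[0]
    r = np.full(n_in, 1.0 / n_in)            # input distribution estimate
    for _ in range(max_iter):
        q = r[:, None] * P                   # joint p(x, y)
        q /= q.sum(axis=0, keepdims=True)    # posterior p(x|y)
        log_q = np.log(q, out=np.full_like(q, -np.inf), where=q > 0)
        w = np.exp(np.sum(P * np.where(P > 0, log_q, 0.0), axis=1))
        r_new = w / w.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    q = r[:, None] * P                       # mutual information at fixed point
    py = q.sum(axis=0)
    mask = q > 0
    C = np.sum(q[mask] * np.log2(q[mask] / (r[:, None] * py[None, :])[mask]))
    return float(C), r

# Binary symmetric channel, crossover 0.1: C = 1 - H2(0.1) ≈ 0.531 bits
P = np.array([[0.9, 0.1], [0.1, 0.9]])
C, r = blahut_arimoto(P)
```

    For the symmetric channel the optimal input is uniform; for the noisy regulatory motifs of the paper, the analogous optimization is carried out over the continuous input variable.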

  2. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.


  3. Memory-based frame synchronizer. [for digital communication systems

    NASA Technical Reports Server (NTRS)

    Stattel, R. J.; Niswander, J. K. (Inventor)

    1981-01-01

A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
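    The RAM-as-decoder idea maps directly onto a lookup table. The sketch below (a software analogue with invented bit patterns, not the flight hardware) shifts the serial stream through an n-bit window and flags positions where the table recognizes the programmed sync word; reprogramming the table is what gives the design its format flexibility.

```python
def make_recognizer(sync_word, n_bits):
    """Lookup table mapping every n-bit window value to a match flag,
    mimicking the RAM used as an associative decoder."""
    table = [0] * (1 << n_bits)
    table[sync_word] = 1
    return table

def find_sync(bitstream, sync_word, n_bits=16):
    """Slide an n-bit window over the stream (the serial-to-parallel
    converter) and return start positions where the table flags a match."""
    table = make_recognizer(sync_word, n_bits)
    window, mask, hits = 0, (1 << n_bits) - 1, []
    for i, bit in enumerate(bitstream):
        window = ((window << 1) | bit) & mask   # shift in the next bit
        if i >= n_bits - 1 and table[window]:
            hits.append(i - n_bits + 1)
    return hits

# A made-up stream with the 16-bit sync word 0xEB90 embedded at offset 5
SYNC = 0xEB90
bits = [1, 0, 1, 1, 0] + [(SYNC >> (15 - k)) & 1 for k in range(16)] + [0, 1, 1, 0]
```

    A hardware recognizer can also be programmed to accept near-matches (sync words with a few bit errors) simply by setting additional table entries, with no change to the datapath.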

  4. An improved scan laser with a VO2 programmable mirror

    NASA Astrophysics Data System (ADS)

    Chivian, J. S.; Scott, M. W.; Case, W. E.; Krasutsky, N. J.

    1985-04-01

A 10.6-micron scan laser has been constructed and operated with an off-axis cathode ray tube, high reflectance multilayer thin-film structures, and a tapered plasma discharge tube. Equations are given for the switching time of a high-reflectance spot on the VO2 and for the relation of scan laser output power to cavity geometry, cavity losses, and the gain of the active CO2 medium. A scan capability of 2100 easily resolvable directions was demonstrated, and sequential and randomly addressed spot rates of 100,000/sec were achieved. The equations relating output power and cavity mode size were experimentally verified using a nonscanned beam.

  5. Photorefractive detection of tagged photons in ultrasound modulated optical tomography of thick biological tissues.

    PubMed

    Ramaz, F; Forget, B; Atlan, M; Boccara, A C; Gross, M; Delaye, P; Roosen, G

    2004-11-01

    We present a new and simple method to obtain ultrasound modulated optical tomography images in thick biological tissues with the use of a photorefractive crystal. The technique offers the advantage of spatially adapting the output speckle wavefront by analysing the signal diffracted by the interference pattern between this output field and a reference beam, recorded inside the photorefractive crystal. Averaging out due to random phases of the speckle grains vanishes, and we can use a fast single photodetector to measure the ultrasound modulated optical contrast. This technique offers a promising way to make direct measurements within the decorrelation time scale of living tissues.

  6. Model of an axially strained weakly guiding optical fiber modal pattern

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    Axial strain may be determined by monitoring the modal pattern variation of an optical fiber. In this paper we present the results of a numerical model that has been developed to calculate the modal pattern variation at the end of a weakly guiding optical fiber under axial strain. Whenever an optical fiber is under stress, the optical path length, the index of refraction and the propagation constants of each fiber mode change. In consequence, the modal phase term of the fields and the fiber output pattern are also modified. For multimode fibers, very complicated patterns result. The predicted patterns are presented, and an expression for the phase variation with strain is derived.

  7. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798

  8. Distributed Optimization and Control | Grid Modernization | NREL

    Science.gov Websites

    developing an innovative, distributed photovoltaic (PV) inverter control architecture that maximizes PV communications systems to support distribution grid operations. The growth of PV capacity has introduced prescribed limits, while fast variations in PV output tend to cause transients that lead to wear-out of

  9. Tracking Bottom Waters in the Southern Adriatic Sea Applying Seismic Oceanography Techniques

    DTIC Science & Technology

    2011-10-05

    velocities from surface measurements. Geophysics 20. 68-86. Dorman. C.E.. Camiel. S., Cavaleri. L. Sclavo, M.. Chiggiato . J_ et al., 2006. February 2003...A., Poulain. P.-M.. Signell. R.P., Chiggiato . J., Carniel, S.. 2008. Variational analysis of drifter positions and model outputs for the reconstruc

  10. Precision limits of lock-in amplifiers below unity signal-to-noise ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillies, G.T.; Allison, S.W.

    1986-02-01

    An investigation of noise-related performance limits of commercial-grade lock-in amplifiers has been carried out. The dependence of the output measurement error on the input signal-to-noise ratio was established in each case and measurements of noise-related gain variations were made.
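    The regime being characterized, recovering a signal whose amplitude is well below the noise, can be reproduced with a minimal software lock-in (illustrative parameters only): multiply the input by in-phase and quadrature references at the reference frequency and average, which rejects noise power outside a narrow band around the reference.

```python
import numpy as np

rng = np.random.default_rng(6)

def lock_in(signal_amp, noise_std, f_ref=1000.0, fs=100_000.0, duration=1.0):
    """Estimate the amplitude of a sinusoid buried in broadband noise by
    dual-phase synchronous demodulation followed by averaging."""
    t = np.arange(0.0, duration, 1.0 / fs)
    x = signal_amp * np.sin(2 * np.pi * f_ref * t) \
        + rng.normal(0.0, noise_std, t.size)
    i_out = np.mean(x * np.sin(2 * np.pi * f_ref * t))   # in-phase channel
    q_out = np.mean(x * np.cos(2 * np.pi * f_ref * t))   # quadrature channel
    return 2.0 * float(np.hypot(i_out, q_out))           # amplitude estimate

# Input SNR well below unity: amplitude 0.1 against noise sigma 1.0
est = lock_in(signal_amp=0.1, noise_std=1.0)
```

    The averaging time sets the equivalent noise bandwidth, so the residual error on the recovered amplitude shrinks as the integration interval grows, which is exactly the error-versus-input-SNR dependence the record describes measuring.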

  11. REAL-TIME ENERGY INFORMATION AND CONSUMER BEHAVIOR: A META-ANALYSIS AND FORECAST

    EPA Science Inventory

    The meta-analysis of literature and program results will shed light on potential causes of study-to-study variation in information feedback programs and trials. Outputs from the meta-analysis, such as price elasticity, will be used in NEMS to estimate the impact of a nation...

  12. Dualchannel Fuel Control Program.

    DTIC Science & Technology

    1981-08-01

    Generator 1 S Fluidic Speed Sensor and Power Turbine Wheels T = 0.1 s (speed) Recuperator 15 to 19 s Fluidic Temperature Sensor (temperature) T = 0.7 s...tradeoff between the highest sensitivity obtainable (as small a gap as possi- ble) and the noise or output variations due to disc runout . In

  13. Relational Stability in the Expression of Normality, Variation, and Control of Thyroid Function

    PubMed Central

    Hoermann, Rudolf; Midgley, John E. M.; Larisch, Rolf; Dietrich, Johannes W.

    2016-01-01

    Thyroid hormone concentrations only become sufficient to maintain a euthyroid state through appropriate stimulation by pituitary thyroid-stimulating hormone (TSH). In such a dynamic system under constant high pressure, guarding against overstimulation becomes vital. Therefore, several defensive mechanisms protect against accidental overstimulation, such as plasma protein binding, conversion of T4 into the more active T3, active transmembrane transport, counter-regulatory activities of reverse T3 and thyronamines, and negative hypothalamic–pituitary–thyroid feedback control of TSH. TSH has gained a dominant but misguided role in interpreting thyroid function testing in assuming that its exceptional sensitivity thereby translates into superior diagnostic performance. However, TSH-dependent thyroid disease classification is heavily influenced by statistical analytic techniques such as uni- or multivariate-defined normality. This demands a separation of its conjoint roles as a sensitive screening test and accurate diagnostic tool. Homeostatic equilibria (set points) in healthy subjects are less variable and do not follow a pattern of random variation, rather indicating signs of early and progressive homeostatic control across the euthyroid range. In the event of imminent thyroid failure with a reduced FT4 output per unit TSH, conversion efficiency increases in order to maintain FT3 stability. In such situations, T3 stability takes priority over set point maintenance. This suggests a concept of relational stability. These findings have important implications for both TSH reference limits and treatment targets for patients on levothyroxine. The use of archival markers is proposed to facilitate the homeostatic interpretation of all parameters. PMID:27872610

  14. Comparison of the Haemodynamic Effects of Three Different Methods at the Induction of Anaesthesia

    PubMed Central

    Uygur, Mehmet Levent; Ersoy, Ayşın; Altan, Aysel; Ervatan, Zekeriya; Kamalı, Sedat

    2014-01-01

Objective Haemodynamic variations are inevitable during induction of anaesthetic drugs. The present study investigates the haemodynamic variations of three different drugs (thiopental, propofol, and etomidate) used for induction of general anaesthesia together with fentanyl. Methods In a randomized, double-blind study, 45 patients were assigned to one of three groups (n=15 each). Fentanyl 1 μg kg−1 was injected over 60 sec followed by propofol 2 mg kg−1 (Group P), thiopentone 6 mg kg−1 (Group T), or etomidate 0.3 mg kg−1 (Group E). Noninvasive measurements of systolic arterial pressure (SAP), diastolic arterial pressure (DAP), mean arterial pressure (MAP), and heart rate (HR) were performed on admittance, immediately before the induction of anaesthesia, and 1, 3, and 5 min thereafter. Cardiac output (CO) values were recorded before induction, immediately after the injection of the drug, and at 1 min after the intubation. Results In all groups, SAP, DAP, MAP, and CO values decreased over the study period relative to pre-induction values. Following the administration of the induction dose of propofol (Group P), a significantly greater decrease of systolic and diastolic blood pressure was observed than with etomidate (Group E) or thiopentone (Group T). The decrease in CO was also more marked with propofol (Group P) than with etomidate (Group E) or thiopentone (Group T). Conclusion It is concluded that, in this study, the fentanyl-etomidate combination provides better haemodynamic stability than either fentanyl-propofol or fentanyl-thiopental. PMID:27366443

  15. Overview of NASA Lewis Research Center free-piston Stirling engine technology activities applicable to space power systems

    NASA Technical Reports Server (NTRS)

    Slaby, J. G.

    1986-01-01

Free-piston Stirling technology is applicable to both solar- and nuclear-powered systems. As such, the Lewis Research Center serves as the project office to manage the newly initiated SP-100 Advanced Technology Program. This five-year program provides the technology push for providing significant component and subsystem options for increased efficiency, reliability and survivability, and power output growth at reduced specific mass. One of the major elements of the program is the development of advanced power conversion concepts, of which the Stirling cycle is a viable candidate. Under this program, the research findings of the 25 kWe opposed-piston Space Power Demonstrator Engine (SPDE) are presented. Included in the SPDE discussions are initial differences between predicted and experimental power outputs and power output influenced by variations in regenerators. Projections are made for future space power requirements over the next few decades, and a cursory comparison is presented showing the mass benefits that a Stirling system has over a Brayton system for the same peak temperature and output power.

  16. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  17. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP particle-in-cell code, as well as the Monte Carlo N-Particle eXtended (“MCNPX”) code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  18. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP particle-in-cell code, as well as the Monte Carlo N-Particle eXtended ("MCNPX") code, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
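The abstract does not give the model's equations, but the general shape of such a shot-by-shot estimate can be sketched with the classic Kramers thick-target approximation, dI/dE ∝ Z · I(t) · (eV(t) − E) for E < eV(t), time-integrated over the measured traces. Everything below (units, tungsten target, uniform time step) is an assumption for illustration, not the paper's actual model.

```python
import numpy as np

def shot_spectrum(t_s, v_mv, i_ka, energies_mev, z=74):
    """Kramers-law thick-target sketch of a shot-integrated x-ray spectrum.

    t_s          : uniformly spaced sample times (s)
    v_mv, i_ka   : measured voltage (MV) and current (kA) at those times
    energies_mev : photon energy grid (MeV)
    z            : target atomic number (74 = tungsten, assumed)

    At each time step, photons up to the instantaneous endpoint eV(t)
    contribute intensity ~ Z * I * (eV - E); the steps are then summed.
    Returns an unnormalized (arbitrary-units) spectrum.
    """
    spec = np.zeros_like(energies_mev, dtype=float)
    dt = t_s[1] - t_s[0]                     # uniform sampling assumed
    for vk, ik in zip(v_mv, i_ka):
        below_endpoint = energies_mev < vk   # only E < eV(t) contributes
        spec[below_endpoint] += z * ik * (vk - energies_mev[below_endpoint])
    return spec * dt
```

A real analysis would fold in diode impedance, electron-angle effects, and filtration, which is presumably where the paper's simplifying assumptions enter.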

  19. To publish or not to publish? On the aggregation and drivers of research performance

    PubMed Central

    De Witte, Kristof

    2010-01-01

    This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to the traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher’s control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, retention, etc.) of the researchers explain variations in measured research performance. PMID:21057573
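The endogenous weighting the record describes is the DEA "benefit-of-the-doubt" idea: each researcher's outputs are aggregated with the weights most favourable to that researcher, subject to no one scoring above 1 under those same weights. A minimal sketch of that linear program, assuming a plain output matrix and none of the paper's robustness refinements:

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(outputs: np.ndarray, j: int) -> float:
    """Benefit-of-the-doubt (DEA-style) composite score for unit j.

    outputs : (n_units, n_dims) nonnegative matrix of research outputs
              (e.g. papers, citations, teaching scores).

    Solves: max_u  outputs[j] @ u
            s.t.   outputs[k] @ u <= 1 for every unit k, u >= 0,
    i.e. weights are chosen endogenously, as favourably as possible for j.
    """
    n_units, n_dims = outputs.shape
    result = linprog(c=-outputs[j],                 # linprog minimizes
                     A_ub=outputs,                  # each unit's score <= 1
                     b_ub=np.ones(n_units),
                     bounds=[(0, None)] * n_dims)
    return -result.fun

# Three researchers, two output dimensions; units 0 and 1 each dominate
# one dimension, so both can reach the frontier score of 1.
scores = [bod_score(np.array([[2.0, 1.0], [1.0, 2.0], [1.0, 1.0]]), j)
          for j in range(3)]
```

The paper's tailored version additionally handles measurement error and exogenous background factors, which this sketch omits.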

  20. Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2011-06-01

    It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients can be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of the USA. Meteorological parameters from North American Regional Reanalysis (NARR) model outputs were evaluated with comparisons to observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results indicated that the net flux from NARR can be further improved with the additional information from real-time observations.
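The net surface heat flux that such meteorological forcing supplies is typically assembled from bulk formulas: shortwave gain minus net longwave, latent, and sensible losses. The sketch below shows that bookkeeping under generic assumptions; the transfer coefficients and the sign convention (positive into the water) are illustrative, not the values used in the Puget Sound model.

```python
def net_heat_flux(q_sw, q_lw_net, t_sea_c, t_air_c, wind_ms, q_sea, q_air,
                  rho_air=1.22, cp_air=1004.0, l_evap=2.45e6,
                  c_sensible=1.1e-3, c_latent=1.2e-3):
    """Bulk-formula net surface heat flux (W/m^2, positive warms the water).

    q_sw, q_lw_net : downward shortwave and net upward longwave (W/m^2)
    t_sea_c, t_air_c : sea-surface and air temperature (deg C)
    wind_ms        : wind speed (m/s)
    q_sea, q_air   : saturation and air specific humidity (kg/kg)
    The transfer coefficients are typical open-water values (assumed).
    """
    q_sensible = rho_air * cp_air * c_sensible * wind_ms * (t_sea_c - t_air_c)
    q_latent = rho_air * l_evap * c_latent * wind_ms * (q_sea - q_air)
    return q_sw - q_lw_net - q_latent - q_sensible
```

Each input on the right-hand side is exactly the kind of field a reanalysis product such as NARR provides on a regular grid, which is what makes it a practical substitute for sparse station data.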
