Sample records for current models assume

  1. Influence of channel base current and varying return stroke speed on the calculated fields of three important return stroke models

    NASA Technical Reports Server (NTRS)

    Thottappillil, Rajeev; Uman, Martin A.; Diendorfer, Gerhard

    1991-01-01

    Compared here are the calculated fields of the Traveling Current Source (TCS), Modified Transmission Line (MTL), and Diendorfer-Uman (DU) models with the channel base current assumed in Nucci et al. on the one hand and with the channel base current assumed in Diendorfer and Uman on the other. The characteristics of the field wave shapes are shown to be very sensitive to the channel base current, especially the field zero crossing at 100 km for the TCS and DU models, and the magnetic hump after the initial peak at close range for the TCS model. Also, the DU model is theoretically extended to include an arbitrarily varying return stroke speed with height. A brief discussion is presented on the effects of an exponentially decreasing speed with height on the calculated fields for the TCS, MTL, and DU models.
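Transmission-line-type return stroke models such as the MTL prescribe the channel current at every height from the channel base current: the wave travels upward at speed v and is attenuated exponentially with height. A minimal sketch of that prescription; the speed v, decay height, and the double-exponential base-current waveform below are illustrative values, not taken from the paper:

```python
import math

def mtl_current(i_base, z, t, v=1.3e8, lam=2000.0):
    """Channel current at height z (m) and time t (s) in the MTL model:
    i(z, t) = exp(-z/lam) * i_base(t - z/v) for t >= z/v, else 0."""
    if t < z / v:
        return 0.0  # the upward-travelling current wave has not yet reached z
    return math.exp(-z / lam) * i_base(t - z / v)

def i_base(t):
    """Illustrative channel-base current: a simple double-exponential pulse."""
    return 10e3 * (math.exp(-t / 50e-6) - math.exp(-t / 2e-6))

print(mtl_current(i_base, 1000.0, 20e-6))
```

The calculated fields then follow by integrating the contributions of this current distribution over the channel.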

  2. NOTE Effects of skeletal muscle anisotropy on induced currents from low-frequency magnetic fields

    NASA Astrophysics Data System (ADS)

    Tachas, Nikolaos J.; Samaras, Theodoros; Baskourelos, Konstantinos; Sahalos, John N.

    2009-12-01

    Studies which take into account the anisotropy of tissue dielectric properties for the numerical assessment of induced currents from low-frequency magnetic fields are scarce. In the present study, we compare the induced currents in two anatomical models, using the impedance method. In the first model, we assume that all tissues have isotropic conductivity, whereas in the second one, we assume anisotropic conductivity for the skeletal muscle. Results show that tissue anisotropy should be taken into account when investigating the exposure to low-frequency magnetic fields, because it leads to higher induced current values.
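The isotropic baseline that such impedance-method studies refine can be captured by a textbook estimate: for a homogeneous conducting disc of radius r in a uniform sinusoidal field B perpendicular to its plane, Faraday's law gives a rim electric field E = pi * f * B * r and hence a current density J = sigma * E. A back-of-envelope sketch; the conductivity, frequency, field, and radius values are illustrative, not from the study:

```python
import math

def induced_current_density(sigma, f, b, r):
    """Peak induced current density J = sigma * pi * f * B * r (A/m^2)
    at the rim of a homogeneous disc of radius r (m), conductivity
    sigma (S/m), in a uniform sinusoidal field of amplitude b (T) at f (Hz)."""
    return sigma * math.pi * f * b * r

# Illustrative: tissue-like conductivity 0.2 S/m, 50 Hz, 0.1 mT, 0.1 m radius.
j = induced_current_density(0.2, 50.0, 1e-4, 0.1)
print(f"J = {j * 1e3:.3f} mA/m^2")
```

Anisotropic conductivity breaks exactly this symmetry, which is why a full impedance-network model is needed.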

  3. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    NASA Astrophysics Data System (ADS)

    Vašina, P; Hytková, T; Eliáš, M

    2009-05-01

    The majority of current models of the reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in the real experimental set-up, the presence of the magnetic field causes high density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition to this, the heating of the background gas by sputtered particles, which is usually referred to as the gas rarefaction, plays an important role. This paper presents an extended model of the reactive magnetron sputtering that assumes a non-uniform discharge current density and which accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering rather than to the prediction of the coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to the modelling of the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound mode is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnetics currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.
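Models of this family descend from Berg-type balance equations: at steady state, compound formation on the metallic fraction of the target (reactive-gas flux F sticking with probability alpha) balances sputter removal of compound by the ion flux J/q with yield Y_c. A toy steady-state coverage sketch; all parameter values below are illustrative, and the real model resolves this balance radially:

```python
def target_coverage(alpha, F, J, Yc, q=1.602e-19):
    """Steady-state compound coverage theta of the target: formation on
    the metallic fraction, alpha*F*(1-theta), balances sputter removal,
    (J/q)*Yc*theta.  F is the reactive-gas flux (m^-2 s^-1), J the ion
    current density (A/m^2)."""
    ion_flux = J / q
    return alpha * F / (alpha * F + ion_flux * Yc)

print(target_coverage(1.0, 1e20, 10.0, 0.1))
```

A laterally non-uniform J then yields a radially varying coverage, which is the effect studied near the metallic-to-compound transition.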

  4. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    ERIC Educational Resources Information Center

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.

    2010-01-01

    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  5. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices

    NASA Astrophysics Data System (ADS)

    Gandhi, Om P.; Kang, Gang

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  7. Cathode fall model and current-voltage characteristics of field emission driven direct current microplasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkattraman, Ayyaswamy

    2013-11-15

    The post-breakdown characteristics of field emission driven microplasmas are studied theoretically and numerically. A cathode fall model assuming a linearly varying electric field is used to obtain equations governing the operation of steady-state field emission driven microplasmas. The results obtained from the model by solving these equations are compared with particle-in-cell with Monte Carlo collisions simulation results for parameters including the plasma potential, cathode fall thickness, ion number density in the cathode fall, and current density vs voltage curves. The model shows good overall agreement with the simulations but results in slightly overpredicted values for the plasma potential and the cathode fall thickness, attributed to the assumed electric field profile. The current density vs voltage curves obtained show an arc region characterized by a negative slope as well as an abnormal glow discharge region characterized by a positive slope in gaps as small as 10 μm operating at atmospheric pressure. The model also recovers the traditional macroscale current vs voltage theory in the absence of field emission.
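The field emission that drives these microplasmas is conventionally described by a Fowler-Nordheim law, J proportional to F^2 exp(-b phi^{3/2} / F). A sketch using the standard elementary Fowler-Nordheim constants (image-charge corrections omitted); the work function and field values are illustrative, not taken from the paper:

```python
import math

def fowler_nordheim_j(F, phi=4.5):
    """Elementary Fowler-Nordheim current density (A/m^2) for local
    surface field F (V/m) and work function phi (eV); this sketch
    ignores the image-charge (Schottky-Nordheim) correction."""
    a = 1.54e-6  # elementary FN prefactor, A eV V^-2
    b = 6.83e9   # elementary FN exponent constant, eV^-3/2 V m^-1
    return (a / phi) * F**2 * math.exp(-b * phi**1.5 / F)

print(fowler_nordheim_j(5e9))
```

The steep exponential dependence on F is what couples the cathode fall field profile so tightly to the discharge current.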

  8. Macroscopic Quantum Phase-Locking Model for the Quantum Hall Effect

    NASA Astrophysics Data System (ADS)

    Wang, Te-Chun; Gou, Yih-Shun

    1997-08-01

    A macroscopic model of nonlinear dissipative phase-locking between a Josephson-like frequency and a macroscopic electron wave frequency is proposed to explain the Quantum Hall Effect. It is well known that an r.f.-biased Josephson junction displays a collective phase-locking behavior which can be described by a non-autonomous second-order equation or an equivalent 2+1-dimensional dynamical system. Making a direct analogy between the QHE and the Josephson system, this report proposes a numerically solved nonlinear dynamical model for the quantization of the Hall resistance. In this model, the Hall voltage is assumed to be proportional to a Josephson-like frequency and the Hall current is assumed to be related to a coherent electron wave frequency. The Hall resistance is shown to be quantized in units of the fine structure constant as the ratio of these two frequencies is locked into a rational winding number. To explain the sample-width dependence of the critical current, the 2DEG under large applied current is further assumed to develop a Josephson-like junction array in which all Josephson-like frequencies are synchronized. Other remarkable features of the QHE such as the resistance fluctuation and the even-denominator states are also discussed within this picture.
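Whatever the microscopic mechanism, the plateaus this model must reproduce sit at R_n = h / (n e^2) = R_K / n, where R_K is the von Klitzing constant. A quick check with the exact SI-2019 values of h and e:

```python
h = 6.62607015e-34   # Planck constant, J s (exact, SI 2019)
e = 1.602176634e-19  # elementary charge, C (exact, SI 2019)

def hall_resistance(n):
    """Quantized Hall resistance h/(n e^2) in ohms for integer filling n."""
    return h / (n * e**2)

print(hall_resistance(1))  # ~25812.8 ohms, the von Klitzing constant
```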

  9. The magnetosphere of Neptune - Its response to daily rotation

    NASA Technical Reports Server (NTRS)

    Voigt, Gerd-Hannes; Ness, Norman F.

    1990-01-01

    The Neptunian magnetosphere periodically changes every eight hours between a pole-on magnetosphere with only one polar cusp and an earth-type magnetosphere with two polar cusps. In the pole-on configuration, the tail current sheet has an almost circular shape with plasma currents closing entirely within the magnetosphere. Eight hours later the tail current sheet assumes an almost flat shape with plasma currents touching the magnetotail boundary and closing over the tail magnetopause. Magnetic field and tail current sheet configurations have been calculated in a three-dimensional model, but the plasma- and thermodynamic conditions were investigated in a simplified two-dimensional MHD equilibrium magnetosphere. It was found that the free energy in the tail region of the two-dimensional model becomes independent of the dipole tilt angle. It is conjectured that the Neptunian magnetotail might assume quasi-static equilibrium states that make the free energy of the system independent of its daily rotation.

  10. Harmonic mixing characteristics of metal-barrier-metal junctions as predicted by electron tunneling

    NASA Technical Reports Server (NTRS)

    Faris, S. M.; Gustafson, T. K.

    1974-01-01

    The bias dependence of the nonlinear mixing characteristics of metal-barrier-metal junction currents is deduced assuming an electron tunneling model. The difference-frequency beat voltage at frequency ω1 - nω2, where n is an integer and ω1 and ω2 are the assumed frequencies of two induced currents, is found to have n zeros as the diode bias is varied. Recent experimental observations have demonstrated such characteristics.

  11. Analytical model for describing ion guiding through capillaries in insulating polymers

    NASA Astrophysics Data System (ADS)

    Liu, Shi-Dong; Zhao, Yong-Tao; Wang, Yu-Yu; Stolterfoht, N.; Cheng, Rui; Zhou, Xian-Ming; Xu, Hu-Shan; Xiao, Guo-Qing

    2015-08-01

    An analytical description for guiding of ions through nanocapillaries is given on the basis of previous work. The current entering into the capillary is assumed to be divided into a current fraction transmitted through the capillary, a current fraction flowing away via the capillary conductivity and a current fraction remaining within the capillary, which is responsible for its charge-up. The discharging current is assumed to be governed by the Frenkel-Poole process. At higher conductivities the analytical model shows a blocking of the ion transmission, which is in agreement with recent simulations. Also, it is shown that ion blocking observed in experiments is well reproduced by the analytical formula. Furthermore, the asymptotic fraction of transmitted ions is determined. Apart from the key controlling parameter (charge-to-energy ratio), the ratio of the capillary conductivity to the incident current is included in the model. Differences resulting from the nonlinear and linear limits of the Frenkel-Poole discharge are pointed out. Project supported by the Major State Basic Research Development Program of China (Grant No. 2010CB832902) and the National Natural Science Foundation of China (Grant Nos. 11275241, 11275238, 11105192, and 11375034).
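In the linear (ohmic) limit of the Frenkel-Poole discharge discussed above, the charge balance reduces to a relaxation equation: deposited current charges the capillary while leakage drains it in proportion to the accumulated charge, so the charge saturates at the ratio of deposition to leakage rate. A toy Euler integration of dQ/dt = i_dep - k*Q; all parameter values are illustrative, not from the paper:

```python
def charge_up(i_dep, k, dt=1e-3, steps=20000):
    """Euler-integrate dQ/dt = i_dep - k*Q: constant charge deposition
    vs linear (ohmic-limit Frenkel-Poole) leakage.  Returns the final
    charge, which approaches the equilibrium i_dep / k."""
    q = 0.0
    for _ in range(steps):
        q += dt * (i_dep - k * q)
    return q

print(charge_up(1e-9, 0.5))
```

The blocking behaviour in the full model arises when the leakage term grows strong enough (high conductivity) to suppress the guiding field before useful transmission builds up.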

  12. Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns

    NASA Astrophysics Data System (ADS)

    Jia, Shenli; Li, Rui; Li, Xingwen

    2009-10-01

    Wall ablation is a key process taking place in the capillary plasma generator in Electrothermal-Chemical (ETC) guns; its characteristics directly determine the generator's performance. In the present article, this ablation process is theoretically studied. Currently widely used mathematical models of the process are analyzed and compared: a recently developed kinetic model, which accounts for the unsteady state of the plasma-wall transition region by dividing it into two sub-layers, a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer; a model based on the Langmuir law; and a simplified model widely used for the arc-wall interaction process in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The bulk plasma state and parameters are assumed to be identical while analyzing and comparing the models, in order to isolate the differences caused by the models themselves. Finally, the ablation rate is calculated with each method and the differences are discussed.
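The Langmuir-law family of ablation models rests on the Hertz-Knudsen relation: the maximum evaporative mass flux from a surface at temperature T with equilibrium vapour pressure p_v is p_v * sqrt(m / (2*pi*k_B*T)). A minimal sketch; the numerical inputs below are illustrative, not values from the article:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def langmuir_flux(p_v, m, T):
    """Hertz-Knudsen-Langmuir evaporation mass flux (kg m^-2 s^-1):
    vapour pressure p_v (Pa), particle mass m (kg), surface T (K)."""
    return p_v * math.sqrt(m / (2.0 * math.pi * KB * T))

print(langmuir_flux(100.0, 1e-26, 3000.0))
```

The kinetic two-layer model corrects exactly this expression for the non-equilibrium back-flux from the plasma side.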

  13. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
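The assume-guarantee rule being automated here can be illustrated in a toy trace-set semantics: if M1 composed with assumption A satisfies property P, and every trace of M2 satisfies A, then M1 composed with M2 satisfies P. Treating components over a shared alphabet as sets of traces, with composition as intersection and satisfaction as inclusion, gives a minimal sketch; the trace sets below are hypothetical:

```python
def satisfies(component, prop):
    """<true> component <prop>: every trace of the component is allowed."""
    return component <= prop

def compose(m1, m2):
    """Parallel composition over a shared alphabet: common traces only."""
    return m1 & m2

def assume_guarantee(m1, m2, assumption, prop):
    """AG rule premises: <A> M1 <P>  and  <true> M2 <A>."""
    return satisfies(compose(m1, assumption), prop) and satisfies(m2, assumption)

# Tiny example: traces as strings of actions (hypothetical systems).
M1 = {"ab", "ac", "b"}
M2 = {"ab", "b"}
A = {"ab", "b"}         # assumption abstracting M2
P = {"ab", "b", "bb"}   # property to establish
if assume_guarantee(M1, M2, A, P):
    # The rule is sound, so the conclusion must hold as well.
    assert satisfies(compose(M1, M2), P)
```

The hard part that the paper addresses is finding A automatically; here it is handed to us, whereas the proposed method computes it as a refinable abstraction of M2.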

  14. To predict the niche, model colonization and extinction

    Treesearch

    Charles B. Yackulic; James D. Nichols; Janice Reid; Ricky Der

    2015-01-01

    Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species’ niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both...
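Modeling colonization and extinction explicitly, as the authors advocate, typically means iterating occupancy dynamics of the form psi' = psi*(1 - eps) + (1 - psi)*gamma, whose equilibrium gamma/(gamma + eps) need not match the currently observed distribution. A sketch with illustrative rates, not values from the study:

```python
def iterate_occupancy(psi0, gamma, eps, steps=200):
    """Patch-occupancy dynamics: each step, occupied patches go extinct
    with probability eps and empty patches are colonized with
    probability gamma; converges to gamma / (gamma + eps)."""
    psi = psi0
    for _ in range(steps):
        psi = psi * (1.0 - eps) + (1.0 - psi) * gamma
    return psi

print(iterate_occupancy(0.1, 0.2, 0.3))
```

A species far from this equilibrium is exactly the case where the current distribution misrepresents the niche.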

  15. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
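The tau-th quantile in such a model is estimated by minimizing the check (pinball) loss, rho_tau(u) = u * (tau - 1{u < 0}). A minimal sketch of the loss and the sample-quantile property it induces; the data values are illustrative:

```python
def pinball(u, tau):
    """Check loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(data, tau):
    """Pick the candidate value from the data that minimizes total
    pinball loss; for tau = 0.5 this recovers the sample median."""
    return min(data, key=lambda q: sum(pinball(y - q, tau) for y in data))

print(sample_quantile([1, 2, 3, 4, 100], 0.5))  # prints 3
```

Note the tau = 0.5 result is the median, unmoved by the outlier 100, which is the robustness the authors exploit. Their actual estimator replaces this grid search with a concave-convex procedure suited to current status censoring.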

  16. Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlicher, Bob G; Abercrombie, Robert K

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified against the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which also allows us to address previous limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks, that the state transition probabilities are fixed before the game, and that the players' actions are always synchronous; moreover, most such models do not scale with the size and complexity of the systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.
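The game-theoretic baseline that such ABM simulations are verified against is, in the simplest case, the mixed-strategy equilibrium of a 2x2 zero-sum attacker-defender game, which has a closed form. A sketch with an illustrative payoff matrix (no saddle point assumed); the numbers are hypothetical, not from the paper:

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed equilibrium of the zero-sum game [[a, b], [c, d]] (payoffs
    to the row player), assuming no saddle point.  Row plays row 1 with
    probability p, column plays column 1 with probability q."""
    denom = a - b - c + d
    p = (d - c) / denom
    q = (d - b) / denom
    value = p * (q * a + (1 - q) * b) + (1 - p) * (q * c + (1 - q) * d)
    return p, q, value

print(solve_2x2_zero_sum(2, 0, 1, 3))
```

At equilibrium each player is indifferent between their pure actions, which is the fixed point an ABM run should reproduce before being scaled to many attackers and defenders.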

  18. Development of a Lumped Element Circuit Model for Approximation of Dielectric Barrier Discharges

    DTIC Science & Technology

    2011-08-01

    ...species for pulsed direct current (DC) dielectric barrier discharge (DBD) plasmas. Based on experimental observations, it is assumed that nanosecond pulsed DBDs, which have been proposed... Given the fundamental differences between the novel pulsed discharge approach and the more conventional momentum-based approaches...

  19. Analytic model of a magnetically insulated transmission line with collisional flow electrons

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Wagoner, T. C.; Ives, H. C.; Corcoran, P. A.; Cuneo, M. E.; Douglas, J. W.; Gilliland, T. L.; Mazarakis, M. G.; Ramirez, J. J.; Seamen, J. F.; Seidel, D. B.; Spielman, R. B.

    2006-09-01

    We have developed a relativistic-fluid model of the flow-electron plasma in a steady-state one-dimensional magnetically insulated transmission line (MITL). The model assumes that the electrons are collisional and, as a result, drift toward the anode. The model predicts that in the limit of fully developed collisional flow, the relation between the voltage V_a, anode current I_a, cathode current I_k, and geometric impedance Z_0 of a 1D planar MITL can be expressed as V_a = I_a Z_0 h(χ), where h(χ) ≡ [(χ+1)/4(χ-1)]^(1/2) - ln[χ + (χ²-1)^(1/2)]/[2χ(χ-1)] and χ ≡ I_a/I_k. The relation is valid when V_a ≳ 1 MV. In the minimally insulated limit, the anode current I_a,min = 1.78 V_a/Z_0, the electron-flow current I_f,min = 1.25 V_a/Z_0, and the flow impedance Z_f,min = 0.588 Z_0. (The electron-flow current I_f ≡ I_a - I_k. Following Mendel and Rosenthal [Phys. Plasmas 2, 1332 (1995)], we define the flow impedance Z_f as V_a/(I_a² - I_k²)^(1/2).) In the well-insulated limit (i.e., when I_a ≫ I_a,min), the electron-flow current I_f = 9V_a²/(8 I_a Z_0²) and the flow impedance Z_f = 2Z_0/3. Similar results are obtained for a 1D collisional MITL with coaxial cylindrical electrodes, when the inner conductor is at a negative potential with respect to the outer, and Z_0 ≲ 40 Ω. We compare the predictions of the collisional model to those of several MITL models that assume the flow electrons are collisionless. We find that at given values of V_a and Z_0, collisions can significantly increase both I_a,min and I_f,min above the values predicted by the collisionless models, and decrease Z_f,min. When I_a ≫ I_a,min, we find that, at given values of V_a, Z_0, and I_a, collisions can significantly increase I_f and decrease Z_f. Since the steady-state collisional model is valid only when the drift of electrons toward the anode has had sufficient time to establish fully developed collisional flow, and collisionless models assume there is no net electron drift toward the anode, we expect these two types of models to provide theoretical bounds on I_a, I_f, and Z_f.
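The minimally insulated coefficients quoted in the abstract follow from extremizing h(χ): since I_a = V_a / (Z_0 h(χ)), the anode current is smallest where h peaks. A quick numerical check of the 1.78, 1.25, and 0.588 coefficients (working in units of V_a/Z_0, so the particular voltage and impedance drop out):

```python
import math

def h(chi):
    """h(chi) from the collisional-flow relation V_a = I_a * Z_0 * h(chi)."""
    return (math.sqrt((chi + 1.0) / (4.0 * (chi - 1.0)))
            - math.log(chi + math.sqrt(chi**2 - 1.0))
            / (2.0 * chi * (chi - 1.0)))

# Scan chi = I_a/I_k > 1 for the maximum of h, i.e. the minimum anode current.
chis = [1.0 + 0.001 * i for i in range(1, 20000)]
chi_star = max(chis, key=h)
ia_min = 1.0 / h(chi_star)                    # in units of V_a/Z_0
if_min = ia_min - ia_min / chi_star           # electron-flow current
zf_min = 1.0 / math.sqrt(ia_min**2 - (ia_min / chi_star)**2)  # units of Z_0
print(ia_min, if_min, zf_min)
```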

  20. Macroscopic neural mass model constructed from a current-based network model of spiking neurons.

    PubMed

    Umehara, Hiroaki; Okada, Masato; Teramae, Jun-Nosuke; Naruse, Yasushi

    2017-02-01

    Neural mass models (NMMs) are efficient frameworks for describing macroscopic cortical dynamics including electroencephalogram and magnetoencephalogram signals. Originally, these models were formulated on an empirical basis of synaptic dynamics with relatively long time constants. By clarifying the relations between NMMs and the dynamics of microscopic structures such as neurons and synapses, we can better understand cortical and neural mechanisms from a multi-scale perspective. In a previous study, the NMMs were analytically derived by averaging the equations of synaptic dynamics over the neurons in the population and further averaging the equations of the membrane-potential dynamics. However, the averaging of synaptic current assumes that the neuron membrane potentials are nearly time invariant and that they remain at sub-threshold levels to retain the conductance-based model. This approximation limits the NMM to the non-firing state. In the present study, we propose a new derivation of an NMM that instead approximates the synaptic current as independent of the membrane potential, thus adopting a current-based model. Our proposed model releases the constraint of the nearly constant membrane potential. We confirm that the obtained model is reducible to the previous model in the non-firing situation and that it reproduces the temporal mean values and relative power spectrum densities of the average membrane potentials for the spiking neurons. It is further ensured that the existing NMM properly models the averaged dynamics over individual neurons even if they are spiking in the populations.
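The distinction the authors exploit shows up already in a single leaky integrate-and-fire neuron: a conductance-based synapse injects g(t)*(E_syn - V), which depends on the membrane potential V, whereas the current-based approximation injects a drive independent of V, decoupling synaptic input from membrane state and allowing the averaging to survive spiking. A toy current-based neuron; all parameter values are illustrative, not from the paper:

```python
def lif_spikes(i_syn, dt=1e-4, tau=0.02, r=1e7, v_th=0.02, steps=2000):
    """Leaky integrate-and-fire neuron with a current-based synapse: the
    input current i_syn(t) does not depend on the membrane potential v.
    Returns the number of spikes over steps*dt seconds."""
    v, spikes = 0.0, 0
    for step in range(steps):
        v += dt * (-v + r * i_syn(step * dt)) / tau
        if v >= v_th:  # threshold crossing: emit a spike and reset
            spikes += 1
            v = 0.0
    return spikes

# Constant suprathreshold drive produces regular firing.
print(lif_spikes(lambda t: 3e-9))
```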

  1. A Unified Model of Cloud-to-Ground Lightning Stroke

    NASA Astrophysics Data System (ADS)

    Nag, A.; Rakov, V. A.

    2014-12-01

    The first stroke in a cloud-to-ground lightning discharge is thought to follow (or be initiated by) the preliminary breakdown process which often produces a train of relatively large microsecond-scale electric field pulses. This process is poorly understood and rarely modeled. Each lightning stroke is composed of a downward leader process and an upward return-stroke process, which are usually modeled separately. We present a unified engineering model for computing the electric field produced by a sequence of preliminary breakdown, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively-charged channel extends downward in a stepped fashion through the relatively-high-field region between the main negative and lower positive charge centers and then through the relatively-low-field region below the lower positive charge center. A relatively-high-field region is also assumed to exist near ground. The preliminary breakdown pulse train is assumed to be generated when the negatively-charged channel interacts with the lower positive charge region. At each step, an equivalent current source is activated at the lower extremity of the channel, resulting in a step current wave that propagates upward along the channel. The leader deposits net negative charge onto the channel. Once the stepped leader attaches to ground (upward connecting leader is presently neglected), an upward-propagating return stroke is initiated, which neutralizes the charge deposited by the leader along the channel. We examine the effect of various model parameters, such as step length and current propagation speed, on model-predicted electric fields. We also compare the computed fields with pertinent measurements available in the literature.

  2. 14-qubit entanglement: creation and coherence

    NASA Astrophysics Data System (ADS)

    Barreiro, Julio

    2011-05-01

    We report the creation of multiparticle entangled states with up to 14 qubits. By investigating the coherence of up to 8 ions over time, we observe a decay proportional to the square of the number of qubits. The observed decay agrees with a theoretical model which assumes a system affected by correlated, Gaussian phase noise. This model holds for the majority of current experimental systems developed towards quantum computation and quantum metrology. Work done in collaboration with Thomas Monz, Philipp Schindler, Michael Chwalla, Daniel Nigg, William A. Coish, Maximilian Harlander, Wolfgang Haensel, Markus Hennrich, and Rainer Blatt.
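Correlated Gaussian phase noise acting identically on N qubits dephases an N-qubit GHZ-type coherence N² times faster than a single qubit, because the relative phase of the superposition accumulates N times the noise, giving C_N(t) = exp(-N² t / tau). A one-line check of the scaling; tau is an illustrative coherence time:

```python
import math

def ghz_coherence(n, t, tau=1.0):
    """Coherence of an N-qubit GHZ state under collective (correlated)
    Gaussian phase noise: the decay rate scales as N^2."""
    return math.exp(-(n**2) * t / tau)

print(ghz_coherence(14, 0.01))
```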

  3. Testing Signal-Detection Models of Yes/No and Two-Alternative Forced-Choice Recognition Memory

    ERIC Educational Resources Information Center

    Jang, Yoonhee; Wixted, John T.; Huber, David E.

    2009-01-01

    The current study compared 3 models of recognition memory in their ability to generalize across yes/no and 2-alternative forced-choice (2AFC) testing. The unequal-variance signal-detection model assumes a continuous memory strength process. The dual-process signal-detection model adds a thresholdlike recollection process to a continuous…

  4. Asymmetrical Capacitors for Propulsion

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Melcher, Cory; Winet, Edwin

    2004-01-01

    Asymmetrical Capacitor Thrusters have been proposed as a source of propulsion. For over eighty years, it has been known that a thrust results when a high voltage is placed across an asymmetrical capacitor, when that voltage causes a leakage current to flow. However, there is surprisingly little experimental or theoretical data explaining this effect. This paper reports on the results of tests of several Asymmetrical Capacitor Thrusters (ACTs). The thrust they produce has been measured for various voltages, polarities, and ground configurations and their radiation in the VHF range has been recorded. These tests were performed at atmospheric pressure and at various reduced pressures. A simple model for the thrust was developed. The model assumed the thrust was due to electrostatic forces on the leakage current flowing across the capacitor. It was further assumed that this current involves charged ions which undergo multiple collisions with air. These collisions transfer momentum. All of the measured data was consistent with this model. Many configurations were tested, and the results suggest general design principles for ACTs to be used for a variety of purposes.
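The ion-drift picture described above has a standard closed form: if the leakage current I is carried by ions of mobility mu drifting across a gap d, the momentum they transfer to the air yields a thrust F = I*d/mu. A sketch with illustrative values (the mobility of small air ions is of order 2e-4 m² V⁻¹ s⁻¹; the current and gap are hypothetical, not measurements from the paper):

```python
def ehd_thrust(current, gap, mobility=2e-4):
    """Electrohydrodynamic (ionic wind) thrust F = I*d/mu in newtons:
    leakage current I (A), electrode gap d (m), ion mobility mu
    (m^2 V^-1 s^-1)."""
    return current * gap / mobility

print(ehd_thrust(1e-3, 0.03))
```

The linear scaling of thrust with leakage current is the signature this model predicts and the measurements were checked against.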

  5. Modeling of spacecraft charging

    NASA Technical Reports Server (NTRS)

    Whipple, E. C., Jr.

    1977-01-01

    Three types of modeling of spacecraft charging are discussed: statistical models, parametric models, and physical models. Local time dependence of circuit upset for DoD and communication satellites, and electron current to a sphere with an assumed Debye potential distribution are presented. Four regions were involved in spacecraft charging: (1) undisturbed plasma, (2) plasma sheath region, (3) spacecraft surface, and (4) spacecraft equivalent circuit.

  6. Plasmasphere Modeling with Ring Current Heating

    NASA Technical Reports Server (NTRS)

    Guiter, S. M.; Fok, M.-C.; Moore, T. E.

    1995-01-01

    Coulomb collisions between ring current ions and the thermal plasma in the plasmasphere will heat the plasmaspheric electrons and ions. During a storm such heating would lead to significant changes in the temperature and density of the thermal plasma. This was modeled using a time-dependent, one-stream hydrodynamic model for plasmaspheric flows, in which the model flux tube is connected to the ionosphere. The model simultaneously solves the coupled continuity, momentum, and energy equations of a two-ion (H(+) and O(+)) quasineutral, currentless plasma. Heating rates due to collisions with ring current ions were calculated along the field line using a kinetic ring current model. First, diurnally reproducible results were found assuming only photoelectron heating of the thermal electrons. Then results were found with heating of the H(+) ions by the ring current during the recovery phase of a magnetic storm.

  7. Heating of the corona by magnetic singularities

    NASA Technical Reports Server (NTRS)

    Antiochos, Spiro K.

    1990-01-01

    Theoretical models of current-sheet formation and magnetic heating in the solar corona are examined analytically. The role of photospheric connectivity in determining the topology of the coronal magnetic field and its equilibrium properties is explored; nonequilibrium models of current-sheet formation (assuming an initially well connected field) are described; and particular attention is given to models with discontinuous connectivity, where magnetic singularities arise from smooth footpoint motions. It is shown that current sheets arise from connectivities in which the photospheric flux structure is complex, with three or more polarity regions and a magnetic null point within the corona.

  8. Context-dependent decision-making: a simple Bayesian model

    PubMed Central

    Lloyd, Kevin; Leslie, David S.

    2013-01-01

    Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
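
    The single-hypothesis inference scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the class name and the inertia (`rho`) and CRP concentration (`alpha`) parameters are assumed for the example.

```python
import random

class ContextBandit:
    """Single-hypothesis context tracker with Thompson sampling.

    A minimal sketch of the model described above; the inertia 'rho'
    and CRP concentration 'alpha' are illustrative parameter names.
    """
    def __init__(self, n_arms, rho=0.9, alpha=1.0):
        self.n_arms = n_arms
        self.rho = rho          # probability of keeping the current context
        self.alpha = alpha      # CRP concentration for opening a new context
        self.counts = []        # per-context visit counts
        self.beta = []          # per-context [success, failure] pairs per arm
        self.context = self._new_context()

    def _new_context(self):
        self.counts.append(0)
        self.beta.append([[1, 1] for _ in range(self.n_arms)])
        return len(self.counts) - 1

    def _resample_context(self):
        # With probability rho keep the current context (inertia);
        # otherwise draw from a Chinese restaurant process.
        if random.random() < self.rho:
            return self.context
        total = sum(self.counts) + self.alpha
        r = random.random() * total
        for k, c in enumerate(self.counts):
            r -= c
            if r < 0:
                return k
        return self._new_context()

    def act(self):
        self.context = self._resample_context()
        self.counts[self.context] += 1
        # Thompson sampling: draw a success rate per arm, pick the max.
        draws = [random.betavariate(s, f) for s, f in self.beta[self.context]]
        return max(range(self.n_arms), key=lambda a: draws[a])

    def update(self, arm, reward):
        self.beta[self.context][arm][0 if reward else 1] += 1
```

    Contexts are revisited in proportion to how often they have been occupied, with weight `alpha` on opening a new one; `rho` supplies the inertia that keeps the single maintained hypothesis stable between switches.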

  10. Consequences of Psychotherapy Clients' Mental Health Ideology.

    ERIC Educational Resources Information Center

    Milling, Len; Kirsch, Irving

    Current theoretical approaches to understanding emotional difficulties are dominated by the medical model of mental illness, which assumes that emotional dysfunction can be viewed the same way as physical dysfunction. To examine the relationship between psychotherapy clients' beliefs about the medical model of psychotherapy and their behavior…

  11. The October 1973 expendable launch vehicle traffic model, revision 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Traffic model data for current expendable launch vehicles (assuming no space shuttle) for calendar years 1980 through 1991 are presented along with some supporting and summary data. This model was based on a payload program equivalent in scientific return to the October 1973 NASA Payload Model, the NASA estimated non NASA/non DoD Payload Model, and the 1971 DoD Mission Model.

  12. Communication Policy and Theory: Current Perspectives on Mass Communication Research.

    ERIC Educational Resources Information Center

    Bybee, Carl R.; Cahn, Dudley D.

    The integration of American and European mass communication research models would provide a broader sociocultural framework for formulating communication policy. Emphasizing a functional approach, the American diffusionist model assumes that society is a system of interrelated parts naturally tending toward a state of dynamic equilibrium. The…

  13. On the Implications of aerosol liquid water and phase separation for modeled organic aerosol mass

    EPA Science Inventory

    Current chemical transport models assume that organic aerosol (OA)-forming compounds partition mostly to a water-poor, organic-rich phase in accordance with their vapor pressures. However, in the southeast United States, a significant fraction of ambient organic compounds are wat...

  14. Everything You Need to Know about Career Development You Already Know.

    ERIC Educational Resources Information Center

    Spokane, Arnold R.; Richardson, Tina Q.

    1992-01-01

    The search model and compromise model currently dominate thinking about career development. One assumes that individuals form occupational personalities that are the basis for occupational selection; the other considers career choice a lifelong integration of experiences with environmental barriers. These can help the academic advisor guide…

  15. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  16. Discrepancy between simulated and observed ethane and propane levels explained by underestimated fossil emissions

    NASA Astrophysics Data System (ADS)

    Dalsøren, Stig B.; Myhre, Gunnar; Hodnebrog, Øivind; Myhre, Cathrine Lund; Stohl, Andreas; Pisso, Ignacio; Schwietzke, Stefan; Höglund-Isaksson, Lena; Helmig, Detlev; Reimann, Stefan; Sauvage, Stéphane; Schmidbauer, Norbert; Read, Katie A.; Carpenter, Lucy J.; Lewis, Alastair C.; Punjabi, Shalini; Wallasch, Markus

    2018-03-01

    Ethane and propane are the most abundant non-methane hydrocarbons in the atmosphere. However, their emissions, atmospheric distribution, and trends in their atmospheric concentrations are insufficiently understood. Atmospheric model simulations using standard community emission inventories do not reproduce available measurements in the Northern Hemisphere. Here, we show that observations of pre-industrial and present-day ethane and propane can be reproduced in simulations with a detailed atmospheric chemistry transport model, provided that natural geologic emissions are taken into account and anthropogenic fossil fuel emissions are assumed to be two to three times higher than is indicated in current inventories. Accounting for these enhanced ethane and propane emissions results in simulated surface ozone concentrations that are 5-13% higher than previously assumed in some polluted regions in Asia. The improved correspondence with observed ethane and propane in model simulations with greater emissions suggests that the level of fossil (geologic + fossil fuel) methane emissions in current inventories may need re-evaluation.

  17. Analysis of weak interactions and Eotvos experiments

    NASA Technical Reports Server (NTRS)

    Hsu, J. P.

    1978-01-01

    The intermediate-vector-boson model is preferred over the current-current model as a basis for calculating effects due to weak self-energy. Attention is given to a possible violation of the equivalence principle by weak-interaction effects, and it is noted that effects due to weak self-energy are at least an order of magnitude greater than those due to the weak binding energy for typical nuclei. It is assumed that the weak and electromagnetic energies are independent.

  18. Medium-Term Prospects for the Mexican Economy: Some Modeling Results

    DTIC Science & Technology

    1990-07-01

    Mexican prospects. A base case scenario illustrates that without a net inflow of foreign capital, the peso cannot be sustained at current real levels... peso and no decline in real income. The model can also produce a pessimistic scenario that suggests the worst that might happen to the Mexican economy...net inflow of foreign capital (in the form of either lending or direct investment) the peso cannot be sustained at current real levels (assuming that

  19. An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model

    ERIC Educational Resources Information Center

    Tao, Wei; Cao, Yi

    2016-01-01

    Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…

  20. A Bayesian Model for the Estimation of Latent Interaction and Quadratic Effects When Latent Variables Are Non-Normally Distributed

    ERIC Educational Resources Information Center

    Kelava, Augustin; Nagengast, Benjamin

    2012-01-01

    Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent…

  1. Pseudo and conditional score approach to joint analysis of current count and current status data.

    PubMed

    Wen, Chi-Chung; Chen, Yi-Hau

    2018-04-17

    We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, which are also known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent, with convergence rates of the square and cubic roots of the sample size, respectively. Asymptotic normality with closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to fracture-osteoporosis survey data to identify risk factors jointly for fracture and osteoporosis in elders, while accounting for association between the two events within a subject. © 2018, The International Biometric Society.

  2. Off-Axis Driven Current Effects on ETB and ITB Formations based on Bifurcation Concept

    NASA Astrophysics Data System (ADS)

    Pakdeewanich, J.; Onjun, T.; Chatthong, B.

    2017-09-01

    This research studies plasma performance in a fusion tokamak system by investigating parameters such as plasma pressure in the presence of an edge transport barrier (ETB) and an internal transport barrier (ITB) as the off-axis driven current position is varied. The plasma is modeled based on the bifurcation concept using a suppression function that can result in the formation of transport barriers. In this model, thermal and particle transport equations, including both neoclassical and anomalous effects, are solved simultaneously in slab geometry. The neoclassical coefficients are assumed to be constant, while the anomalous coefficients depend on gradients of local pressure and density. The suppression function, depending on flow shear and magnetic shear, is assumed to affect only the anomalous channel. The flow shear can be calculated from the force balance equation, while the magnetic shear is calculated from the given plasma current. It is found that as the position of the driven current peak is moved outwards from the plasma center, the central pressure is increased, but at some point it starts to decline, mostly when the driven current peak has reached the outer half of the plasma. The higher pressure value results from the combination of ETB and ITB formations. The drop in central pressure occurs because the ITB starts to disappear.

  3. Modeling the spatial distribution of forest crown biomass and effects on fire behavior with FUEL3D and WFDS

    Treesearch

    Russell A. Parsons; William Mell; Peter McCauley

    2010-01-01

    Crown fire poses challenges to fire managers and can endanger fire fighters. Understanding of how fire interacts with tree crowns is essential to informed decisions about crown fire. Current operational crown fire predictions in the United States assume homogeneous crown fuels. While a new class of research fire models, which model fire behavior with computational...

  4. The Galactic Nova Rate Revisited

    NASA Astrophysics Data System (ADS)

    Shafter, A. W.

    2017-01-01

    Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to include the entire Galaxy, using a two-component disk plus bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to in excess of 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31/-23) per year, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.
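
    The extrapolation-plus-Monte-Carlo step described above can be sketched as follows. Every input distribution (the observed bright-nova rate, the completeness fraction, and the Galaxy-wide extrapolation factor) is an illustrative placeholder, not a value fitted in the paper.

```python
import random

def galactic_nova_rate(n_trials=10000, seed=1):
    """Monte Carlo sketch of extrapolating an observed bright-nova rate
    to the whole Galaxy. All distributions are illustrative placeholders,
    not the paper's fitted values."""
    random.seed(seed)
    rates = []
    for _ in range(n_trials):
        observed = max(random.gauss(0.4, 0.1), 0.0)   # bright novae (m <= 2) per year
        completeness = random.uniform(0.4, 0.9)       # fraction of bright novae caught
        extrapolation = max(random.gauss(100.0, 25.0), 1.0)  # Galaxy / bright-sample factor
        rates.append(observed / completeness * extrapolation)
    rates.sort()
    return (rates[len(rates) // 2],          # median rate
            rates[int(0.16 * len(rates))],   # ~16th percentile
            rates[int(0.84 * len(rates))])   # ~84th percentile
```

    Propagating draws through the chain and reading off percentiles is what turns a single extrapolated number into an asymmetric confidence interval of the form quoted in the abstract.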

  5. Estimated effects of temperature on secondary organic aerosol concentrations.

    PubMed

    Sheehan, P E; Bowman, F M

    2001-06-01

    The temperature dependence of secondary organic aerosol (SOA) concentrations is explored using an absorptive-partitioning model under a variety of simplified atmospheric conditions. Experimentally determined partitioning parameters for high yield aromatics are used. Variation of vapor pressures with temperature is assumed to be the main source of temperature effects. Known semivolatile products are used to define a modeling range of vaporization enthalpy of 10-25 kcal/mol. The effect of diurnal temperature variations on model predictions for various assumed vaporization enthalpies, precursor emission rates, and primary organic concentrations is explored. Results show that temperature is likely to have a significant influence on SOA partitioning and resulting SOA concentrations. A 10 degrees C decrease in temperature is estimated to increase SOA yields by 20-150%, depending on the assumed vaporization enthalpy. In model simulations, high daytime temperatures tend to reduce SOA concentrations by 16-24%, while cooler nighttime temperatures lead to a 22-34% increase, compared to constant temperature conditions. Results suggest that currently available constant temperature partitioning coefficients do not adequately represent atmospheric SOA partitioning behavior. Air quality models neglecting the temperature dependence of partitioning are expected to underpredict peak SOA concentrations as well as mistime their occurrence.
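
    The temperature dependence enters through a Clausius-Clapeyron correction of the absorptive partitioning coefficient. A minimal sketch, assuming a 298 K reference temperature and a vaporization enthalpy inside the 10-25 kcal/mol modeling range quoted above (the function name and default values are illustrative):

```python
import math

R = 1.986e-3  # gas constant, kcal mol^-1 K^-1

def partition_coeff(K_ref, T, T_ref=298.0, dH_vap=15.0):
    """Temperature-corrected absorptive partitioning coefficient
    (Clausius-Clapeyron form). dH_vap is in kcal/mol; the 15 kcal/mol
    default is an assumed midpoint of the 10-25 range, not a fitted value."""
    return K_ref * (T / T_ref) * math.exp((dH_vap / R) * (1.0 / T - 1.0 / T_ref))
```

    With the default 15 kcal/mol, a 10 K drop below the reference temperature raises the partitioning coefficient by roughly a factor of two, consistent in direction with the yield increases reported above.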

  6. Wave scattering in spatially inhomogeneous currents

    NASA Astrophysics Data System (ADS)

    Churilov, Semyon; Ermakov, Andrei; Stepanyants, Yury

    2017-09-01

    We analytically study a scattering of long linear surface waves on stationary currents in a duct (canal) of constant depth and variable width. It is assumed that the background velocity linearly increases or decreases with the longitudinal coordinate due to the gradual variation of duct width. Such a model admits an analytical solution of the problem in hand, and we calculate the scattering coefficients as functions of incident wave frequency for all possible cases of sub-, super-, and transcritical currents. For completeness we study both cocurrent and countercurrent wave propagation in accelerating and decelerating currents. The results obtained are analyzed in application to recent analog gravity experiments and shed light on the problem of hydrodynamic modeling of Hawking radiation.

  7. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience.

    PubMed

    Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded

    2017-07-01

    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Micromagnetic analysis of current-induced domain wall motion in a bilayer nanowire with synthetic antiferromagnetic coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komine, Takashi, E-mail: komine@mx.ibaraki.ac.jp; Aono, Tomosuke

    We demonstrate current-induced domain wall motion in a bilayer nanowire with synthetic antiferromagnetic (SAF) coupling by modeling the two-body problem for the domain wall equations of motion. The influence of interlayer exchange coupling and magnetostatic interactions on current-induced domain wall motion in SAF nanowires was also investigated. Assuming the rigid wall model for translational motion, the interlayer exchange coupling and the magnetostatic interaction between walls and domains in SAF nanowires enhance domain wall speed without any spin-orbit torque. The enhancement of domain wall speed is discussed in terms of the energy distribution as a function of the wall-angle configuration in bilayer nanowires.

  9. 78 FR 18187 - Magnuson-Stevens Fishery Conservation and Management Act Provisions; Fisheries of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... natural mortality rate (M) is 0.2 (M=0.2 model). The second assessment model assumes that M has increased... AMs (e.g., state waters). Currently, only the common pool fishery has a sub-ACL for SNE/MA windowpane flounder. The stock is not allocated to sectors, and therefore, all sector and common pool catch is...

  10. Modeling the process of interaction of 10 keV electrons with a plane dielectric surface

    NASA Astrophysics Data System (ADS)

    Vokhmyanina, Kristina; Sotnikova, Valentina; Sotnikov, Alexey; Kaplii, Anna; Nikulicheva, Tatyana; Kubankin, Alexandr; Kishin, Ivan

    2018-05-01

    The effect of guiding of charged particles by dielectric channels is of noticeable interest at the present time. The phenomenon is widely studied experimentally and theoretically, but some points still need to be clarified. A previously developed model of the interaction of fast electrons with a dielectric surface at grazing incidence is used to study the independence of the electron deflection from the value of the electron beam current. The calculations were performed assuming a smooth dependence of the surface conductivity on the beam current in the 40-3000 nA range.

  11. On the behavior of return stroke current and the remotely detected electric field change waveform

    NASA Astrophysics Data System (ADS)

    Shao, Xuan-Min; Lay, Erin; Jacobson, Abram R.

    2012-04-01

    After accumulating a large number of remotely recorded negative return stroke electric field change waveforms, a subtle but persistent kink was found following the main return stroke peak by several microseconds. To understand the corresponding return stroke current properties behind the kink and the general return stroke radiation waveform, we analyze strokes occurring in triggered lightning flashes for which both the channel base current and the simultaneous remote electric radiation field have been measured. In this study, the channel base current is assumed to propagate along the return stroke channel in a dispersive and lossy manner. The measured channel base current is band-pass filtered, and the higher-frequency component is assumed to attenuate faster than the lower-frequency component. The radiation electric field is computed for such a current behavior and is then propagated to distant sensors. It is found that such a return stroke model is capable of very closely reproducing the measured electric waveforms at multiple stations for the triggered return strokes, and such a model is considered applicable to the common behavior of the natural return stroke as well. On the basis of the analysis, a number of other observables are derived. The time-evolving current dispersion and attenuation compare well with previously reported optical observations. The observable speed tends to agree with optical and VHF observations. The line charge density that is removed or deposited by the return stroke is derived, and the implication of the charge density distribution for leader channel decay is discussed.
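
    The dispersive, lossy propagation idea can be sketched by splitting the base current into low- and high-frequency parts and attenuating the high-frequency part faster with height. Everything below (the moving-average band split, the attenuation lengths, the speed, the sample interval) is an assumed toy illustration, not the authors' model:

```python
import math

def attenuated_current(i_base, height, v=1.5e8, dt=1e-7, win=5,
                       d_hf=1000.0, d_lf=5000.0):
    """Propagate a sampled channel base current to a given height,
    attenuating the high-frequency residual faster than the
    low-frequency envelope. d_hf/d_lf are assumed attenuation
    lengths (m); v is an assumed return stroke speed (m/s)."""
    n = len(i_base)
    half = win // 2
    # Low-frequency part: simple moving average; high-frequency: residual.
    lf = []
    for k in range(n):
        seg = i_base[max(0, k - half):k + half + 1]
        lf.append(sum(seg) / len(seg))
    hf = [i_base[k] - lf[k] for k in range(n)]
    a_lf = math.exp(-height / d_lf)   # slow decay of the low-frequency part
    a_hf = math.exp(-height / d_hf)   # faster decay of the high-frequency part
    shift = int(height / (v * dt))    # propagation delay in samples
    out = [0.0] * n
    for k in range(n - shift):
        out[k + shift] = a_lf * lf[k] + a_hf * hf[k]
    return out
```

    With the defaults, a pulse evaluated at 150 m arrives 10 samples later with its sharp peak damped more strongly than its slow envelope, which is the qualitative dispersion-and-loss behavior the model above relies on.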

  12. Atmospheric circulation of eccentric hot Jupiter HAT-P-2B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Nikole K.; Showman, Adam P.; Fortney, Jonathan J.

    The hot Jupiter HAT-P-2b has become a prime target for Spitzer Space Telescope observations aimed at understanding the atmospheric response of exoplanets on highly eccentric orbits. Here we present a suite of three-dimensional atmospheric circulation models for HAT-P-2b that investigate the effects of assumed atmospheric composition and rotation rate on global scale winds and thermal patterns. We compare and contrast atmospheric models for HAT-P-2b, which assume one and five times solar metallicity, both with and without TiO/VO as atmospheric constituents. Additionally we compare models that assume a rotation period of half, one, and two times the nominal pseudo-synchronous rotation period. We find that changes in assumed atmospheric metallicity and rotation rate do not significantly affect model predictions of the planetary flux as a function of orbital phase. However, models in which TiO/VO are present in the atmosphere develop a transient temperature inversion between the transit and secondary eclipse events that results in significant variations in the timing and magnitude of the peak of the planetary flux compared with models in which TiO/VO are omitted from the opacity tables. We find that no one single atmospheric model can reproduce the recently observed full orbit phase curves at 3.6, 4.5 and 8.0 μm, which is likely due to a chemical process not captured by our current atmospheric models for HAT-P-2b. Further modeling and observational efforts focused on understanding the chemistry of HAT-P-2b's atmosphere are needed and could provide key insights into the interplay between radiative, dynamical, and chemical processes in a wide range of exoplanet atmospheres.

  13. A manpower calculus: the implications of SUO fellowship expansion on oncologic surgeon case volumes.

    PubMed

    See, William A

    2014-01-01

    Society of Urologic Oncology (SUO)-accredited fellowship programs have undergone substantial expansion. This study developed a mathematical model to estimate future changes in urologic oncologic surgeon (UOS) manpower and analyzed the effect of those changes on per-UOS case volumes. SUO fellowship program directors were queried as to the number of positions available on an annual basis. Current US UOS manpower was estimated from the SUO membership list. Future manpower was estimated on an annual basis by linear senescence of existing manpower combined with linear growth of newly trained surgeons. Case-volume estimates for the 4 surgical disease sites (prostate, kidney/renal pelvis, bladder, and testes) were obtained from the literature. The future number of major cases was determined from current volumes based upon the US population growth rates and the historic average annual change in disease incidence. Two models were used to predict future per-UOS major case volumes. Model 1 assumed the current distribution of cases between nononcologic surgeons and UOS would continue. Model 2 assumed a progressive redistribution of cases over time such that in 2043 100% of major urologic cancer cases would be performed by UOSs. Over the 30-year period to "manpower steady-state" SUO-accredited UOSs practicing in the United States have the potential to increase from approximately 600 currently to 1,650 in 2043. During this interval, case volumes are predicted to change 0.97-, 2.4-, 1.1-, and 1.5-fold for prostatectomy, nephrectomy, cystectomy, and retroperitoneal lymph node dissection, respectively. The ratio of future to current total annual case volumes is predicted to be 0.47 and 0.9 for models 1 and 2, respectively. The number of annual US practicing graduates necessary to achieve a future to current case-volume ratio greater than 1 is 25 and 49 in models 1 and 2, respectively. 
The current number of SUO fellowship trainees has the potential to decrease future per-UOS case volumes relative to current levels. Redistribution of existing case volume, a decrease in the annual number of trainees, or both would be required to ensure sufficient surgical volumes for skill maintenance and optimal patient outcomes. Published by Elsevier Inc.
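
    The manpower projection (linear senescence of the existing workforce plus linear accumulation of new graduates) can be sketched as follows. The start year, career length, and graduates-per-year figure are assumptions chosen only so the defaults reproduce the abstract's 600-to-1,650 trajectory:

```python
def uos_manpower(year, current=600, grads_per_year=55,
                 career_len=30, start=2013):
    """Sketch of the linear senescence-plus-growth manpower model
    described above; all default parameter values are illustrative
    assumptions, not the paper's inputs."""
    t = year - start
    # Existing workforce retires linearly over one career length.
    remaining = max(current * (1 - t / career_len), 0)
    # New graduates accumulate, capped at one career length of cohorts.
    new = grads_per_year * min(t, career_len)
    return remaining + new
```

    At the 30-year "manpower steady-state" the original cohort has fully retired and the workforce equals graduates per year times career length; per-UOS case volume is then simply the projected total case count divided by this headcount.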

  14. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation

    NASA Astrophysics Data System (ADS)

    Reinoso, J.; Paggi, M.; Linder, C.

    2017-06-01

    Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the EAS method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, including linear and nonlinear hyperelastic constitutive models.

  15. Faster Teaching via POMDP Planning

    ERIC Educational Resources Information Center

    Rafferty, Anna N.; Brunskill, Emma; Griffiths, Thomas L.; Shafto, Patrick

    2016-01-01

    Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention on how to plan teaching actions and how the assumed student model…

  16. Nowcasting and Forecasting Concentrations of Biological Contaminants at Beaches: A Feasibility and Case Study

    EPA Science Inventory

    Public concern over microbial contamination of recreational waters has increased in recent years. A common approach to evaluating beach water quality has been to use the persistence model which assumes that day-old monitoring results provide accurate estimates of current concentr...

  17. Simulation of electromagnetic ion cyclotron triggered emissions in the Earth's inner magnetosphere

    NASA Astrophysics Data System (ADS)

    Shoji, Masafumi; Omura, Yoshiharu

    2011-05-01

    In a recent observation by the Cluster spacecraft, emissions triggered by electromagnetic ion cyclotron (EMIC) waves were discovered in the inner magnetosphere. We perform hybrid simulations to reproduce the EMIC triggered emissions. We develop a self-consistent one-dimensional hybrid code with a cylindrical geometry of the background magnetic field. We assume a parabolic magnetic field to model the dipole magnetic field in the equatorial region of the inner magnetosphere. Triggering EMIC waves are driven by a left-handed polarized external current assumed at the magnetic equator in the simulation model. Cold proton, helium, and oxygen ions, which form branches of the dispersion relation of the EMIC waves, are uniformly distributed in the simulation space. Energetic protons with a loss cone distribution function are also assumed as resonant particles. We reproduce rising tone emissions in the simulation space, finding a good agreement with the nonlinear wave growth theory. In the energetic proton velocity distribution we find formation of a proton hole, which is assumed in the nonlinear wave growth theory. A substantial amount of the energetic protons are scattered into the loss cone, while some of the resonant protons are accelerated to higher pitch angles, forming a pancake velocity distribution.

  18. Robustness of Feedback Systems with Several Modelling Errors

    DTIC Science & Technology

    1990-06-01

    …feedback systems with several sources of modelling uncertainty. We assume that each source of uncertainty is modelled as a stable unstructured…

  19. Current-voltage scaling of a Josephson-junction array at irrational frustration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granato, E.

    1996-10-01

    Numerical simulations of the current-voltage characteristics of an ordered two-dimensional Josephson-junction array at an irrational flux quantum per plaquette are presented. The results are consistent with a scaling analysis that assumes a zero-temperature vortex-glass transition. The thermal-correlation length exponent characterizing this transition is found to be significantly different from the corresponding value for vortex-glass models in disordered two-dimensional superconductors. This leads to a current scale where nonlinearities appear in the current-voltage characteristics decreasing with temperature T roughly as T², in contrast with the T³ behavior expected for disordered models. © 1996 The American Physical Society.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkaev, N. V.; Semenov, V. S.; Biernat, H. K.

    A Hall magnetohydrodynamic model is investigated for current sheet flapping oscillations, which implies a gradient of the normal magnetic field component. For the initial undisturbed current sheet structure, the normal magnetic field component is assumed to have a weak linear variation. The profile of the electric current velocity is described by hyperbolic functions with a maximum at the center of the current sheet. In the framework of this model, eigenfrequencies are calculated as functions of the wave number for the "kink" and "sausage" flapping wave modes. Because of the Hall effects, the flapping eigenfrequency is larger for waves propagating along the electric current, and it is smaller for wave propagation in the opposite direction with respect to the current. The asymmetry of the flapping wave propagation, caused by Hall effects, is more pronounced for thinner current sheets. This is due to the Doppler effect related to the electric current velocity.

  1. Human Factors in the Design and Evaluation of Air Traffic Control Systems

    DTIC Science & Technology

    1995-04-01

    …the controller must filter through and decipher. Fortunately, some of this is done without the need for conscious attention; for example, a clear… …in support of our performance of daily activities, including our job tasks. Two models of attention currently in use assume that human infor…

  2. Improving Hall Thruster Plume Simulation through Refined Characterization of Near-field Plasma Properties

    NASA Astrophysics Data System (ADS)

    Huismann, Tyler D.

    Due to the rapidly expanding role of electric propulsion (EP) devices, it is important to evaluate their integration with other spacecraft systems. Specifically, EP device plumes can play a major role in spacecraft integration, and as such, accurate characterization of plume structure bears on mission success. This dissertation addresses issues related to accurate prediction of plume structure in a particular type of EP device, a Hall thruster. This is done in two ways: first, by coupling current plume simulation models with current models that simulate a Hall thruster's internal plasma behavior; second, by improving plume simulation models and thereby increasing physical fidelity. These methods are assessed by comparing simulated results to experimental measurements. Assessment indicates the two methods improve plume modeling capabilities significantly: using far-field ion current density as a metric, these approaches used in conjunction improve agreement with measurements by a factor of 2.5, as compared to previous methods. Based on comparison to experimental measurements, recent computational work on discharge chamber modeling has been largely successful in predicting properties of internal thruster plasmas. This model can provide detailed information on plasma properties at a variety of locations. Frequently, experimental data are not available at the locations of interest to computational models. In the absence of experimental data, there are few alternatives for scientifically determining the plasma properties needed as inputs to plume simulations. Therefore, this dissertation focuses on coupling current models that simulate internal thruster plasma behavior with plume simulation models. Further, recent experimental work on atom-ion interactions has provided a better understanding of particle collisions within plasmas. This experimental work is used to update collision models in a current plume simulation code.
Previous versions of the code assume an unknown dependence between particles' pre-collision velocities and post-collision scattering angles. This dissertation focuses on updating several of these types of collisions by assuming a curve fit based on the measurements of atom-ion interactions, such that previously unknown angular dependences are well-characterized.

  3. Game Relativity: How Context Influences Strategic Decision Making

    ERIC Educational Resources Information Center

    Vlaev, Ivo; Chater, Nick

    2006-01-01

    Existing models of strategic decision making typically assume that only the attributes of the currently played game need be considered when reaching a decision. The results presented in this article demonstrate that the so-called "cooperativeness" of the previously played prisoner's dilemma games influence choices and predictions in the current…

  4. The Egalitarian Relationship in Feminist Therapy

    ERIC Educational Resources Information Center

    Rader, Jill; Gilbert, Lucia Albino

    2005-01-01

    Feminist therapy has revolutionized clinical practice and offered a model of empowerment for all therapy approaches. However, the long-assumed claim that feminist therapists are more likely to engage in power-sharing behaviors with their clients has not been supported by published quantitative research. In the current study, 42 female therapists…

  5. AN ACCELERATION MECHANISM FOR NEUTRON PRODUCTION IN Z-PINCH DISCHARGES,

    DTIC Science & Technology

    A model has been developed for the acceleration of deuterons in the tightly compressed column of a z-pinch discharge, in particular that of a plasma focus discharge. It was assumed that an annular current distribution undergoes a rapidly contracting transition to an axially peaked distribution, and…

  6. Modelling of current loads on aquaculture net cages

    NASA Astrophysics Data System (ADS)

    Kristiansen, Trygve; Faltinsen, Odd M.

    2012-10-01

    In this paper we propose and discuss a screen type of force model for the viscous hydrodynamic load on nets. The screen model assumes that the net is divided into a number of flat net panels, or screens. It may thus be applied to any kind of net geometry. In this paper we focus on circular net cages for fish farms. The net structure itself is modelled by an existing truss model. The net shape is solved for in a time-stepping procedure that involves solving a linear system of equations for the unknown tensions at each time step. We present comparisons to experiments with circular net cages in steady current, and discuss the sensitivity of the numerical results to a set of chosen parameters. Satisfactory agreement between experimental and numerical prediction of drag and lift as function of the solidity ratio of the net and the current velocity is documented.
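The screen decomposition described above lends itself to a short sketch. The fragment below computes the viscous drag and lift on a single flat net panel from the dynamic pressure, a solidity-dependent coefficient, and the inflow angle; the constants C1 and C2 and the angular dependence are invented placeholders for illustration, not the calibrated parameterization of the paper.

```python
import math

# Illustrative constants -- NOT the paper's calibrated values.
C1, C2 = 1.0, 2.0          # assumed drag-coefficient fit parameters
RHO = 1025.0               # seawater density [kg/m^3]

def panel_load(solidity, area, u, theta):
    """Drag and lift [N] on one flat net panel at inflow angle theta [rad].

    Screen idea: the net is split into flat panels, and each panel takes a
    drag/lift proportional to the dynamic pressure times its area, with
    coefficients depending on the solidity ratio and inflow angle.
    """
    q = 0.5 * RHO * u**2 * area                                   # 0.5*rho*U^2*A
    base = C1 * solidity + C2 * solidity**2                       # assumed Cd(Sn) form
    drag = base * math.cos(theta) * q                             # assumed angle dependence
    lift = base * 0.5 * math.sin(2.0 * theta) * q                 # assumed angle dependence
    return drag, lift

# One panel of 1 m^2, solidity 0.25, facing a 0.5 m/s current head-on:
drag, lift = panel_load(solidity=0.25, area=1.0, u=0.5, theta=0.0)
```

In the full model, these panel loads become the right-hand side of the truss equations that are solved for the unknown tensions at each time step.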

  7. Cost/benefit trade-offs for reducing the energy consumption of commercial air transportation (RECAT)

    NASA Technical Reports Server (NTRS)

    Gobetz, F. W.; Dubin, A. P.

    1976-01-01

    A study has been performed to evaluate the opportunities for reducing the energy requirements of the U.S. domestic air passenger transport system through improved operational techniques, modified in-service aircraft, derivatives of current production models, or new aircraft using either current or advanced technology. Each of the fuel-conserving alternatives has been investigated individually to test its potential for fuel conservation relative to a hypothetical baseline case in which current, in-production aircraft types are assumed to operate, without modification and with current operational techniques, into the future out to the year 2000.

  8. Revisiting Studies of the Statistical Property of a Strong Gravitational Lens System and Model-Independent Constraint on the Curvature of the Universe

    NASA Astrophysics Data System (ADS)

    Xia, Jun-Qing; Yu, Hai; Wang, Guo-Jian; Tian, Shu-Xun; Li, Zheng-Xiang; Cao, Shuo; Zhu, Zong-Hong

    2017-01-01

    In this paper, we use a recently compiled data set, which comprises 118 galactic-scale strong gravitational lensing (SGL) systems, to constrain the statistical property of the SGL system as well as the curvature of the universe without assuming any fiducial cosmological model. Based on the singular isothermal ellipsoid (SIE) model of the SGL system, we obtain that the constrained curvature parameter Ω_k is close to zero from the SGL data, which is consistent with the latest result of the Planck measurement. More interestingly, we find that the parameter f in the SIE model is strongly correlated with the curvature Ω_k. Neglecting this correlation in the analysis will significantly overestimate the constraining power of the SGL data on the curvature. Furthermore, the obtained constraint on f is different from previous results: f = 1.105 ± 0.030 (68% confidence level [C.L.]), which means that the standard singular isothermal sphere (SIS) model (f = 1) is disfavored by the current SGL data at more than a 3σ C.L. We also divide all of the SGL data into two parts according to the centric stellar velocity dispersion σ_c and find that the larger the value of σ_c for the subsample, the more favored the standard SIS model is. Finally, we extend the SIE model by assuming power-law density profiles for the total mass density, ρ = ρ₀(r/r₀)^(−α), and luminosity density, ν = ν₀(r/r₀)^(−δ), and obtain the constraints on the power-law indices: α = 1.95 ± 0.04 and δ = 2.40 ± 0.13 at a 68% C.L. When assuming the power-law index α = δ = γ, this scenario is totally disfavored by the current SGL data, with χ²_min,γ − χ²_min,SIE ≃ 53.

  9. Spreading Activation in an Attractor Network with Latching Dynamics: Automatic Semantic Priming Revisited

    PubMed Central

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2012-01-01

    Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In the present study we implemented SA in an attractor neural network model with distributed representations and created a unified framework for the two approaches. Our model assumes a synaptic depression mechanism leading to autonomous transitions between encoded memory patterns (latching dynamics), which accounts for the major characteristics of automatic semantic priming in humans. Using computer simulations, we demonstrated how findings that challenged attractor-based networks in the past, such as mediated and asymmetric priming, are a natural consequence of our present model's dynamics. Puzzling results regarding backward priming were also given a straightforward explanation. In addition, the current model addresses some of the differences between semantic and associative relatedness and explains how these differences interact with stimulus onset asynchrony in priming experiments. PMID:23094718

  10. Phenomenological description of depoling current in Pb0.99Nb0.02(Zr0.95Ti0.05)0.98O3 ferroelectric ceramics under shock wave compression: Relaxation model

    NASA Astrophysics Data System (ADS)

    Jiang, Dongdong; Du, Jinmei; Gu, Yan; Feng, Yujun

    2012-05-01

    By assuming a relaxation process for depolarization associated with the ferroelectric (FE) to antiferroelectric (AFE) phase transition in Pb0.99Nb0.02(Zr0.95Ti0.05)0.98O3 ferroelectric ceramics under shock wave compression, we build a new model for the depoling current, which is different from both the traditional constant current source (CCS) model and the phase transition kinetics (PTK) model. The characteristic relaxation time and new-equilibrated polarization are dependent on both the shock pressure and electric field. After incorporating Maxwell's equations, the relaxation model developed applies to all the depoling currents under both short-circuit and high-impedance conditions. Influences of shock pressure, load resistance, dielectric property, and electrical conductivity on the depoling current are also discussed. The relaxation model gives a good description of the suppressing effect of the self-generated electric field on the FE-to-AFE phase transition at low shock pressures, which cannot be described by the traditional models. After incorporating a time- and electric-field-dependent repolarization, this model predicts that the high-impedance current eventually becomes higher than the short-circuit current, which is consistent with the experimental results in the literature. Finally, we compare our relaxation model with the traditional CCS model and PTK model.
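The relaxation ansatz described above can be sketched in a few lines. The fragment below integrates dP/dt = -(P - P_eq)/τ with an explicit Euler step; in the actual model both τ and the new-equilibrated polarization P_eq depend on shock pressure and electric field, whereas here they are held constant purely for illustration, with placeholder numerical values.

```python
# Minimal sketch of the relaxation ansatz for shock-induced depolarization:
#   dP/dt = -(P - P_eq) / tau
# P_eq and tau are constants here; in the paper they depend on shock
# pressure and electric field.

def relax_polarization(p0, p_eq, tau, dt, steps):
    """Explicit-Euler decay of polarization toward its new equilibrium."""
    p = p0
    history = [p]
    for _ in range(steps):
        p += -dt * (p - p_eq) / tau
        history.append(p)
    return history

# Placeholder values: 0.30 C/m^2 remanent polarization fully depoled by the
# FE-to-AFE transition (p_eq = 0), relaxation time 100 ns, 1 ns time step.
hist = relax_polarization(p0=0.30, p_eq=0.0, tau=1e-7, dt=1e-9, steps=1000)
```

Under short-circuit conditions, the released bound charge gives a depoling current proportional to -dP/dt, which this decay makes largest right after the shock arrives.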

  11. Growth history and crown vine coverage are principal factors influencing growth and mortality rates of big-leaf mahogany Swietenia macrophylla in Brazil

    Treesearch

    James Grogan; R. Matthew Landis

    2009-01-01

    1. Current efforts to model population dynamics of high-value tropical timber species largely assume that individual growth history is unimportant to population dynamics, yet growth autocorrelation is known to adversely affect model predictions. In this study, we analyse a decade of annual census data from a natural population of big-leaf mahogany Swietenia macrophylla...

  12. Dispersive FDTD analysis of induced electric field in human models due to electrostatic discharge.

    PubMed

    Hirata, Akimasa; Nagai, Toshihiro; Koyama, Teruyoshi; Hattori, Junya; Chan, Kwok Hung; Kavet, Robert

    2012-07-07

    Contact currents flow from/into a charged human body when touching a grounded conductive object. An electrostatic discharge (ESD) or spark may occur just before contact or upon release. The current may stimulate muscles and peripheral nerves. In order to clarify the difference in the induced electric field between different sized human models, the in-situ electric fields were computed in anatomically based models of adults and a child for a contact current in a human body following ESD. A dispersive finite-difference time-domain method was used, in which biological tissue is assumed to obey a four-pole Debye model. From our computational results, the first peak of the discharge current was almost identical across adult and child models. The decay of the induced current in the child was also faster due mainly to its smaller body capacitance compared to the adult models. The induced electric fields in the forefingers were comparable across different models. However, the electric field induced in the arm of the child model was found to be greater than that in the adult models primarily because of its smaller cross-sectional area. The tendency for greater doses in the child has also been reported for power frequency sinusoidal contact current exposures as reported by other investigators.
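As a rough illustration of the dispersive tissue description mentioned above, the sketch below evaluates a four-pole Debye relative permittivity with a static conductivity term; the pole amplitudes, relaxation times, and conductivity are placeholder values, not the tissue parameters fitted in the study.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def debye_eps(omega, eps_inf, d_eps, taus, sigma):
    """Complex relative permittivity of a four-pole Debye medium:

    eps(w) = eps_inf + sum_n d_eps[n] / (1 + j*w*tau[n]) + sigma / (j*w*EPS0)
    """
    eps = complex(eps_inf, 0.0)
    for de, tau in zip(d_eps, taus):
        eps += de / (1 + 1j * omega * tau)      # one Debye pole per term
    eps += sigma / (1j * omega * EPS0)          # static conductivity term
    return eps

# Placeholder muscle-like parameters (assumed, for illustration only),
# evaluated at 1 MHz:
eps = debye_eps(omega=2 * math.pi * 1e6, eps_inf=4.0,
                d_eps=[50.0, 1000.0, 1e5, 1e7],
                taus=[7e-12, 3e-9, 3e-7, 2e-3], sigma=0.2)
```

In a dispersive FDTD scheme, this frequency-domain form is converted into auxiliary time-domain update equations so that the broadband ESD transient sees the correct tissue response at every frequency.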

  13. Numerical and theoretical evaluations of AC losses for single and infinite numbers of superconductor strips with direct and alternating transport currents in external AC magnetic field

    NASA Astrophysics Data System (ADS)

    Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.

    2010-11-01

    AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean's critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.

  14. Data and methodological problems in establishing state gasoline-conservation targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, D.L.; Walton, G.H.

    The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
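As an illustration of the very simple time-series structure the authors report, the following sketch fits an AR(1) model to a synthetic consumption-like series by least squares. Real Box-Jenkins identification would also involve differencing, ACF/PACF inspection, and diagnostic checks, all omitted here; the series and its parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (detrended) monthly consumption series: an AR(1) process,
# the kind of simple structure the abstract says the identified
# Box-Jenkins models had.  phi_true and the noise scale are arbitrary.
phi_true, n = 0.7, 240
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=0.1)

# Conditional least-squares AR(1) estimate:
#   phi_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
phi_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
```

Forecasts from such a fitted model, together with an estimated shortfall, would then play the role of the "expected consumption" baseline in a target-setting exercise.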

  15. Simulating future supply of and requirements for human resources for health in high-income OECD countries.

    PubMed

    Tomblin Murphy, Gail; Birch, Stephen; MacKenzie, Adrian; Rigby, Janet

    2016-12-12

    As part of efforts to inform the development of a global human resources for health (HRH) strategy, a comprehensive methodology for estimating HRH supply and requirements was described in a companion paper. The purpose of this paper is to demonstrate the application of that methodology, using data publicly available online, to simulate the supply of and requirements for midwives, nurses, and physicians in the 32 high-income member countries of the Organisation for Economic Co-operation and Development (OECD) up to 2030. A model combining a stock-and-flow approach to simulate the future supply of each profession in each country-adjusted according to levels of HRH participation and activity-and a needs-based approach to simulate future HRH requirements was used. Most of the data to populate the model were obtained from the OECD's online indicator database. Other data were obtained from targeted internet searches and documents gathered as part of the companion paper. Relevant recent measures for each model parameter were found for at least one of the included countries. In total, 35% of the desired current data elements were found; assumed values were used for the other current data elements. Multiple scenarios were used to demonstrate the sensitivity of the simulations to different assumed future values of model parameters. Depending on the assumed future values of each model parameter, the simulated HRH gaps across the included countries could range from shortfalls of 74 000 midwives, 3.2 million nurses, and 1.2 million physicians to surpluses of 67 000 midwives, 2.9 million nurses, and 1.0 million physicians by 2030. Despite important gaps in the data publicly available online and the short time available to implement it, this paper demonstrates the basic feasibility of a more comprehensive, population needs-based approach to estimating HRH supply and requirements than most of those currently being used. 
HRH planners in individual countries, working with their respective stakeholder groups, would have more direct access to data on the relevant planning parameters and would thus be in an even better position to implement such an approach.
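The stock-and-flow side of the methodology can be caricatured in a few lines: the workforce stock is advanced year by year from entrants and exits, then compared with a requirement. Every number below (initial stock, entrants, exit rate, requirement) is an invented placeholder, not one of the paper's OECD estimates.

```python
# Minimal stock-and-flow sketch:
#   stock[t+1] = stock[t] + entrants - exit_rate * stock[t]
# The needs-based requirement is reduced here to a single fixed number.

def project(stock, entrants, exit_rate, years):
    """Return the yearly workforce stock over the projection horizon."""
    out = [stock]
    for _ in range(years):
        stock = stock + entrants - exit_rate * stock
        out.append(stock)
    return out

# Placeholder inputs: 2.8M nurses today, 90k graduates/year, 3% annual
# exits, projected to 2030 (14 years from a 2016 baseline).
nurses = project(stock=2_800_000, entrants=90_000, exit_rate=0.03, years=14)
gap = nurses[-1] - 3_200_000     # placeholder needs-based requirement
```

Varying the assumed entrants, exit rate, and requirement across scenarios is exactly what drives the wide shortfall-to-surplus ranges the paper reports.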

  16. Accurate and scalable social recommendation using mixed-membership stochastic block models.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta

    2016-12-13

    With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
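The prediction rule described above, a rating distribution mixed over user and item group memberships with no linearity assumption on ratings, can be sketched as follows. The group counts and parameter values are toy choices, and the expectation-maximization inference of the parameters is omitted.

```python
import numpy as np

# Mixed-membership prediction: the probability that user u gives item i
# rating r is a mixture over user groups k and item groups l,
#   p(r | u, i) = sum_kl theta[u,k] * eta[i,l] * p[k,l,r],
# with p[k,l,:] a free distribution over ratings (no linearity assumed).

theta = np.array([[0.8, 0.2]])           # 1 user, 2 user groups
eta = np.array([[0.5, 0.5]])             # 1 item, 2 item groups
# Toy per-group-pair rating distributions over ratings 1..5:
p = np.random.default_rng(1).dirichlet(np.ones(5), size=(2, 2))

dist = np.einsum('uk,il,klr->uir', theta, eta, p)[0, 0]   # p(r | u=0, i=0)
expected = (dist * np.arange(1, 6)).sum()                 # expected rating
```

In the actual model, theta, eta, and p are fitted by an EM algorithm whose cost scales linearly with the number of observed ratings, which is what makes the approach usable on large datasets.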

  17. Accurate and scalable social recommendation using mixed-membership stochastic block models

    PubMed Central

    Godoy-Lorite, Antonia; Moore, Cristopher

    2016-01-01

    With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773

  18. Thermodynamics analysis of diffusion in spark plasma sintering welding Cr3C2 and Ni

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Zhang, Jinyong; Leng, Xiaoxuan; Lei, Liwen; Fu, Zhengyi

    2017-03-01

    Spark plasma sintering (SPS) welding of chromium carbide (Cr3C2) and nickel (Ni) was used to investigate the atomic diffusion caused by the bypassing current. The diffusion coefficient with bypassing current was found to be almost 3.57 times that without bypassing current. In contrast to previous research, the thermodynamic analysis conducted herein showed that the enhancement included a current direction-independent part in addition to the known current direction-dependent part. A local temperature gradient (LTG) model was proposed to explain the current direction-independent effect. Assuming that the LTG was mainly due to the interfacial electric resistance causing heterogeneous Joule heating, the theoretical results were in good agreement with the experimental results in both the present and previous studies. This new LTG model provides a reasonable physical meaning for the low-temperature advantage of SPS welding and should be useful in a wide range of applications.

  19. Field Aligned Currents Derived From Pressure Profiles Obtained From TWINS ENA Images for Geomagnetic Storms That Occurred On 01 June 2013 and 17 March 2015.

    NASA Astrophysics Data System (ADS)

    Wood, K.; Perez, J. D.; Goldstein, J.; McComas, D. J.; Valek, P. W.

    2016-12-01

    Field aligned currents (FACs) that flow from the Earth's magnetosphere into the ionosphere are an important coupling mechanism in the interaction of the solar wind with the Earth's magnetosphere and ionosphere. Assuming pressure balance and charge conservation yields an expression for the FACs in terms of plasma pressure gradients and pressure anisotropy. The Two Wide-angle Imaging Neutral Atom Spectrometers (TWINS) mission, the first stereoscopic ENA magnetospheric imager, provides global images of the inner magnetosphere from which ion pressure distributions and pressure anisotropies are obtained. Following the formulations in Vasyliunas (1970), Vasyliunas (1984), and Heinemann (1990), and using results from TWINS observations, we calculate the distributions of field aligned currents for geomagnetic storms on 1 June 2013 and 17 March 2015, in which extended ionospheric precipitation was observed. As previous work has assumed isotropic pressure distributions, we perform calculations both assuming pressure isotropy, and using the pressure anisotropy observed by TWINS, and compare the results from the two storms. References: 1. Vasyliunas, V. M. (1970). Mathematical Models of Magnetospheric Convection and its Coupling to the Ionosphere. Particles and Fields in the Magnetosphere Astrophysics and Space Science Library, 60-71. doi:10.1007/978-94-010-3284-1_6 2. Vasyliunas, V. M. (1984). Fundamentals of current description. Magnetospheric Currents Geophysical Monograph Series, 63-66. doi:10.1029/gm028p0063 3. Heinemann, M. (1990). Representations of currents and magnetic fields in anisotropic magnetohydrostatic plasma. J. Geophys. Res. Journal of Geophysical Research, 95(A6), 7789. doi:10.1029/ja095ia06p07789

  20. The Preliminary Program of University Construction Projects in Portugal: 10 Case Studies

    ERIC Educational Resources Information Center

    Carrasco Campos, M. Helena; Teixeira, J. Manuel Cardoso

    2012-01-01

    Currently, societies exert varied and sometimes contradictory pressures on universities. These pressures have been provoking discussion on the best role of these institutions to meet the needs of contemporary societies. Universities will assume different forms and models of organization, according to what each one will define as being its mission…

  1. Distributional Effects of Educational Improvements: Are We Using the Wrong Model?

    ERIC Educational Resources Information Center

    Bourguignon, Francois; Rogers, F. Halsey

    2007-01-01

    Measuring the incidence of public spending in education requires an intergenerational framework distinguishing between what current and future generations--that is, parents and children--give and receive. In standard distributional incidence analysis, households are assumed to receive a benefit equal to what is spent on their children enrolled in…

  2. Decisions Under Uncertainty III: Rationality Issues, Sex Stereotypes, and Sex Role Appropriateness.

    ERIC Educational Resources Information Center

    Bonoma, Thomas V.

    The explanatory cornerstone of most currently viable social theories is a strict cost-gain assumption. The clearest formal explication of this view is contained in subjective expected utility models (SEU), in which individuals are assumed to scale their subjective likelihood estimates of decisional consequences and the personalistic worth or…

  3. Analysis of electric current flow through the HTc multilayered superconductors

    NASA Astrophysics Data System (ADS)

    Sosnowski, J.

    2016-02-01

    The flow of transport current through multilayered high-temperature superconductors is considered as a function of the direction of the current with respect to the superconducting CuO2 layers. For current flowing within the layers in a perpendicular magnetic field, we consider the current limitations arising from the interaction of pancake-type vortices with nano-sized defects, created, among other processes, during fast-neutron irradiation. This makes the issue relevant to nuclear energy devices such as the tokamak ITER, the LHC, and the Nuclotron-NICA accelerator currently under development, as well as to cryocables. A phenomenological analysis of the formation of the pinning potential barrier, which determines the critical current flowing within the plane, is given. A comparison of the theoretical model with experimental data is also presented, and the influence of the fast-neutron irradiation dose on the calculated critical current is examined. For current directed perpendicular to the superconducting planes, the current-voltage characteristics are calculated using a model that assumes the formation of long intrinsic Josephson junctions in layered HTc superconductors.

  4. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.
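In the notation used here (our symbol choices, not necessarily the paper's: σ the conductivity, E the electric field, J_s the thunderstorm source current), the continuity equation of the Maxwell current described above reads:

```latex
\nabla \cdot \mathbf{J} = 0,
\qquad
\mathbf{J} =
\underbrace{\sigma \mathbf{E}}_{\text{conduction}}
\;+\;
\underbrace{\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}}_{\text{displacement}}
\;+\;
\underbrace{\mathbf{J}_{\mathrm{s}}}_{\text{source}} .
```

The two time scales noted in the abstract then correspond to the local charge-relaxation time ε₀/σ set by the background conductivity and to the time variation of J_s.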

  5. Impact of Future Climate on Radial Growth of Four Major Boreal Tree Species in the Eastern Canadian Boreal Forest

    PubMed Central

    Huang, Jian-Guo; Bergeron, Yves; Berninger, Frank; Zhai, Lihong; Tardif, Jacques C.; Denneler, Bernhard

    2013-01-01

    Immediate phenotypic variation and the lagged effect of evolutionary adaptation to climate change appear to be two key processes in tree responses to climate warming. This study examines these components in two types of growth models for predicting the 2010–2099 diameter growth change of four major boreal species, Betula papyrifera, Pinus banksiana, Picea mariana, and Populus tremuloides, along a broad latitudinal gradient in eastern Canada under future climate projections. Climate-growth response models for 34 stands over nine latitudes were calibrated and cross-validated. An adaptive response model (A-model), in which the climate-growth relationship varies over time, and a fixed response model (F-model), in which the relationship is constant over time, were constructed to predict future growth. For the former, we examined how future growth of stands in northern latitudes could be forecasted using growth-climate equations derived from stands currently growing in southern latitudes, assuming that the current climate in southern locations provides an analogue for future conditions in the north. For the latter, we tested whether future growth of stands would be best predicted using the growth-climate equation obtained from the given local stand, assuming a lagged response to climate due to genetic constraints. Both models predicted a large growth increase in northern stands due to more benign temperatures, whereas there was a minimal growth change in southern stands due to potentially warm-temperature-induced drought stress. The A-model represents a changing environment, whereas the F-model highlights a constant growth response to future warming. As time elapses, we can predict a gradual transition from a response to climate associated with the current conditions (F-model) to a more adapted response to future climate (A-model). Our modeling approach provides a template to predict tree growth response to climate warming at mid-high latitudes of the Northern Hemisphere.
PMID:23468879

  6. Impact of future climate on radial growth of four major boreal tree species in the Eastern Canadian boreal forest.

    PubMed

    Huang, Jian-Guo; Bergeron, Yves; Berninger, Frank; Zhai, Lihong; Tardif, Jacques C; Denneler, Bernhard

    2013-01-01

    Immediate phenotypic variation and the lagged effect of evolutionary adaptation to climate change appear to be two key processes in tree responses to climate warming. This study examines these components in two types of growth models for predicting the 2010-2099 diameter growth change of four major boreal species, Betula papyrifera, Pinus banksiana, Picea mariana, and Populus tremuloides, along a broad latitudinal gradient in eastern Canada under future climate projections. Climate-growth response models for 34 stands over nine latitudes were calibrated and cross-validated. An adaptive response model (A-model), in which the climate-growth relationship varies over time, and a fixed response model (F-model), in which the relationship is constant over time, were constructed to predict future growth. For the former, we examined how future growth of stands in northern latitudes could be forecasted using growth-climate equations derived from stands currently growing in southern latitudes, assuming that the current climate in southern locations provides an analogue for future conditions in the north. For the latter, we tested whether future growth of stands would be best predicted using the growth-climate equation obtained from the given local stand, assuming a lagged response to climate due to genetic constraints. Both models predicted a large growth increase in northern stands due to more benign temperatures, whereas there was a minimal growth change in southern stands due to potentially warm-temperature-induced drought stress. The A-model represents a changing environment, whereas the F-model highlights a constant growth response to future warming. As time elapses, we can predict a gradual transition from a response to climate associated with the current conditions (F-model) to a more adapted response to future climate (A-model). Our modeling approach provides a template to predict tree growth response to climate warming at mid-high latitudes of the Northern Hemisphere.

  7. Woody plants and the prediction of climate-change impacts on bird diversity.

    PubMed

    Kissling, W D; Field, R; Korntheuer, H; Heyder, U; Böhning-Gaese, K

    2010-07-12

    Current methods of assessing climate-induced shifts of species distributions rarely account for species interactions and usually ignore potential differences in response times of interacting taxa to climate change. Here, we used species-richness data from 1005 breeding bird and 1417 woody plant species in Kenya and employed model-averaged coefficients from regression models and median climatic forecasts assembled across 15 climate-change scenarios to predict bird species richness under climate change. Forecasts assuming an instantaneous response of woody plants and birds to climate change suggested increases in future bird species richness across most of Kenya whereas forecasts assuming strongly lagged woody plant responses to climate change indicated a reversed trend, i.e. reduced bird species richness. Uncertainties in predictions of future bird species richness were geographically structured, mainly owing to uncertainties in projected precipitation changes. We conclude that assessments of future species responses to climate change are very sensitive to current uncertainties in regional climate-change projections, and to the inclusion or not of time-lagged interacting taxa. We expect even stronger effects for more specialized plant-animal associations. Given the slow response time of woody plant distributions to climate change, current estimates of future biodiversity of many animal taxa may be both biased and too optimistic.

  8. Bayesian shrinkage approach for a joint model of longitudinal and survival outcomes assuming different association structures.

    PubMed

    Andrinopoulou, Eleni-Rosalina; Rizopoulos, Dimitris

    2016-11-20

    The joint modeling of longitudinal and survival data has recently received much attention. Several extensions of the standard joint model that consists of one longitudinal and one survival outcome have been proposed, including the use of different association structures between the longitudinal and the survival outcomes. However, in general, relatively little attention has been given to the selection of the most appropriate functional form to link the two outcomes. In common practice, it is assumed that the underlying value of the longitudinal outcome is associated with the survival outcome. However, it could be that different characteristics of the patients' longitudinal profiles influence the hazard: for example, not only the current value but also the slope or the area under the curve of the longitudinal outcome. The choice of which functional form to use is an important decision that needs to be investigated because it could influence the results. In this paper, we use a Bayesian shrinkage approach in order to determine the most appropriate functional forms. We propose a joint model that includes different association structures of different biomarkers and assume informative priors for the regression coefficients that correspond to the terms of the longitudinal process. Specifically, we assume Bayesian lasso, Bayesian ridge, Bayesian elastic net, and horseshoe priors. These methods are applied to a dataset consisting of patients with a chronic liver disease, where it is important to investigate which characteristics of the biomarkers have an influence on survival. Copyright © 2016 John Wiley & Sons, Ltd.
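The three candidate association structures named in the abstract can be made concrete for a single biomarker trajectory. This is a hedged sketch with an assumed quadratic trajectory and made-up coefficients, not the paper's model: it only shows what "current value", "slope", and "area under the curve" mean as functional forms that could enter the hazard.

```python
# Hypothetical subject-specific trajectory m(t) = b0 + b1*t + b2*t^2.
b0, b1, b2 = 2.0, 0.5, -0.05

def value(t):
    """Current value association: m(t)."""
    return b0 + b1 * t + b2 * t * t

def slope(t):
    """Slope association: m'(t)."""
    return b1 + 2 * b2 * t

def auc(t, n=10_000):
    """AUC association: integral of m over [0, t] (left Riemann sum)."""
    h = t / n
    return sum(value(i * h) for i in range(n)) * h

t = 4.0
print(value(t), slope(t), auc(t))
```

In the joint model, each of these quantities would get its own regression coefficient in the hazard, and the shrinkage priors decide which of them survive.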

  9. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVRs operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  10. Predictions and Studies with a One-Dimensional Ice/Ocean Model.

    DTIC Science & Technology

    1987-04-01

    Description of the model; initial conditions and forcing. Two different test cases are used for model validation and scientific studies. One is the... the density of ice (0.92 g/cm³), the x- and y-components of the current velocity, w the z-component... the salinity difference per mil, assumed to be 30... different treatments of the mixed layer on the growth and decay of ice. Henceforth, Semtner's model... quickly develops in the CML simulation.

  11. Path integral approach to closed form pricing formulas in the Heston framework.

    NASA Astrophysics Data System (ADS)

    Lemmens, Damiaan; Wouters, Michiel; Tempere, Jacques; Foulon, Sven

    2008-03-01

    We present a path integral approach for finding closed form formulas for option prices in the framework of the Heston model. The first model for determining option prices was the Black-Scholes model, which assumed that the logreturn followed a Wiener process with a given drift and constant volatility. To provide a realistic description of the market, the Black-Scholes results must be extended to include stochastic volatility. This is achieved by the Heston model, which assumes that the volatility follows a mean reverting square root process. Current applications of the Heston model are hampered by the unavailability of fast numerical methods, due to a lack of closed-form formulae. Therefore the search for closed form solutions is an essential step before the qualitatively better stochastic volatility models will be used in practice. To attain this goal we outline a simplified path integral approach yielding straightforward results for vanilla Heston options with correlation. Extensions to barrier options and other path-dependent options are discussed, and the new derivation is compared to existing results obtained from alternative path-integral approaches (Dragulescu, Kleinert).
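For reference, the dynamics the Heston model assumes can be written in the standard form (conventional notation, not taken from this abstract): a geometric price process with stochastic variance following a mean-reverting square-root (CIR) process, with correlated Brownian motions.

```latex
% Heston dynamics: price S_t and variance v_t, with dW_t^S dW_t^v = \rho\,dt.
dS_t = \mu S_t\, dt + \sqrt{v_t}\, S_t\, dW_t^S,
\qquad
dv_t = \kappa(\theta - v_t)\, dt + \sigma \sqrt{v_t}\, dW_t^v
```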

  12. Dark current in multilayer stabilized amorphous selenium based photoconductive x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, Joel B.; Belev, George; Kasap, Safa O.

    2012-07-01

    We report on experimental results which show that the dark current in n-i-p structured amorphous selenium films is independent of i-layer thickness in samples with consistently thick blocking layers. We have observed, however, a strong dependence on the n-layer thickness and the positive contact metal chosen. These results indicate that the dominant source of the dark current is carrier injection from the contacts, and any contribution from carriers thermally generated in the bulk of the photoconductive layer is negligible. This conclusion is supported by a description of the dark current transients at different applied fields by a model which assumes only carrier emission over a Schottky barrier. This model also predicts that while hole injection is initially dominant, some time after the application of the bias, electron injection may become the dominant source of dark current.
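A hedged sketch of the emission-over-a-barrier picture the model assumes, in the standard Richardson-Schottky form (symbols are conventional and assumed here, not taken from the paper):

```latex
% Thermionic injection over a Schottky barrier W_B, with image-force
% barrier lowering \Delta W at applied field F.
J = A^{*} T^{2} \exp\!\left(-\frac{W_B - \Delta W}{k_B T}\right),
\qquad
\Delta W = \sqrt{\frac{e^{3} F}{4\pi \varepsilon}}
```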

  13. Modelling of the Vajont rockslide displacements by delayed plasticity of interacting sliding blocks

    NASA Astrophysics Data System (ADS)

    Castellanza, Riccardo; Hedge, Amarnath; Crosta, Giovanni; di Prisco, Claudio; Frigerio, Gabriele

    2015-04-01

    In order to model complex sliding masses subject to continuous slow movements related to water table fluctuations, it is convenient to: i) model the time-dependent mechanical behaviour of the materials by means of a visco-plastic constitutive law; ii) assume the water table fluctuation as the main input inducing displacement acceleration; iii) consider the 3D constraints while maintaining a level of simplicity that allows implementation into an EWS (Early Warning System) for risk management. In this work a 1D pseudo-dynamic visco-plastic model (Secondi et al. 2011), based on Perzyna's delayed plasticity theory, is applied. The sliding mass is considered as a rigid block subject to its self-weight, inertial forces and seepage forces varying with time. All non-linearities are lumped in a thin layer positioned between the rigid block and the stable bedrock. The mechanical response of this interface is assumed to be visco-plastic. The viscous nucleus is assumed to be of the exponential type, so that irreversible strains develop for both positive and negative values of the yield function; the sliding mass is discretized in blocks to cope with complex rockslide geometries; and the friction angle is assumed to decrease with strain rate following a Dieterich-Ruina-type rate law. To validate the improvements introduced in this paper, the displacements of the Vajont rockslide from 1960 to the failure, which occurred on 9 October 1963, are simulated. It is shown that, in its modified version, the model satisfactorily fits the Vajont pre-collapse displacements triggered by the fluctuation of the Vajont lake level and the associated groundwater level. The model is able to follow the critical acceleration of the motion with a minimal change in friction properties. The discretization in interacting sliding blocks confirms its suitability for modelling the complex 3D rockslide behaviour.
We are currently implementing a multi-block model capable of including the mutual influence of multiple blocks characterized by different geometries, groundwater levels, shear zone properties and types of interconnection. Secondi M., Crosta G., Di Prisco C., Frigerio G., Frattini P., Agliardi F. (2011) "Landslide motion forecasting by a dynamic visco-plastic model", Proc. The Second World Landslide Forum, L09 - Advances in slope modelling, Rome, 3-9 October 2011, paper WLF2-2011-0571.
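A hedged sketch of the Perzyna-type delayed-plasticity rate law the record refers to (conventional symbols assumed here, not the paper's): with an exponential viscous nucleus, irreversible sliding accumulates for both positive and negative values of the yield function f, exactly as the abstract states.

```latex
% Perzyna delayed plasticity: visco-plastic strain rate driven by the
% viscous nucleus \Phi(f); \gamma is a fluidity parameter, g a plastic
% potential, and \bar{f} a normalizing stress. The exponential nucleus
% is nonzero for f < 0 as well as f > 0.
\dot{\varepsilon}^{vp} = \gamma\, \Phi(f)\, \frac{\partial g}{\partial \sigma},
\qquad
\Phi(f) = \exp\!\left(\frac{f}{\bar{f}}\right)
```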

  14. An interfacial mechanism for cloud droplet formation on organic aerosols

    DOE PAGES

    Ruehl, C. R.; Davies, J. F.; Wilson, K. R.

    2016-03-25

    Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation.
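For reference, the Köhler relation the abstract refers to can be written as follows (standard notation, assumed here rather than taken from the paper); in the compressed-film picture the surface tension is no longer the constant pure-water value but depends on the interfacial organic film, and hence on droplet diameter.

```latex
% Koehler curve: equilibrium saturation ratio S over a droplet of diameter D,
% with water activity a_w, surface tension \sigma(D), partial molar volume
% of water v_w, gas constant R and temperature T.
S(D) = a_w \exp\!\left(\frac{4\, \sigma(D)\, v_w}{R\, T\, D}\right)
```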

  15. An interfacial mechanism for cloud droplet formation on organic aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, C. R.; Davies, J. F.; Wilson, K. R.

    Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation.

  16. An interfacial mechanism for cloud droplet formation on organic aerosols.

    PubMed

    Ruehl, Christopher R; Davies, James F; Wilson, Kevin R

    2016-03-25

    Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation. Copyright © 2016, American Association for the Advancement of Science.

  17. The Cortical Organization of Lexical Knowledge: A Dual Lexicon Model of Spoken Language Processing

    ERIC Educational Resources Information Center

    Gow, David W., Jr.

    2012-01-01

    Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood.…

  18. Global separation of plant transpiration from groundwater and streamflow

    Treesearch

    Jaivime Evaristo; Scott Jasechko; Jeffrey J. McDonnell

    2015-01-01

    Current land surface models assume that groundwater, streamflow and plant transpiration are all sourced and mediated by the same well mixed water reservoir—the soil. However, recent work in Oregon and Mexico has shown evidence of ecohydrological separation, whereby different subsurface compartmentalized pools of water supply either plant transpiration fluxes or the...

  19. Global Rankings in the Nordic Region: Challenging the Identity of Research-Intensive Universities?

    ERIC Educational Resources Information Center

    Elken, Mari; Hovdhaugen, Elisabeth; Stensaker, Bjørn

    2016-01-01

    Global university rankings currently attract considerable attention, and it is often assumed that such rankings may cause universities to prioritize activities and outcomes that will have a positive effect in their ranking position. A possible consequence of this could be the spread of a particular model of an "ideal" university. This…

  20. Highly Proficient Bilinguals Implement Inhibition: Evidence from N-2 Language Repetition Costs

    ERIC Educational Resources Information Center

    Declerck, Mathieu; Thoma, Aniella M.; Koch, Iring; Philipp, Andrea M.

    2015-01-01

    Several, but not all, models of language control assume that highly proficient bilinguals implement little to no inhibition during bilingual language production. In the current study, we tested this assumption with a less equivocal marker of inhibition (i.e., n-2 language repetition costs) than previous language switching studies have. N-2…

  1. Commentary: Student Cognition, the Situated Learning Context, and Test Score Interpretation

    ERIC Educational Resources Information Center

    La Marca, Paul M.

    2006-01-01

    Although it is assumed that student cognition contributes to student performance on achievement tests, it may be that current testing models lack the degree of specification necessary to warrant such inferences. With test score interpretations as the referent, the authors in this special issue address the role of student cognition in learning and…

  2. Aeroservoelastic Modeling of Body Freedom Flutter for Control System Design

    NASA Technical Reports Server (NTRS)

    Ouellette, Jeffrey

    2017-01-01

    One of the most severe forms of coupling between aeroelasticity and flight dynamics is an instability called body freedom flutter. The existing tools often assume relatively weak coupling and are therefore unable to accurately model body freedom flutter. Because the existing tools were developed from traditional flutter analysis models, the final models contain inconsistencies that make them incompatible with control system design tools. To resolve these issues, a number of small but significant changes have been made to the existing approaches. A frequency domain transformation is used with the unsteady aerodynamics to ensure a more physically consistent stability-axis rational function approximation of the unsteady aerodynamic model. The aerodynamic model is augmented with additional terms to account for limitations of the baseline unsteady aerodynamic model and for the gravity forces. An assumed modes method is used for the structural model to ensure a consistent definition of the aircraft states across the flight envelope. The X-56A stiff-wing flight-test data were used to validate the current modeling approach. The flight-test data do not show body freedom flutter, but do show coupling between the flight dynamics and the aeroelastic dynamics, as well as the effects of the fuel weight.

  3. Integrating Partial Polarization into a Metal-Ferroelectric-Semiconductor Field Effect Transistor Model

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd C.; Ho, Fat Duen

    1999-01-01

    The ferroelectric channel in a Metal-Ferroelectric-Semiconductor Field Effect Transistor (MFSFET) can partially change its polarization when the gate voltage is near the polarization threshold voltage. This causes the MFSFET drain current to change with repeated pulses of the same gate voltage near the polarization threshold voltage. A previously developed model [1], based on the Fermi-Dirac function, assumed that for a given gate voltage and channel polarization, a single drain current value would be generated. A study has been done to characterize the effects of partial polarization on the drain current of a MFSFET. These effects have been described mathematically, and the equations have been incorporated into a more comprehensive mathematical model of the MFSFET. The model takes into account the hysteresis nature of the MFSFET and the time-dependent decay, as well as the effects of partial polarization. This model defines the drain current by calculating the degree of polarization from previous gate pulses, the present gate voltage, and the amount of time since the last gate voltage pulse.

  4. Why learning and development can lead to poorer recognition memory.

    PubMed

    Hayes, Brett K; Heit, Evan

    2004-08-01

    Current models of inductive reasoning in children and adults assume a central role for categorical knowledge. A recent paper by Sloutsky and Fisher challenges this assumption, showing that children are more likely than adults to rely on perceptual similarity as a basis for induction, and introduces a more direct method for examining the representations activated during induction. This method has the potential to constrain models of induction in novel ways, although there are still important challenges.

  5. Markovian prediction of future values for food grains in the economic survey

    NASA Astrophysics Data System (ADS)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating the current and future values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. As an application, the economic performance of different timings of artificial fertilization in estrus detection is evaluated using a daily Markov chain model. Finally, Markov process prediction gives better performance compared with the regression model.
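The time-homogeneous Markov-chain dependency assumed in this record can be sketched in a few lines. This is a hedged illustration with a made-up transition matrix and production states, not the paper's data:

```python
# Hypothetical one-step transition matrix P[i][j] = Pr(next = j | current = i)
# for food-grain production states: 0 = low, 1 = normal, 2 = high.
P = [
    [0.5, 0.4, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
]

def step(dist, P):
    """Propagate a state distribution one year forward: dist' = dist . P"""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [0.0, 1.0, 0.0]      # current year known to be 'normal'
for _ in range(3):          # three-year-ahead forecast
    dist = step(dist, P)
print([round(p, 4) for p in dist])
```

Time homogeneity means the same matrix P is reused at every step; the forecast is just repeated vector-matrix multiplication.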

  6. Modeling the impact of novel male contraceptive methods on reductions in unintended pregnancies in Nigeria, South Africa, and the United States.

    PubMed

    Dorman, Emily; Perry, Brian; Polis, Chelsea B; Campo-Engelstein, Lisa; Shattuck, Dominick; Hamlin, Aaron; Aiken, Abigail; Trussell, James; Sokal, David

    2018-01-01

    We modeled the potential impact of novel male contraceptive methods on averting unintended pregnancies in the United States, South Africa, and Nigeria. We used an established methodology for calculating the number of couple-years of protection provided by a given contraceptive method mix. We compared a "current scenario" (reflecting current use of existing methods in each country) against "future scenarios" (reflecting the introduction of a male oral pill or a reversible vas occlusion) in order to estimate the impact on unintended pregnancies averted. Where possible, we based our assumptions on acceptability data from studies on uptake of novel male contraceptive methods. Assuming that only 10% of interested men would take up a novel male method and that users would comprise both switchers (from existing methods) and brand-new users of contraception, the model estimated that introducing the male pill or reversible vas occlusion would decrease unintended pregnancies by 3.5% to 5.2% in the United States, by 3.2% to 5% in South Africa, and by 30.4% to 38% in Nigeria. Alternative model scenarios are presented assuming uptake as high as 15% and as low as 5% in each location. Model results were sensitive to assumptions regarding novel method uptake and the proportion of switchers vs. new users. Even under conservative assumptions, the introduction of a male pill or temporary vas occlusion could meaningfully contribute to averting unintended pregnancies in a variety of contexts, especially in settings where current use of contraception is low. Novel male contraceptives could play a meaningful role in averting unintended pregnancies in a variety of contexts. The potential impact is especially great in settings where current use of contraception is low and if novel methods can attract new contraceptive users. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
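The scenario comparison described above can be sketched as a simple method-mix accounting exercise. This is a hedged illustration in the spirit of the couple-years-of-protection approach: all counts, failure rates, and mix shares below are hypothetical, not the study's inputs, so the resulting percentage differs from the paper's estimates.

```python
# Hypothetical inputs: compare unintended pregnancies under a "current"
# and a "future" method mix, where a male pill takes 10% of couples,
# drawn from both non-users (new users) and switchers.
couples = 1_000_000
failure_rate = {          # assumed annual pregnancy probability per method
    "none": 0.85,         # no method
    "pill_f": 0.07,       # existing female pill (typical use)
    "male_pill": 0.07,    # assumed comparable efficacy for the novel method
}

def unintended(mix):
    """Expected unintended pregnancies for a method mix {method: share}."""
    return sum(couples * share * failure_rate[m] for m, share in mix.items())

current = {"none": 0.30, "pill_f": 0.70}
future = {"none": 0.25, "pill_f": 0.65, "male_pill": 0.10}

base, new = unintended(current), unintended(future)
print(f"averted: {base - new:.0f} ({(base - new) / base:.1%})")
```

As the abstract notes, the result is driven almost entirely by how much of the uptake comes from previously unprotected couples rather than from switchers.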

  7. MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Lamey, M; Anderson, R

    Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is the basic understanding of how the adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators for teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load-line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing klystron pulse voltage increased dose rate to a peak, which then decreased as the beam energy was further increased, due to the fixed pass-through energy of the bending magnet. Increasing accelerator beam current leads to a higher dose per pulse; however, the energy of the electron beam decreases due to beam loading, and so the dose rate eventually reaches a maximum and then decreases as beam current is further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.

  8. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
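The conditional-expectation signal estimate under the exponential-normal convolution can be sketched directly. This is a hedged illustration of the widely circulated closed form for the model the abstract describes (observed intensity O = S + B with S ~ Exp(alpha) and B ~ N(mu, sigma^2)); the parameter values below are made up, and this is not the code from any particular RMA implementation.

```python
# Closed-form E[S | O = o] for the exponential-normal convolution model.
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bg_correct(o, mu, sigma, alpha):
    """Background-corrected intensity: E[signal | observed = o]."""
    a = o - mu - sigma * sigma * alpha
    b = sigma
    num = phi(a / b) - phi((o - a) / b)
    den = Phi(a / b) + Phi((o - a) / b) - 1
    return a + b * num / den

# Illustrative call with assumed parameter estimates (mu, sigma, alpha).
print(bg_correct(200.0, mu=100.0, sigma=20.0, alpha=0.01))
```

The parameter-estimation step that the paper scrutinizes is precisely the choice of mu, sigma, and alpha fed into this expectation.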

  9. Explicit consideration of topological and parameter uncertainty gives new insights into a well-established model of glycolysis.

    PubMed

    Achcar, Fiona; Barrett, Michael P; Breitling, Rainer

    2013-09-01

    Previous models of glycolysis in the sleeping sickness parasite Trypanosoma brucei assumed that the core part of glycolysis in this unicellular parasite is tightly compartmentalized within an organelle, the glycosome, which had previously been shown to contain most of the glycolytic enzymes. The glycosomes were assumed to be largely impermeable, and exchange of metabolites between the cytosol and the glycosome was assumed to be regulated by specific transporters in the glycosomal membrane. This tight compartmentalization was considered to be essential for parasite viability. Recently, size-specific metabolite pores were discovered in the membrane of glycosomes. These channels are proposed to allow smaller metabolites to diffuse across the membrane but not larger ones. In light of this new finding, we re-analyzed the model taking into account uncertainty about the topology of the metabolic system in T. brucei, as well as uncertainty about the values of all parameters of individual enzymatic reactions. Our analysis shows that these newly-discovered nonspecific pores are not necessarily incompatible with our current knowledge of the glycosomal metabolic system, provided that the known cytosolic activities of the glycosomal enzymes play an important role in the regulation of glycolytic fluxes and the concentration of metabolic intermediates of the pathway. © 2013 FEBS.

  10. Model of a multiverse providing the dark energy of our universe

    NASA Astrophysics Data System (ADS)

    Rebhan, E.

    2017-09-01

    It is shown that the dark energy presently observed in our universe can be regarded as the energy of a scalar field driving an inflation-like expansion of a multiverse with ours being a subuniverse among other parallel universes. A simple model of this multiverse is elaborated: Assuming closed space geometry, the origin of the multiverse can be explained by quantum tunneling from nothing; subuniverses are supposed to emerge from local fluctuations of separate inflation fields. The standard concept of tunneling from nothing is extended to the effect that in addition to an inflationary scalar field, matter is also generated, and that the tunneling leads to an (unstable) equilibrium state. The cosmological principle is assumed to pertain from the origin of the multiverse until the first subuniverses emerge. With increasing age of the multiverse, its spatial curvature decays exponentially so fast that, due to sharing the same space, the flatness problem of our universe resolves by itself. The dark energy density imprinted by the multiverse on our universe is time-dependent, but such that the equation-of-state ratio w = p/(ϱc²) of its pressure to its mass density (times c²) is time-independent and assumes a value −1 + ε with arbitrary ε > 0. ε can be chosen so small that the dark energy model of this paper fits the current observational data as well as the cosmological constant model does.

  11. Modeling the non-recycled Fermi Gamma-ray pulsar population

    DOE PAGES

    Perera, B. B. P.; McLaughlin, M. A.; Cordes, J. M.; ...

    2013-09-25

    Here, we use Fermi Gamma-ray Space Telescope detections and upper limits on non-recycled pulsars obtained from the Large Area Telescope (LAT) to constrain how the gamma-ray luminosity L_γ depends on the period P and the period derivative Ṗ. We use a Bayesian analysis to calculate a best-fit luminosity law, or dependence of L_γ on P and Ṗ, including different methods for modeling the beaming factor. An outer gap (OG) magnetosphere geometry provides the best-fit model, L_γ ∝ P^(−a) Ṗ^(b) with a = 1.36 ± 0.03 and b = 0.44 ± 0.02, similar but not identical to the commonly assumed L_γ ∝ √Ė ∝ P^(−1.5) Ṗ^(0.5). Given upper limits on gamma-ray fluxes of currently known radio pulsars and using the OG model, we find that about 92% of the radio-detected pulsars have gamma-ray beams that intersect our line of sight. By modeling the misalignment of the radio and gamma-ray beams of these pulsars, we find an average gamma-ray beaming solid angle of about 3.7π for the OG model, assuming a uniform beam. Using LAT-measured diffuse fluxes, we place a 2σ upper limit of 3.8 on the average braking index and a 2σ lower limit of 3.2 × 10^10 G on the average surface magnetic field strength of the pulsar population. We then predict the number of non-recycled pulsars detectable by the LAT based on our population model. Using the 2 yr sensitivity, we find that the LAT is capable of detecting emission from about 380 non-recycled pulsars, including 150 currently identified radio pulsars. Using the expected 5 yr sensitivity, about 620 non-recycled pulsars are detectable, including about 220 currently identified radio pulsars. We caution, however, that these predictions depend significantly on our model assumptions.
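    The fitted luminosity law is easy to compare against the conventional √Ė scaling. A minimal sketch (hypothetical function names; the fiducial values P₀ = 0.1 s and Ṗ₀ = 10⁻¹⁴ are chosen only for illustration) evaluating both scalings relative to a fiducial pulsar:

```python
def lgamma_ratio(p, pdot, p0=0.1, pdot0=1e-14, a=1.36, b=0.44):
    """Best-fit OG law L_gamma ~ P^-a * Pdot^b, relative to a fiducial pulsar."""
    return (p / p0) ** (-a) * (pdot / pdot0) ** b

def edot_ratio(p, pdot, p0=0.1, pdot0=1e-14):
    """Conventional sqrt(Edot) scaling, proportional to P^-1.5 * Pdot^0.5."""
    return (p / p0) ** (-1.5) * (pdot / pdot0) ** 0.5
```

For a slower pulsar (e.g. P = 0.5 s at the same Ṗ) the best-fit law falls off less steeply with period than the √Ė scaling, which is the sense of the difference the fit detects.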

  12. A quasi-static model of global atmospheric electricity. I - The lower atmosphere

    NASA Technical Reports Server (NTRS)

    Hays, P. B.; Roble, R. G.

    1979-01-01

    A quasi-steady model of global lower atmospheric electricity is presented. The model considers thunderstorms as dipole electric generators that can be randomly distributed in various regions and that are the only source of atmospheric electricity and includes the effects of orography and electrical coupling along geomagnetic field lines in the ionosphere and magnetosphere. The model is used to calculate the global distribution of electric potential and current for model conductivities and assumed spatial distributions of thunderstorms. Results indicate that large positive electric potentials are generated over thunderstorms and penetrate to ionospheric heights and into the conjugate hemisphere along magnetic field lines. The perturbation of the calculated electric potential and current distributions during solar flares and subsequent Forbush decreases is discussed, and future measurements of atmospheric electrical parameters and modifications of the model which would improve the agreement between calculations and measurements are suggested.

  13. Electrical description of N2 capacitively coupled plasmas with the global model

    NASA Astrophysics Data System (ADS)

    Cao, Ming-Lu; Lu, Yi-Jia; Cheng, Jia; Ji, Lin-Hong; Engineering Design Team

    2016-10-01

    N2 discharges in a commercial capacitively coupled plasma reactor are modelled by a combination of an equivalent circuit and the global model, over a gas-pressure range of 1–4 Torr. The ohmic and inductive plasma bulk and the capacitive sheath are represented as LCR elements, with electrical characteristics determined by plasma parameters. The electron density and electron temperature are obtained from the global model, in which a Maxwellian electron distribution is assumed. Voltages and currents are recorded by a VI probe installed after the match network. Using the measured voltage as an input, the current flowing through the discharge volume is calculated from the electrical model and shows excellent agreement with the measurements. The experimentally verified electrical model provides a simple and accurate description of the relationship between the external electrical parameters and the plasma properties, which can serve as a guideline for process window planning in industrial applications.

  14. Ocean Current Estimation Using a Multi-Model Ensemble Kalman Filter During the Grand Lagrangian Deployment Experiment (GLAD)

    DTIC Science & Technology

    2014-12-27

    The multi-model ensemble Kalman filter (MEKF) assumes that a prior system is running with several forecast models, and applies to quantities that vary with the ocean state and for which a forward model exists (e.g. oil spill plumes). Results can be used for operational applications or to derive enhanced background fields for other data assimilation systems; the results discussed in this paper can be viewed as part of a framework

  15. Mapping flood hazards under uncertainty through probabilistic flood inundation maps

    NASA Astrophysics Data System (ADS)

    Stephens, T.; Bledsoe, B. P.; Miller, A. J.; Lee, G.

    2017-12-01

    Changing precipitation, rapid urbanization, and population growth interact to create unprecedented challenges for flood mitigation and management. Standard methods for estimating risk from flood inundation maps generally involve simulations of floodplain hydraulics for an established regulatory discharge of specified frequency. Hydraulic model results are then geospatially mapped and depicted as a discrete boundary of flood extents and a binary representation of the probability of inundation (in or out) that is assumed constant over a project's lifetime. Consequently, existing methods utilized to define flood hazards and assess risk management are hindered by deterministic approaches that assume stationarity in a nonstationary world, failing to account for spatio-temporal variability of climate and land use as they translate to hydraulic models. This presentation outlines novel techniques for portraying flood hazards and the results of multiple flood inundation maps spanning hydroclimatic regions. Flood inundation maps generated through modeling of floodplain hydraulics are probabilistic reflecting uncertainty quantified through Monte-Carlo analyses of model inputs and parameters under current and future scenarios. The likelihood of inundation and range of variability in flood extents resulting from Monte-Carlo simulations are then compared with deterministic evaluations of flood hazards from current regulatory flood hazard maps. By facilitating alternative approaches of portraying flood hazards, the novel techniques described in this presentation can contribute to a shifting paradigm in flood management that acknowledges the inherent uncertainty in model estimates and the nonstationary behavior of land use and climate.

  16. Assessment of regional management strategies for controlling seawater intrusion

    USGS Publications Warehouse

    Reichard, E.G.; Johnson, T.A.

    2005-01-01

    Simulation-optimization methods, applied with adequate sensitivity tests, can provide useful quantitative guidance for controlling seawater intrusion. This is demonstrated in an application to the West Coast Basin of coastal Los Angeles that considers two management options for improving hydraulic control of seawater intrusion: increased injection into barrier wells and in lieu delivery of surface water to replace current pumpage. For the base-case optimization analysis, assuming constant groundwater demand, in lieu delivery was determined to be most cost effective. Reduced-cost information from the optimization provided guidance for prioritizing locations for in lieu delivery. Model sensitivity to a suite of hydrologic, economic, and policy factors was tested. Raising the imposed average water-level constraint at the hydraulic-control locations resulted in nonlinear increases in cost. Systematic varying of the relative costs of injection and in lieu water yielded a trade-off curve between relative costs and injection/in lieu amounts. Changing the assumed future scenario to one of increasing pumpage in the adjacent Central Basin caused a small increase in the computed costs of seawater intrusion control. Changing the assumed boundary condition representing interaction with an adjacent basin did not affect the optimization results. Reducing the assumed hydraulic conductivity of the main productive aquifer resulted in a large increase in the model-computed cost. Journal of Water Resources Planning and Management © ASCE.

  17. Modulation of Galactic Cosmic Rays in the Inner Heliosphere, Comparing with PAMELA Measurements

    NASA Astrophysics Data System (ADS)

    Qin, G.; Shen, Z.-N.

    2017-09-01

    We develop a numerical model to study the time-dependent modulation of galactic cosmic rays in the inner heliosphere. In the model, a time-delayed modified Parker heliospheric magnetic field (HMF) and a new diffusion coefficient model, NLGCE-F, from Qin & Zhang, are adopted. In addition, the latitudinal dependence of the magnetic turbulence magnitude is assumed to be ~(1 + sin²θ)/2 from the observations of Ulysses, and the radial dependence is assumed to be ~r^S, where we choose an expression for S as a function of the heliospheric current sheet tilt angle. We show that the analytical expression used to describe the spatial variation of the HMF turbulence magnitude agrees well with the Ulysses, Voyager 1, and Voyager 2 observations. By running the numerical modulation code, we obtain the proton energy spectra as a function of time during the recent solar minimum and show that the modulation results are consistent with the Payload for Antimatter-Matter Exploration and Light-nuclei Astrophysics (PAMELA) measurements.
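    The assumed spatial dependence of the turbulence magnitude can be written out directly. A sketch under the stated assumptions, with θ the heliographic colatitude and r the heliocentric distance in AU; the function name and the normalization b0 are hypothetical:

```python
import math

def turb_magnitude(r, theta, S, b0=1.0):
    """Assumed turbulence magnitude ~ b0 * (1 + sin^2(theta))/2 * r^S."""
    return b0 * (1.0 + math.sin(theta) ** 2) / 2.0 * r ** S
```

At a given r the magnitude is twice as large in the equatorial plane (θ = π/2) as over the poles (θ = 0), consistent with the latitudinal factor quoted above.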

  18. The drift diffusion model as the choice rule in reinforcement learning.

    PubMed

    Pedersen, Mads Lund; Frank, Michael J; Biele, Guido

    2017-08-01

    Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
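    The combination described above can be illustrated with a minimal simulation, a sketch rather than the authors' hierarchical Bayesian implementation: a delta-rule learner whose choices are generated by a drift diffusion process with drift proportional to the learned value difference. All names and parameter values here are hypothetical:

```python
import random

def ddm_choice(v, a=2.0, dt=1e-3, noise=1.0, rng=random):
    """One drift-diffusion decision: drift v, absorbing bounds at +/- a/2.

    Returns (choice, reaction_time); choice 0 means the upper bound was hit.
    """
    x, t = 0.0, 0.0
    while abs(x) < a / 2.0:
        x += v * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (0 if x > 0 else 1), t

def rl_ddm_trial(q, rewards, lr=0.1, scale=3.0, rng=random):
    """One trial: drift = scale * (q[0] - q[1]), then a delta-rule update."""
    choice, rt = ddm_choice(scale * (q[0] - q[1]), rng=rng)
    reward = rewards[choice]
    q[choice] += lr * (reward - q[choice])
    return choice, rt
```

Over repeated trials the learned value difference grows, drift rates rise, and choices become both more accurate and faster, which is the within- and across-trial coupling this model family exploits.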

  19. The drift diffusion model as the choice rule in reinforcement learning

    PubMed Central

    Frank, Michael J.

    2017-01-01

    Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups. PMID:27966103

  20. An Investigation of the Role of Grapheme Units in Word Recognition

    ERIC Educational Resources Information Center

    Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel

    2012-01-01

    In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…

  1. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  2. Impact of Different Levels of Epistemic Beliefs on Learning Processes and Outcomes in Vocational Education and Training

    ERIC Educational Resources Information Center

    Berding, Florian; Rolf-Wittlake, Katharina; Buschenlange, Janes

    2017-01-01

    Epistemic beliefs are individuals' beliefs about knowledge and knowing. Modelling them is currently based on two central assumptions. First, epistemic beliefs are conceptualized as a multi-level construct, i.e. they exist on a general, academic, domain-specific and/or topic-specific level. Second, research assumes that their more concrete levels…

  3. Associative visual agnosia: a case study.

    PubMed

    Charnallet, A; Carbonnel, S; David, D; Moreaud, O

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both the structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study, an alternative account in the framework of (non-abstractive) episodic models of memory.

  4. A Creatively Creative Taxonomy on Creativity: A New Model of Creativity and Other Novel Forms of Behavior.

    ERIC Educational Resources Information Center

    Stahl, Robert J.

    Some of the most used, misused, and abused terms in contemporary education are the words "create," "creative," and "creativity." One way of understanding creativity is to reject the current practice of assuming that creative behavior is directly caused by some special kind of mental operation called "creative thinking." What can be accepted is the…

  5. Playing to Teachers' Strengths: Using Multiple Measures of Teacher Effectiveness to Improve Teacher Assignments

    ERIC Educational Resources Information Center

    Fox, Lindsay

    2016-01-01

    Current uses of value-added modeling largely ignore or assume away the potential for teachers to be more effective with one type of student than another or in one subject than another. This paper explores the stability of value-added measures across different subgroups and subjects using administrative data from a large urban school district. For…

  6. An averaging battery model for a lead-acid battery operating in an electric car

    NASA Technical Reports Server (NTRS)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.

  7. A Description of Local Time Asymmetries in the Kronian Current Sheet

    NASA Astrophysics Data System (ADS)

    Nickerson, J. S.; Hansen, K. C.; Gombosi, T. I.

    2012-12-01

    Cassini observations imply that Saturn's magnetospheric current sheet is displaced northward above the rotational equator [C.S. Arridge et al., Warping of Saturn's magnetospheric and magnetotail current sheets, Journal of Geophysical Research, Vol. 113, August 2008]. Arridge et al. show that this hinging of the current sheet above the equator occurs over the noon, midnight, and dawn local time sectors. They present an azimuthally independent model to describe this paraboloid-like geometry. We have used our global MHD model, BATS-R-US/SWMF, to study Saturn's magnetospheric current sheet under various solar wind dynamic pressure and solar zenith angle conditions. We show that under reasonable conditions the current sheet does take on the basic shape of the Arridge model in the noon, midnight, and dawn sectors. However, the hinging distance parameter used in the Arridge model is not a constant and does in fact vary with Saturn local time. We recommend that the Arridge model be adjusted to account for this azimuthal dependence. Arridge et al. do not discuss the shape of the current sheet in the dusk sector, due to an absence of data, but presume that the current sheet assumes the same geometry in this region. On the contrary, our model shows that this is not the case: on the dusk side the current sheet hinges sharply southward, which cannot be accounted for by the Arridge model. We will present results from our simulations showing the deviation from axisymmetry and the general behavior of the current sheet under different conditions.

  8. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  9. Nonlinear dielectric response and transient current: An effective potential for ferroelectric domain wall displacement

    NASA Astrophysics Data System (ADS)

    Placeres Jiménez, Rolando; Pedro Rino, José; Marino Gonçalves, André; Antonio Eiras, José

    2013-09-01

    Ferroelectric domain walls are modeled as rigid bodies moving under the action of a potential field in a dissipative medium. Assuming that the dielectric permittivity follows the dependence ε′ ∝ 1/(α + βE²), we obtain the exact expression for the effective potential. Simulations of the polarization current correctly predict a power law. Such results could be valuable in the study of domain wall kinetics and ultrafast polarization processes. The model is extended to poled samples, allowing the study of the nonlinear dielectric permittivity under subswitching electric fields. Experimental nonlinear data from PZT 20/80 thin films and Fe³⁺-doped PZT 40/60 ceramics are reproduced.

  10. Effect of Relative Velocity Between Rough Surfaces: Hydrodynamic Lubrication of Rotary Lip Seal

    NASA Astrophysics Data System (ADS)

    Lahjouji, I.; Gadari, M. El; Fahime, B. El; Radouani, M.

    2017-05-01

    Since the sixties, most numerical studies that model rotary lip seal lubrication have been restricted by assuming that one of the two opposing surfaces is smooth: either the lip or the shaft. This hypothesis, although verified only for a shaft roughness ten times smaller than that of the seal, is the best way to avoid the transient term "∂h/∂t" in the deterministic approach. The subject of the present study is therefore twofold. The first part validates the current hydrodynamic model against the international literature by modeling the asperities on the lip and shaft as a two-dimensional cosine function. In the second part the Reynolds equation for rough surfaces in relative motion is solved. The numerical results show that the relative motion between rough surfaces significantly affects the load support and the leakage rate, but only slightly affects the friction torque.

  11. Theory and application of an approximate model of saltwater upconing in aquifers

    USGS Publications Warehouse

    McElwee, C.; Kemblowski, M.

    1990-01-01

    Motion and mixing of salt water and fresh water are vitally important for water-resource development throughout the world. An approximate model of saltwater upconing in aquifers is developed, which results in three non-linear coupled equations for the freshwater zone, the saltwater zone, and the transition zone. The description of the transition zone uses the concept of a boundary layer. This model invokes some assumptions to give a reasonably tractable model, considerably better than the sharp interface approximation but considerably simpler than a fully three-dimensional model with variable density. We assume the validity of the Dupuit-Forchheimer approximation of horizontal flow in each layer. Vertical hydrodynamic dispersion into the base of the transition zone is assumed and concentration of the saltwater zone is assumed constant. Solute in the transition zone is assumed to be moved by advection only. Velocity and concentration are allowed to vary vertically in the transition zone by using shape functions. Several numerical techniques can be used to solve the model equations, and simple analytical solutions can be useful in validating the numerical solution procedures. We find that the model equations can be solved with adequate accuracy using the procedures presented. The approximate model is applied to the Smoky Hill River valley in central Kansas. This model can reproduce earlier sharp interface results as well as evaluate the importance of hydrodynamic dispersion for feeding salt water to the river. We use a wide range of dispersivity values and find that unstable upconing always occurs. Therefore, in this case, hydrodynamic dispersion is not the only mechanism feeding salt water to the river. Calculations imply that unstable upconing and hydrodynamic dispersion could be equally important in transporting salt water. 
For example, if groundwater flux to the Smoky Hill River were only about 40% of its expected value, stable upconing could exist where hydrodynamic dispersion into a transition zone is the primary mechanism for moving salt water to the river. The current model could be useful in situations involving dense saltwater layers. © 1990.

  12. Battery Cell Thermal Runaway Calorimeter

    NASA Technical Reports Server (NTRS)

    Darcy, Eric

    2017-01-01

    We currently have several methods for determining the total energy output of an 18650 lithium ion cell. We do not, however, have a good method for determining the fraction of energy that dissipates via conduction through the cell can versus the energy that is released in the form of ejecta. Knowledge of this fraction informs the design of our models, battery packs, and storage devices: (a) we no longer need to assume the cell stays together in modeling; (b) we can increase the efficiency of TR mitigation; and (c) we can shave off excess protection.

  13. Strength and Cycle Time of Ventilatory Oscillations in Unacclimatized Humans at High Altitude,

    DTIC Science & Technology

    1983-03-04

    ...that our respiratory monitoring techniques were identical. We assume this difference is due to lack of acclimatization in our current subjects. In the...instability in the blood gas feedback control system. Respiratory control system modeling by Khoo et al (8) has shown that such instability is...In a respiratory control system model a stronger pattern corresponds to increased loop gain at a phase angle of 180 degrees. As shown by Khoo, et

  14. Use of a spread sheet to calculate the current-density distribution produced in human and rat models by low-frequency electric fields.

    PubMed

    Hart, F X

    1990-01-01

    The current-density distribution produced inside irregularly shaped, homogeneous human and rat models by low-frequency electric fields is obtained by a two-stage finite-difference procedure. In the first stage the model is assumed to be equipotential. Laplace's equation is solved by iteration in the external region to obtain the capacitive-current densities at the model's surface elements. These values then provide the boundary conditions for the second-stage relaxation solution, which yields the internal current-density distribution. Calculations were performed with the Excel spread-sheet program on a Macintosh-II microcomputer. A spread sheet is a two-dimensional array of cells. Each cell of the sheet can represent a square element of space. Equations relating the values of the cells can represent the relationships between the potentials in the corresponding spatial elements. Extension to three dimensions is readily made. Good agreement was obtained with current densities measured on human models with both, one, or no legs grounded and on rat models in four different grounding configurations. The results also compared well with predictions of more sophisticated numerical analyses. Spread sheets can provide an inexpensive and relatively simple means to perform good, approximate dosimetric calculations on irregularly shaped objects.
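    The second-stage relaxation is essentially an iterated neighbour average over the grid of cells, which is exactly what a spread sheet recalculates. A minimal stand-alone sketch (hypothetical function name; plain Jacobi iteration rather than the spread sheet's native recalculation order):

```python
def relax_laplace(grid, fixed, tol=1e-6, max_iter=10000):
    """Relax Laplace's equation on a 2-D grid of cells: each free interior
    cell is repeatedly replaced by the average of its four neighbours until
    the largest change falls below tol. Boundary cells are never updated."""
    ny, nx = len(grid), len(grid[0])
    for _ in range(max_iter):
        new = [row[:] for row in grid]
        delta = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                if fixed[i][j]:
                    continue
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
                delta = max(delta, abs(new[i][j] - grid[i][j]))
        grid = new
        if delta < tol:
            break
    return grid
```

As a sanity check, with boundary potentials varying linearly from 0 on one side to 1 on the other, the interior relaxes to the same linear profile.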

  15. Integrated modelling of steady-state scenarios and heating and current drive mixes for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, Masanori; Park, Jin Myung; Giruzzi, G.

    2011-01-01

    Recent progress on ITER steady-state (SS) scenario modelling by the ITPA-IOS group is reviewed. Code-to-code benchmarks as the IOS group's common activities for the two SS scenarios (weak shear scenario and internal transport barrier scenario) are discussed in terms of transport, kinetic profiles, and heating and current drive (CD) sources using various transport codes. Weak magnetic shear scenarios integrate the plasma core and edge by combining a theory-based transport model (GLF23) with scaled experimental boundary profiles. The edge profiles (at normalized radius rho = 0.8-1.0) are adopted from an edge-localized mode-averaged analysis of a DIII-D ITER demonstration discharge. A fully noninductive SS scenario is achieved with fusion gain Q = 4.3, noninductive fraction f(NI) = 100%, bootstrap current fraction f(BS) = 63% and normalized beta beta(N) = 2.7 at plasma current I(p) = 8 MA and toroidal field B(T) = 5.3 T using ITER day-1 heating and CD capability. Substantial uncertainties come from outside the radius at which the boundary conditions are set (rho = 0.8). The present simulation assumed that beta(N)(rho) at the top of the pedestal (rho = 0.91) is about 25% above the peeling-ballooning threshold. Achieving this boundary will be a challenge for ITER, considering different operating conditions (T(e)/T(i) approximately 1 and density peaking). Overall, the experimentally scaled edge is on the optimistic side of the prediction. A number of SS scenarios with different heating and CD mixes in a wide range of conditions were explored by exploiting the weak-shear steady-state solution procedure with the GLF23 transport model and the scaled experimental edge. The results are also presented in the operation space of DT neutron power versus stationary burn pulse duration with assumed poloidal flux availability at the beginning of stationary burn, indicating that the long pulse operation goal (3000 s) at I(p) = 9 MA is possible. Source calculations in these simulations have been revised for electron cyclotron current drive, including parallel momentum conservation effects, and for neutral beam current drive, with finite orbit and magnetic pitch effects.

  16. Cell kill by megavoltage protons with high LET.

    PubMed

    Kuperman, Vadim Y

    2016-07-21

    The aim of the current study is to develop a radiobiological model which describes the effect of linear energy transfer (LET) on cell survival and the relative biological effectiveness (RBE) of megavoltage protons. By assuming the existence of critical sites within a cell, an analytical expression for cell survival S as a function of LET is derived. The obtained results indicate that in cases where the dose per fraction is small, −ln S is a linear-quadratic (LQ) function of dose while both the alpha and beta radiosensitivities depend non-linearly on LET. In particular, in the current model alpha increases with increasing LET while beta decreases. Conversely, in the case of large dose per fraction, the LQ dependence of −ln S on dose is invalid. The proposed radiobiological model predicts cell survival probability and RBE which, in general, deviate from the results obtained by using the conventional LQ formalism. The differences between the LQ model and that described in the current study are reflected in the calculated RBE of protons.
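    In the LQ regime at small doses per fraction, survival and an iso-effect RBE can be computed directly. A generic sketch of the standard LQ bookkeeping (not the paper's LET-dependent expressions; all parameter values are hypothetical):

```python
import math

def survival_lq(dose, alpha, beta):
    """LQ survival: S = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose + beta * dose * dose))

def rbe_iso_effect(alpha_p, beta_p, alpha_ref, beta_ref, dose_p):
    """RBE = D_ref / D_p at equal effect: solve the quadratic
    alpha_ref*D + beta_ref*D^2 = alpha_p*D_p + beta_p*D_p^2 for D."""
    effect = alpha_p * dose_p + beta_p * dose_p ** 2
    d_ref = (math.sqrt(alpha_ref ** 2 + 4.0 * beta_ref * effect)
             - alpha_ref) / (2.0 * beta_ref)
    return d_ref / dose_p
```

Raising alpha while lowering beta, the LET trend the model predicts, raises the RBE at low doses per fraction, where the linear term dominates.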

  17. Comparative assessment of passive surveillance in disease-free and endemic situation: Example of Brucella melitensis surveillance in Switzerland and in Bosnia and Herzegovina

    PubMed Central

    Hadorn, Daniela C; Haracic, Sabina Seric; Stärk, Katharina DC

    2008-01-01

    Background Globalization and subsequent growth in international trade in animals and animal products has increased the importance of international disease reporting. Efficient and reliable surveillance systems are needed in order to document the disease status of a population at a given time. In this context, passive surveillance plays an important role in early warning systems. However, it is not yet routinely integrated in the assessment of disease surveillance systems because different factors like the disease awareness (DA) of people reporting suspect cases influence the detection performance of passive surveillance. In this paper, we used scenario tree methodology in order to evaluate and compare the quality and benefit of abortion testing (ABT) for Brucella melitensis (Bm) between the disease free situation in Switzerland (CH) and a hypothetical disease free situation in Bosnia and Herzegovina (BH), taking into account DA levels assumed for the current endemic situation in BH. Results The structure and input parameters of the scenario tree were identical for CH and BH with the exception of population data in small ruminants and the DA in farmers and veterinarians. The sensitivity analysis of the stochastic scenario tree model showed that the small ruminant population structure and the DA of farmers were important influential parameters with regard to the unit sensitivity of ABT in both CH and BH. The DA of both farmers and veterinarians was assumed to be higher in BH than in CH due to the current endemic situation in BH. Although the same DA cannot necessarily be assumed for the modelled hypothetical disease free situation as for the actual endemic situation, it shows the importance of the higher vigilance of people reporting suspect cases on the probability that an average unit processed in the ABT-component would test positive. Conclusion The actual sensitivity of passive surveillance approaches heavily depends on the context in which they are applied. 
Scenario tree modelling allows for the evaluation of such passive surveillance system components under an assumed disease-free situation. Despite data gaps, this is a real opportunity to compare different situations and to explore the consequences of changes that could be made. PMID:19099610

  18. Comparative assessment of passive surveillance in disease-free and endemic situation: example of Brucella melitensis surveillance in Switzerland and in Bosnia and Herzegovina.

    PubMed

    Hadorn, Daniela C; Haracic, Sabina Seric; Stärk, Katharina D C

    2008-12-22

    Globalization and subsequent growth in international trade in animals and animal products has increased the importance of international disease reporting. Efficient and reliable surveillance systems are needed in order to document the disease status of a population at a given time. In this context, passive surveillance plays an important role in early warning systems. However, it is not yet routinely integrated in the assessment of disease surveillance systems because different factors like the disease awareness (DA) of people reporting suspect cases influence the detection performance of passive surveillance. In this paper, we used scenario tree methodology in order to evaluate and compare the quality and benefit of abortion testing (ABT) for Brucella melitensis (Bm) between the disease free situation in Switzerland (CH) and a hypothetical disease free situation in Bosnia and Herzegovina (BH), taking into account DA levels assumed for the current endemic situation in BH. The structure and input parameters of the scenario tree were identical for CH and BH with the exception of population data in small ruminants and the DA in farmers and veterinarians. The sensitivity analysis of the stochastic scenario tree model showed that the small ruminant population structure and the DA of farmers were important influential parameters with regard to the unit sensitivity of ABT in both CH and BH. The DA of both farmers and veterinarians was assumed to be higher in BH than in CH due to the current endemic situation in BH. Although the same DA cannot necessarily be assumed for the modelled hypothetical disease free situation as for the actual endemic situation, it shows the importance of the higher vigilance of people reporting suspect cases on the probability that an average unit processed in the ABT-component would test positive. The actual sensitivity of passive surveillance approaches heavily depends on the context in which they are applied. 
Scenario tree modelling allows for the evaluation of such passive surveillance system components under an assumed disease-free situation. Despite data gaps, this is a real opportunity to compare different situations and to explore the consequences of changes that could be made.

  19. Correction for partial volume effect in PET blood flow images

    NASA Astrophysics Data System (ADS)

    Gage, Howard D.; Fahey, Fredrick H.; Santago, Peter, II; Harkness, Beth A.; Keyes, J. W.

    1996-04-01

    Current positron emission tomography techniques for the measurement of cerebral blood flow assume that voxels represent pure material regions. In this work, a method is presented which utilizes anatomical information from a high resolution modality such as MRI in conjunction with a multicompartment extension of the Kety model to obtain intravoxel, tissue specific blood flow values. In order to evaluate the proposed method, noisy time activity curves (TACs) were simulated representing different combinations of gray matter, white matter and CSF, and ratios of gray to white matter blood flow. In all experiments it was assumed that registered MR data supplied the number of materials and the fraction of each present. For each TAC, three experiments were run. In the first it was assumed that the fraction of each material determined by MRI was correct, and, in the second two, that the value was either too high or too low. Using the tree annealing method, material flows were determined which gave the best fit of the model to the simulated TAC data. The results indicate that the accuracy of the method is approximately linearly related to the error in material fraction estimated for a voxel.
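
The multicompartment idea can be illustrated schematically: each tissue type in a voxel contributes a one-compartment Kety curve, weighted by its MRI-derived fraction, and the mixed curve is what the fit must explain. This is a reconstruction under assumptions, not the paper's code; the flow values, partition coefficient, and arterial input function below are invented for illustration.

```python
import math

def kety_tac(flow, times, ca, lam=0.9):
    """One-compartment Kety tissue curve by discrete convolution:
    C(t) = flow * integral of ca(s) * exp(-(flow/lam)*(t-s)) ds,
    where lam is the tissue-blood partition coefficient."""
    dt = times[1] - times[0]
    return [flow * dt * sum(ca[j] * math.exp(-(flow / lam) * (t - times[j]))
                            for j in range(i + 1))
            for i, t in enumerate(times)]

def voxel_tac(fractions, flows, times, ca):
    """Mixed voxel: tissue-fraction-weighted sum of per-compartment curves."""
    curves = [kety_tac(f, times, ca) for f in flows]
    return [sum(w * c[i] for w, c in zip(fractions, curves))
            for i in range(len(times))]

times = [0.1 * i for i in range(100)]
ca = [t * math.exp(-0.5 * t) for t in times]          # hypothetical arterial input
mixed = voxel_tac([0.6, 0.4], [0.8, 0.2], times, ca)  # toy 60% gray / 40% white voxel
```

Fitting the per-compartment flows to `mixed` (e.g. by tree annealing, as in the paper) recovers tissue-specific values instead of a single partial-volume-averaged flow.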

  20. Performance Improvement Assuming Complexity

    ERIC Educational Resources Information Center

    Rowland, Gordon

    2007-01-01

    Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…

  1. Modeling for cardiac excitation propagation based on the Nernst-Planck equation and homogenization.

    PubMed

    Okada, Jun-ichi; Sugiura, Seiryo; Hisada, Toshiaki

    2013-06-01

    The bidomain model is a commonly used mathematical model of the electrical properties of the cardiac muscle that takes into account the anisotropy of both the intracellular and extracellular spaces. However, the equations contain a self-contradiction: the update of ion concentrations does not consider intracellular or extracellular ion movements due to the gradient of electric potential, nor the membrane charge as capacitive currents, even though those currents are taken into account in forming Kirchhoff's first law. To overcome this problem, we start with the Nernst-Planck equation, the ionic conservation law, and the electroneutrality condition at the cellular level, and by introducing a homogenization method and assuming uniformity of variables at the microscopic scale, we derive rational bidomain equations at the macroscopic level.

  2. Defining modeling parameters for juniper trees assuming pleistocene-like conditions at the NTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarbox, S.R.; Cochran, J.R.

    1994-12-31

    This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with the 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data, and wherever possible, data were taken from juniper and pinon-juniper studies that mirrored as many aspects of the GCD facility as possible.

  3. A model to evaluate 100-year energy mix scenarios to facilitate deep decarbonization in the southeastern United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adkisson, Mary A.; Qualls, A. L.

    The Southeast United States consumes approximately one billion megawatt-hours of electricity annually, roughly two-thirds from carbon dioxide (CO2) emitting sources. The balance is produced by non-CO2 emitting sources: nuclear power, hydroelectric power, and other renewables. Approximately 40% of the total CO2 emissions come from the electric grid. The CO2 emitting sources, coal, natural gas, and petroleum, produce approximately 372 million metric tons of CO2 annually. The rest is divided between the transportation sector (36%), the industrial sector (20%), the residential sector (3%), and the commercial sector (2%). An Energy Mix Modeling Analysis (EMMA) tool was developed to evaluate 100-year energy mix strategies to reduce CO2 emissions in the southeast. Current energy sector data were gathered and used to establish a 2016 reference baseline. The spreadsheet-based calculation runs 100-year scenarios based on current nuclear plant expiration dates, assumed electrical demand changes from the grid, assumed renewable power increases and efficiency gains, and assumed rates of reducing coal generation and deploying new nuclear reactors. Within the model, natural gas electrical generation is calculated to meet any demand not met by other sources. Thus, natural gas is viewed as a transitional energy source that produces less CO2 than coal until non-CO2 emitting sources can be brought online. The annual production of CO2 and spent nuclear fuel and the natural gas consumed are calculated and summed. A progression of eight preliminary scenarios shows that nuclear power can substantially reduce or eliminate demand for natural gas within 100 years if it is added at a rate of only 1000 MWe per year. Any increases in renewable energy or efficiency gains can offset the need for nuclear power. However, using nuclear power to reduce CO2 will result in significantly more spent fuel.
More efficient advanced reactors can only marginally reduce the amount of spent fuel generated in the next 100 years if they are assumed to be available beginning around 2040. Thus, closing the nuclear fuel cycle to reduce spent nuclear fuel inventories should be considered. Future work includes the incorporation of economic features into the model and the extension of the evaluation to the industrial sector. It will also be necessary to identify suitable sites for additional reactors.
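
The gas-as-residual bookkeeping described above can be sketched in a few lines. All numbers below are illustrative placeholders, not the EMMA model's actual 2016 baseline or build rates.

```python
def gas_generation(demand, nuclear, renewables, coal):
    """Natural gas fills whatever demand the other sources do not meet (floored at zero)."""
    return max(0.0, demand - nuclear - renewables - coal)

# Illustrative placeholders (TWh/yr): flat demand and renewables, coal retired
# at a fixed rate, nuclear added at ~1000 MWe/yr (roughly 8.76 TWh/yr if run
# at full capacity factor).
demand, nuclear, renewables, coal = 1000.0, 180.0, 120.0, 300.0
gas_by_year = []
for year in range(100):
    gas_by_year.append(gas_generation(demand, nuclear, renewables, coal))
    nuclear += 8.76                    # assumed new-build rate
    coal = max(0.0, coal - 10.0)       # assumed retirement rate
print(f"year 1 gas: {gas_by_year[0]:.0f} TWh, year 100 gas: {gas_by_year[-1]:.0f} TWh")
```

Even this toy version shows the abstract's qualitative result: a steady ~1000 MWe/yr nuclear build-out eventually drives the residual gas requirement to zero.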

  4. Rotating gravity currents. Part 1. Energy loss theory

    NASA Astrophysics Data System (ADS)

    Martin, J. R.; Lane-Serff, G. F.

    2005-01-01

    A comprehensive energy loss theory for gravity currents in rotating rectangular channels is presented. The model is an extension of the non-rotating energy loss theory of Benjamin (J. Fluid Mech. vol. 31, 1968, p. 209) and the steady-state dissipationless theory of rotating gravity currents of Hacker (PhD thesis, 1996). The theory assumes the fluid is inviscid, there is no shear within the current, and the Boussinesq approximation is made. Dissipation is introduced using a simple method. A head loss term is introduced into the Bernoulli equation and it is assumed that the energy loss is uniform across the stream. Conservation of momentum, volume flux and potential vorticity between upstream and downstream locations is then considered. By allowing for energy dissipation, results are obtained for channels of arbitrary depth and width (relative to the current). The results match those from earlier workers in the two limits of (i) zero rotation (but including dissipation) and (ii) zero dissipation (but including rotation). Three types of flow are identified as the effect of rotation increases, characterized in terms of the location of the outcropping interface between the gravity current and the ambient fluid on the channel boundaries. The parameters for transitions between these cases are quantified, as is the detailed behaviour of the flow in all cases. In particular, the speed of the current can be predicted for any given channel depth and width. As the channel depth increases, the predicted Froude number tends to √2, as for non-rotating flows.

  5. In situ Observations of Heliospheric Current Sheets Evolution

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Peng, Jun; Huang, Jia; Klecker, Berndt

    2017-04-01

    We investigate the time differences between heliospheric current sheet observations at different spacecraft using STEREO, ACE, and WIND data. The observations are first compared to a simple theory in which the time difference is determined only by the radial and longitudinal separation between the spacecraft. The predictions fit the observations well except for a few events. Then the time delay caused by the latitudinal separation is taken into consideration. The latitude of each spacecraft is calculated based on the PFSS model, assuming that heliospheric current sheets propagate at the solar wind speed without changing their shapes from the origin to the spacecraft near 1 AU. However, including the latitudinal effects does not improve the prediction, possibly because the PFSS model may not locate the current sheets accurately enough. A new latitudinal delay is then predicted based on the time delays observed in the ACE data. The new method improves the prediction of the time lag between spacecraft; however, further study is needed to predict the location of the heliospheric current sheet more accurately.
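
The "simple theory" of a radial-plus-longitudinal delay can be sketched as below: the sheet is convected radially at the solar wind speed, and solar rotation sweeps it across the longitudinal gap between spacecraft. The solar wind speed, rotation rate, and separations are hypothetical values chosen for illustration, not parameters from the study.

```python
import math

def hcs_time_delay(dr_km, dphi_deg, v_sw_kms=400.0, omega_deg_per_day=14.7):
    """Two-term delay between crossings of a corotating current sheet:
    radial convection at the solar wind speed, plus the time for solar
    rotation to carry the structure across the longitudinal separation."""
    radial = dr_km / v_sw_kms                           # seconds
    longitudinal = dphi_deg / omega_deg_per_day * 86400.0
    return radial + longitudinal

# Hypothetical separations: 0.01 AU radially, 5 degrees of longitude.
dt = hcs_time_delay(0.01 * 1.496e8, 5.0)
print(f"predicted delay: {dt / 3600:.1f} h")
```

For typical spacecraft separations the longitudinal (corotation) term dominates, which is why the radial-plus-longitudinal model already fits most events.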

  6. The Adequacy of Current Interagency Doctrine

    DTIC Science & Technology

    2007-06-15

    proposed new initiatives, the CSIS model calls for the establishment of an Interagency Task Force (IATF) to achieve unity of effort at the tactical...level, specifically for reconstruction and stability operations. The IATF would assume the lead from the COCOM once major combat operations were...complete in a given area or region. The IATF would be civilian-led, directly control full interagency resources, and have dedicated funding authority

  7. Associative Visual Agnosia: A Case Study

    PubMed Central

    Charnallet, A.; Carbonnel, S.; David, D.; Moreaud, O.

    2008-01-01

    We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both levels of structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study [1], an alternative account in the framework of (non abstractive) episodic models of memory [4]. PMID:18413915

  8. Aerodynamic Forces and Moments of a Seaplane on the Water

    NASA Technical Reports Server (NTRS)

    Kohler, M

    1933-01-01

    This report gives the results of wind-tunnel tests with a seaplane model as a contribution to the solution of the aerodynamic problems. In the tests it was assumed that the seaplane rested motionless on the water and was exposed, in various positions with respect to the supposedly flat surface of the water, to a uniform air current at angles from 0 to 360 degrees.

  9. Pack-and-Go Delivery Service: A Multi-Component Cost-Volume-Profit (CVP) Learning Resource

    ERIC Educational Resources Information Center

    Stout, David E.

    2014-01-01

    This educational case, in two parts (A and B), requires students to assume the role of a business consultant and to use Excel to develop a profit-planning or a cost-volume-profit (CVP) model for a package-delivery company opportunity currently being evaluated by a client. The name of the proposed business is Pack-and-Go, which would provide an…

  10. Decision Making Analysis: Critical Factors-Based Methodology

    DTIC Science & Technology

    2010-04-01

    the pitfalls associated with current wargaming methods such as assuming a western view of rational values in decision-making regardless of the cultures...Utilization theory slightly expands the rational decision-making model as it states that "actors try to maximize their expected utility by weighing the...items to categorize the decision-making behavior of political leaders which tend to demonstrate either a rational or cognitive leaning. Leaders

  11. A study on the influence of corona on currents and electromagnetic fields predicted by a nonlinear lightning return-stroke model

    NASA Astrophysics Data System (ADS)

    De Conti, Alberto; Silveira, Fernando H.; Visacro, Silvério

    2014-05-01

    This paper investigates the influence of corona on currents and electromagnetic fields predicted by a return-stroke model that represents the lightning channel as a nonuniform transmission line with time-varying (nonlinear) resistance. The corona model used in this paper allows the calculation of corona currents as a function of the radial electric field in the vicinity of the channel. A parametric study is presented to investigate the influence of corona parameters, such as the breakdown electric field and the critical electric field for the stable propagation of streamers, on predicted currents and electromagnetic fields. The results show that, regardless of the assumed corona parameters, the incorporation of corona into the nonuniform and nonlinear transmission line model under investigation modifies the model predictions so that they consistently reproduce most of the typical features of experimentally observed lightning electromagnetic fields and return-stroke speed profiles. In particular, it is shown that the proposed model leads to close vertical electric fields presenting waveforms, amplitudes, and decay with distance in good agreement with dart leader electric field changes measured in triggered lightning experiments. A comparison with popular engineering return-stroke models further confirms the model's ability to predict consistent electric field waveforms in the close vicinity of the channel. Some differences observed in the field amplitudes calculated with the different models can be related to the fact that current distortion, while present in the proposed model, is ultimately neglected in the considered engineering return-stroke models.

  12. Mood states determine the degree of task shielding in dual-task performance.

    PubMed

    Zwosta, Katharina; Hommel, Bernhard; Goschke, Thomas; Fischer, Rico

    2013-01-01

    Current models of multitasking assume that dual-task performance and the degree of multitasking are affected by cognitive control strategies. In particular, cognitive control is assumed to regulate the amount of shielding of the prioritised task from crosstalk from the secondary task. We investigated whether and how task shielding is influenced by mood states. Participants were exposed to two short film clips, one inducing high and one inducing low arousal, of either negative or positive content. Negative mood led to stronger shielding of the prioritised task (i.e., less crosstalk) than positive mood, irrespective of arousal. These findings support the assumption that emotional states determine the parameters of cognitive control and play an important role in regulating dual-task performance.

  13. STELLAR DYNAMO MODELS WITH PROMINENT SURFACE TOROIDAL FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonanno, Alfio

    2016-12-20

    Recent spectro-polarimetric observations of solar-type stars have shown the presence of photospheric magnetic fields with a predominant toroidal component. If the external field is assumed to be current-free, it is impossible to explain these observations within the framework of standard mean-field dynamo theory. In this work, it will be shown that if the coronal field of these stars is assumed to be harmonic, the underlying stellar dynamo mechanism can support photospheric magnetic fields with a prominent toroidal component even in the presence of axisymmetric magnetic topologies. In particular, it is argued that the observed increase in the toroidal energy in low-mass fast-rotating stars can be naturally explained with an underlying αΩ mechanism.

  14. EXPLORING BIASES OF ATMOSPHERIC RETRIEVALS IN SIMULATED JWST TRANSMISSION SPECTRA OF HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in October 2018, the James Webb Space Telescope (JWST) is expected to revolutionize the field of atmospheric characterization of exoplanets. The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than what has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST, exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical hot Jupiter cloud-free atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. Then, we retrieve these spectra using TauREx, a Bayesian retrieval tool, using two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1σ of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.

  15. EVIDENCE FOR QUASI-ADIABATIC MOTION OF CHARGED PARTICLES IN STRONG CURRENT SHEETS IN THE SOLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malova, H. V.; Popov, V. Yu.; Grigorenko, E. E.

    We investigate quasi-adiabatic dynamics of charged particles in strong current sheets (SCSs) in the solar wind, including the heliospheric current sheet (HCS), both theoretically and observationally. A self-consistent hybrid model of an SCS is developed in which ion dynamics is described in the quasi-adiabatic approximation, while the electrons are assumed to be magnetized and their motion is described in the guiding center approximation. The model shows that the SCS profile is determined by the relative contribution of two currents: (i) the current supported by demagnetized protons that move along open quasi-adiabatic orbits, and (ii) the electron drift current. The simplest modeled SCS is found to be a multi-layered structure that consists of a thin current sheet embedded into a much thicker analog of a plasma sheet. This result is in good agreement with observations of SCSs at ∼1 au. The analysis of the fine structure of different SCSs, including the HCS, shows that an SCS represents a narrow current layer (with a thickness of ∼10^4 km) embedded into a wider region of about 10^5 km, independently of the SCS origin. Therefore, multi-scale structuring is very likely an intrinsic feature of SCSs in the solar wind.

  16. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
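
The generating-function technique can be illustrated with a toy example: evaluate the probability generating function G(s) at the n-th roots of unity and invert with a discrete Fourier transform to recover the probability mass function. The binomial PGF below is a stand-in for illustration, not the paper's amplicon-number model.

```python
import cmath

def pmf_from_pgf(pgf, n):
    """Recover P(X = 0..n-1) from a probability generating function G(s) by
    evaluating it at the n-th roots of unity and applying an inverse DFT:
    P(X = k) = (1/n) * sum_j G(w**j) * w**(-j*k), with w = exp(2*pi*i/n).
    Exact when the support of X lies in {0, ..., n-1}."""
    w = cmath.exp(2j * cmath.pi / n)
    vals = [pgf(w ** j) for j in range(n)]
    return [abs(sum(vals[j] * w ** (-j * k) for j in range(n)) / n)
            for k in range(n)]

# Toy check against a Binomial(4, 0.3) count model: G(s) = (0.7 + 0.3*s)**4.
pmf = pmf_from_pgf(lambda s: (0.7 + 0.3 * s) ** 4, 8)
print(round(pmf[0], 6))  # P(X=0) = 0.7**4 = 0.2401
```

In practice one would compose the PGFs of the collection and PCR-amplification stages and use an FFT rather than the quadratic-time DFT above, but the inversion principle is the same.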

  17. Atomistic modeling of shock-induced void collapse in copper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davila, L P; Erhart, P; Bringa, E M

    2005-03-09

    Nonequilibrium molecular dynamics (MD) simulations show that shock-induced void collapse in copper occurs by emission of shear loops. These loops carry away the vacancies which comprise the void. The growth of the loops continues even after they collide and form sessile junctions, creating a hardened region around the collapsing void. The scenario seen in our simulations differs from current models that assume that prismatic loop emission is responsible for void collapse. We propose a new dislocation-based model that gives excellent agreement with the stress threshold found in the MD simulations for void collapse as a function of void radius.

  18. Measurement of radiative proton capture on F-18 and implications for oxygen-neon novae reexamined

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akers, C.; Laird, A. M.; Fulton, B. R.

    The rate of the F-18(p, gamma)Ne-19 reaction affects the final abundance of the gamma-ray observable radioisotope F-18 produced in novae. However, no successful measurement of this reaction exists, and the rate used is calculated from incomplete information on the contributing resonances. Of the two resonances thought to play a significant role, one has a radiative width estimated from the assumed analogue state in the mirror nucleus F-19. The second does not have an analogue state assignment at all, resulting in an arbitrary radiative width being assumed. Here, we report the first successful direct measurement of the F-18(p, gamma)Ne-19 reaction. The strength of the 665 keV resonance (E-x = 7.076 MeV) is found to be over an order of magnitude weaker than currently assumed in nova models. Reaction rate calculations show that this resonance therefore plays no significant role in the destruction of F-18 at any astrophysical energy.

  19. An oilspill trajectory analysis model with a variable wind deflection angle

    USGS Publications Warehouse

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

    The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
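
The constant-angle drift rule (3.5% of wind speed, rotated 20 degrees clockwise) reduces to a scaled rotation of the wind vector; making `deflection_deg` a function of wind speed would give the variable-angle version the paper studies. Vectors are (east, north) components in m/s, and the wind and current values below are hypothetical.

```python
import math

def surface_drift(wind_u, wind_v, factor=0.035, deflection_deg=20.0):
    """Wind-driven surface drift: scale the wind vector by `factor` and rotate
    it `deflection_deg` clockwise (Northern Hemisphere Coriolis deflection)."""
    theta = math.radians(-deflection_deg)  # negative angle = clockwise rotation
    u, v = factor * wind_u, factor * wind_v
    return (u * math.cos(theta) - v * math.sin(theta),
            u * math.sin(theta) + v * math.cos(theta))

def spill_velocity(wind, current):
    """Trajectory movement: vector sum of wind drift and surface current."""
    du, dv = surface_drift(*wind)
    return (du + current[0], dv + current[1])

# A 10 m/s westerly wind combined with a 0.1 m/s northward current (toy values):
print(spill_velocity((10.0, 0.0), (0.0, 0.1)))
```

The drift speed here is 0.35 m/s regardless of the deflection angle; only the direction of the wind-driven component changes, which is why the model is insensitive to the angle when winds (and hence the drift component) are weak.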

  20. Cost-effectiveness of human papillomavirus vaccination in the United States.

    PubMed

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E

    2008-02-01

    We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.

  1. A Simple Method for Estimating the Economic Cost of Productivity Loss Due to Blindness and Moderate to Severe Visual Impairment.

    PubMed

    Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge

    2015-01-01

    To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
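
The minimum-wage variant of the calculation reduces to a one-line product under the abstract's stated assumptions (half of the ≥50 cohort is of working age, blindness removes the full wage, MSVI removes 30%). The headcounts and wage below are hypothetical, not the paper's country data.

```python
def productivity_loss(n_affected, annual_wage, working_fraction=0.5,
                      loss_fraction=1.0):
    """Annual productivity loss: affected working-age individuals are assumed
    to lose `loss_fraction` of their annual wage (1.0 for blindness, 0.3 for
    MSVI in the abstract's model). `working_fraction` is the share of the
    >=50 cohort assumed to still be of working age (50-64)."""
    return n_affected * working_fraction * annual_wage * loss_fraction

# Hypothetical inputs: 100,000 blind and 250,000 MSVI persons aged >=50,
# annual minimum wage of $15,000.
cob = productivity_loss(100_000, 15_000)
comsvi = productivity_loss(250_000, 15_000, loss_fraction=0.3)
print(f"COB ${cob / 1e9:.2f}B, COMSVI ${comsvi / 1e9:.2f}B")
```

Swapping the wage argument for GNI per capita gives the paper's second method; the abstract notes that the GNI variant tracks the uncorrected refractive error model closely.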

  2. C III spectra in WC Wolf-Rayet stars - Does collisional excitation dominate?

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1993-01-01

    A direct comparison of the spectra emitted by an improved collisionally excited C III atomic model, with observations of C III spectra in Wolf-Rayet WC stars, shows agreement for UV, visible, and near-infrared lines including lines usually considered to be recombination lines. The agreement implies high-density, high-temperature source conditions corresponding to a lower limit of log(NeTe) > 16, whereas most current modeling assumes log(NeTe) < 15.5. This raises questions concerning the photoionization/recombination assumptions on which most WR modeling is based. Recent models are discussed from this point of view.

  3. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
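
    A minimal simulation of the kind of model analyzed here (a leaky integrate-and-fire neuron with a spike-triggered adaptation current) illustrates the tonic-firing regime; parameter names and values are illustrative, not those of the paper.

```python
import numpy as np

def simulate_adaptive_lif(mu=2.0, D=0.01, tau_a=5.0, delta=1.0,
                          v_thresh=1.0, v_reset=0.0, dt=1e-3, t_max=100.0,
                          seed=0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron with
    spike-triggered adaptation (illustrative parameters, not the paper's):
        dv/dt = mu - v - a + sqrt(2 D) xi(t)
        da/dt = -a / tau_a,  with a -> a + delta / tau_a at each spike.
    Returns spike times and the sampled (v, a) trajectories."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v, a = v_reset, 0.0
    spikes = []
    vs, avs = np.empty(n), np.empty(n)
    noise_amp = np.sqrt(2.0 * D * dt)
    for i in range(n):
        v += dt * (mu - v - a) + noise_amp * rng.standard_normal()
        a += -dt * a / tau_a
        if v >= v_thresh:           # threshold crossing: spike and reset
            spikes.append(i * dt)
            v = v_reset
            a += delta / tau_a      # spike-triggered adaptation increment
        vs[i], avs[i] = v, a
    return np.array(spikes), vs, avs

spikes, vs, avs = simulate_adaptive_lif()
rate = spikes.size / 100.0          # mean firing rate over t_max = 100
print(f"{spikes.size} spikes, rate ~ {rate:.2f} per unit time")
```

    Histogramming `vs` and `avs` from such runs is the numerical counterpart of the joint and marginal densities derived analytically in the paper.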

  4. Predicting the continuum between corridors and barriers to animal movements using Step Selection Functions and Randomized Shortest Paths.

    PubMed

    Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi

    2016-01-01

    The loss, fragmentation and degradation of habitat everywhere on Earth prompt increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (when θ approaches infinity, RSP is equivalent to the least cost path) and random walk (when θ → 0, RSP reduces to current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration.
Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
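
    The interpolation that the exploration parameter controls can be demonstrated on a toy graph. The sketch below follows the standard free-energy formulation of randomized shortest paths (after Kivimäki et al.), not the authors' code, and the graph and costs are invented.

```python
import numpy as np

def rsp_free_energy(C, adj, theta, s, t):
    """Directed free-energy distance of the randomized-shortest-paths
    framework (a sketch of the standard formulation, not the authors' code).
    C: edge costs, adj: 0/1 adjacency, theta: the exploration/exploitation
    parameter. Returns -log(z_st) / theta with the target made absorbing."""
    n = adj.shape[0]
    P_ref = adj / adj.sum(axis=1, keepdims=True)  # uniform reference walk
    W = P_ref * np.exp(-theta * C)
    W[t, :] = 0.0                                 # absorb at the target
    Z = np.linalg.inv(np.eye(n) - W)
    return -np.log(Z[s, t]) / theta

# Toy graph (invented): route 0-1-3 costs 2, route 0-2-3 costs 4.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
C = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 3],
              [0, 1, 3, 0]], dtype=float)

# Large theta recovers the least-cost-path distance (2); small theta tends
# toward the expected cost of the unbiased random walk.
for theta in (0.05, 1.0, 20.0):
    print(theta, round(rsp_free_energy(C, adj, theta, s=0, t=3), 3))
```

    Fitting an intermediate theta to observed movements, as done for the reindeer data, selects a point between these two extremes.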

  5. Impact of Basal Conditions on Grounding-Line Retreat

    NASA Astrophysics Data System (ADS)

    Koellner, S. J.; Parizek, B. R.; Alley, R. B.; Muto, A.; Holschuh, N.; Nowicki, S.

    2017-12-01

    An often-made assumption included in ice-sheet models used for sea-level projections is that basal rheology is constant throughout the domain of the simulation. The justification in support of this assumption is that physical data for determining basal rheology is limited and a constant basal flow law can adequately approximate current as well as past behavior of an ice-sheet. Prior studies indicate that beneath Thwaites Glacier (TG) there is a ridge-and-valley bedrock structure which likely promotes deformation of soft tills within the troughs and sliding, more akin to creep, over the harder peaks; giving rise to a spatially variable basal flow law. Furthermore, it has been shown that the stability of an outlet glacier varies with the assumed basal rheology, so accurate projections almost certainly need to account for basal conditions. To test the impact of basal conditions on grounding-line evolution forced by ice-shelf perturbations, we modified the PSU 2-D flowline model to enable the inclusion of spatially variable basal rheology along an idealized bedrock profile akin to TG. Synthetic outlet glacier "data" were first generated under steady-state conditions assuming a constant basal flow law and a constant basal friction coefficient field on either a linear or bumpy sloping bed. In following standard procedures, a suite of models were then initialized by assuming different basal rheologies and then determining the basal friction coefficients that produce surface velocities matching those from the synthetic "data". After running each of these to steady state, the standard and full suite of models were forced by drastically reducing ice-shelf buttressing through side-shear and prescribed basal-melting perturbations. 
In agreement with previous findings, results suggest a more plastic basal flow law enhances stability in response to ice-shelf perturbations by flushing ice from farther upstream to sustain the grounding-zone mass balance required to prolong the current grounding-line position. Mixed rheology beds tend to mimic the retreat of the higher-exponent bed, a behavior enhanced over bumps as the stabilizing ridges tap into ice from local valleys. Thus, accounting for variable basal conditions in ice-sheet model projections is critical for improving both the timing and magnitude of retreat.

  6. Numerical simulations of electric potential field for alternating current potential drop associated with surface cracks in low-alloy steel nuclear material

    NASA Astrophysics Data System (ADS)

    Yeh, Chun-Ping; Huang, Jiunn-Yuan

    2018-04-01

    Low-alloy steels used as structural materials in nuclear power plants are subjected to cyclic stresses during power plant operations. As a result, cracks may develop and propagate through the material. The alternating current potential drop technique is used to measure the lengths of cracks in metallic components. The penetration depth of the alternating current is assumed to be small compared to the crack length. This assumption allows the adoption of the unfolding technique, which simplifies the problem to a surface Laplacian field. A numerical model of the electric potential and current density distributions for a compact tension specimen, together with the unfolded crack model, is presented in this paper. The goal of this work is to conduct numerical simulations to reduce deviations occurring in crack length measurements. Numerical simulations were conducted on AISI 4340 low-alloy steel with different crack lengths to evaluate the electric potential distribution. From the simulated results, an optimised position for voltage measurements in the crack region is proposed.
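
    In its simplest one-dimensional form, the unfolding idea reduces to a closed-form sizing formula; the sketch below shows that textbook thin-skin approximation, which is not necessarily the exact field model simulated in the paper.

```python
def acpd_crack_depth(v_crack, v_ref, probe_spacing):
    """One-dimensional thin-skin ACPD sizing: when the skin depth is small,
    the potential drop is proportional to the surface path length, which a
    crack 'unfolds' from Delta to Delta + 2d, giving
        d = (Delta / 2) * (V_crack / V_ref - 1).
    This is the textbook approximation, not necessarily the exact field
    model simulated in the paper."""
    return 0.5 * probe_spacing * (v_crack / v_ref - 1.0)

# Illustrative readings: 10 mm probe spacing and a crack-site voltage 40%
# above the reference reading taken away from the crack.
depth_mm = acpd_crack_depth(1.4, 1.0, 10.0)
print(f"estimated crack depth: {depth_mm:.2f} mm")
```

    The full simulations in the paper quantify how far real specimen geometry departs from this idealization and where to place the voltage probes to minimize the deviation.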

  7. Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate

    NASA Technical Reports Server (NTRS)

    Mackerras, D.; Darvenzia, M.; Orville, R. E.; Williams, E. R.; Goodman, S. J.

    1999-01-01

    A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between global lightning diurnal variation (AR approx. = 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approx. = 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves in Universal time (UT) could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institute research vessels.
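
    The charging budget quoted above is easy to check with back-of-envelope arithmetic; the 1000 A and 5 C figures are from the abstract, while the global ground-flash rate used below is an assumed illustrative value.

```python
# Back-of-envelope check of the charging budget (1000 A and 5 C per flash
# are the abstract's figures; the global flash rate is an assumed value).
charging_current = 1000.0   # A, required annual-mean charging current
charge_per_flash = 5.0      # C, assumed contribution of one ground flash

required_flash_rate = charging_current / charge_per_flash
print(f"flash rate needed to carry it all: {required_flash_rate:.0f} per second")

# A hypothetical global negative ground-flash rate of ~25 per second (an
# illustrative value, not from the paper) would then supply only:
assumed_cg_rate = 25.0
fraction = assumed_cg_rate * charge_per_flash / charging_current
print(f"fraction supplied by ground flashes: {fraction:.1%}")
```

    Arithmetic of this kind is what puts the ground-flash contribution in the 10-15% range and motivates the non-lightning charging terms in the simulation.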

  8. Ion conduction in the KcsA potassium channel analyzed with a minimal kinetic model.

    PubMed

    Mafé, Salvador; Pellicer, Julio

    2005-02-01

    We use a model by Nelson to study the current-voltage and conductance-concentration curves of bacterial potassium channel KcsA without assuming rapid ion translocation. Ion association to the channel filter is rate controlling at low concentrations, but dissociation and transport in the filter can limit conduction at high concentration for ions other than K+. The absolute values of the effective rate constants are tentative but the relative changes in these constants needed to qualitatively explain the experiments should be of significance.
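
    The qualitative behaviour described (association rate-controlling at low concentration, dissociation and transport limiting at high concentration) is captured by a minimal two-step kinetic sketch. This is in the spirit of, but not identical to, the Nelson model, and the rate constants are illustrative.

```python
def channel_current(c_molar, k_assoc=1e9, k_trans=1e7):
    """Minimal two-step kinetic sketch in the spirit of (but not identical
    to) the Nelson model used in the paper: ion association to the filter
    (rate k_assoc * c, per second) followed by translocation/dissociation
    (rate k_trans). The steady-state cycle rate has Michaelis-Menten form,
    so association is rate-controlling at low concentration while
    translocation limits the current at high concentration. All rate
    constants here are illustrative."""
    e = 1.602e-19                      # elementary charge, C
    cycle_rate = (k_assoc * c_molar * k_trans) / (k_assoc * c_molar + k_trans)
    return e * cycle_rate              # one charge carried per cycle, A

print(channel_current(1e-3))   # association-limited regime
print(channel_current(1.0))    # near-saturated regime
```

    In this picture, an ion for which `k_trans` is small saturates at a lower plateau current, which is the behaviour the abstract ascribes to ions other than K+.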

  9. Chemical vapor deposition modeling: An assessment of current status

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.

    1991-01-01

    The shortcomings of earlier approaches that assumed thermochemical equilibrium and used chemical vapor deposition (CVD) phase diagrams are pointed out. Significant advancements in predictive capabilities due to recent computational developments, especially those for deposition rates controlled by gas phase mass transport, are demonstrated. The importance of using the proper boundary conditions is stressed, and the availability and reliability of gas phase and surface chemical kinetic information are emphasized as the most limiting factors. Future directions for CVD are proposed on the basis of current needs for efficient and effective progress in CVD process design and optimization.

  10. Low-Cost, High-Performance Analog Optical Links

    DTIC Science & Technology

    2006-12-01

    connected by tunnel junctions, which permit the forward conduction of current when they are reverse biased. Hence a key step in the development of the... bias voltage, where the measured IV is shown by the dotted curve. The common tunnel junction IV model assumed a triangular-shaped band structure. A... tunneling characteristics with negative differential resistance and a resistance under reverse bias around 12 Ω. This was higher than the previously grown

  11. On the photosynthetic potential in the very Early Archean oceans.

    PubMed

    Avila, Daile; Cardenas, Rolando; Martin, Osmel

    2013-02-01

    In this work we apply a mathematical model of photosynthesis to quantify the potential for photosynthetic life in the very Early Archean oceans. We assume the presence of oceanic blockers of ultraviolet radiation, specifically ferrous ions. For this scenario, our results suggest a potential for photosynthetic life greater than or similar to that in later eras/eons, such as the Late Archean and the current Phanerozoic eon.

  12. Prediction of Tidal Elevations and Barotropic Currents in the Gulf of Bone

    NASA Astrophysics Data System (ADS)

    Purnamasari, Rika; Ribal, Agustinus; Kusuma, Jeffry

    2018-03-01

    Tidal elevation and barotropic current predictions in the Gulf of Bone have been carried out in this work based on the two-dimensional, depth-integrated Advanced Circulation (ADCIRC-2DDI) model for 2017. Eight tidal constituents obtained from FES2012 were imposed along the open boundary. However, even with these very high-resolution tidal constituents, the discrepancy between the model and tide-gauge data remained very high. To overcome this issue, a Green's function approach was applied, which reduced the root-mean-square error (RMSE) significantly. Two different starting times were used for the predictions, namely 2015 and 2016. After improving the open boundary conditions, the RMSE between observations and the model decreased by 75.30% for 2015 and by 88.65% for 2016. Furthermore, predictions of the tidal elevations and of the tidal (barotropic) current were carried out. These predictions were compared with those made by the Geospatial Information Agency (GIA) of Indonesia, and we found that our predictions are substantially better than GIA's. Finally, since no tidal current observations are available in this area, we assume that, once the tidal elevations have been corrected, the predicted tidal current approaches the actual current velocity.
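
    Harmonic tidal prediction of this kind boils down to summing constituent cosines and scoring the result with an RMSE; the constituent amplitudes, periods, and phases below are invented placeholders, not FES2012 values.

```python
import numpy as np

def tidal_elevation(t_hours, constituents):
    """Harmonic tide prediction: elevation as a sum of cosines, one term per
    constituent, eta(t) = sum_i A_i cos(2*pi*t/T_i - phi_i). The constituent
    values used below are invented for illustration, not FES2012 values."""
    eta = np.zeros_like(t_hours, dtype=float)
    for amp, period_h, phase in constituents:
        eta += amp * np.cos(2.0 * np.pi * t_hours / period_h - phase)
    return eta

# Hypothetical set of four constituents: (amplitude m, period h, phase rad)
constituents = [(0.50, 12.42, 0.3),   # M2
                (0.20, 12.00, 1.1),   # S2
                (0.15, 23.93, 0.7),   # K1
                (0.10, 25.82, 2.0)]   # O1

t = np.arange(0.0, 48.0, 0.5)              # two days at half-hour steps
eta = tidal_elevation(t, constituents)

# RMSE against a synthetic 'tide gauge' record (model plus observation
# noise), the skill measure used in the abstract:
obs = eta + np.random.default_rng(0).normal(0.0, 0.05, eta.size)
rmse = np.sqrt(np.mean((obs - eta) ** 2))
print(f"RMSE = {rmse:.3f} m")
```

    Correcting the open-boundary constituents, as the Green's function step does, acts on the amplitudes and phases entering this sum and therefore directly on the RMSE.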

  13. Transient simulations of nitrogen load for a coastal aquifer and embayment, Cape Cod, MA

    USGS Publications Warehouse

    Colman, J.A.; Masterson, J.P.

    2008-01-01

    A time-varying, multispecies, modular, three-dimensional transport model (MT3DMS) was developed to simulate groundwater transport of nitrogen from increasing sources on land to the shore of Nauset Marsh, a coastal embayment of the Cape Cod National Seashore. Simulated time-dependent nitrogen loads at the coast can be used to correlate with current observed coastal eutrophic effects, to predict current and ultimate effects of development, and to predict loads resulting from source remediation. A time-varying nitrogen load, corrected for subsurface loss, was applied to the land subsurface in the transport model based on five land-use coverages documenting increasing development from 1951 to 1999. Simulated nitrogen loads to Nauset Marsh increased from 230 kg/yr before 1930 to 4390 kg/yr in 2001 to 7130 kg/yr in 2100, assuming future nitrogen sources constant at the 1999 land-use rate. The simulated nitrogen load per area of embayment was 5 times greater for Salt Pond, a eutrophic landward extension of Nauset Marsh, than for other Nauset Marsh areas. Sensitivity analysis indicated that load results were little affected by changes in vertical discretization and annual recharge but much affected by the nitrogen loss rate assumed for a kettle lake downgradient from a landfill.

  14. Modeling of O+ ions in the plasmasphere

    NASA Astrophysics Data System (ADS)

    Guiter, S. M.; Moore, T. E.; Khazanov, G. V.

    1995-11-01

    Heavy ion (O+, O++, and N+) density enhancements in the outer plasmasphere have been observed using the retarding ion mass spectrometer instrument on the DE 1 satellite. These are seen at L shells from 2 to 5, with most occurrences in the L=3 to 4 region; the maximum L shell at which these enhancements occur varies inversely with Dst. It is also known that enhancements of O+ and O++ overlie ionospheric electron temperature peaks. It is thought that these enhancements are related to heating of plasmaspheric particles through interactions with ring current ions. This was investigated using a time-dependent one-stream hydrodynamic model for plasmaspheric flows, in which the model flux tube is connected to the ionosphere. The model simultaneously solves the coupled continuity, momentum, and energy equations of a two-ion (H+ and O+) quasi-neutral, currentless plasma. This model is fully interhemispheric and diffusive equilibrium is not assumed; it includes a corotating tilted dipole magnetic field and neutral winds. First, diurnally reproducible results were found assuming only photoelectron heating of thermal electrons. For this case the modeled equatorial O+ density was below 1 cm-3 throughout the day. The O+ results also show significant diurnal variability, with standing shocks developing when production stops and O+ flows downward under the influence of gravity. Numerical tests were done with different levels of electron heating in the plasmasphere; these show that the equatorial O+ density is highly dependent on the assumed electron heating rates. Over the range of integrated plasmaspheric electron heating (along the flux tube) from 8.7 to 280×109 eV/s, the equatorial O+ density goes like the heating raised to the power 2.3.

  15. Optimal Electrodynamic Tether Phasing Maneuvers

    NASA Technical Reports Server (NTRS)

    Bitzer, Matthew S.; Hall, Christopher D.

    2007-01-01

    We study the minimum-time orbit phasing maneuver problem for a constant-current electrodynamic tether (EDT). The EDT is assumed to be a point mass, and the electromagnetic forces acting on the tether are always perpendicular to the local magnetic field. After deriving and non-dimensionalizing the equations of motion, the only input parameters are the current and the phase angle. Solution examples, including initial Lagrange costates, time of flight, thrust plots, and thrust angle profiles, are given for a wide range of current magnitudes and phase angles. The two-dimensional cases presented use a non-tilted magnetic dipole model, and the solutions are compared to existing literature. We are able to compare similar trajectories for a constant-thrust phasing maneuver, and we find that the time of flight is longer for the constant-thrust case with similar initial thrust values and phase angles. Full three-dimensional solutions, which use a tilted magnetic dipole model, are also analyzed for orbits with small inclinations.

  16. Analysis of electrolyte transport through charged nanopores.

    PubMed

    Peters, P B; van Roij, R; Bazant, M Z; Biesheuvel, P M

    2016-05-01

    We revisit the classical problem of flow of electrolyte solutions through charged capillary nanopores or nanotubes as described by the capillary pore model (also called "space charge" theory). This theory assumes very long and thin pores and uses a one-dimensional flux-force formalism which relates fluxes (electrical current, salt flux, and fluid velocity) and driving forces (difference in electric potential, salt concentration, and pressure). We analyze the general case with overlapping electric double layers in the pore and a nonzero axial salt concentration gradient. The 3×3 matrix relating these quantities exhibits Onsager symmetry and we report a significant new simplification for the diagonal element relating axial salt flux to the gradient in chemical potential. We prove that Onsager symmetry is preserved under changes of variables, which we illustrate by transformation to a different flux-force matrix given by Gross and Osterle [J. Chem. Phys. 49, 228 (1968)]. The capillary pore model is well suited to describe the nonlinear response of charged membranes or nanofluidic devices for electrokinetic energy conversion and water desalination, as long as the transverse ion profiles remain in local quasiequilibrium. As an example, we evaluate electrical power production from a salt concentration difference by reverse electrodialysis, using an efficiency versus power diagram. We show that since the capillary pore model allows for axial gradients in salt concentration, partial loops in current, salt flux, or fluid flow can develop in the pore. Predictions for macroscopic transport properties using a reduced model, where the potential and concentration are assumed to be invariant with radial coordinate ("uniform potential" or "fine capillary pore" model), are close to results of the full model.
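
    The invariance of Onsager symmetry under a change of flux-force variables can be checked numerically: if the fluxes transform as J' = M J and the forces as F' = inv(M)^T F (so the dissipation J·F is preserved), the new matrix M L M^T is again symmetric. The 3×3 matrix below is random, not the paper's transport matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric positive-definite 3x3 Onsager matrix L relating fluxes
# J = (electric current, salt flux, fluid velocity) to driving forces
# F = (potential, chemical-potential, pressure differences): J = L F.
A = rng.standard_normal((3, 3))
L = A @ A.T + 3.0 * np.eye(3)       # symmetric by construction

# Change of flux variables J' = M J. Requiring the dissipation J.F = J'.F'
# to be invariant forces the conjugate transformation F' = inv(M).T F,
# under which the new matrix is L' = M L M.T -- again symmetric.
M = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # generic invertible map
L_new = M @ L @ M.T

F = rng.standard_normal(3)
J = L @ F
J_new = M @ J
F_new = np.linalg.inv(M).T @ F

print("L' symmetric:", np.allclose(L_new, L_new.T))
print("dissipation invariant:", np.isclose(J @ F, J_new @ F_new))
```

    The transformation to the Gross-Osterle flux-force matrix mentioned in the abstract is a particular instance of this congruence.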

  17. Comments on "Modified wind chill temperatures determined by a whole body thermoregulation model and human-based convective coefficients" by Ben Shabat, Shitzer and Fiala (2013) and "Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments" by Ben Shabat and Shitzer (2012)

    NASA Astrophysics Data System (ADS)

    Osczevski, Randall J.

    2014-08-01

    Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) present revised charts for wind chill equivalent temperatures (WCET) and facial skin temperatures (FST) that differ significantly from currently accepted charts. They credit these differences to their more sophisticated calculation model and to the human-based equation that it used for finding the convective heat transfer coefficient (Ben Shabat and Shitzer, Int J Biometeorol 56:639-651, 2012). Because a version of the simple model that was used to create the current charts accurately reproduces their results when it uses the human-based equation, the differences that they found must be entirely due to this equation. In deriving it, Ben Shabat and Shitzer assumed that all of the heat transfer from the surface of their cylindrical model was due to forced convection alone. Because several modes of heat transfer were occurring in the human experiments they were attempting to simulate, notably radiation, their coefficients are actually total external heat transfer coefficients, not purely convective ones, as the calculation models assume. Data from the one human experiment that used heat flux sensors supports this conclusion and exposes the hazard of using a numerical model with several adjustable parameters that cannot be measured. Because the human-based equation is faulty, the values in the proposed charts are not correct. The equation that Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) propose to calculate WCET should not be used.
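
    The distinction at issue here (a total heat-transfer coefficient versus a purely convective one) can be made concrete with the standard linearized radiative coefficient; the skin and air temperatures and the "measured" total coefficient below are hypothetical.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4

def radiative_coefficient(t_skin_c, t_env_c, emissivity=0.98):
    """Linearized radiative heat-transfer coefficient
    h_r = eps * sigma * (Ts^2 + Te^2) * (Ts + Te), in W m-2 K-1."""
    ts, te = t_skin_c + 273.15, t_env_c + 273.15
    return emissivity * SIGMA * (ts ** 2 + te ** 2) * (ts + te)

# Hypothetical cold-exposure reading: a 'measured' total coefficient of
# 60 W m-2 K-1 from a human trial at 10 C skin / -10 C air. Several
# W m-2 K-1 of it is radiation, not convection:
h_r = radiative_coefficient(10.0, -10.0)
h_total = 60.0                  # hypothetical fitted 'convective' value
h_conv = h_total - h_r          # the genuinely convective part
print(f"h_r = {h_r:.1f}, leaving h_conv = {h_conv:.1f} W m-2 K-1")
```

    Attributing `h_total` entirely to forced convection, as the criticized derivation does, overstates the convective coefficient by roughly `h_r`.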

  18. SPH Modelling of Sea-ice Pack Dynamics

    NASA Astrophysics Data System (ADS)

    Staroszczyk, Ryszard

    2017-12-01

    The paper is concerned with the problem of sea-ice pack motion and deformation under the action of wind and water currents. Differential equations describing the dynamics of ice, with its very distinct material responses in converging and diverging flows, express the mass and linear momentum balances on the horizontal plane (the free surface of the ocean). These equations are solved by the fully Lagrangian method of smoothed particle hydrodynamics (SPH). Assuming that the ice behaviour can be approximated by a non-linearly viscous rheology, the proposed SPH model has been used to simulate the evolution of a sea-ice pack driven by wind drag stresses. The results of numerical simulations illustrate the evolution of an ice pack, including variations in ice thickness and ice area fraction in space and time. The effects of different initial ice pack configurations and of different conditions assumed at the coast-ice interface are examined. In particular, the SPH model is applied to a pack flow driven by a vortex wind to demonstrate how well the Lagrangian formulation can capture large deformations and displacements of sea ice.

  19. How do we know what makes for "best practice" in clinical supervision for psychological therapists? A content analysis of supervisory models and approaches.

    PubMed

    Simpson-Southward, Chloe; Waller, Glenn; Hardy, Gillian E

    2017-11-01

    Clinical supervision for psychotherapies is widely used in clinical and research contexts. Supervision is often assumed to ensure therapy adherence and positive client outcomes, but there is little empirical research to support this contention. Regardless, there are numerous supervision models, but it is not known how consistent their recommendations are. This review aimed to identify which aspects of supervision are consistent across models, and which are not. A content analysis of 52 models revealed 71 supervisory elements. Models focus more on supervisee learning and/or development (88.46%), but less on emotional aspects of work (61.54%) or managerial or ethical responsibilities (57.69%). Most models focused on the supervisee (94.23%) and supervisor (80.77%), rather than the client (48.08%) or monitoring client outcomes (13.46%). Finally, none of the models were clearly or adequately empirically based. Although we might expect clinical supervision to contribute to positive client outcomes, the existing models have limited client focus and are inconsistent. Therefore, it is not currently recommended that one should assume that the use of such models will ensure consistent clinician practice or positive therapeutic outcomes. There is little evidence for the effectiveness of supervision. There is a lack of consistency in supervision models. Services need to assess whether supervision is effective for practitioners and patients. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Diffusion Decision Model: Current Issues and History

    PubMed Central

    Ratcliff, Roger; Smith, Philip L.; Brown, Scott D.; McKoon, Gail

    2016-01-01

    There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this article, we relate the models to both earlier and more recent research in psychology. PMID:26952739
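
    The standard model described here is straightforward to simulate by Monte Carlo; the drift, boundary, and noise values below are illustrative.

```python
import numpy as np

def simulate_ddm(drift=0.15, boundary=1.0, noise=1.0, dt=2e-3,
                 n_trials=2000, t_max=20.0, seed=0):
    """Monte-Carlo sketch of the standard two-boundary diffusion decision
    model: evidence accumulates at a constant drift rate with Gaussian
    noise until it reaches +boundary ('correct') or -boundary ('error').
    Parameter values are illustrative, not fitted to data."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.zeros(n_trials)                  # accumulated evidence
    rt = np.full(n_trials, np.nan)          # decision times
    upper = np.zeros(n_trials, dtype=bool)  # which boundary was hit
    active = np.ones(n_trials, dtype=bool)
    for step in range(1, n_steps + 1):
        n_active = int(active.sum())
        if n_active == 0:
            break
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_active)
        hit = active & (np.abs(x) >= boundary)
        rt[hit] = step * dt
        upper[hit] = x[hit] > 0
        active &= ~hit
    done = ~np.isnan(rt)
    return upper[done].mean(), rt[done].mean()

accuracy, mean_rt = simulate_ddm()
print(f"accuracy {accuracy:.2f}, mean decision time {mean_rt:.2f}")
```

    Fitting the model to data additionally requires a non-decision time and, in full applications, across-trial variability in the parameters; the constant-drift core above is the "standard model" the abstract refers to.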

  1. Langmuir probe analysis in electronegative plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane

    2014-12-15

    This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α_0 = n_−/n_e (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as a function of the electron and ion densities (n_e, n_+, n_−), temperatures (T_e, T_+, T_−), and masses (m_e, m_+, m_−). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α_0 > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but it does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α_0 and the ion densities are deduced via an iterative method, since these variables are coupled via the modified Bohm velocity. For both techniques, a Child-law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.

  2. MODELING THE NON-RECYCLED FERMI GAMMA-RAY PULSAR POPULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perera, B. B. P.; McLaughlin, M. A.; Cordes, J. M.

    2013-10-10

    We use Fermi Gamma-ray Space Telescope detections and upper limits on non-recycled pulsars obtained from the Large Area Telescope (LAT) to constrain how the gamma-ray luminosity L_γ depends on the period P and the period derivative P-dot. We use a Bayesian analysis to calculate a best-fit luminosity law, or dependence of L_γ on P and P-dot, including different methods for modeling the beaming factor. An outer gap (OG) magnetosphere geometry provides the best-fit model, which is L_γ ∝ P^−a P-dot^b, where a = 1.36 ± 0.03 and b = 0.44 ± 0.02, similar to but not identical to the commonly assumed L_γ ∝ √(E-dot) ∝ P^−1.5 P-dot^0.5. Given upper limits on gamma-ray fluxes of currently known radio pulsars and using the OG model, we find that about 92% of the radio-detected pulsars have gamma-ray beams that intersect our line of sight. By modeling the misalignment of radio and gamma-ray beams of these pulsars, we find an average gamma-ray beaming solid angle of about 3.7π for the OG model, assuming a uniform beam. Using LAT-measured diffuse fluxes, we place a 2σ upper limit on the average braking index and a 2σ lower limit on the average surface magnetic field strength of the pulsar population of 3.8 and 3.2 × 10^10 G, respectively. We then predict the number of non-recycled pulsars detectable by the LAT based on our population model. Using the 2 yr sensitivity, we find that the LAT is capable of detecting emission from about 380 non-recycled pulsars, including 150 currently identified radio pulsars. Using the expected 5 yr sensitivity, about 620 non-recycled pulsars are detectable, including about 220 currently identified radio pulsars. We note that these predictions significantly depend on our model assumptions.
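
    Because no normalization of the luminosity law is quoted here, only relative luminosities can be sketched; the two example pulsars below are invented, while the exponents are the abstract's best-fit values.

```python
def lum_scaling(p, pdot, a=1.36, b=0.44):
    """Gamma-ray luminosity up to an unknown normalization, L ~ P^-a Pdot^b.
    Defaults are the paper's best-fit outer-gap exponents; a=1.5, b=0.5
    recovers the commonly assumed sqrt(Edot) scaling."""
    return p ** (-a) * pdot ** b

# Two invented pulsars: a young one (P = 0.1 s, Pdot = 1e-13 s/s) and an
# older one (P = 1 s, Pdot = 1e-15 s/s). Relative luminosity under each law:
fit_ratio = lum_scaling(0.1, 1e-13) / lum_scaling(1.0, 1e-15)
edot_ratio = (lum_scaling(0.1, 1e-13, a=1.5, b=0.5)
              / lum_scaling(1.0, 1e-15, a=1.5, b=0.5))
print(f"best-fit law: {fit_ratio:.0f}x brighter; sqrt(Edot): {edot_ratio:.0f}x")
```

    The shallower fitted exponents compress the luminosity contrast between young and old pulsars relative to the √(E-dot) law, which feeds directly into the detectability counts quoted above.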

  3. Interpreting activity in H(2)O-H(2)SO(4) binary nucleation.

    PubMed

    Bein, Keith J; Wexler, Anthony S

    2007-09-28

    Sulfuric acid-water nucleation is thought to be a key atmospheric mechanism for forming new condensation nuclei. In earlier literature, measurements of sulfuric acid activity were interpreted as the total (monomer plus hydrate) concentration above solution. Due to recent reinterpretations, most literature values for H(2)SO(4) activity are thought to represent the number density of monomers. Based on this reinterpretation, the current work uses the most recent models of H(2)O-H(2)SO(4) binary nucleation along with perturbation analyses to predict a decrease in critical cluster mole fraction, increase in critical cluster diameter, and orders of magnitude decrease in nucleation rate. Nucleation rate parameterizations available in the literature, however, give opposite trends. To resolve these discrepancies, nucleation rates were calculated for both interpretations of H(2)SO(4) activity and directly compared to the available parameterizations as well as the perturbation analysis. Results were in excellent agreement with older parameterizations that assumed H(2)SO(4) activity represents the total concentration and duplicated the predicted trends from the perturbation analysis, but differed by orders of magnitude from more recent parameterizations that assume H(2)SO(4) activity represents only the monomer. Comparison with experimental measurements available in the literature revealed that the calculations of the current work assuming a(a) represents the total concentration are most frequently in agreement with observations.

  4. Local and integral disruption forces on the tokamak wall

    NASA Astrophysics Data System (ADS)

    Pustovitov, V. D.; Kiramov, D. I.

    2018-04-01

    The disruption-induced forces on the tokamak wall are evaluated analytically within the standard large-aspect-ratio model that implies axisymmetry, circular plasma and wall, and absence of halo currents. Additionally, the ideal-wall reaction is assumed. The disruptions are modelled as rapid changes in the plasma pressure (thermal quench (TQ)) and net current (current quench (CQ)). The force distribution over the poloidal angle is found as a function of these inputs. The derived formulas allow comparison of the TQ- and CQ-produced forces calculated differently, with and without account of the poloidal current induced in the wall. The latter variant represents the inherent property of the codes treating the wall as a set of toroidal filaments. It is proved here that such a simplification leads to unacceptably large errors in the simulated forces for both TQs and CQs. It is also shown that the TQ part of the force must prevail over that due to CQ in the high-β scenarios developed for JT-60SA and ITER.

  5. An ϵ' improvement from right-handed currents

    DOE PAGES

    Cirigliano, Vincenzo; Dekens, Wouter Gerard; de Vries, Jordy; ...

    2017-01-23

    Recent lattice QCD calculations of direct CP violation in K L → ππ decays indicate tension with the experimental results. Assuming this tension to be real, we investigate a possible beyond-the-Standard Model explanation via right-handed charged currents. By using chiral perturbation theory in combination with lattice QCD results, we accurately calculate the modification of ϵ'/ϵ induced by right-handed charged currents and extract values of the couplings that are necessary to explain the discrepancy, pointing to a scale around 10–100 TeV. We find that couplings of this size are not in conflict with constraints from other precision experiments, but next-generation hadronic electric dipole moment searches (such as neutron and 225Ra) can falsify this scenario. As a result, we work out in detail a direct link, based on chiral perturbation theory, between CP violation in the kaon sector and electric dipole moments induced by right-handed currents which can be used in future analyses of left-right symmetric models.

  6. Probing dark energy dynamics from current and future cosmological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Gongbo; Department of Physics, Simon Fraser University, Burnaby, BC, V5A 1S6; Zhang Xinmin

    2010-02-15

    We report the constraints on the dark energy equation-of-state w(z) using the latest 'Constitution' SNe sample combined with the WMAP5 and Sloan Digital Sky Survey data. Assuming a flat Universe, and utilizing the localized principal component analysis and the model selection criteria, we find that the {Lambda}CDM model is generally consistent with the current data, yet there exists a weak hint of the possible dynamics of dark energy. In particular, a model predicting w(z)<-1 at z is an element of [0.25,0.5) and w(z)>-1 at z is an element of [0.5,0.75), which means that w(z) crosses -1 in the range of z is an element of [0.25,0.75), is mildly favored at 95% confidence level. Given the best fit model for current data as a fiducial model, we make future forecasts from the joint data sets of the Joint Dark Energy Mission, Planck, and the Large Synoptic Survey Telescope, and we find that the future surveys can reduce the error bars on the w bins by roughly a factor of 10 for a 5-w-bin model.

  7. Principles of a multistack electrochemical wastewater treatment design

    NASA Astrophysics Data System (ADS)

    Elsahwi, Essam S.; Dawson, Francis P.; Ruda, Harry E.

    2018-02-01

    Electrolyzer stacks in a bipolar architecture (cells connected in series) are desirable since power provided to a stack can be transferred at high voltages and low currents, and thus the losses in the power bus can be reduced. The anode electrodes (active electrodes) considered as part of this study are single sided, but there are manufacturing cost advantages to implementing double-sided anodes in the future. One of the main concerns with a bipolar stack implementation is the existence of leakage currents (bypass currents). The leakage current is associated with current paths that are not between adjacent anode and cathode pairs. This leads to non-uniform current density distributions which compromise the electrochemical conversion efficiency of the stack and can also lead to unwanted side reactions. The objective of this paper is to develop modelling tools for a bipolar architecture consisting of two single sided cells that use single sided anodes. It is assumed that chemical reactions are single electron transfer rate limited and that diffusion and convection effects can be ignored. The design process consists of the following two steps: development of a large signal model for the stack, and then the extraction of a small signal model from the large signal model. The small signal model facilitates the design of a controller that satisfies current or voltage regulation requirements. A model has been developed for a single cell and two cells in series but can be generalized to more than two cells in series and to incorporate double-sided anode configurations in the future. The developed model is able to determine the leakage current and thus provide a quantitative assessment of the performance of the cell.

  8. Flow studies in canine artery bifurcations using a numerical simulation method.

    PubMed

    Xu, X Y; Collins, M W; Jones, C J

    1992-11-01

    Three-dimensional flows through canine femoral bifurcation models were predicted under physiological flow conditions by solving numerically the time-dependent three-dimensional Navier-Stokes equations. In the calculations, two models were assumed for the blood, those of (a) a Newtonian fluid, and (b) a non-Newtonian fluid obeying the power law. The blood vessel wall was assumed to be rigid, this being the only approximation to the prediction model. The numerical procedure utilized a finite volume approach on a finite element mesh to discretize the equations, and the code used (ASTEC) incorporated the SIMPLE velocity-pressure algorithm in performing the calculations. The predicted velocity profiles were in good qualitative agreement with the in vivo measurements recently obtained by Jones et al. The non-Newtonian effects on the bifurcation flow field were also investigated, and no great differences in velocity profiles were observed. This indicated that the non-Newtonian characteristics of the blood might not be an important factor in determining the general flow patterns for these bifurcations, but could have local significance. Current work involves modeling wall distensibility in an empirically valid manner. Predictions accommodating these will permit a true quantitative comparison with experiment.

  9. Improvements to a High Spectral Resolution, Radiation-Hydrodynamics Model of a Lightning Return Stroke and Comparisons with Measured Spectra and Inferred Plasma Properties

    NASA Astrophysics Data System (ADS)

    Edwards, J. D.; Dreike, P.; Smith, M. W.; Clemenson, M. D.; Zollweg, J. D.

    2015-12-01

    We describe developments to a 1-D cylindrical, radiation-hydrodynamics model of a lightning return stroke that simulates lightning spectra with 1 Angstrom resolution in photon wavelength. In previous calculations we assumed standard density air in the return stroke channel, and the resulting optical spectrum was that of an optically thick emitter, unlike measured spectra, which are optically thin. In this work, we improve our model by initializing our simulation assuming that the leader-heated channel is pre-expanded to a density of 0.01-0.05 ambient and near pressure equilibrium with the surrounding ambient air, and by implementing a time-dependent, external heat source to incorporate the effects of continuing current. By doing so, our simulated spectra, illustrated in the attached figure, show strong spectral emission characteristics at wavelengths similar to spectra measured by Orville (1968). In this poster, we describe our model and compare our simulated results with spectra measured by Orville (1968) and Smith (2015). We also use spectroscopic methods to compute physical properties of the plasma channel, e.g. temperature, from Smith's measurements and compare these with our simulated results.

  10. Impedance cardiography: What is the source of the signal?

    NASA Astrophysics Data System (ADS)

    Patterson, R. P.

    2010-04-01

    Impedance cardiography continues to be investigated for various applications. Instruments for its use are available commercially. Almost all of the recent presentations and articles, along with commercial advertisements, have assumed that aortic volume pulsation is the source of the signal. A review of the literature will reveal that there is no clear evidence for this assumption. Starting with the first paper on impedance cardiography in 1964, which assumed the lung was the source of the signal, the presentation will review many studies in the 1960s, 1970s, and 1980s which suggest the aorta and other vessels, as well as the atria and again the lung, as possible sources. Current studies based on high resolution thoracic models will be presented that show the aorta as contributing only approximately 1% of the total impedance measurement, making it an unlikely candidate for the major contributor to the signal. Combining the results of past studies with recent model-based work suggests other vessels and regions as possible sources.

  11. Modification of anisotropic plasma diffusion via auxiliary electrons emitted by a carbon nanotubes-based electron gun in an electron cyclotron resonance ion source.

    PubMed

    Malferrari, L; Odorici, F; Veronese, G P; Rizzoli, R; Mascali, D; Celona, L; Gammino, S; Castro, G; Miracoli, R; Serafino, T

    2012-02-01

    The diffusion mechanism in magnetized plasmas is a largely debated issue. A short circuit model was proposed by Simon, assuming fluxes of lost particles along the axial (electrons) and radial (ions) directions which can be compensated, to preserve the quasi-neutrality, by currents flowing throughout the conducting plasma chamber walls. We hereby propose a new method to modify Simon's currents via electrons injected by a carbon nanotubes-based electron gun. We found that this improves the source performance, increasing the output current for several charge states. The method is especially sensitive to the pumping frequency. Output currents for given charge states, at different auxiliary electron currents, will be reported in the paper, and the influence of the frequency tuning on the compensation mechanism will be discussed.

  12. Disordered Nuclear Pasta, Magnetic Field Decay, and Crust Cooling in Neutron Stars

    NASA Astrophysics Data System (ADS)

    Horowitz, C. J.; Berry, D. K.; Briggs, C. M.; Caplan, M. E.; Cumming, A.; Schneider, A. S.

    2015-01-01

    Nuclear pasta, with nonspherical shapes, is expected near the base of the crust in neutron stars. Large-scale molecular dynamics simulations of pasta show long lived topological defects that could increase electron scattering and reduce both the thermal and electrical conductivities. We model a possible low-conductivity pasta layer by increasing an impurity parameter Qimp . Predictions of light curves for the low-mass x-ray binary MXB 1659-29, assuming a large Qimp, find continued late time cooling that is consistent with Chandra observations. The electrical and thermal conductivities are likely related. Therefore, observations of late time crust cooling can provide insight on the electrical conductivity and the possible decay of neutron star magnetic fields (assuming these are supported by currents in the crust).

  13. Disordered nuclear pasta, magnetic field decay, and crust cooling in neutron stars.

    PubMed

    Horowitz, C J; Berry, D K; Briggs, C M; Caplan, M E; Cumming, A; Schneider, A S

    2015-01-23

    Nuclear pasta, with nonspherical shapes, is expected near the base of the crust in neutron stars. Large-scale molecular dynamics simulations of pasta show long lived topological defects that could increase electron scattering and reduce both the thermal and electrical conductivities. We model a possible low-conductivity pasta layer by increasing an impurity parameter Q_{imp}. Predictions of light curves for the low-mass x-ray binary MXB 1659-29, assuming a large Q_{imp}, find continued late time cooling that is consistent with Chandra observations. The electrical and thermal conductivities are likely related. Therefore, observations of late time crust cooling can provide insight on the electrical conductivity and the possible decay of neutron star magnetic fields (assuming these are supported by currents in the crust).

  14. Cost/benefit trade-offs for reducing the energy consumption of commercial air transportation (RECAT)

    NASA Technical Reports Server (NTRS)

    Gobetz, F. W.; Leshane, A. A.

    1976-01-01

    The RECAT study evaluated the opportunities for reducing the energy requirements of the U.S. domestic air passenger transport system through improved operational techniques, modified in-service aircraft, derivatives of current production models, or new aircraft using either current or advanced technology. Each of these fuel-conserving alternatives was investigated individually to test its potential for fuel conservation relative to a hypothetical baseline case in which current, in-production aircraft types are assumed to operate, without modification and with current operational techniques, into the future out to the year 2000. Consequently, while the RECAT results lend insight into the directions in which technology can best be pursued for improved air transport fuel economy, no single option studied in the RECAT program is indicative of a realistic future scenario.

  15. Interaction of reflected ions with the firehose marginally stable current sheet - Implications for plasma sheet convection

    NASA Technical Reports Server (NTRS)

    Pritchett, P. L.; Coroniti, F. V.

    1992-01-01

    The firehose marginally stable current sheet, which may model the flow away from the distant reconnection neutral line, assumes that the accelerated particles escape and never return to re-encounter the current region. This assumption fails on the earthward side where the accelerated ions mirror in the geomagnetic dipole field and return to the current sheet at distances up to about 30 R(E) down the tail. Two-dimensional particle simulations are used to demonstrate that the reflected ions drive a 'shock-like' structure in which the incoming flow is decelerated and the Bz field is highly compressed. These effects are similar to those produced by adiabatic choking of steady convection. Possible implications of this interaction for the dynamics of the tail are considered.

  16. Improving the Ionospheric Auroral Conductance in a Global Ring Current Model and the Effects on the Ionospheric Electrodynamics

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Jordanova, V. K.; McGranaghan, R. M.; Solomon, S. C.

    2017-12-01

    The ionospheric conductance, the height-integrated electric conductivity, can regulate both the ionospheric electrodynamics and the magnetospheric dynamics because of its key role in determining the electric field within the coupled magnetosphere-ionosphere system. State-of-the-art global magnetosphere models commonly adopt empirical conductance calculators to obtain the auroral conductance. Such specification can bypass the complexity of the ionosphere-thermosphere chemistry but on the other hand breaks the self-consistent link within the coupled system. In this study, we couple a kinetic ring current model, RAM-SCB-E, that solves for anisotropic particle distributions with a two-stream electron transport code (GLOW) to more self-consistently compute the height-dependent electric conductivity, given the auroral electron precipitation from the ring current model. Comparisons with the traditional empirical formula are carried out. It is found that the newly coupled modeling framework yields smaller Hall and Pedersen conductances, resulting in a larger electric field. As a consequence, the subauroral polarization streams demonstrate a better agreement with observations from DMSP satellites. It is further found that the commonly assumed Maxwellian spectrum of the particle precipitation is not globally appropriate. Instead, a full precipitation spectrum resulting from wave-particle interactions in the ring current provides a more comprehensive description of the precipitation.

  17. How to interpret current-voltage relationships of blocking grain boundaries in oxygen ionic conductors.

    PubMed

    Kim, Seong K; Khodorov, Sergey; Chen, Chien-Ting; Kim, Sangtae; Lubomirsky, Igor

    2013-06-14

    A new model based on a linear diffusion equation is proposed to explain the current-voltage characteristics of blocking grain boundaries, in Y-doped CeO2 in particular; the model can also be expected to apply to ionic conductors with blocking grain boundaries in general. The model considers an infinitely long chain of identical grains separated by grain boundaries, which are treated as regions in which depletion layers of mobile ions are formed due to trapping of immobile charges that do not depend on the applied voltage or temperature. The model assumes that (1) the grain boundaries do not represent physical blocking layers, which implies that if there is a second phase at the grain boundaries, then it is too thin to impede ion diffusion, and (2) the ions follow a Boltzmann distribution throughout the material. Despite its simplicity, the model successfully reproduces the "power law" (current proportional to voltage to the power n), as illustrated with the experimental example of Y-doped ceria. The model also correctly predicts that the product nT, where T is the temperature in K, is constant and is proportional to the grain boundary potential as long as the charge at the grain boundaries remains trapped. The latter allows its direct determination from the current-voltage characteristics and promises considerable simplification in the analysis of the electrical characteristics of the grain boundaries with respect to the models currently in use.
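
    The power-law relation I ∝ V{sup n} can be illustrated with a short sketch that recovers the exponent n from synthetic current-voltage data via a least-squares fit in log-log space; all numerical values below are invented, not measured data:

```python
# Sketch: extracting the power-law exponent n from synthetic I-V data
# (I proportional to V^n) by least squares in log-log coordinates.
import math

def fit_exponent(voltages, currents):
    """Least-squares slope of log(I) versus log(V)."""
    xs = [math.log(v) for v in voltages]
    ys = [math.log(i) for i in currents]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

V = [0.5, 1.0, 2.0, 4.0]          # volts (synthetic)
I = [0.25 * v**3 for v in V]      # synthetic data with n = 3
print(fit_exponent(V, I))         # recovers 3.0
```

    With real data, the fitted n together with the temperature T gives the product nT, from which the model above infers the grain boundary potential.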

  18. Numerical implementation of magneto-acousto-electrical tomography (MAET) using a linear phased array transducer

    NASA Astrophysics Data System (ADS)

    Soner Gözü, Mehmet; Zengin, Reyhan; Güneri Gençer, Nevzat

    2018-02-01

    In this study, the performance and implementation of magneto-acousto-electrical tomography (MAET) is investigated using a linear phased array (LPA) transducer. The goal of MAET is to image the conductivity distribution in biological bodies. It uses the interaction between ultrasound and a static magnetic field to generate a velocity current density distribution inside the body. The resultant voltage due to the velocity current density is sensed by surface electrodes attached to the body. In this study, the theory of MAET is reviewed. A 16-element LPA transducer with 1 MHz excitation frequency is used to provide beam directivity and steerability of acoustic waves. Different two-dimensional numerical models of breast and tumour are formed to analyze the multiphysics problem coupling acoustics and electromagnetic fields. In these models, velocity current density distributions are obtained for pulse-type ultrasound excitations. The static magnetic field is assumed to be 1 T. To sense the resultant voltage caused by the velocity current density, it is assumed that two electrodes are attached on the surface of the body. The performance of MAET is shown through sensitivity matrix analysis. The sensitivity matrix is obtained for two transducer positions with 13 steering angles between -30° and 30° at 5° angular intervals. For the reconstruction of the images, the truncated singular value decomposition method is used with different signal-to-noise ratio (SNR) values (20 dB, 40 dB, 60 dB and 80 dB). The resultant images show that a perturbation (5 mm × 5 mm) placed at 35 mm depth can be detected even if the SNR is 20 dB.
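
    Truncated singular value decomposition reconstruction of the kind mentioned above can be sketched for a generic linear sensitivity model; the matrix sizes and data below are synthetic stand-ins, not the study's actual sensitivity matrix:

```python
# Sketch: truncated SVD (TSVD) reconstruction for a linear measurement
# model y = S x, as used for MAET imaging. The sensitivity matrix S and
# the "true" conductivity perturbation x_true are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((40, 30))   # synthetic sensitivity matrix
x_true = rng.standard_normal(30)    # synthetic perturbation
y = S @ x_true                      # noiseless measurements

U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 20                              # truncation level (assumed)
# Keep only the k largest singular values; this suppresses noise
# amplification at the cost of some resolution.
x_rec = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
```

    In practice the truncation level k is chosen from the singular-value spectrum and the measurement SNR: lower SNR calls for harsher truncation.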

  19. Predicting surface vibration from underground railways through inhomogeneous soil

    NASA Astrophysics Data System (ADS)

    Jones, Simon; Hunt, Hugh

    2012-04-01

    Noise and vibration from underground railways is a major source of disturbance to inhabitants near subways. To help designers meet noise and vibration limits, numerical models are used to understand vibration propagation from these underground railways. However, the models commonly assume the ground is homogeneous and neglect to include local variability in the soil properties. Such simplifying assumptions add a level of uncertainty to the predictions which is not well understood. The goal of the current paper is to quantify the effect of soil inhomogeneity on surface vibration. The thin-layer method (TLM) is suggested as an efficient and accurate means of simulating vibration from underground railways in arbitrarily layered half-spaces. Stochastic variability of the soil's elastic modulus is introduced using a K-L expansion; the modulus is assumed to have a log-normal distribution and a modified exponential covariance kernel. The effect of horizontal soil variability is investigated by comparing the stochastic results for soils varied only in the vertical direction to soils with 2D variability. Results suggest that local soil inhomogeneity can significantly affect surface velocity predictions; 90 percent confidence intervals with 8 dB average width and peak values up to 12 dB are computed. This is a significant source of uncertainty and should be considered when using predictions from models assuming homogeneous soil properties. Furthermore, the effect of horizontal variability of the elastic modulus on the confidence interval appears to be negligible. This suggests that only vertical variation needs to be taken into account when modelling ground vibration from underground railways.
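
    A minimal sketch of the stochastic soil description above, a log-normal modulus generated from a Karhunen-Loève (K-L) expansion, might look as follows. A plain (unmodified) exponential covariance kernel is used here for simplicity, and every numerical parameter (mean modulus, variance, correlation length, mode count) is an assumed placeholder:

```python
# Sketch: sampling a 1-D log-normal elastic-modulus profile via a
# discrete Karhunen-Loeve expansion of an exponential covariance kernel.
# All parameter values are illustrative assumptions.
import numpy as np

z = np.linspace(0.0, 10.0, 50)        # depth coordinate, m
corr_len = 2.0                        # correlation length, m (assumed)
sigma2 = 0.1                          # log-modulus variance (assumed)
cov = sigma2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)

# Discrete K-L expansion: eigen-decompose the covariance matrix and
# retain the dominant modes.
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]        # eigh returns ascending order
vals, vecs = vals[order], vecs[:, order]
m = 10                                # retained modes (assumed)
xi = np.random.default_rng(1).standard_normal(m)
log_field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)

E_mean = 50e6                         # mean modulus, Pa (assumed)
E = E_mean * np.exp(log_field)        # one log-normal realization
```

    Repeating the sampling step many times yields an ensemble of soil profiles from which confidence intervals on the predicted surface vibration can be built.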

  20. Strong regularities in world wide web surfing

    PubMed

    Huberman; Pirolli; Pitkow; Lukose

    1998-04-03

    One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
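
    The decision model described above can be sketched as a simple threshold random walk; the drift, noise, and threshold values below are illustrative assumptions, not the paper's fitted parameters:

```python
# Sketch of the surfing model: the perceived value of the current page
# follows a random walk, and the user continues surfing while the value
# stays above a threshold. The resulting distribution of pages visited
# is strongly right-skewed. All numerical parameters are illustrative.
import random

def pages_visited(start=1.0, threshold=0.0, drift=-0.05, noise=0.3,
                  max_pages=10_000, rng=random):
    value, pages = start, 1
    while value > threshold and pages < max_pages:
        value += drift + rng.gauss(0.0, noise)
        pages += 1
    return pages

rng = random.Random(42)
samples = [pages_visited(rng=rng) for _ in range(5000)]
# Most simulated sessions are short; a few are very long.
```

    Comparing the empirical histogram of such samples against observed page-visit counts is the kind of verification the study describes.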

  1. An Investigation of Operational Decision Making in Situ: Incident Command in the U.K. Fire and Rescue Service.

    PubMed

    Cohen-Hatton, Sabrina R; Butler, Philip C; Honey, Robert C

    2015-08-01

    The aim of this study was to better understand the nature of decision making at operational incidents in order to inform operational guidance and training. Normative models of decision making have been adopted in the guidance and training for emergency services. In these models, it is assumed that decision makers assess the current situation, formulate plans, and then execute the plans. However, our understanding of how decision making unfolds at operational incidents remains limited. Incident commanders, attending 33 incidents across six U.K. Fire and Rescue Services, were fitted with helmet-mounted cameras, and the resulting video footage was later independently coded and used to prompt participants to provide a running commentary concerning their decisions. The analysis revealed that assessment of the operational situation was most often followed by plan execution rather than plan formulation, and there was little evidence of prospection about the potential consequences of actions. This pattern of results was consistent across different types of incident, characterized by level of risk and time pressure, but was affected by the operational experience of the participants. Decision making did not follow the sequence of phases assumed by normative models and conveyed in current operational guidance but instead was influenced by both reflective and reflexive processes. These results have clear implications for understanding operational decision making as it occurs in situ and suggest a need for future guidance and training to acknowledge the role of reflexive processes. © 2015, Human Factors and Ergonomics Society.

  2. Winds and tides of Ligeia Mare, with application to the drift of the proposed TiME (Titan Mare Explorer) capsule

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.; Tokano, Tetsuya; Newman, Claire E.

    2012-01-01

    We use two independent General Circulation Models (GCMs) to estimate surface winds at Titan’s Ligeia Mare (78° N, 250° W), motivated by a proposed mission to land a floating capsule in this ∼500 km hydrocarbon sea. The models agree on the overall magnitude (∼0.5-1 m/s) and seasonal variation (strongest in summer) of windspeeds, but details of the seasonal and diurnal variation of windspeed and direction differ somewhat, with the role of surface exchanges being more significant than that of gravitational tides in the atmosphere. We also investigate the tidal dynamics in the sea using a numerical ocean dynamics model: assuming a rigid lithosphere, the tidal amplitude is up to ∼0.8 m. Tidal currents are overall proportional to the reciprocal of depth; with an assumed central depth of 300 m, the characteristic tidal currents are ∼1 cm/s, with notable motions being a slosh between Ligeia’s eastern and western lobes, and a clockwise flow pattern. We find that a capsule will drift at approximately one tenth of the windspeed, unless measures are adopted to augment the drag areas above or below the waterline. Thus the motion of a floating capsule is dominated by the wind, and is likely to be several km per Earth day, a rate that will be readily measured from Earth by radio navigation methods. In some instances, the wind vector rotates diurnally such that the drift trajectory is epicyclic.

  3. Testing the Linearity of the Cosmic Origins Spectrograph FUV Channel Thermal Correction

    NASA Astrophysics Data System (ADS)

    Fix, Mees B.; De Rosa, Gisella; Sahnow, David

    2018-05-01

    The Far Ultraviolet Cross Delay Line (FUV XDL) detector on the Cosmic Origins Spectrograph (COS) is subject to temperature-dependent distortions. The correction performed by the COS calibration pipeline (CalCOS) assumes that these changes are linear across the detector. In this report we evaluate the accuracy of the linear approximations using data obtained on orbit. Our results show that the thermal distortions are consistent with our current linear model.

  4. Statefinder analysis of the superfluid Chaplygin gas model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, V.A., E-mail: vladipopov@mail.ru

    2011-10-01

    The statefinder indices are employed to test the superfluid Chaplygin gas (SCG) model describing the dark sector of the universe. The model involves a Bose-Einstein condensate (BEC) as dark energy (DE) and an excited state above it as dark matter (DM). The condensate is assumed to have a negative pressure and is embodied as an exotic fluid with the Chaplygin equation of state. Excitations form the normal component of the superfluid. The statefinder diagrams show the discrimination between the SCG scenario and other models with the Chaplygin gas, and indicate a pronounced effect of the DM equation of state and an indirect interaction between its two components on statefinder trajectories and the current statefinder location.

  5. Statefinder analysis of the superfluid Chaplygin gas model

    NASA Astrophysics Data System (ADS)

    Popov, V. A.

    2011-10-01

    The statefinder indices are employed to test the superfluid Chaplygin gas (SCG) model describing the dark sector of the universe. The model involves a Bose-Einstein condensate (BEC) as dark energy (DE) and an excited state above it as dark matter (DM). The condensate is assumed to have a negative pressure and is embodied as an exotic fluid with the Chaplygin equation of state. Excitations form the normal component of the superfluid. The statefinder diagrams show the discrimination between the SCG scenario and other models with the Chaplygin gas, and indicate a pronounced effect of the DM equation of state and an indirect interaction between its two components on statefinder trajectories and the current statefinder location.

  6. Interpretation of Landscape Scale SWAT Model Outputs in the Western Lake Erie Basin: Potential Implications for Conservation Decision-Making

    NASA Astrophysics Data System (ADS)

    Johnson, M. V. V.; Behrman, K. D.; Atwood, J. D.; White, M. J.; Norfleet, M. L.

    2017-12-01

    There is substantial interest in understanding how conservation practices and agricultural management impact water quality, particularly phosphorus dynamics, in the Western Lake Erie Basin (WLEB). In 2016, the US and Canada accepted total phosphorus (TP) load targets recommended by the Great Lakes Water Quality Agreement Annex 4 Objectives and Targets Task Team; these were 6,000 MTA delivered to Lake Erie and 3,660 MTA delivered to WLEB. Outstanding challenges include development of metrics to determine achievement of these goals, establishment of sufficient monitoring capacity to assess progress, and identification of appropriate conservation practices to achieve the most cost-effective results. Process-based modeling can help inform decisions to address these challenges more quickly than can system observation. As part of the NRCS-led Conservation Effects Assessment Project (CEAP), the Soil Water Assessment Tool (SWAT) was used to predict impacts of conservation practice adoption reported by farmers on TP loss and load delivery dynamics in WLEB. SWAT results suggest that once the conservation practices in place in 2003-06 and 2012 are fully functional, TP loads delivered to WLEB will average 3,175 MTA and 3,084 MTA, respectively. In other words, SWAT predicts that currently adopted practices are sufficient to meet Annex 4 TP load targets. Yet, WLEB gauging stations show Annex 4 goals are unmet. There are several reasons the model predictions and current monitoring efforts are not in agreement: 1. SWAT assumes full functionality of simulated conservation practices; 2. SWAT does not simulate changing management over time, nor impacts of past management on legacy loads; 3. SWAT assumes WLEB hydrological system equilibrium under simulated management. The SWAT model runs used to construct the scenarios that informed the Annex 4 targets were similarly constrained by model assumptions. 
It takes time for a system to achieve equilibrium when management changes and it takes time for monitoring efforts to measure meaningful changes over time. Careful interpretation of model outputs is imperative for appropriate application of current scientific knowledge to inform decision making, especially when models are used to set spatial and temporal goals around conservation practice adoption and water quality.

  7. Neutrino Mass Bounds from 0{nu}{beta}{beta} Decays and Large Scale Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keum, Y.-Y.; Department of Physics, National Taiwan University, Taipei, Taiwan 10672; Ichiki, K.

    2008-05-21

    We investigate how the total mass sum of neutrinos can be constrained by neutrinoless double beta decay and cosmological probes: the cosmic microwave background (WMAP 3-year results) and large scale structures, including the 2dFGRS and SDSS data sets. First we briefly discuss the current status of neutrino mass bounds from neutrinoless double beta decay and cosmological constraints within the flat {lambda}CDM model. In addition, we explore the interacting neutrino dark-energy model, where the evolution of neutrino masses is determined by a quintessence scalar field, which is responsible for the cosmic acceleration today. Assuming the flatness of the universe, the constraint we can derive from current observations is {sigma}m{sub {nu}} < 0.87 eV at the 95% confidence level, which is consistent with {sigma}m{sub {nu}} < 0.68 eV in the flat {lambda}CDM model.

  8. The shock-heated atmosphere of an asymptotic giant branch star resolved by ALMA

    NASA Astrophysics Data System (ADS)

    Vlemmings, Wouter; Khouri, Theo; O'Gorman, Eamon; De Beck, Elvire; Humphreys, Elizabeth; Lankhaar, Boy; Maercker, Matthias; Olofsson, Hans; Ramstedt, Sofia; Tafoya, Daniel; Takigawa, Aki

    2017-12-01

    Our current understanding of the chemistry and mass-loss processes in Sun-like stars at the end of their evolution depends critically on the description of convection, pulsations and shocks in the extended stellar atmosphere. Three-dimensional hydrodynamical stellar atmosphere models provide observational predictions, but so far the resolution to constrain the complex temperature and velocity structures seen in the models has been lacking. Here we present submillimetre continuum and line observations that resolve the atmosphere of the asymptotic giant branch star W Hydrae. We show that hot gas with chromospheric characteristics exists around the star. Its filling factor is shown to be small. The existence of such gas requires shocks with a cooling time longer than commonly assumed. A shocked hot layer will be an important ingredient in current models of stellar convection, pulsation and chemistry at the late stages of stellar evolution.

  9. Quantifying policy options for reducing future coronary heart disease mortality in England: a modelling study.

    PubMed

    Scholes, Shaun; Bajekal, Madhavi; Norman, Paul; O'Flaherty, Martin; Hawkins, Nathaniel; Kivimäki, Mika; Capewell, Simon; Raine, Rosalind

    2013-01-01

    To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies.

  10. Quantifying Policy Options for Reducing Future Coronary Heart Disease Mortality in England: A Modelling Study

    PubMed Central

    Scholes, Shaun; Bajekal, Madhavi; Norman, Paul; O’Flaherty, Martin; Hawkins, Nathaniel; Kivimäki, Mika; Capewell, Simon; Raine, Rosalind

    2013-01-01

    Aims To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Methods and Results Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. Conclusions CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies. PMID:23936122

  11. Constraints from Earth's heat budget on mantle dynamics

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.; Ferrachat, S.

    2006-12-01

    Recent years have seen an increase in the number of proposed models to explain Earth's mantle dynamics: while the two end-members, pure layered convection with the upper and lower mantle convecting separately from each other, and pure whole mantle convection, appear not to satisfy all the observations, several additional models have been proposed. These models include, and attempt to characterize, at least one reservoir that is enriched in radiogenic elements relative to the mid-ocean ridge basalt (MORB) source, as is required to account for most current estimates of the Earth's heat budget. This reservoir would also be responsible for the geochemical signature in some ocean island basalts (OIBs) like Hawaii, but must be rarely sampled at the surface. Our current knowledge of the mass- and heat-budget for the bulk silicate Earth from geochemical, cosmochemical and geodynamical observations and constraints enables us to quantify the radiogenic heat enrichment required to balance the heat budget. Without assuming any particular model for the structure of the reservoir, we first determine the inherent trade-off between heat production rate and mass of the reservoir. Using these constraints, we then investigate the dynamical implications of the heat budget, assuming that the additional heat is produced within a deep layer above the core-mantle boundary. We carry out dynamical models of layered convection using four different fixed reservoir volumes, corresponding to deep layers of thicknesses 150, 500, 1000 and 1600 km, respectively, and including both temperature-dependent viscosity and an intrinsic viscosity jump between upper and lower mantle. We then assess the viability of these cases against five criteria: stability of the deep layer through time, topography of the interface, effective density profile, intrinsic chemical density and the heat flux at the CMB.

  12. FDTD Modeling of LEMP Propagation in the Earth-Ionosphere Waveguide With Emphasis on Realistic Representation of Lightning Source

    NASA Astrophysics Data System (ADS)

    Tran, Thang H.; Baba, Yoshihiro; Somu, Vijaya B.; Rakov, Vladimir A.

    2017-12-01

    The finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system was used to compute the nearly full-frequency-bandwidth vertical electric field and azimuthal magnetic field waveforms produced on the ground surface by lightning return strokes. The lightning source was represented by the modified transmission-line model with linear current decay with height, which was implemented in the FDTD computations as an appropriate vertical phased-current-source array. The conductivity of the atmosphere was assumed to increase exponentially with height, with different conductivity profiles being used for daytime and nighttime conditions. The fields were computed at distances ranging from 50 to 500 km. Sky waves (reflections from the ionosphere) were identified in computed waveforms and used for estimation of apparent ionospheric reflection heights. It was found that our model reproduces reasonably well the daytime electric field waveforms measured at different distances and simulated (using a more sophisticated propagation model) by Qin et al. (2017). Sensitivity of model predictions to changes in the parameters of atmospheric conductivity profile, as well as influences of the lightning source characteristics (current waveshape parameters, return-stroke speed, and channel length) and ground conductivity were examined.
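
    An exponential conductivity profile of the kind described above can be sketched in a few lines. The reference conductivity, reference height, and scale height below are illustrative placeholders, not values from the paper, and the reflection-height function uses the common rule of thumb that reflection occurs roughly where the conduction current overtakes the displacement current, not the paper's waveform-based estimation.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def conductivity(z_km, sigma0=2.5e-6, h_ref_km=60.0, scale_km=2.9):
    # Exponential profile: sigma(z) = sigma0 * exp((z - h_ref) / scale).
    # sigma0, h_ref_km and scale_km are illustrative, not the paper's values.
    return sigma0 * math.exp((z_km - h_ref_km) / scale_km)

def apparent_reflection_height(f_hz, sigma0=2.5e-6, h_ref_km=60.0, scale_km=2.9):
    # Altitude where the conduction current matches the displacement current,
    # i.e. sigma(z) = 2*pi*f*eps0 -- a rule-of-thumb reflection height.
    target = 2.0 * math.pi * f_hz * EPS0
    return h_ref_km + scale_km * math.log(target / sigma0)
```

    Raising the reference height (a crude stand-in for nighttime conditions) raises the apparent reflection height, consistent with the day/night contrast the abstract describes.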

  13. Counter-current convection in a volcanic conduit

    NASA Astrophysics Data System (ADS)

    Fowler, A. C.; Robinson, Marguerite

    2018-05-01

    Volcanoes of Strombolian type are able to maintain their semi-permanent eruptive states through the constant convective recycling of magma within the conduit leading from the magma chamber. In this paper we study the form of this convection using an analytic model of degassing two-phase flow in a vertical channel. We provide solutions for the flow at small Grashof and large Prandtl numbers, and we suggest that permanent steady-state counter-current convection is only possible if an initial bubbly counter-current flow undergoes a régime transition to a churn-turbulent flow. We also suggest that the magma in the chamber must be under-pressured in order for the flow to be maintained, and that this compromises the assumed form of the flow.

  14. Feature inference with uncertain categorization: Re-assessing Anderson's rational model.

    PubMed

    Konovalova, Elizaveta; Le Mens, Gaël

    2017-09-18

    A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.

  15. Dispersal and behavior of pacific halibut hippoglossus stenolepis in the bering sea and Aleutian islands region

    USGS Publications Warehouse

    Seitz, A.C.; Loher, Timothy; Norcross, Brenda L.; Nielsen, J.L.

    2011-01-01

    Currently, it is assumed that eastern Pacific halibut Hippoglossus stenolepis belong to a single, fully mixed population extending from California through the Bering Sea, in which adult halibut disperse randomly throughout their range during their lifetime. However, we hypothesize that halibut dispersal is more complex than currently assumed and is not spatially random. To test this hypothesis, we studied the seasonal dispersal and behavior of Pacific halibut in the Bering Sea and Aleutian Islands (BSAI). Pop-up Archival Transmitting tags attached to halibut (82 to 154 cm fork length) during the summer provided no evidence that individuals moved out of the Bering Sea and Aleutian Islands region into the Gulf of Alaska during the mid-winter spawning season, supporting the concept that this region contains a separate spawning group of adult halibut. There was evidence for geographically localized groups of halibut along the Aleutian Island chain, as all of the individuals tagged there displayed residency, with their movements possibly impeded by tidal currents in the passes between islands. Mid-winter aggregation areas of halibut are assumed to be spawning grounds, of which 2 were previously unidentified and extend the species' presumed spawning range ~1000 km west and ~600 km north of the nearest documented spawning area. If there are indeed independent spawning groups of Pacific halibut in the BSAI, their dynamics may vary sufficiently from those of the Gulf of Alaska, so that specifically accounting for their relative segregation and unique dynamics within the larger population model will be necessary for correctly predicting how these components may respond to fishing pressure and changing environmental conditions. © Inter-Research 2011.

  16. An examination of the effect of dipole tilt angle and cusp regions on the shape of the dayside magnetopause

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrinec, S.M.; Russell, C.T.

    1995-06-01

    The shape of the dayside magnetopause has been studied from both a theoretical and an empirical perspective for several decades. Early theoretical studies of the magnetopause shape assumed an inviscid interaction and normal pressure balance along the entire boundary, with the interior magnetic field and magnetopause currents being solved self-consistently and iteratively, using the Biot-Savart Law. The derived shapes are complicated, due to asymmetries caused by the nature of the dipole field and the direction of flow of the solar wind. These models contain a weak field region or cusp through which the solar wind has direct access to the ionosphere. More recent MHD model results have indicated that the closed magnetic field lines of the dayside magnetosphere can be dragged tailward of the terminator plane, so that there is no direct access of the magnetosheath to the ionosphere. Most empirical studies have assumed that the magnetopause can be approximated by a simple conic section with a specified number of coefficients, which are determined by least squares fits to spacecraft crossing positions. Thus most empirical models resemble more the MHD models than the more complex shape of the Biot-Savart models. In this work, the authors examine empirically the effect of the cusp regions on the shape of the dayside magnetopause, and they test the accuracy of these models. They find that during periods of northward IMF, crossings of the magnetopause that are close to one of the cusp regions are observed at distances closer to Earth than crossings in the equatorial plane. This result is consistent with the results of the inviscid Biot-Savart models and suggests that the magnetopause is less viscous than is assumed in many MHD models. 28 refs., 4 figs., 1 tab.

  17. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume the flat FRW model of the universe is filled with dark matter and dark energy, which are interacting. For the dark energy model, we consider the entropy-corrected HDE (ECHDE) model and the entropy-corrected NADE (ECNADE) model. For the entropy-corrected models, we assume logarithmic and power law corrections. For the ECHDE model, the length scale L is assumed to be the Hubble horizon and the future event horizon. The ωde-ωde′ analysis for the different horizons is discussed.

  18. Prediction of burnout of a conduction-cooled BSCCO current lead

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seol, S.Y.; Cha, Y.S.; Niemann, R.C.

    A one-dimensional heat conduction model is employed to predict burnout of a Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8} current lead. The upper end of the lead is assumed to be at 77 K and the lower end at 4 K. The results show that burnout always occurs at the warmer end of the lead. The lead reaches its burnout temperature in two distinct stages. Initially, the temperature rises slowly while part of the lead is in the flux-flow state. As the local temperature reaches the critical temperature, it begins to increase sharply. Burnout time depends strongly on flux-flow resistivity.
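
    The kind of one-dimensional transient conduction problem described above can be sketched with an explicit finite-difference scheme. This is a minimal illustration only: the uniform constant heat source standing in for flux-flow and resistive heating, and all numerical values, are assumptions, not the authors' formulation.

```python
def lead_temperatures(n=51, steps=2000, q=0.0, alpha=1.0, length=1.0):
    # Explicit finite-difference solution of dT/dt = alpha*T'' + q on a
    # 1-D lead with ends held at 77 K (warm) and 4 K (cold).
    # alpha, q, length, and the grid are illustrative placeholders.
    dx = length / (n - 1)
    dt = 0.4 * dx * dx / alpha  # below the explicit stability limit 0.5*dx^2/alpha
    # Start from the q = 0 steady state: a linear profile between the ends.
    T = [77.0 + (4.0 - 77.0) * i / (n - 1) for i in range(n)]
    for _ in range(steps):
        new = T[:]
        for i in range(1, n - 1):
            new[i] = (T[i]
                      + alpha * dt / (dx * dx) * (T[i + 1] - 2.0 * T[i] + T[i - 1])
                      + q * dt)
        T = new
    return T
```

    With q = 0 the linear profile is stationary; switching on the source raises interior temperatures above the no-source profile, the first stage of the heating history the abstract describes.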

  19. Characterization of reaction kinetics in a porous electrode

    NASA Technical Reports Server (NTRS)

    Fedkiw, Peter S.

    1990-01-01

    A continuum-model approach, analogous to porous electrode theory, was applied to a thin-layer cell of rectangular and cylindrical geometry. A reversible redox couple is assumed, and the local reaction current density is related to the potential through the formula of Hubbard and Anson for a uniformly accessible thin-layer cell. The placement of the reference electrode is also accounted for in the analysis. Primary emphasis is placed on the effect of the solution-phase ohmic potential drop on the voltammogram characteristics. Correlation equations for the peak-potential displacement from E(sup 0 prime) and the peak current are presented in terms of two dimensionless parameters.

  20. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  1. Diffusion Decision Model: Current Issues and History.

    PubMed

    Ratcliff, Roger; Smith, Philip L; Brown, Scott D; McKoon, Gail

    2016-04-01

    There is growing interest in diffusion models to represent the cognitive and neural processes of speeded decision making. Sequential-sampling models like the diffusion model have a long history in psychology. They view decision making as a process of noisy accumulation of evidence from a stimulus. The standard model assumes that evidence accumulates at a constant rate during the second or two it takes to make a decision. This process can be linked to the behaviors of populations of neurons and to theories of optimality. Diffusion models have been used successfully in a range of cognitive tasks and as psychometric tools in clinical research to examine individual differences. In this review, we relate the models to both earlier and more recent research in psychology. Copyright © 2016. Published by Elsevier Ltd.
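
    The standard model's constant-rate evidence accumulation described above can be sketched as a noisy random walk between two decision boundaries. All parameter values are illustrative, and this is a bare simulation sketch, not any of the fitting machinery used in the diffusion-model literature.

```python
import random

def diffusion_trial(drift, boundary=1.0, noise=1.0, dt=0.001, rng=random):
    # One trial of the standard diffusion model: evidence starts midway
    # between the boundaries and accumulates at a constant mean rate
    # (drift) plus Gaussian noise until a boundary is crossed.
    evidence, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(evidence) < boundary:
        evidence += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (1 if evidence >= boundary else 0), t  # (choice, decision time)
```

    Larger drift rates (stronger stimulus evidence) produce more upper-boundary responses, the qualitative accuracy pattern such models are fitted against.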

  2. Explicit wave action conservation for water waves on vertically sheared flows

    NASA Astrophysics Data System (ADS)

    Quinn, Brenda; Toledo, Yaron; Shrira, Victor

    2016-04-01

    Water waves almost always propagate on currents with a vertical structure, such as currents directed towards the beach accompanied by an under-current directed back toward the deep sea, or wind-induced currents which change magnitude with depth due to viscosity effects. On larger scales they also change their direction due to the Coriolis force, as described by the Ekman spiral. This implies that the existing wave models, which assume vertically-averaged currents, rely on an approximation which is far from realistic. In recent years, ocean circulation models have significantly improved, with the capability to model vertically-sheared current profiles in contrast with the earlier vertically-averaged current profiles. Further advancements have coupled wave action models to circulation models to relate the mutual effects between the two types of motion. Restricting wave models to vertically-averaged non-turbulent current profiles is obviously problematic in these cases, and the primary goal of this work is to derive and examine a general wave action equation which accounts for these shortcomings. The formulation of the wave action conservation equation is made explicit by following the work of Voronovich (1976) and using known asymptotic solutions of the boundary value problem which exploit the smallness of the current magnitude compared to the wave phase velocity and/or its vertical shear and curvature. The adopted approximations are shown to be sufficient for most of the conceivable applications. This provides correction terms to the group velocity and wave action definition accounting for the shear effects, which are fitting for application to operational wave models. In the limit of vanishing current shear, the new formulation reduces to the commonly used Bretherton & Garrett (1968) no-shear wave action equation where the invariant is calculated with the current magnitude taken at the free surface.
It is shown that in realistic oceanic conditions, the neglect of the vertical structure of the currents in wave modelling which is currently universal, might lead to significant errors in wave amplitude and the predicted wave ray paths. An extension of the work toward the more complex case of turbulent currents will also be discussed.

  3. Oil and the world economy: some possible futures.

    PubMed

    Kumhof, Michael; Muir, Dirk

    2014-01-13

    This paper, using a six-region dynamic stochastic general equilibrium model of the world economy, assesses the output and current account implications of permanent oil supply shocks hitting the world economy. For modest-sized shocks and conventional production technologies, the effects are modest. But for larger shocks, for elasticities of substitution that decline as oil usage is reduced to a minimum, and for production functions in which oil acts as a critical enabler of technologies, output growth could drop significantly. Also, oil prices could become so high that smooth adjustment, as assumed in the model, may become very difficult.

  4. Propulsion and Power Rapid Response R&D Support Delivery Order 0041: Power Dense Solid Oxide Fuel Cell Systems: High Performance, High Power Density Solid Oxide Fuel Cells - Materials and Load Control

    DTIC Science & Technology

    2008-12-01

    respectively. 2.3.1.2 Brushless DC Motor Brushless direct current (BLDC) motors feature high efficiency, ease of control, and astonishingly high power... modeling purposes, we ignore the modeling complexity of the BLDC controller and treat the motor and controller "as commutated", i.e. we assume the... High Performance, High Power Density Solid Oxide Fuel Cells - Materials and Load Control Stephen W. Sofie, Steven R. Shaw, Peter A. Lindahl, and Lee H

  5. Laser propulsion for orbit transfer - Laser technology issues

    NASA Technical Reports Server (NTRS)

    Horvath, J. C.; Frisbee, R. H.

    1985-01-01

    Using reasonable near-term mission traffic models (1991-2000 being the assumed operational time of the system) and the most current unclassified laser and laser thruster information available, it was found that space-based laser propulsion orbit transfer vehicles (OTVs) can outperform the aerobraked chemical OTV over a 10-year life-cycle. The conservative traffic models used resulted in an optimum laser power of about 1 MW per laser. This is significantly lower than the power levels considered in other studies. Trip time was taken into account only to the extent that the system was sized to accomplish the mission schedule.

  6. Remote Estimation of River Discharge and Bathymetry: Sensitivity to Turbulent Dissipation and Bottom Friction

    NASA Astrophysics Data System (ADS)

    Simeonov, J.; Holland, K. T.

    2016-12-01

    We investigated the fidelity of a hierarchy of inverse models that estimate river bathymetry and discharge using measurements of surface currents and water surface elevation. Our most comprehensive depth inversion was based on the Shiono and Knight (1991) model that considers the depth-averaged along-channel momentum balance between the downstream pressure gradient due to gravity, the bottom drag and the lateral stresses induced by turbulence. The discharge was determined by minimizing the difference between the predicted and the measured streamwise variation of the total head. The bottom friction coefficient was assumed to be known or determined by alternative means. We also considered simplifications of the comprehensive inversion model that exclude the lateral mixing term from the momentum balance and assessed the effect of neglecting this term on the depth and discharge estimates for idealized in-bank flow in symmetric trapezoidal channels with width/depth ratio of 40 and different side-wall slopes. For these simple gravity-friction models, we used two different bottom friction parameterizations - a constant Darcy-Weisbach local friction and a depth-dependent friction related to the local depth and a constant Manning (roughness) coefficient. Our results indicated that the Manning gravity-friction model provides accurate estimates of the depth and the discharge that are within 1% of the assumed values for channels with side-wall slopes between 1/2 and 1/17. On the other hand, the constant Darcy-Weisbach friction model underpredicted the true depth and discharge by 7% and 9%, respectively, for the channel with side-wall slope of 1/17. These idealized modeling results suggest that a depth-dependent parameterization of the bottom friction is important for accurate inversion of depth and discharge and that the lateral turbulent mixing is not important. 
We also tested the comprehensive and the simplified inversion models for the Kootenai River near Bonners Ferry (Idaho) using in situ and remote sensing measurements of surface currents and water surface elevation obtained during a 2010 field experiment.
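
    The Manning gravity-friction variant described above rests on Manning's equation, Q = (1/n) A R^(2/3) S^(1/2). A minimal sketch of the forward discharge computation for a symmetric trapezoidal channel, plus a bisection depth inversion, is given below; the function names and parameter values are illustrative, and this omits the lateral-mixing term and the head-minimization step of the comprehensive model.

```python
def manning_discharge(depth, bottom_width, side_slope, slope, n_manning):
    # Manning's equation Q = (1/n) * A * R**(2/3) * sqrt(S) for a symmetric
    # trapezoidal channel (side_slope = horizontal run per unit of rise).
    area = depth * (bottom_width + side_slope * depth)
    wetted = bottom_width + 2.0 * depth * (1.0 + side_slope ** 2) ** 0.5
    radius = area / wetted  # hydraulic radius R = A / P
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n_manning

def invert_depth(q_target, bottom_width, side_slope, slope, n_manning,
                 lo=1e-6, hi=50.0, tol=1e-9):
    # Bisection for the depth reproducing a measured discharge; valid
    # because discharge increases monotonically with depth.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, bottom_width, side_slope, slope, n_manning) < q_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

    A round trip (compute the discharge at a known depth, then invert it) recovers the original depth, which is the basic consistency property a depth-inversion scheme must satisfy.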

  7. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature.
They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2 giving better estimates of initial target concentration when estimation of parameters was done for qPCR curves with very different initial target concentration. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
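
    For contrast with the kinetic models above, the commonly used constant-efficiency exponential model the abstract refers to can be sketched in a few lines. Names and numeric values are illustrative, not from the paper.

```python
def exponential_qpcr(f0, efficiency, cycles):
    # Constant-efficiency model: fluorescence after cycle n is f0*(1+E)**n,
    # the exponential form the kinetic models are meant to improve upon.
    return [f0 * (1.0 + efficiency) ** n for n in range(cycles + 1)]

def estimate_f0(cq, threshold, efficiency):
    # Invert the model at the quantification cycle Cq: the standard
    # standard-curve-free estimate of initial target concentration.
    return threshold / (1.0 + efficiency) ** cq
```

    The inversion is exact only while amplification really is exponential with constant efficiency; the kinetic models relax exactly that assumption, which is why they can give better estimates of initial target concentration.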

  8. Coupling of PIES 3-D Equilibrium Code and NIFS Bootstrap Code with Applications to the Computation of Stellarator Equilibria

    NASA Astrophysics Data System (ADS)

    Monticello, D. A.; Reiman, A. H.; Watanabe, K. Y.; Nakajima, N.; Okamoto, M.

    1997-11-01

    The existence of bootstrap currents in both tokamaks and stellarators was confirmed experimentally more than ten years ago. Such currents can have significant effects on the equilibrium and stability of these MHD devices. In addition, stellarators, with the notable exception of W7-X, are predicted to have such large bootstrap currents that reliable equilibrium calculations require the self-consistent evaluation of bootstrap currents. Modeling of discharges which contain islands requires an algorithm that does not assume good surfaces. Only one of the two existing 3-D equilibrium codes, PIES (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43 (1986)), can easily be modified to handle bootstrap current. Here we report on the coupling of the PIES 3-D equilibrium code and the NIFS bootstrap code (Watanabe, K., et al., Nuclear Fusion 35 (1995), 335).

  9. The influence of ground conductivity on the structure of RF radiation from return strokes

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Gesell, L.

    1984-01-01

    It is shown that the combination of the finite conductivity of the Earth and the propagation of the return stroke current up the channel results in an apparent time delay between the fast field changes and the RF radiation seen by distant observers. The time delay predicted from model return strokes is on the order of 20 μs, and the received signal has the characteristics of the data observed in Virginia and Florida. A piecewise linear model for the return stroke channel and a transmission line model for current propagation on each segment were used. Radiation from each segment is calculated over a flat Earth with finite conductivity using asymptotic approximations for the Sommerfeld integrals. The radiation at the observer is processed by a model AM radio receiver. The output voltage was calculated for several frequencies between HF and UHF, assuming a system bandwidth (300 kHz) characteristic of the system used to collect data in Florida and Virginia. Comparison with the theoretical fast field changes indicates a time delay of about 20 μs.

  10. Numerical simulation of proton exchange membrane fuel cells at high operating temperature

    NASA Astrophysics Data System (ADS)

    Peng, Jie; Lee, Seung Jae

    A three-dimensional, single-phase, non-isothermal numerical model for proton exchange membrane (PEM) fuel cells at high operating temperature (T ≥ 393 K) was developed and implemented into a computational fluid dynamics (CFD) code. The model accounts for convective and diffusive transport and allows predicting the concentration of species. The heat generated from electrochemical reactions, entropic heat, and ohmic heat arising from the electrolyte ionic resistance were considered. The heat transport model was coupled with the electrochemical and mass transport models. The product water was assumed to be vaporous and was treated as an ideal gas. Water transport across the membrane was ignored because of the low electro-osmotic drag in the polymer polybenzimidazole (PBI) membrane. The results show that thermal effects strongly affect the fuel cell performance. The current density increases with increasing operating temperature. In addition, numerical prediction reveals that the width and distribution of the gas channel and current collector land area are key optimization parameters for cell performance improvement.

  11. Numerical Simulation of Multiphase Magnetohydrodynamic Flow and Deformation of Electrolyte-Metal Interface in Aluminum Electrolysis Cells

    NASA Astrophysics Data System (ADS)

    Hua, Jinsong; Rudshaug, Magne; Droste, Christian; Jorgensen, Robert; Giskeodegard, Nils-Haavard

    2018-06-01

    A computational fluid dynamics based multiphase magnetohydrodynamic (MHD) flow model for simulating the melt flow and bath-metal interface deformation in realistic aluminum reduction cells is presented. The model accounts for the complex physics of the MHD problem in aluminum reduction cells by coupling two immiscible fluids, the electromagnetic field, the Lorentz force, flow turbulence, and complex cell geometry with large length scale. In particular, the deformation of the bath-metal interface is tracked directly in the simulation, and the condition of constant anode-cathode distance (ACD) is maintained by moving the anode bottom dynamically with the deforming bath-metal interface. The metal pad deformation and melt flow predicted by the current model are compared to the predictions of a simplified model in which the bath-metal interface is assumed flat. The effects of the induced electric current due to fluid flow and of the magnetic field due to the interior cell current on the metal pad deformation and melt flow are investigated. The presented model extends the conventional simplified box model by including detailed cell geometry such as the ledge profile and all channels (side, central, and cross-channels). The simulations show the model's sensitivity to different side ledge profiles and to the cross-channel width by comparing the predicted melt flow and metal pad heaving. In addition, the model's dependencies on reduction cell operating conditions, such as ACD, current distribution on the cathode surface, and open/closed channel top, are discussed.

  12. 78 FR 16311 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-14

    ... NBB.\\45\\ \\45\\ 17 CFR 242.201(b)(1)(iii)(A). Example 6. Assume now that another market center posts a...). Example 1. Assume that the NBBO for security XYZ is $10.10 x $10.12 and the short sale price test... be the result under either the current or proposed Post Only definition. Example 2. Assume the same...

  13. The hyperbolic problem

    NASA Astrophysics Data System (ADS)

    Gualdesi, Lavinio

    2017-04-01

    Mooring lines in the ocean might be seen as a rather simple seamanlike activity. Connecting valuable scientific instrumentation to them transforms this simple activity into sophisticated engineering support that needs to be accurately designed, developed, deployed, monitored and, hopefully, recovered with its precious load of scientific data. This work is a historical travel along the efforts carried out by scientists all over the world to successfully predict mooring line behaviour through both mathematical simulation and experimental verification. It is at first glance unexpected how many factors one must observe to get closer and closer to a real ocean situation. Most models apply equally to mooring lines and to towed-body line equations. Numerous references are provided, starting from the oldest one, due to Isaac Newton. In his "Philosophiae Naturalis Principia Mathematica" (1687) the English scientist, while discussing the laws of motion for bodies in a resistant medium, envisages a hyperbolic fit to the phenomenon, including asymptotic behaviour in non-resistant media. A non-exhaustive set of mathematical simulations of mooring line trajectory prediction is listed hereunder to document how the subject has been under scientific focus for almost a century. Pode (1951): before the spread of personal computers, a tabular form of cable-geometry calculus was used by generations of engineers, keeping in mind the following limitations and approximations: tangential drag coefficients were assumed to be negligible, and a steady current flow was assumed, as in the towed configuration. Chhabra (1982): a finite element method that assumes an arbitrary deflection angle for the top section and calculates equilibrium equations down to the sea floor, iterating up to a compliant solution. Gualdesi (1987): ANAMOOR, a Fortran program based on the iterative methods above, including experimental data from an intensive mooring campaign and a database of experimental drag coefficients, obtained in a wind tunnel, for the instrumentation verified in ocean moorings. Dangov (1987): a set of Fortran routines, due to a Canadian scientist, to analyse discrepancies between model and experimental data due to strumming effects on the mooring line; Acoustic Doppler Current Profiler data were adopted for the first time as model input. Skop and O'Hara (1968): static analysis of a three-dimensional multi-leg model. Knutson (1987): a model developed at the David Taylor Model Basin based on towed models. Henry Berteaux (1990): SFMOOR, an iterative FEM analysis fully fitted with a mooring-components database, developed by a WHOI scientist. Henry Berteaux (1990): SSMOOR, the same model applied to sub-surface moorings. Gobat and Grosenbaugh (1998): a fully developed method based on strip theory, developed by WHOI scientists; experimental validation results are not known.

  14. Resistive switching near electrode interfaces: Estimations by a current model

    NASA Astrophysics Data System (ADS)

    Schroeder, Herbert; Zurhelle, Alexander; Stemmer, Stefanie; Marchewka, Astrid; Waser, Rainer

    2013-02-01

    The growing resistive switching database is accompanied by many detailed mechanisms which often are pure hypotheses. Some of these suggested models can be verified by checking their predictions against the benchmarks of future memory cells. The valence change memory model assumes that the different resistances in ON and OFF states arise from changing the defect density profiles in a sheet near one working electrode during switching. The resulting different READ current densities in ON and OFF states were calculated by using an appropriate simulation model with variation of several important defect and material parameters of the metal/insulator (oxide)/metal thin film stack, such as the defect density and its profile change in density and thickness, the height of the interface barrier, the dielectric permittivity, and the applied voltage. The results were compared to the benchmarks, and memory windows of the varied parameters can be defined: the required ON state READ current density of 10⁵ A/cm² can only be achieved for barriers smaller than 0.7 eV and defect densities larger than 3 × 10²⁰ cm⁻³. The required current ratio of at least 10 between ON and OFF states requires a defect density reduction of approximately an order of magnitude in a sheet of several nanometers near the working electrode.

  15. Manual. According to the Calculation of Wires and Cables,

    DTIC Science & Technology

    1980-04-23

    Tpezxaaaro TON. As, C Kay: (1). Designation. (2). Designation. (3). Current (constant); voltage (constant). (4). Current (variable/alternating); voltage (variable/alternating), general designation. (5). Current (variable/alternating), three-phase, 50 Hz. (6). Hz. (7). Zero line (neutral). It is allowed/assumed... In diagrams of power supply it is allowed/assumed to depict the high-voltage switch as shown. (10). Windings of relay, contactor and magnetic starter. It

  16. On the electromagnetic fields, Poynting vector, and peak power radiated by lightning return strokes

    NASA Technical Reports Server (NTRS)

    Krider, E. P.

    1992-01-01

    The initial radiation fields, Poynting vector, and total electromagnetic power that a vertical return stroke radiates into the upper half space have been computed for the case in which the speed of the stroke, v, is a significant fraction of the speed of light, c, assuming that at large distances and early times the source is an infinitesimal dipole. The initial current is also assumed to satisfy the transmission-line model with a constant v and to be perpendicular to an infinite, perfectly conducting ground. The effect of a large v is to increase the radiation fields by a factor of (1 − β²cos²θ)⁻¹, where β = v/c and θ is measured from the vertical, and the Poynting vector by a factor of (1 − β²cos²θ)⁻².
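The two enhancement factors quoted in the abstract are easy to evaluate numerically; a minimal sketch (the function names are ours):

```python
import math

def field_factor(beta, theta_deg):
    """Radiation-field enhancement (1 - beta^2 cos^2 theta)^-1."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return 1.0 / (1.0 - beta ** 2 * c2)

def poynting_factor(beta, theta_deg):
    """Poynting-vector enhancement (1 - beta^2 cos^2 theta)^-2."""
    return field_factor(beta, theta_deg) ** 2

# beta = 0 recovers the slow-stroke result (factor 1); for beta = 0.8 the
# field near the vertical (theta = 0) is enhanced by 1/0.36, roughly 2.8.
```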

  17. Application of Superconducting Power Cables to DC Electric Railway Systems

    NASA Astrophysics Data System (ADS)

    Ohsaki, Hiroyuki; Lv, Zhen; Sekino, Masaki; Tomita, Masaru

    For novel design and efficient operation of next-generation DC electric railway systems, especially for their substantial energy saving, we have studied the feasibility of applying superconducting power cables to them. In this paper it is assumed that a superconducting power cable is used to connect substations supplying electric power to trains. The line analyzed was modeled as an electric circuit, which was analyzed with MATLAB-Simulink. From the calculated voltages and currents of the circuit, the regenerative braking energy and the energy losses were estimated. In addition, assuming the heat loads of superconducting power cables and the cryogenic efficiency, the energy saving of the total system was evaluated. The results show that the introduction of superconducting power cables could achieve improved use of regenerative braking, loss reduction, a decreased number of substations, reduced maintenance, etc.

  18. Modeling the impact of screening policy and screening compliance on incidence and mortality of cervical cancer in the post-HPV vaccination era.

    PubMed

    de Blasio, Birgitte Freiesleben; Neilson, Aileen Rae; Klemp, Marianne; Skjeldestad, Finn Egil

    2012-12-01

    In Norway, Pap smear screening targets women aged 25-69 years on a triennial basis. The introduction of human papillomavirus (HPV) mass immunization in 2009 raises questions regarding cost-saving changes to current screening strategies. We calibrated a dynamic HPV transmission model to Norwegian data and assessed the impact of changing screening 20 or 30 years after vaccine introduction, assuming 60 or 90% vaccination coverage. Screening compliance among vaccinated women was assumed to be 80 or 50%. The strategies considered were: (i) 5-yearly screening of women aged 25-69 years, (ii) 3-yearly screening of women aged 30-69 years and (iii) 3-yearly screening of women aged 25-59 years. The greatest health gains were accomplished by ensuring a high vaccine uptake. In 2060, cervical cancer incidence was reduced by an estimated 36-57% compared with that of no vaccination. Stopping screening at the age of 60 years (excluding opportunistic screening) increased cervical cancer incidence by 3% (2060) compared with maintaining the current screening strategy, resulting in 1.0-2.4% extra cancers (2010-2060). The 5-yearly screening strategy elevated cervical cancer incidence by 30%, resulting in 4.7-11.3% additional cancers. High vaccine uptake in the years to come is of primary concern. Screening of young women (<30 years) remains important, even under conditions of high vaccine coverage.

  19. Sub-corotating region of Saturn's magnetosphere: Cassini observations of the azimuthal field and implications for the ionospheric Pedersen Current (Invited)

    NASA Astrophysics Data System (ADS)

    Smith, E. J.; Dougherty, M. K.; Zhou, X.

    2010-12-01

    A consensus model of Saturn’s magnetosphere that has broad acceptance consists of four regions in which the plasma and field are corotating, sub-corotating or undergoing Vasyliunas or Dungey convection. In this model, the sub-corotating magnetosphere contains a large-scale circuital current system comprised of radial, field-aligned and ionospheric currents. A quantitative rendering of this system developed by S. Cowley and E. Bunce relates the azimuthal field component, Bφ, that causes the field to spiral, to the ionospheric Pedersen current, Ip. Cassini measurements of Bφ over the four-year interval between 2005 and 2008, widely distributed in radial distance, latitude and local time, have been used to compute Ip from a Bunce-Cowley formula. A striking north-south asymmetry of the global magnetosphere has been found. In the southern hemisphere, the magnitude and variation of Ip with invariant colatitude, θ, agree qualitatively with the model, but Ip(θ) is shifted poleward by about 10°. In the northern hemisphere, however, the data fail to reproduce the profile of Ip(θ) predicted by the model and are instead dominated by two high-latitude currents having the wrong polarities. Possible causes of this asymmetry are seasonal variations (summer in the southern hemisphere) and/or asymmetric plasma outflow from the inner magnetosphere, such as the plumes extending southward from Enceladus. Another finding is a significant local time dependence of Ip(θ), rather than the axisymmetry assumed in the model. There is a close correspondence with the model in the noon sector. The currents in the midnight and dawn sectors are significantly larger than in the noon sector, and the current in the dusk sector is dramatically weaker.

  20. Modeling and stabilization results for a charge or current-actuated active constrained layer (ACL) beam model with the electrostatic assumption

    NASA Astrophysics Data System (ADS)

    Özer, Ahmet Özkan

    2016-04-01

    An infinite dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation of the piezoelectric layer, the piezoelectric layer is proposed to be activated by a charge (or current) source. The closed-loop system with all-mechanical feedback is shown to be uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem, which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.

  1. Numerical modeling of hydrodynamics and sediment transport—an integrated approach

    NASA Astrophysics Data System (ADS)

    Gic-Grusza, Gabriela; Dudkowska, Aleksandra

    2017-10-01

    Point measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress, and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions, defined on the basis of 138 years of NOAA data, were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are innovative. The approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
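For reference, the classic (unmodified) Meyer-Peter and Müller (1948) formula can be sketched as follows; the paper uses a modified variant whose exact form is not given here, and the threshold value θ_c = 0.047 is the commonly quoted one:

```python
import math

def mpm_bedload(tau, d, rho_s=2650.0, rho=1000.0, g=9.81, theta_c=0.047):
    """Volumetric bedload transport rate per unit width (m^2/s), classic
    Meyer-Peter & Mueller form: q_b* = 8 (theta - theta_c)^1.5.

    tau: bed shear stress (Pa); d: grain diameter (m)."""
    s = rho_s / rho
    theta = tau / ((rho_s - rho) * g * d)     # Shields parameter
    if theta <= theta_c:
        return 0.0                            # below threshold of motion
    qb_star = 8.0 * (theta - theta_c) ** 1.5  # dimensionless transport
    return qb_star * math.sqrt((s - 1.0) * g * d ** 3)
```

Transport is zero below the critical Shields stress and grows nonlinearly above it, which is why the critical bed shear stress field is computed alongside waves and currents in the holistic approach.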

  2. Measuring the radius of PSR J0437−4715 using NICER observations of X-ray oscillations

    NASA Astrophysics Data System (ADS)

    Lamb, Frederick; Miller, M. Coleman

    2017-01-01

    The Neutron Star Interior Composition Explorer (NICER) will launch early in 2017. Its first scientific objective is to precisely and reliably measure the radius of several neutron stars, thereby constraining the properties of cold matter at supranuclear densities. This will be done by fitting energy-dependent waveform models to the observed thermal X-ray waveforms of selected rotation-powered millisecond pulsars. A key target is the 174-Hz pulsar PSR J0437−4715. Using synthetic waveform data and Bayesian methods, we have estimated the precision with which its mass M and radius R can be measured by NICER. When generating the synthetic data, we assumed M = 1.4 M⊙ and R = 13 km. When generating the data and when analyzing it, we assumed the X-ray spectrum and radiation beaming pattern given by models with cool hydrogen atmospheres and two hot spots. Assuming NICER observations lasting a total of 1.0 Msec, current knowledge of M and the distance, and knowledge of the pulsar's spin axis to within 1°, the 1σ credible region in R extends from 11.83 to 13.73 km (7.4%) and in M from 1.307 to 1.567 M⊙ (9.1%). Marginalizing over M, we find the 1σ credible interval for R alone extends from 12.62 to 13.68 km (4%).

  3. Effect of misspecification of gene frequency on the two-point LOD score.

    PubMed

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed models for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations, because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.
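The two-point LOD score underlying such analyses can be sketched for the textbook case of n phase-known informative meioses with r observed recombinants (a simplification of the paper's full simulation setup):

```python
import math

def lod(theta, r, n):
    """log10 likelihood ratio of recombination fraction theta vs 0.5,
    for r recombinants among n phase-known informative meioses."""
    if theta <= 0.0:
        # Limit as theta -> 0: -inf with any recombinant, else n*log10(2).
        return float('-inf') if r > 0 else n * math.log10(2.0)
    like = (theta ** r) * ((1.0 - theta) ** (n - r))
    return math.log10(like / (0.5 ** n))

# With no recombinants in 10 meioses, lod(0.01, 0, 10) ~ 2.97, close to
# the classical threshold of 3.0 for declaring linkage.
```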

  4. Will male advertisement be a reliable indicator of paternal care, if offspring survival depends on male care?

    PubMed Central

    Kelly, Natasha B.; Alonzo, Suzanne H.

    2009-01-01

    Existing theory predicts that male signalling can be an unreliable indicator of paternal care, but assumes that males with high levels of mating success can have high current reproductive success, without providing any parental care. As a result, this theory does not hold for the many species where offspring survival depends on male parental care. We modelled male allocation of resources between advertisement and care for species with male care where males vary in quality, and the effect of care and advertisement on male fitness is multiplicative rather than additive. Our model predicts that males will allocate proportionally more of their resources to whichever trait (advertisement or paternal care) is more fitness limiting. In contrast to previous theory, we find that male advertisement is always a reliable indicator of paternal care and male phenotypic quality (e.g. males with higher levels of advertisement never allocate less to care than males with lower levels of advertisement). Our model shows that the predicted pattern of male allocation and the reliability of male signalling depend very strongly on whether paternal care is assumed to be necessary for offspring survival and how male care affects offspring survival and male fitness. PMID:19520802

  5. Will male advertisement be a reliable indicator of paternal care, if offspring survival depends on male care?

    PubMed

    Kelly, Natasha B; Alonzo, Suzanne H

    2009-09-07

    Existing theory predicts that male signalling can be an unreliable indicator of paternal care, but assumes that males with high levels of mating success can have high current reproductive success, without providing any parental care. As a result, this theory does not hold for the many species where offspring survival depends on male parental care. We modelled male allocation of resources between advertisement and care for species with male care where males vary in quality, and the effect of care and advertisement on male fitness is multiplicative rather than additive. Our model predicts that males will allocate proportionally more of their resources to whichever trait (advertisement or paternal care) is more fitness limiting. In contrast to previous theory, we find that male advertisement is always a reliable indicator of paternal care and male phenotypic quality (e.g. males with higher levels of advertisement never allocate less to care than males with lower levels of advertisement). Our model shows that the predicted pattern of male allocation and the reliability of male signalling depend very strongly on whether paternal care is assumed to be necessary for offspring survival and how male care affects offspring survival and male fitness.

  6. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  7. A note on two-dimensional asymptotic magnetotail equilibria

    NASA Technical Reports Server (NTRS)

    Voigt, Gerd-Hannes; Moore, Brian D.

    1994-01-01

    In order to understand, on the fluid level, the structure, time evolution, and stability of current sheets, such as the magnetotail plasma sheet in Earth's magnetosphere, one has to consider magnetic field configurations that are in magnetohydrodynamic (MHD) force equilibrium. Any reasonable MHD current sheet model has to be two-dimensional, at least in an asymptotic sense (B_z/B_x = ε ≪ 1). The necessary two-dimensionality is described by a rather arbitrary function f(x). We utilize the free function f(x) to construct two-dimensional magnetotail equilibria that are 'equivalent' to current sheets in empirical three-dimensional models. We obtain a class of asymptotic magnetotail equilibria ordered with respect to the magnetic disturbance index Kp. For low Kp values the two-dimensional MHD equilibria reflect some of the realistic, observation-based aspects of three-dimensional models. For high Kp values the three-dimensional models do not fit the asymptotic MHD equilibria, which is indicative of their inconsistency with the assumed pressure function. This, in turn, implies that high magnetic activity levels of the real magnetosphere might be ruled by thermodynamic conditions different from local thermodynamic equilibrium.

  8. Modeling of dynamic bipolar plasma sheaths

    NASA Astrophysics Data System (ADS)

    Grossmann, J. M.; Swanekamp, S. B.; Ottinger, P. F.

    1991-08-01

    The behavior of a one-dimensional plasma sheath is described in regimes where the sheath is not in equilibrium because it carries current densities that are either time dependent or larger than the bipolar Child-Langmuir level determined from the injected ion flux. Earlier models of dynamic bipolar sheaths assumed that ions and electrons evolve in a series of quasi-equilibria. In addition, sheath growth was described by the equation Ze n0 (dx_s/dt) = j_i − Ze n0 u0, where dx_s/dt is the velocity of the sheath edge, j_i is the ion current density, n0 u0 is the injected ion flux density, and Ze is the ion charge. In this paper, a generalization of the bipolar electron-to-ion current density ratio formula is derived to study regimes where ions are not in equilibrium. A generalization of the above sheath growth equation is also developed which is consistent with the ion continuity equation and which reveals new physics of sheath behavior associated with the emitted electrons and their evolution. Based on these findings, two new models of dynamic bipolar sheaths are developed. Larger sheath sizes and potentials than those of earlier models are found. In certain regimes, explosive sheath growth is predicted.
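A minimal sketch of the earlier-model sheath-growth relation quoted above (simple Euler integration with arbitrary illustrative parameter values; the paper's generalized equation is not reproduced here):

```python
def sheath_edge(ji, Ze_n0, u0, t_end, steps=1000, x0=0.0):
    """Euler-integrate dx_s/dt = j_i/(Ze*n0) - u0 for constant j_i.

    Ze_n0 stands for the product Ze * n0 (ion charge times density)."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        # Growth stops at the bipolar level, where j_i = Ze * n0 * u0.
        x += (ji / Ze_n0 - u0) * dt
    return x

# At exactly the bipolar Child-Langmuir level the sheath edge is stationary;
# above it the sheath grows linearly in time for constant j_i.
```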

  9. Magnetic resonance imaging and spectroscopy of the murine cardiovascular system.

    PubMed

    Akki, Ashwin; Gupta, Ashish; Weiss, Robert G

    2013-03-01

    Magnetic resonance imaging (MRI) has emerged as a powerful and reliable tool to noninvasively study the cardiovascular system in clinical practice. Because transgenic mouse models have assumed a critical role in cardiovascular research, technological advances in MRI have been extended to mice over the last decade. These have provided critical insights into cardiac and vascular morphology, function, and physiology/pathophysiology in many murine models of heart disease. Furthermore, magnetic resonance spectroscopy (MRS) has allowed the nondestructive study of myocardial metabolism in both isolated hearts and in intact mice. This article reviews the current techniques and important pathophysiological insights from the application of MRI/MRS technology to murine models of cardiovascular disease.

  10. Magnetic resonance imaging and spectroscopy of the murine cardiovascular system

    PubMed Central

    Akki, Ashwin; Gupta, Ashish

    2013-01-01

    Magnetic resonance imaging (MRI) has emerged as a powerful and reliable tool to noninvasively study the cardiovascular system in clinical practice. Because transgenic mouse models have assumed a critical role in cardiovascular research, technological advances in MRI have been extended to mice over the last decade. These have provided critical insights into cardiac and vascular morphology, function, and physiology/pathophysiology in many murine models of heart disease. Furthermore, magnetic resonance spectroscopy (MRS) has allowed the nondestructive study of myocardial metabolism in both isolated hearts and in intact mice. This article reviews the current techniques and important pathophysiological insights from the application of MRI/MRS technology to murine models of cardiovascular disease. PMID:23292717

  11. Modelling the evolution and diversity of cumulative culture

    PubMed Central

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845
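The state-dependent stochastic appearance/disappearance idea can be sketched as a birth-death simulation (the rate forms and parameter values below are illustrative assumptions, not the paper's):

```python
import random

def simulate_culture(steps=2000, innovate=0.01, build=0.05, lose=0.02,
                     seed=1):
    """Birth-death sketch: elements appear by independent innovation plus
    combination of existing elements, and disappear by being forgotten."""
    rng = random.Random(seed)
    n = 0                             # current number of cultural elements
    history = []
    for _ in range(steps):
        gain = innovate + build * n   # appearance rate grows with culture
        loss = lose * n               # each element may be lost
        if rng.random() < gain / (gain + loss):
            n += 1
        else:
            n -= 1
        history.append(n)
    return history
```

With a combination rate exceeding the loss rate, culture accumulates without bound, echoing the "seemingly limitless" growth of complexity the abstract describes; reversing the inequality yields a fluctuating, bounded repertoire.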

  12. Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes

    NASA Astrophysics Data System (ADS)

    Pietsch, Stephan

    2017-04-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant functional type or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have been proven to be important tools to investigate ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady state conditions and constant model parameters. Both assumptions, however, are wrong, since: (i) no given ecosystem is ever at steady state! (ii) ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns! This presentation will give examples of how these general failures within current DGVMs may be addressed.

  13. Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes.

    NASA Astrophysics Data System (ADS)

    Pietsch, S.

    2016-12-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-functional or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong: no given ecosystem is ever at steady state, and ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general shortcomings of current DGVMs may be addressed.

  14. Dark matter admixed strange quark stars in the Starobinsky model

    NASA Astrophysics Data System (ADS)

    Lopes, Ilídio; Panotopoulos, Grigoris

    2018-01-01

    We compute the mass-to-radius profiles for dark matter admixed strange quark stars in the Starobinsky model of modified gravity. For quark matter, we assume the MIT bag model, while self-interacting dark matter inside the star is modeled as a Bose-Einstein condensate with a polytropic equation of state. We numerically integrate the structure equations in the Einstein frame, adopting the two-fluid formalism, and we treat the curvature correction term nonperturbatively. The effects on the properties of the stars of the amount of dark matter as well as the higher curvature term are investigated. We find that strange quark stars (in agreement with current observational constraints) with the highest masses are equally affected by dark matter and modified gravity.
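
A much-simplified structure calculation can illustrate how an equation of state maps onto a mass-radius relation. The sketch below integrates Newtonian hydrostatic equilibrium for the polytropic EOS p = K·rho² (the Bose-Einstein-condensate form mentioned in the abstract); it is a toy stand-in for the paper's relativistic two-fluid integration in the Einstein frame, and the values of K and the central density are illustrative, not fitted:

```python
import math

def mass_radius(rho_c, K=100.0, dr=50.0):
    """Integrate Newtonian hydrostatic equilibrium outward from the centre
    for p = K * rho**2 (an n = 1 polytrope), stopping when the density has
    dropped to ~0.  Returns (radius, enclosed mass) in SI units.
    """
    G = 6.674e-11
    r, m, rho = dr, 0.0, rho_c
    while rho > 1e-3 * rho_c:
        m += 4 * math.pi * r**2 * rho * dr      # shell mass
        dpdr = -G * m * rho / r**2              # hydrostatic equilibrium
        rho += dpdr / (2 * K * rho) * dr        # since dp/drho = 2*K*rho
        r += dr
    return r, m

# A known property of the n = 1 polytrope: the radius is independent of the
# central density, R = pi * sqrt(K / (2*pi*G)), while the mass scales with rho_c.
r1, m1 = mass_radius(1e3)
r2, m2 = mass_radius(1e4)
```

In the paper the same mapping is done with the MIT bag EOS for the quark fluid, a second fluid for the dark matter, and a nonperturbative curvature correction.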

  15. Dust Composition in Climate Models: Current Status and Prospects

    NASA Astrophysics Data System (ADS)

    Pérez García-Pando, C.; Miller, R. L.; Perlwitz, J. P.; Kok, J. F.; Scanza, R.; Mahowald, N. M.

    2015-12-01

    Mineral dust created by wind erosion of soil particles is the dominant aerosol by mass in the atmosphere. It exerts significant effects on radiative fluxes, clouds, ocean biogeochemistry, and human health. Models that predict the lifecycle of mineral dust aerosols generally assume a globally uniform mineral composition. However, this simplification limits our understanding of the role of dust in the Earth system, since the effects of dust strongly depend on the particles' physical and chemical properties, which vary with their mineral composition. Hence, not only is a detailed understanding of the processes determining the dust emission flux needed, but also information about its size-dependent mineral composition. Determining the mineral composition of dust aerosols is complicated. The largest uncertainty derives from the current atlases of soil mineral composition. These atlases provide global estimates of soil mineral fractions, but they are based upon massive extrapolation of a limited number of soil samples, assuming that mineral composition is related to soil type. This disregards the potentially large variability of soil properties within each defined soil type. In addition, the analysis of these soil samples is based on wet sieving, a technique that breaks the aggregates found in the undisturbed parent soil. During wind erosion, these aggregates are subject to partial fragmentation, which generates differences in the size distribution and composition between the undisturbed parent soil and the emitted dust aerosols. We review recent progress on the representation of the mineral and chemical composition of dust in climate models. We discuss extensions of brittle fragmentation theory to prescribe the emitted size-resolved dust composition, and we identify key processes and uncertainties based upon model simulations and an unprecedented compilation of observations.

  16. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
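
The uncorrelated-samples result can be illustrated with a block-averaging sketch: for uncorrelated samples, the variance of a T-long average decays as sigma²/N, which is the limit the ADCP error model relies on once the sampling interval exceeds the integral time scale of the sampled field (the white-noise series and variable names below are ours, for illustration):

```python
import random
import statistics

def variance_of_mean(series, n_blocks):
    """Split the series into n_blocks non-overlapping blocks, average each
    block, and return the variance of those block means.  For uncorrelated
    samples this decays as sigma**2 / block_size.
    """
    block = len(series) // n_blocks
    means = [statistics.fmean(series[i * block:(i + 1) * block])
             for i in range(n_blocks)]
    return statistics.pvariance(means)

rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(10000)]   # stand-in for sampled velocities
v10 = variance_of_mean(noise, 1000)    # block size 10
v100 = variance_of_mean(noise, 100)    # block size 100
# For white noise, v10 / v100 should be close to 100 / 10 = 10,
# i.e. longer exposure times shrink the random discharge error.
```

For a correlated series the decay would instead scale with twice the integral time scale over the averaging time.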

  17. Depletion region surface effects in electron beam induced current measurements.

    PubMed

    Haney, Paul M; Yoon, Heayoung P; Gaury, Benoit; Zhitenev, Nikolai B

    2016-09-07

    Electron beam induced current (EBIC) is a powerful characterization technique which offers the high spatial resolution needed to study polycrystalline solar cells. Current models of EBIC assume that excitations in the p - n junction depletion region result in perfect charge collection efficiency. However we find that in CdTe and Si samples prepared by focused ion beam (FIB) milling, there is a reduced and nonuniform EBIC lineshape for excitations in the depletion region. Motivated by this, we present a model of the EBIC response for excitations in the depletion region which includes the effects of surface recombination from both charge-neutral and charged surfaces. For neutral surfaces we present a simple analytical formula which describes the numerical data well, while the charged surface response depends qualitatively on the location of the surface Fermi level relative to the bulk Fermi level. We find the experimental data on FIB-prepared Si solar cells is most consistent with a charged surface, and discuss the implications for EBIC experiments on polycrystalline materials.

  18. Interspike interval correlation in a stochastic exponential integrate-and-fire model with subthreshold and spike-triggered adaptation.

    PubMed

    Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin

    2015-06-01

    We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
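
A simplified simulation shows the kind of interval statistics involved. The sketch below uses a plain leaky integrate-and-fire neuron with only a spike-triggered adaptation variable (a stand-in for the paper's exponential model with both adaptation components; all parameter values are assumed):

```python
import math
import random

def adapt_if_isis(n_spikes=1000, mu=1.5, D=0.05, tau_a=2.0,
                  delta_a=0.15, dt=0.002, seed=4):
    """Euler-Maruyama simulation of a leaky IF neuron with a
    spike-triggered adaptation variable `a`:
        dv/dt = mu - v - a + sqrt(2D)*xi(t),   da/dt = -a/tau_a,
    with v -> 0 and a -> a + delta_a at each threshold crossing v >= 1.
    Parameters keep the neuron in the mean-driven tonic regime.
    """
    rng = random.Random(seed)
    v = a = t = t_last = 0.0
    isis = []
    amp = math.sqrt(2 * D * dt)
    while len(isis) < n_spikes:
        v += (mu - v - a) * dt + amp * rng.gauss(0, 1)
        a -= a / tau_a * dt
        t += dt
        if v >= 1.0:
            isis.append(t - t_last)
            t_last, v = t, 0.0
            a += delta_a                     # spike-triggered adaptation
    return isis

def serial_corr(x, lag=1):
    """Serial correlation coefficient of a sequence at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(len(x) - lag))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

isis = adapt_if_isis()
rho1 = serial_corr(isis)
# Pure spike-triggered adaptation tends to give a NEGATIVE lag-one ISI
# correlation; per the abstract, pure subthreshold adaptation gives
# strong positive correlations instead.
```

The geometric-sequence prediction of the paper concerns how such correlations decay with increasing lag.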

  19. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    PubMed

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
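
The core bias mechanism can be seen in a toy two-stage model (the numbers and the two-point frailty distribution are our assumptions): when a mean-one individual frailty multiplies the progression probability at both stages, a homogeneous model reproduces first-stage counts but misestimates two-stage progression, because E[f²] differs from E[f]²:

```python
import random

def severe_fraction(n=200000, p_stage=0.3, heterogeneous=False, seed=3):
    """Fraction of individuals reaching the second disease stage when the
    SAME individual frailty f multiplies the per-stage progression
    probability at both stages.  E[f] = 1 in both settings, so stage-one
    incidence matches; stage-two incidence does not.
    """
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        f = rng.choice([0.5, 1.5]) if heterogeneous else 1.0
        if rng.random() < p_stage * f and rng.random() < p_stage * f:
            count += 1
    return count / n

hom = severe_fraction()                    # ~ p^2 = 0.09
het = severe_fraction(heterogeneous=True)  # ~ p^2 * E[f^2] = 0.09 * 1.25
# Multiplying the severe-stage count by its disability weight and duration
# would turn this difference directly into a DALY difference.
```

The direction of the bias depends on how frailty enters the natural history; the abstract reports underestimation in one scenario and overestimation in another.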

  20. Data-based Modeling of the Dynamical Inner Magnetosphere During Strong Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Tsyganenko, N.; Sitnov, M.

    2004-12-01

    This work builds on and extends our previous effort [Tsyganenko et al., 2003] to develop a dynamical model of the storm-time geomagnetic field in the inner magnetosphere, using space magnetometer data taken during 37 major events in 1996--2000 and concurrent observations of the solar wind and IMF. The essence of the approach is to derive from the data the temporal variation of all major current systems contributing to the geomagnetic field during the entire storm cycle, using a simple model of their growth and decay. Each principal source of the external magnetic field (magnetopause, cross-tail current sheet, axisymmetric and partial ring currents, Birkeland currents) is controlled by a separate driving variable that includes a combination of geoeffective parameters in the form N^λ V^β Bs^γ, where N, V, and Bs are the solar wind density, speed, and the magnitude of the southward component of the IMF, respectively. Each source was also assumed to have an individual relaxation timescale and residual quiet-time strength, so that its partial contribution to the total field was calculated for any moment as a time integral, taking into account the entire history of the external driving of the magnetosphere during each storm. In addition, the magnitudes of the principal field sources were assumed to saturate during extremely large storms with abnormally strong external driving. All the parameters of the model field sources, including their magnitudes, geometrical characteristics, solar wind/IMF driving functions, decay timescales, and saturation thresholds, were treated as free variables to be derived from the data by least squares. The relaxation timescales of the individual magnetospheric field sources were found to differ greatly from one another, from as long as ~30 hours for the symmetrical ring current to only ~50 min for the region 1 Birkeland current. 
The total magnitudes of the currents were also found to vary dramatically in the course of major storms, with peak values as large as 5--8 MA for the symmetric ring current and region 1 field-aligned current. At the peak of the main phase, the total partial ring current can largely exceed the symmetric one, reaching ~10 MA and even more, but it quickly subsides as the external solar wind driving disappears, with a relaxation time ≤2 hours. The tail current dramatically increases during the main phase and shifts earthward, so that the peak current concentrates at unusually close distances of ~4-6 RE. This is accompanied by a significant thinning of the current sheet and enormous tailward stretching of the inner geomagnetic field lines. As an independent consistency test, we calculated the expected Dst-variation based on the model output at Earth's surface and compared it with the actual observed Dst. A good agreement (cumulative correlation coefficient R=0.92) was found, despite the fact that ~90% of the spacecraft data used in the fitting were taken at synchronous orbit and beyond, while only 3.7% of those data came from distances 2.5 ≤ R ≤ 4 RE. The obtained results demonstrate that it is possible to develop a dynamical model of the magnetic field, based on magnetospheric and interplanetary data, that reproduces and forecasts the entire process of a geomagnetic storm as it unfolds in time and space. Reference: N. A. Tsyganenko, H. J. Singer, J. C. Kasper, Storm-time distortion of the inner magnetosphere: How severe can it get? J. Geophys. Res., v. 108(A5), 1209, 2003.
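
The growth-and-decay scheme described above amounts to a first-order filter per current system. A sketch of the time-integral form (the exponents, relaxation time, and units below are placeholders, not the fitted values):

```python
def source_strength(times, N, V, Bs, lam=0.5, beta=1.0, gamma=1.0,
                    tau=7 * 3600.0, S0=0.0):
    """Partial field-source magnitude S(t) obeying dS/dt = Q - S/tau, with
    the driving function Q = N**lam * V**beta * Bs**gamma built from solar
    wind density N, speed V, and southward IMF magnitude Bs.  Each current
    system gets its own exponents and relaxation time tau; a saturation cap
    on Q would model the extreme-storm behaviour mentioned in the abstract.
    """
    S = [S0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        Q = N[i] ** lam * V[i] ** beta * Bs[i] ** gamma
        S.append(S[-1] + (Q - S[-1] / tau) * dt)   # forward-Euler time integral
    return S

# Under steady driving the source saturates at Q * tau, so a short-tau
# source (e.g. region 1 currents) tracks the driver while a long-tau
# source (e.g. the symmetric ring current) integrates its history.
t = [60.0 * k for k in range(6000)]                # 100 h at 1-min cadence
n = len(t)
S = source_strength(t, [5.0] * n, [400.0] * n, [5.0] * n)
```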

  1. New Model for Europa's Tidal Response Based on Laboratory Measurements

    NASA Astrophysics Data System (ADS)

    Castillo, J. C.; McCarthy, C.; Choukroun, M.; Rambaux, N.

    2009-12-01

    We explore the application of the Andrade model to the modeling of Europa’s tidal response at the orbital period and for different librations. Previous models have generally assumed that the satellite behaves as a Maxwell body. However, at the frequencies exciting Europa’s tides and librations, material anelasticity tends to dominate the satellite’s response for a wide range of temperatures, a feature that is not accounted for by the Maxwell model. Many experimental studies on the anelasticity of rocks, ice, and hydrates suggest that the Andrade model usually provides a good fit to the dissipation spectra obtained for a wide range of frequencies, encompassing the tidal frequencies of most icy satellites. These data indicate that, at Europa’s orbital frequency, the Maxwell model overestimates water ice attenuation at temperatures warmer than ~240 K, while it tends to significantly underestimate it at lower temperatures. We suggest an educated extrapolation of the available data to Europa’s conditions. We compute the tidal response of a model of Europa differentiated into a rocky core and a water-rich shell. We assume various degrees of stratification of the core involving hydrated and anhydrous silicates, as well as an iron core. The water-rich shell of Europa is assumed to be fully frozen, or to have preserved a deep liquid layer. In both cases we consider a range of thermal structures, based on existing models. These structures take into account the presence of non-ice materials, especially hydrated salts. This new approach yields a greater tidal response (amplitude and phase lag) than previously expected. This is due to the fact that a greater volume of material dissipates tidal energy in comparison to models assuming a Maxwell body. Another feature of interest is that the tidal stress expected in Europa is at about the threshold between a linear and non-linear mechanical response of water ice as a function of stress. 
Increased stress at a time when Europa’s eccentricity was greater than its current value is likely to have resulted in significant dissipation increase. We will assess how this new approach affects our understanding of Europa, and we will quantify the tidal response of this satellite and the amount of tidal heating available to its evolution. Acknowledgements: Part of this work has been conducted at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. Government sponsorship acknowledged. Part of the experimental work was conducted at Brown University, funded by NASA. MC is supported by a NASA Postdoctoral Fellowship, administered by Oak Ridge Associated Universities.
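
The contrast between the two rheologies can be sketched through the complex compliance (the ice-like values of mu, eta, and alpha below are generic assumptions, not a fit to Europa):

```python
import math

def compliance(omega, mu=3.3e9, eta=1e14, alpha=0.3, model="andrade"):
    """Complex compliance J*(omega): the Maxwell terms (elastic + viscous)
    plus, for the Andrade model, the transient term
    (i*omega*tau)**(-alpha) * Gamma(1+alpha) / mu with tau = eta/mu.
    mu: shear modulus (Pa), eta: viscosity (Pa s) -- illustrative ice values.
    """
    J = 1 / mu - 1j / (eta * omega)
    if model == "andrade":
        tau = eta / mu
        J += (1j * omega * tau) ** (-alpha) * math.gamma(1 + alpha) / mu
    return J

def q_inverse(omega, **kw):
    """Dissipation factor Q^-1 = sin(phase lag) = -Im(J*)/|J*|."""
    J = compliance(omega, **kw)
    return -J.imag / abs(J)

w_tidal = 2 * math.pi / 3.06e5          # Europa's ~3.55-day orbital period, in rad/s
q_andrade = q_inverse(w_tidal, eta=1e18)                  # cold, stiff ice
q_maxwell = q_inverse(w_tidal, eta=1e18, model="maxwell")
# For cold ice (large eta) the Maxwell body is nearly lossless at the tidal
# frequency, while the Andrade transient still dissipates -- the
# underestimation at low temperatures noted in the abstract.
```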

  2. High-energy neutrino fluxes from AGN populations inferred from X-ray surveys

    NASA Astrophysics Data System (ADS)

    Jacobsen, Idunn B.; Wu, Kinwah; On, Alvina Y. L.; Saxton, Curtis J.

    2015-08-01

    High-energy neutrinos and photons are complementary messengers, probing violent astrophysical processes and structural evolution of the Universe. X-ray and neutrino observations jointly constrain conditions in active galactic nuclei (AGN) jets: their baryonic and leptonic contents, and particle production efficiency. Testing two standard neutrino production models for local source Cen A (Koers & Tinyakov and Becker & Biermann), we calculate the high-energy neutrino spectra of single AGN sources and derive the flux of high-energy neutrinos expected for the current epoch. Assuming that accretion determines both X-rays and particle creation, our parametric scaling relations predict neutrino yield in various AGN classes. We derive redshift-dependent number densities of each class, from Chandra and Swift/BAT X-ray luminosity functions (Silverman et al. and Ajello et al.). We integrate the neutrino spectrum expected from the cumulative history of AGN (correcting for cosmological and source effects, e.g. jet orientation and beaming). Both emission scenarios yield neutrino fluxes well above limits set by IceCube (by ~4-10^6 × at 1 PeV, depending on the assumed jet models for neutrino production). This implies that: (i) Cen A might not be a typical neutrino source as commonly assumed; (ii) both neutrino production models overestimate the efficiency; (iii) neutrino luminosity scales with accretion power differently among AGN classes and hence does not follow X-ray luminosity universally; (iv) some AGN are neutrino-quiet (e.g. below a power threshold for neutrino production); (v) neutrino and X-ray emission have different duty cycles (e.g. jets alternate between baryonic and leptonic flows); or (vi) some combination of the above.

  3. Dust Density Distribution and Imaging Analysis of Different Ice Lines in Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Pinilla, P.; Pohl, A.; Stammler, S. M.; Birnstiel, T.

    2017-08-01

    Recent high angular resolution observations of protoplanetary disks at different wavelengths have revealed several kinds of structures, including multiple bright and dark rings. Embedded planets are the most used explanation for such structures, but there are alternative models capable of shaping the dust in rings as observed. We assume a disk around a Herbig star and investigate the effect that ice lines have on the dust evolution, following the growth, fragmentation, and dynamics of dust particles of multiple sizes, from 1 μm up to 2 m. We use simplified prescriptions of the fragmentation velocity threshold, which is assumed to change radially at the location of one, two, or three ice lines. We assume changes at the radial location of main volatiles, specifically H2O, CO2, and NH3. Radiative transfer calculations are done using the resulting dust density distributions in order to compare with current multiwavelength observations. We find that the structures in the dust density profiles and radial intensities at different wavelengths strongly depend on the disk viscosity. A clear gap of emission can be formed between ice lines and be surrounded by ring-like structures, in particular between the H2O and CO2 (or CO) ice lines. The gaps are expected to be shallower and narrower at millimeter emission than at near-infrared, opposite to model predictions of particle trapping. In our models, the total gas surface density is not expected to show strong variations, in contrast to other gap-forming scenarios such as embedded giant planets or radial variations of the disk viscosity.

  4. How well do basic models describe the turbidity currents coming down Monterey and Congo Canyon?

    NASA Astrophysics Data System (ADS)

    Cartigny, M.; Simmons, S.; Heerema, C.; Xu, J. P.; Azpiroz, M.; Clare, M. A.; Cooper, C.; Gales, J. A.; Maier, K. L.; Parsons, D. R.; Paull, C. K.; Sumner, E. J.; Talling, P.

    2017-12-01

    Turbidity currents rival rivers in their global capacity to transport sediment and organic carbon. Furthermore, turbidity currents break submarine cables that now transport >95% of our global data traffic. Accurate turbidity current models are thus needed to quantify their transport capacity and to predict the forces exerted on seafloor structures. Despite this need, existing numerical models are typically only calibrated with scaled-down laboratory measurements due to the paucity of direct measurements of field-scale turbidity currents. This lack of calibration thus leaves much uncertainty in the validity of existing models. Here we use the most detailed observations of turbidity currents yet acquired to validate one of the most fundamental models proposed for turbidity currents, the modified Chézy model. Direct measurements on which the validation is based come from two sites that feature distinctly different flow modes and grain sizes. The first are from the multi-institution Coordinated Canyon Experiment (CCE) in Monterey Canyon, California. An array of six moorings along the canyon axis captured at least 15 flow events that lasted up to hours. The second is the deep-sea Congo Canyon, where 10 finer grained flows were measured by a single mooring, each lasting several days. Moorings captured depth-resolved velocity and suspended sediment concentration at high resolution (<30 second) for each of the 25 events. We use both datasets to test the most basic model available for turbidity currents; the modified Chézy model. This basic model has been very useful for river studies over the past 200 years, as it provides a rapid estimate of how flow velocity varies with changes in river level and energy slope. Chézy-type models assume that the gravitational force of the flow equals the friction of the river-bed. Modified Chézy models have been proposed for turbidity currents. 
However, the absence of detailed measurements of friction and sediment concentration within full-scale turbidity currents has forced modellers to make rough assumptions for these parameters. Here we use mooring data to deduce observation-based relations that can replace the previous assumptions. This improvement will significantly enhance the model predictions and allow us to better constrain the behaviour of turbidity currents.
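
The force balance behind the modified Chézy model reduces to a one-line velocity estimate (the friction factor and the example flow values below are assumptions of the kind the mooring data are used to constrain):

```python
import math

def chezy_velocity(C, h, S, f=0.02, R=1.65, g=9.81):
    """Modified Chezy estimate for a turbidity current: the downslope
    gravitational pull of the suspended sediment balances bed friction,
    giving U = sqrt(8 * g * R * C * h * S / f).
    C: volumetric sediment concentration, h: flow thickness (m), S: slope,
    f: Darcy-Weisbach friction factor (assumed), R: submerged specific
    gravity of quartz sediment.
    """
    return math.sqrt(8 * g * R * C * h * S / f)

# e.g. a hypothetical 0.2%-concentration, 50 m thick flow on a 1/300 slope:
u = chezy_velocity(C=0.002, h=50, S=1 / 300)
```

In the river case, R·C is replaced by 1 and h by the hydraulic radius, recovering the classical Chézy relation; the concentration and friction terms are exactly the quantities the abstract says must currently be assumed rather than measured.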

  5. Societal and Economic Effect of Meniscus Scaffold Procedures for Irreparable Meniscus Injuries.

    PubMed

    Rongen, Jan J; Govers, Tim M; Buma, Pieter; Grutters, Janneke P C; Hannink, Gerjon

    2016-07-01

    Meniscus scaffolds are currently evaluated clinically for their efficacy in preventing the development of osteoarthritis as well as for their efficacy in treating patients with chronic symptoms. Procedural costs, therapeutic consequences, clinical efficacy, and future events should all be considered to maximize the monetary value of this intervention. To examine the socioeconomic effect of treating patients with irreparable medial meniscus injuries with a meniscus scaffold. Economic and decision analysis; Level of evidence, 2. Two Markov simulation models for patients with an irreparable medial meniscus injury were developed. Model 1 was used to investigate the lifetime cost-effectiveness of a meniscus scaffold compared with standard partial meniscectomy by the possibility of preventing the development of osteoarthritis. Model 2 was used to investigate the short-term (5-year) cost-effectiveness of a meniscus scaffold compared with standard partial meniscectomy by alleviating clinical symptoms, specifically in chronic patients with previous meniscus surgery. For both models, probabilistic Monte Carlo simulations were applied. Treatment effectiveness was expressed as quality-adjusted life-years (QALYs), while costs (estimated in euros) were assessed from a societal perspective. We assumed €20,000 as a reference value for the willingness to pay per QALY. Next, comprehensive sensitivity analyses were performed to identify the most influential variables on the cost-effectiveness of meniscus scaffolds. Model 1 demonstrated an incremental cost-effectiveness ratio of a meniscus scaffold treatment of €54,463 per QALY (€5991/0.112). A threshold analysis demonstrated that a meniscus scaffold should offer a relative risk reduction of at least 0.34 to become cost-effective, assuming a willingness to pay of €20,000. 
Decreasing the costs of the meniscus scaffold procedure by 33% (€10,160 instead of €15,233; an absolute change of €5073) resulted in an incremental cost-effectiveness ratio of €7876 per QALY. Model 2 demonstrated an incremental cost-effectiveness ratio of a meniscus scaffold treatment of €297,727 per QALY (€9825/0.033). On the basis of the current efficacy data, a meniscus scaffold provides a relative risk reduction of "limited benefit" postoperatively of 0.37 compared with standard treatment. A threshold analysis revealed that assuming a willingness to pay of €20,000, a meniscus scaffold would not be cost-effective within a period of 5 years. Most influential variables on the cost-effectiveness of meniscus scaffolds were the cost of the scaffold procedure, cost associated with osteoarthritis, and quality of life before and after the scaffold procedure. Results of the current health technology assessment emphasize that the monetary value of meniscus scaffold procedures is very much dependent on a number of influential variables. Therefore, before implementing the technology in the health care system, it is important to critically assess these variables in a relevant context. The models can be improved as additional clinical data regarding the efficacy of the meniscus scaffold become available. © 2016 The Author(s).
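
The headline numbers follow from the definition of the incremental cost-effectiveness ratio (a sketch; the abstract's €54,463 figure presumably comes from unrounded inputs, so the rounded inputs below land slightly lower for model 1):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost (EUR) per extra
    QALY of the scaffold versus partial meniscectomy alone."""
    return delta_cost / delta_qaly

WTP = 20000                     # willingness-to-pay threshold (EUR/QALY) used in the study
model1 = icer(5991, 0.112)      # lifetime osteoarthritis-prevention model
model2 = icer(9825, 0.033)      # 5-year symptom-relief model
# Both ratios exceed WTP, so neither scenario is cost-effective at EUR 20,000/QALY.
```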

  6. Gas dynamics in the impulsive phase of solar flares. I Thick-target heating by nonthermal electrons

    NASA Technical Reports Server (NTRS)

    Nagai, F.; Emslie, A. G.

    1984-01-01

    A numerical investigation is carried out of the gas dynamical response of the solar atmosphere to a flare energy input in the form of precipitating nonthermal electrons. Rather than discussing the origin of these electrons, the spectral and temporal characteristics of the injected flux are inferred through a thick-target model of hard X-ray bremsstrahlung production. It is assumed that the electrons spiral about preexisting magnetic field lines, making it possible for a one-dimensional spatial treatment to be performed. It is also assumed that all electron energy losses are due to Coulomb collisions with ambient particles; that is, return-current ohmic effects and collective plasma processes are neglected. The results are contrasted with earlier work on conductive heating of the flare atmosphere. A local temperature peak is seen at a height of approximately 1500 km above the photosphere. This derives from a spatial maximum in the energy deposition rate from an electron beam. It is noted that such a feature is not present in conductively heated models. The associated localized region of high pressure drives material both upward and downward.

  7. Cost-effectiveness of 13-valent pneumococcal conjugate vaccine in Switzerland.

    PubMed

    Blank, Patricia R; Szucs, Thomas D

    2012-06-13

    The 7-valent pneumococcal conjugate vaccine (PCV7) has been shown to be highly cost-effective. The 13-valent pneumococcal conjugate vaccine (PCV13) offers seroprotection against six additional serotypes. A decision-analytic model was constructed to estimate direct medical costs and clinical effectiveness of PCV13 vaccination on invasive pneumococcal disease (IPD), pneumonia, and otitis media relative to PCV7 vaccination. The option of a one-dose catch-up vaccination in children of 15-59 months was also considered. Assuming 83% vaccination coverage and considering indirect effects, 1808 IPD, 5558 pneumonia and 74,136 otitis media cases could be eliminated from the entire population during a 10-year modelling period. The PCV13 vaccination programme would lead to additional costs (+€26.2 Mio), but the medical cost savings of €77.1 Mio due to cases averted and deaths avoided would more than offset these costs (total cost savings -€50.9 Mio). A national immunisation programme with PCV13 can thus be assumed to be cost saving when compared with the current vaccine PCV7 in Switzerland. Copyright © 2012 Elsevier Ltd. All rights reserved.
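
The budget arithmetic behind the cost-saving conclusion is simply incremental cost minus averted medical cost:

```python
def net_budget_impact(program_cost, medical_savings):
    """Net cost in millions of euros; a negative value means cost saving."""
    return program_cost - medical_savings

net = net_budget_impact(26.2, 77.1)   # abstract's 10-year figures, in Mio EUR
# net < 0 reproduces the reported total cost saving of about 50.9 Mio EUR.
```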

  8. Integrated approach for managing health risks at work--the role of occupational health nurses.

    PubMed

    Marinescu, Luiza G

    2007-02-01

    Currently, many organizations are using a department-centered approach to manage health risks at work. In such a model, segregated departments are providing employee benefits such as health insurance, workers' compensation, and short- and long-term disability or benefits addressing work-life issues. In recent years, a new model has emerged: health and productivity management (HPM). This is an employee-centered, integrated approach, designed to increase efficiency, reduce competition for scarce resources, and increase employee participation in prevention activities. Evidence suggests that corporations using integrated HPM programs achieve better health outcomes for their employees, with consequent increased productivity and decreased absenteeism. Occupational health nurses are well positioned to assume leadership roles in their organizations by coordinating efforts and programs across departments that offer health, wellness, and safety benefits. To assume their role as change agents to improve employees' health, nurses should start using the language of business more often by improving their communication skills, computer skills, and ability to quantify and articulate results of programs and services to senior management.

  9. Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization

    PubMed Central

    Sun, Christopher L. F.; Demirtas, Derya; Brooks, Steven C.; Morrison, Laurie J.; Chan, Timothy C.Y.

    2016-01-01

    BACKGROUND Immediate access to an automated external defibrillator (AED) increases the chance of survival from out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. OBJECTIVES We sought to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate if OHCA coverage would improve compared to deployment based on spatial accessibility alone. METHODS This was a retrospective population-based cohort study using data from the Toronto Regional RescuNET cardiac arrest database. We identified all nontraumatic public-location OHCAs in Toronto, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto emergency medical services. We quantified coverage loss due to limited temporal access by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed 24/7 coverage) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). We then developed a spatiotemporal optimization model that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. We computed the coverage gain between the spatiotemporal model and a spatial-only model using 10-fold cross-validation. RESULTS We identified 2,440 atraumatic public OHCAs and 737 registered AED locations. A total of 451 OHCAs were covered by registered AEDs under assumed 24/7 coverage, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved over the spatial-only approach (p < 0.001). CONCLUSIONS One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. 
PMID:27539176
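    The deployment problem in this record is a maximum-coverage optimization. As an illustrative sketch only (the study solves a formal spatiotemporal optimization model; a greedy heuristic with invented toy data stands in for it here), each candidate site covers an OHCA when the arrest lies within 100 m and the site's availability window contains the arrest time:

```python
# Illustrative sketch only: a greedy maximum-coverage heuristic for
# spatiotemporal AED placement.  The study solves a formal optimization
# model; the greedy rule and all data below are invented stand-ins.

def covers(site, ohca, radius_m=100.0):
    """An OHCA is covered if it lies within radius_m of the site AND
    occurred during the site's daily availability window."""
    (sx, sy), (open_h, close_h) = site
    (ox, oy), hour = ohca
    dist = ((sx - ox) ** 2 + (sy - oy) ** 2) ** 0.5
    return dist <= radius_m and open_h <= hour < close_h

def greedy_deploy(sites, ohcas, n_aeds):
    """Pick up to n_aeds sites, each step taking the site that newly
    covers the most arrests; return (chosen sites, arrests covered)."""
    chosen, uncovered = [], set(range(len(ohcas)))
    for _ in range(n_aeds):
        best, best_gain = None, set()
        for s in sites:
            if s in chosen:
                continue
            gain = {i for i in uncovered if covers(s, ohcas[i])}
            if len(gain) > len(best_gain):
                best, best_gain = s, gain
        if best is None:
            break
        chosen.append(best)
        uncovered -= best_gain
    return chosen, len(ohcas) - len(uncovered)

# Toy data: site = ((x, y), (open_hour, close_hour)); OHCA = ((x, y), hour).
sites = [((0, 0), (9, 17)), ((0, 0), (0, 24)), ((500, 0), (0, 24))]
ohcas = [((10, 0), 3), ((20, 0), 12), ((510, 0), 22)]
picked, covered = greedy_deploy(sites, ohcas, n_aeds=2)
print(covered)  # 3: the 24/7 site covers the first two arrests, the far site the third
```

    The greedy rule, which at each step takes the site covering the most still-uncovered arrests, carries the classical (1 - 1/e) approximation guarantee for maximum coverage; it is a stand-in for, not a reproduction of, the study's optimization model.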

  10. Superscaling in electron-nucleus scattering and its link to CC and NC QE neutrino-nucleus scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbaro, M. B.; Amaro, J. E.; Caballero, J. A.

    2015-05-15

    The superscaling approach (SuSA) to neutrino-nucleus scattering, based on the assumed universality of the scaling function for electromagnetic and weak interactions, is reviewed. The predictions of the SuSA model for both CC and NC differential and total cross sections are presented and compared with the MiniBooNE data. The role of scaling violations, in particular the contribution of meson-exchange currents in the two-particle two-hole sector, is explored.

  11. Origin of traps and charge transport mechanism in hafnia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islamov, D. R., E-mail: damir@isp.nsc.ru; Gritsenko, V. A., E-mail: grits@isp.nsc.ru; Novosibirsk State University, Novosibirsk 630090

    2014-12-01

    In this study, we demonstrated experimentally and theoretically that oxygen vacancies are responsible for the charge transport in HfO{sub 2}. Based on a model of phonon-assisted tunneling between traps, and assuming that the electron traps are oxygen vacancies, good quantitative agreement between the experimental and theoretical current-voltage characteristics was achieved. The thermal trap energy in HfO{sub 2} was determined from the charge transport experiments to be 1.25 eV.

  12. Spin-dependence of the electron scattering cross section by a magnetic layer system and the magneto-resistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J.T.; Tang, F.; Brown, W.D.

    1998-12-20

    The authors present a theoretical model for calculating the spin-dependent cross section of the scattering of electrons by a magnetic layer system. The model demonstrates that the scattering cross sections are different for spin-up and spin-down electrons. The model assumes that the electrical resistivity in a conductor is proportional to the scattering cross section of the electrons in it. It is believed to support the two-channel mechanism in interpreting magneto-resistance (MR). Based on the model, and neglecting scattering due to interfacial roughness and spin-flip scattering, the authors have established a relationship between MR and the square of the magnetic moment in the bulk sample. The model can also qualitatively explain the MR difference between the current-in-plane (CIP) and current-perpendicular-to-plane (CPP) configurations. The predictions of the model agree well with the experimental findings.

  13. Dynamic modeling for flow-activated chloride-selective membrane current in vascular endothelial cells.

    PubMed

    Qin, Kai-Rong; Xiang, Cheng; Cao, Ling-Ling

    2011-10-01

    In this paper, a dynamic model is proposed to quantify the relationship between fluid flow and Cl(-)-selective membrane current in vascular endothelial cells (VECs). It is assumed that the external shear stress first induces channel deformation in VECs. This deformation can activate the Cl(-) channels on the membrane, thus allowing Cl(-) transport across the membrane. A modified Hodgkin-Huxley model is embedded into our dynamic system to describe the electrophysiological properties of the membrane, such as the Cl(-)-selective membrane current (I), voltage (V) and conductance. Three flow patterns, i.e., steady flow, oscillatory flow, and pulsatile flow, are applied in our simulation studies. When the extracellular Cl(-) concentration is constant, the I-V characteristics predicted by our dynamic model show strong consistency with the experimental observations. It is also interesting to note that the Cl(-) currents under different flow patterns show some differences, indicating that VECs distinguish among and respond differently to different types of flows. When the extracellular Cl(-) concentration remains constant or varies slowly with time (i.e., oscillates at 0.02 Hz), the convection and diffusion of Cl(-) in extracellular space can be ignored and the Cl(-) current is well captured by the modified Hodgkin-Huxley model alone. However, when the extracellular Cl(-) concentration varies fast (i.e., oscillates at 0.2 Hz), the convection-diffusion effect should be considered, because the Cl(-) current dynamics differ from the case where this effect is simply ignored. The proposed dynamic model, along with the simulation results, could not only provide more insights into the flow-regulated electrophysiological behavior of the cell membrane but also help to reveal new findings in electrophysiological experimental investigations of VECs in response to dynamic flow and biochemical stimuli.
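    A minimal sketch of the Hodgkin-Huxley-style idea in this record, with a single shear-dependent gating variable; the rate functions and every parameter value below are hypothetical, not those of the paper:

```python
# A minimal sketch of a shear-activated Cl- current with one
# Hodgkin-Huxley-style gating variable.  Every rate and parameter here
# is hypothetical; the paper's actual model is richer.

def simulate(shear, dt=1e-3, t_end=1.0, g_max=1.0, E_Cl=-30.0, V=0.0):
    """Integrate dn/dt = alpha*(1 - n) - beta*n with forward Euler and
    return the Cl- current I = g_max * n * (V - E_Cl) at t_end."""
    alpha = 2.0 * shear   # hypothetical: opening rate grows with shear stress
    beta = 1.0            # hypothetical closing rate (1/s)
    n, t = 0.0, 0.0       # n = fraction of open channels
    while t < t_end:
        n += dt * (alpha * (1.0 - n) - beta * n)
        t += dt
    return g_max * n * (V - E_Cl)

# The gate relaxes toward n_inf = alpha / (alpha + beta), so stronger
# shear opens more channels and yields a larger current.
I_low, I_high = simulate(shear=0.5), simulate(shear=2.0)
print(I_low < I_high)  # True
```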

  14. Ring current Atmosphere interactions Model with Self-Consistent Magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordanova, Vania; Jeffery, Christopher; Welling, Daniel

    The Ring current Atmosphere interactions Model with Self-Consistent magnetic field (RAM-SCB) is a unique code that combines a kinetic model of ring current plasma with a three-dimensional force-balanced model of the terrestrial magnetic field. The kinetic portion, RAM, solves the kinetic equation to yield the bounce-averaged distribution function as a function of azimuth, radial distance, energy and pitch angle for three ion species (H+, He+, and O+) and, optionally, electrons. The domain is a circle in the Solar-Magnetic (SM) equatorial plane with a radial span of 2 to 6.5 RE. It has an energy range of approximately 100 eV to 500 keV. The 3-D force-balanced magnetic field model, SCB, balances the JxB force with the divergence of the general pressure tensor to calculate the magnetic field configuration within its domain. The domain ranges from near the Earth’s surface, where the field is assumed dipolar, to the shell created by field lines passing through the SM equatorial plane at a radial distance of 6.5 RE. The two codes work in tandem, with RAM providing anisotropic pressure to SCB and SCB returning the self-consistent magnetic field through which RAM plasma is advected.

  15. Fractal scaling laws of black carbon aerosol and their influence on spectral radiative properties

    NASA Astrophysics Data System (ADS)

    Tiwari, S.; Chakrabarty, R. K.; Heinson, W.

    2016-12-01

    Current estimates of the direct radiative forcing of black carbon (BC) aerosol span a poorly constrained range between 0.2 and 1 W m-2. To reduce this large uncertainty, tighter constraints need to be placed on BC's key wavelength-dependent optical properties, namely the mass absorption (MAC) and mass scattering (MSC) cross sections and the hemispherical upscatter fraction (β, a dimensionless scattering directionality parameter). These parameters are very sensitive to changes in particle morphology and complex refractive index. Their interplay determines the magnitude of net positive or negative radiative forcing efficiencies. The current approach among climate modelers for estimating MAC and MSC values of BC is to calculate optical cross sections assuming spherical particle morphology with a homogeneous, constant-valued refractive index in the visible solar spectrum. The β values are typically assumed to be constant across this spectrum. This approach, while computationally inexpensive and convenient, ignores the inherent fractal morphology of BC, its scaling behaviors, and the resulting optical properties. In this talk, I will present recent results from my laboratory on the determination of the fractal scaling laws of BC aggregate packing density and complex refractive index for sizes spanning three orders of magnitude, and their effects on the spectral (visible-infrared) scaling of MAC, MSC, and β values. Our experiments synergistically combined novel BC generation techniques, aggregation models, contact-free multi-wavelength optical measurements, and electron microscopy analysis. The scale dependence of the refractive index on aggregate size followed power-law exponents of -1.4 and -0.5 for sub- and super-micron size aggregates, respectively. 
The spherical Rayleigh-optics approximation limits, used by climate models for spectral extrapolation of BC optical cross-sections and deconvolution of multi-species mixing ratios, are redefined using the concept of phase shift parameter. I will highlight the importance of size-dependent β values and its role in offsetting the strong light absorbing nature of BC. Finally, the errors introduced in forcing efficiency calculations of BC by assuming spherical homogeneous morphology will be evaluated.

  16. Future requirements for and supply of ophthalmologists for an aging population in Singapore.

    PubMed

    Ansah, John P; De Korne, Dirk; Bayer, Steffen; Pan, Chong; Jayabaskar, Thiyagarajan; Matchar, David B; Lew, Nicola; Phua, Andrew; Koh, Victoria; Lamoureux, Ecosse; Quek, Desmond

    2015-11-17

    Singapore's population, like those of many other countries, is aging; this is likely to lead to an increase in eye diseases and the demand for eye care. Since ophthalmologist training is long and expensive, early planning is essential. This paper forecasts workforce and training requirements for Singapore up to the year 2040 under several plausible future scenarios. The Singapore Eye Care Workforce Model was created as a continuous-time compartment model with explicit workforce stocks using system dynamics. The model has three modules: prevalence of eye disease, demand, and workforce requirements. The model is used to simulate the prevalence of eye diseases, patient visits, and workforce requirements for the public sector under different scenarios in order to determine training requirements. Four scenarios were constructed. Under the baseline business-as-usual scenario, the required number of ophthalmologists is projected to increase by 117% from 2015 to 2040. Under the current policy scenario (assuming an increase in service uptake due to increased awareness, availability, and accessibility of eye care services), the increase will be 175%, while under the new model of care scenario (considering the additional effect of providing some services by non-ophthalmologists) the increase will only be 150%. The moderated workload scenario (assuming in addition a reduction of the clinical workload) projects an increase in the required number of ophthalmologists of 192% by 2040. Considering the uncertainties in the projected demand for eye care services, a residency intake of 8-22 residents per year is required under the business-as-usual scenario, 17-21 under the current policy scenario, 14-18 under the new model of care scenario, and 18-23 under the moderated workload scenario. 
The results show that under all scenarios considered, Singapore's aging and growing population will result in an almost doubling of the number of Singaporeans with eye conditions, a significant increase in public sector eye care demand and, consequently, a greater requirement for ophthalmologists.
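    The system-dynamics projection in this record can be sketched as a single workforce stock with intake and attrition flows; the attrition rate, initial stock, and demand target below are invented for illustration, not the paper's values:

```python
# Illustrative stock-and-flow sketch in the spirit of the system-dynamics
# workforce model above.  The attrition rate, initial stock, and demand
# target are invented; they are not the paper's values.

def project_workforce(years, intake_per_year, stock0=100, attrition_rate=0.03):
    """Euler-step the stock: stock' = intake - attrition * stock."""
    stock, history = stock0, [stock0]
    for _ in range(years):
        stock += intake_per_year - attrition_rate * stock
        history.append(stock)
    return history

def required_intake(target_stock, years, stock0=100, attrition_rate=0.03):
    """Smallest constant annual intake whose projection meets the target."""
    intake = 0
    while project_workforce(years, intake, stock0, attrition_rate)[-1] < target_stock:
        intake += 1
    return intake

# E.g. if demand implies ~217 ophthalmologists after 25 years (a 117%
# rise from a toy baseline of 100), the required constant intake is:
print(required_intake(target_stock=217, years=25))  # 10
```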

  17. NON-EQUILIBRIUM HELIUM IONIZATION IN AN MHD SIMULATION OF THE SOLAR ATMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit, E-mail: thomas.golding@astro.uio.no, E-mail: mats.carlsson@astro.uio.no, E-mail: jorrit.leenaarts@astro.su.se

    The ionization state of the gas in the dynamic solar chromosphere can depart strongly from the instantaneous statistical equilibrium commonly assumed in numerical modeling. We improve on earlier simulations of the solar atmosphere that only included non-equilibrium hydrogen ionization by performing a 2D radiation-magnetohydrodynamics simulation featuring non-equilibrium ionization of both hydrogen and helium. The simulation includes the effect of hydrogen Lyα and the EUV radiation from the corona on the ionization and heating of the atmosphere. Details on code implementation are given. We obtain helium ion fractions that are far from their equilibrium values. Comparison with models with local thermodynamic equilibrium (LTE) ionization shows that non-equilibrium helium ionization leads to higher temperatures in wavefronts and lower temperatures in the gas between shocks. Assuming LTE ionization results in a thermostat-like behavior with matter accumulating around the temperatures where the LTE ionization fractions change rapidly. Comparison of DEM curves computed from our models shows that non-equilibrium ionization leads to more radiating material in the temperature range 11–18 kK, compared to models with LTE helium ionization. We conclude that non-equilibrium helium ionization is important for the dynamics and thermal structure of the upper chromosphere and transition region. It might also help resolve the problem that intensities of chromospheric lines computed from current models are smaller than those observed.

  18. SEASAT programs option analysis

    NASA Technical Reports Server (NTRS)

    Luckl, L.

    1976-01-01

    A preliminary analysis of the costs of SEASAT follow-on options is presented. All the options assume the existence of SEASAT-A as currently defined in the SEASAT Economic Assessment. It is assumed that each option will continue through the year 2000 and approach operational system status in the 1983-1986 period, depending upon the sensor package selected. The launch vehicle assumed through 1983 is the Atlas Agena; after 1983, it is assumed that the program will switch to the Space Shuttle. All cost estimates are in 1976 dollars on a fiscal-year cost-accounting basis, with no inflation rate included.

  19. Cosmic strings

    NASA Technical Reports Server (NTRS)

    Bennett, David P.

    1988-01-01

    Cosmic strings are linear topological defects which are predicted by some grand unified theories to form during a spontaneous symmetry-breaking phase transition in the early universe. Aside from quantum fluctuations from inflation, they are the basis for the only theories of galaxy formation based on fundamental physics. In contrast to inflation, they can also be observed directly, through gravitational lensing and their characteristic microwave background anisotropy. It was recently discovered that the details of cosmic string evolution are very different from the so-called standard model that was assumed in most of the string-induced galaxy formation calculations. Therefore, the details of galaxy formation in the cosmic string models are currently very uncertain.

  20. Public opinion by a poll process: model study and Bayesian view

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Kim, Yong Woon

    2018-05-01

    We study the formation of public opinion in a poll process where the current score is open to the public. The voters are assumed to vote probabilistically, for or against their own preference, considering the group opinion collected up to that point in the score. The poll-score probability is found to follow the beta distribution in the large-poll limit. We demonstrate that various poll results, even those contradictory to the population preference, are possible with non-zero probability density, and that such deviations are readily triggered by initial bias. We note that our poll model can be understood from the Bayesian viewpoint.
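    The reinforcement mechanism behind the beta-distributed score can be sketched as a Pólya-type urn in which each voter sides with the current "for" share; this strips away the private-preference part of the paper's voting rule and keeps only the urn core:

```python
# Sketch of the reinforcement core of the poll model as a Polya-type
# urn: each voter votes "for" with probability equal to the current
# "for" share.  (The paper's rule also weighs the voter's private
# preference; that part is omitted here.)  With initial pseudo-counts
# (a, b), the final score fraction is Beta(a, b) in the large-poll limit.
import random

def run_poll(n_votes, a=1, b=1, rng=random.Random(0)):
    """Return the final 'for' fraction of a single poll."""
    for_count, against_count = a, b
    for _ in range(n_votes):
        if rng.random() < for_count / (for_count + against_count):
            for_count += 1
        else:
            against_count += 1
    return for_count / (for_count + against_count)

# With a = b = 1 the limit is Beta(1, 1), i.e. uniform: results far from
# a balanced start occur with non-zero probability density, and an
# initial bias (a != b) readily skews the whole distribution.
finals = [run_poll(2000) for _ in range(2000)]
print(min(finals) < 0.1 and max(finals) > 0.9)  # True: the spread is wide
```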

  1. Disordered nuclear pasta, magnetic field decay, and crust cooling in neutron stars

    NASA Astrophysics Data System (ADS)

    Horowitz, C. J.; Berry, D. K.; Briggs, C. M.; Caplan, M. E.; Cumming, A.; Schneider, A. S.

    2015-04-01

    Nuclear pasta, with non-spherical shapes, is expected near the base of the crust in neutron stars. Large scale molecular dynamics simulations of pasta show long lived topological defects that could increase electron scattering and reduce both the thermal and electrical conductivities. We model a possible low conductivity pasta layer by increasing an impurity parameter Qimp. Predictions of light curves for the low mass X-ray binary MXB 1659-29, assuming a large Qimp, find continued late time cooling that is consistent with Chandra observations. The electrical and thermal conductivities are likely related. Therefore observations of late time crust cooling can provide insight on the electrical conductivity and the possible decay of neutron star magnetic fields (assuming these are supported by currents in the crust). This research was supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  2. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further, percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Method for confining the magnetic field of the cross-tail current inside the magnetopause

    NASA Technical Reports Server (NTRS)

    Sotirelis, T.; Tsyganenko, N. A.; Stern, D. P.

    1994-01-01

    A method is presented for analytically representing the magnetic field due to the cross-tail current and its closure on the magnetopause. It is an extension of a method used by Tsyganenko (1989b) to confine the dipole field inside an ellipsoidal magnetopause using a scalar potential. Given a model of the cross-tail current, the implied net magnetic field is obtained by adding to the cross-tail current field a potential field B = -∇γ, which makes all field lines divide into two disjoint groups separated by the magnetopause (i.e., the combined field is made to have zero normal component at the magnetopause). The magnetopause is assumed to be an ellipsoid of revolution (a prolate spheroid) as an approximation to observations (Sibeck et al., 1991). This assumption permits the potential γ to be expressed in spheroidal coordinates, expanded in spheroidal harmonics, and its terms evaluated by performing inversion integrals. Finally, the field outside the magnetopause is replaced by zero, resulting in a consistent current closure along the magnetopause. This procedure can also be used to confine the modeled field of any other interior magnetic source, though the model current must always flow in closed circuits. The method is demonstrated on the T87 cross-tail current; examples illustrate the effect of changing the size and shape of the prescribed magnetopause, and a comparison is made to an independent numerical scheme based on the Biot-Savart equation.

  4. A tale of two cities: Comparison of impacts on CO2 emissions, the indoor environment and health of home energy efficiency strategies in London and Milton Keynes

    NASA Astrophysics Data System (ADS)

    Shrubsole, C.; Das, P.; Milner, J.; Hamilton, I. G.; Spadaro, J. V.; Oikonomou, E.; Davies, M.; Wilkinson, P.

    2015-11-01

    Dwellings are a substantial source of global CO2 emissions. The energy used in homes for heating, cooking and running electrical appliances is responsible for a quarter of current total UK emissions and is a key target of government policies for greenhouse gas abatement. Policymakers need to understand the potential impact that such decarbonization policies have on the indoor environment and health for a full assessment of costs and benefits. We investigated these impacts in two contrasting settings of the UK: London, a predominantly older city and Milton Keynes, a growing new town. We employed SCRIBE, a building physics-based health impact model of the UK housing stock linked to the English Housing Survey, to examine changes, 2010-2050, in end-use energy demand, CO2 emissions, winter indoor temperatures, airborne pollutant concentrations and associated health impacts. For each location we modelled the existing (2010) housing stock and three future scenarios with different levels of energy efficiency interventions combined with either a business-as-usual, or accelerated decarbonization of the electricity grid approach. The potential for CO2 savings was appreciably greater in London than Milton Keynes except when substantial decarbonization of the electricity grid was assumed, largely because of the lower level of current energy efficiency in London and differences in the type and form of the housing stock. The average net impact on health per thousand population was greater in magnitude under all scenarios in London compared to Milton Keynes and more beneficial when it was assumed that purpose-provided ventilation (PPV) would be part of energy efficiency interventions, but more detrimental when interventions were assumed not to include PPV. 
These findings illustrate the importance of considering ventilation measures for health protection and the potential variation in the impact of home energy efficiency strategies, suggesting the need for tailored policy approaches in different locations, rather than adopting a universally rolled out strategy.

  5. Simulation of the Universal-Time Diurnal Variation of the Global Electric Circuit Charging Rate

    NASA Technical Reports Server (NTRS)

    Mackerras, David; Darveniza, Mat; Orville, Richard E.; Williams, Earle R.; Goodman, Steven J.

    1999-01-01

    A global lightning model that includes diurnal and annual lightning variation, and total flash density versus latitude for each major land and ocean, has been used as the basis for simulating the global electric circuit charging rate. A particular objective has been to reconcile the difference in amplitude ratios [AR=(max-min)/mean] between global lightning diurnal variation (AR approximately equals 0.8) and the diurnal variation of typical atmospheric potential gradient curves (AR approximately equals 0.35). A constraint on the simulation is that the annual mean charging current should be about 1000 A. The global lightning model shows that negative ground flashes can contribute, at most, about 10-15% of the required current. For the purpose of the charging rate simulation, it was assumed that each ground flash contributes 5 C to the charging process. It was necessary to assume that all electrified clouds contribute to charging by means other than lightning, that the total flash rate can serve as an indirect indicator of the rate of charge transfer, and that oceanic electrified clouds contribute to charging even though they are relatively inefficient in producing lightning. It was also found necessary to add a diurnally invariant charging current component. By trial and error it was found that charging rate diurnal variation curves could be produced with amplitude ratios and general shapes similar to those of the potential gradient diurnal variation curves measured over ocean and arctic regions during voyages of the Carnegie Institute research vessels. The comparisons were made for the northern winter (Nov.-Feb.), the equinox (Mar., Apr., Sept., Oct.), the northern summer (May-Aug.), and the whole year.
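    The charging budget in this record admits a back-of-envelope check. The 1000 A mean charging current and 5 C per ground flash are taken from the abstract; the global negative ground-flash rate below is a rough assumed figure, used only to show how the 10-15% contribution arises:

```python
# Back-of-envelope arithmetic for the charging budget described above.
# The 1000 A mean charging current and 5 C per ground flash are from the
# abstract; the global negative ground-flash rate below is a rough
# assumed figure, used only to reproduce the ~10-15% conclusion.
charging_current_A = 1000.0
charge_per_flash_C = 5.0
flashes_needed_per_s = charging_current_A / charge_per_flash_C
print(flashes_needed_per_s)  # 200.0 ground flashes per second would be needed

assumed_ground_flash_rate_per_s = 25.0
fraction = assumed_ground_flash_rate_per_s / flashes_needed_per_s
print(fraction)  # 0.125, i.e. within the 10-15% range quoted above
```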

  6. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
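    The effect of colored observation noise on formal errors can be illustrated generically (this is not the CSR RL05 algorithm): with AR(1)-correlated errors, the independent-Gaussian formula σ²/n badly understates the variance of even a simple estimated mean, which is why a full observation covariance improves the formal error estimates described above:

```python
# Generic illustration (not the CSR RL05 algorithm): with AR(1)-correlated
# ("colored") observation noise, the independent-Gaussian formula
# sigma^2/n badly understates the variance of an estimated mean, which is
# why a full observation covariance improves formal error estimates.
import random

def ar1_noise(n, rho, sigma, rng):
    """Stationary AR(1) noise: e_i = rho*e_{i-1} + sqrt(1-rho^2)*sigma*w_i."""
    e, out = 0.0, []
    scale = sigma * (1.0 - rho * rho) ** 0.5
    for _ in range(n):
        e = rho * e + scale * rng.gauss(0.0, 1.0)
        out.append(e)
    return out

def mean_variance(n=200, rho=0.9, sigma=1.0, trials=2000):
    """Empirical variance of the sample mean over many noise realizations."""
    rng = random.Random(1)
    means = [sum(ar1_noise(n, rho, sigma, rng)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return sum((m - mu) ** 2 for m in means) / trials

naive = 1.0 / 200         # sigma^2 / n under the independence assumption
actual = mean_variance()  # true (empirical) variance under rho = 0.9
print(actual > 5 * naive)  # True: the naive formal error is far too optimistic
```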

  7. Hydrodynamic models for slurry bubble column reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gidaspow, D.

    1995-12-31

    The objective of this investigation is to convert a {open_quotes}learning gas-solid-liquid{close_quotes} fluidization model into a predictive design model. This model is capable of predicting local gas, liquid and solids hold-ups and the basic flow regimes: the uniform bubbling, the industrially practical churn-turbulent (bubble coalescence) and the slugging regimes. Current reactor models incorrectly assume that the gas and the particle hold-ups (volume fractions) are uniform in the reactor. They must be given in terms of empirical correlations determined under conditions that radically differ from reactor operation. In the proposed hydrodynamic approach these hold-ups are computed from separate phase momentum balances. Furthermore,more » the kinetic theory approach computes the high slurry viscosities from collisions of the catalyst particles. Thus particle rheology is not an input into the model.« less

  8. Focal length calibration of an electrically tunable lens by digital holography.

    PubMed

    Wang, Zhaomin; Qu, Weijuan; Yang, Fang; Asundi, Anand Krishna

    2016-02-01

    The electrically tunable lens (ETL) is a novel current-controlled adaptive optical component which can continuously tune its focus within a specific range by changing its surface curvature. To quantitatively characterize its tuning power, here we assume the ETL to be a pure phase object and present a calibration method to dynamically measure its wavefront by use of digital holographic microscopy (DHM). The least squares method is then used to fit the radius of curvature of the wavefront. The focal length is obtained by substituting the radius into the Zemax model of the ETL. The behavior curve relating the focal length of the ETL to its driving current is drawn, and a quadratic mathematical model is set up to characterize it. To verify our model, an ETL and offset-lens combination is proposed and applied to ETL-based transport of intensity equation (TIE) phase retrieval microscopy. The experimental result demonstrates that the calibration works well in TIE phase retrieval in comparison with the phase measured by DHM.
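    The quadratic focal-length/current calibration in this record can be sketched as an ordinary least-squares fit via the normal equations; the data points below are synthetic (generated from a known quadratic), not the paper's measurements:

```python
# Sketch of the quadratic focal-length/current calibration as a
# least-squares fit via the normal equations.  The data points are
# synthetic (generated from a known quadratic), not the paper's
# measurements.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(currents, focals):
    """Least-squares fit of f(I) = a*I^2 + b*I + c via normal equations."""
    rows = [[i * i, i, 1.0] for i in currents]
    AtA = [[sum(r[j] * r[k] for r in rows) for k in range(3)] for j in range(3)]
    Atb = [sum(r[j] * f for r, f in zip(rows, focals)) for j in range(3)]
    return solve3(AtA, Atb)  # returns [a, b, c]

# Synthetic calibration data from a known quadratic f = 50*I^2 - 20*I + 100:
currents = [0.0, 0.5, 1.0, 1.5, 2.0]
focals = [50.0 * i * i - 20.0 * i + 100.0 for i in currents]
a, b, c = fit_quadratic(currents, focals)
print(round(a, 6), round(b, 6), round(c, 6))  # 50.0 -20.0 100.0
```

    With noise-free data the fit recovers the generating coefficients exactly (to floating-point precision); with measured data the same normal equations give the least-squares coefficients.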

  9. First measurement of the muon antineutrino double-differential charged-current quasielastic cross section

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Brown, B. C.; Bugel, L.; Cheng, G.; Church, E. D.; Conrad, J. M.; Dharmapalan, R.; Djurcic, Z.; Finley, D. A.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Grange, J.; Huelsnitz, W.; Ignarra, C.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Katori, T.; Kobilarcik, T.; Louis, W. C.; Mariani, C.; Marsh, W.; Mills, G. B.; Mirabal, J.; Moore, C. D.; Mousseau, J.; Nienaber, P.; Osmanov, B.; Pavlovic, Z.; Perevalov, D.; Polly, C. C.; Ray, H.; Roe, B. P.; Russell, A. D.; Shaevitz, M. H.; Spitz, J.; Stancu, I.; Tayloe, R.; Van de Water, R. G.; Wascko, M. O.; White, D. H.; Wickremasinghe, D. A.; Zeller, G. P.; Zimmerman, E. D.

    2013-08-01

    The largest sample ever recorded of ν̄μ charged-current quasielastic (CCQE, ν̄μ + p → μ+ + n) candidate events is used to produce the minimally model-dependent, flux-integrated double-differential cross section d²σ/(dTμ d cosθμ) for ν̄μ CCQE on a mineral oil target. This measurement exploits the large statistics of the MiniBooNE antineutrino-mode sample and provides the most complete information on this process to date. To facilitate historical comparisons, the flux-unfolded total cross section σ(Eν) and single-differential cross section dσ/dQ² on both mineral oil and carbon are also reported. The observed cross section is somewhat higher than that predicted by a model assuming independently acting nucleons in carbon with canonical form-factor values. The shape of the data is also discrepant with this model. These results have implications for intranuclear processes and can help constrain signal and background processes for future neutrino oscillation measurements.

  10. Self-consistent modeling of CFETR baseline scenarios for steady-state operation

    NASA Astrophysics Data System (ADS)

    Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team

    2017-07-01

    Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with q min > 2 and fusion power of ˜70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.

  11. Tabletop Molecular Communication: Text Messages through Chemical Signals

    PubMed Central

    Farsad, Nariman; Guo, Weisi; Eckford, Andrew W.

    2013-01-01

    In this work, we describe the first modular and programmable platform capable of transmitting a text message using chemical signalling – a method also known as molecular communication. This form of communication is attractive for applications where conventional wireless systems perform poorly, from nanotechnology to urban health monitoring. Using examples, we demonstrate the use of our platform as a testbed for molecular communication, and illustrate the features of these communication systems using experiments. By providing a simple and inexpensive means of performing experiments, our system fills an important gap in the molecular communication literature, where much current work is done in simulation with simplified system models. A key finding in this paper is that these systems are often nonlinear in practice, whereas current simulations and analysis often assume that the system is linear. However, as we show in this work, despite the nonlinearity, reliable communication is still possible. Furthermore, this work motivates future studies on more realistic modelling, analysis, and design of theoretical models and algorithms for these systems. PMID:24367571

  12. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.
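    The hydrostatic term that all three models build on can be sketched numerically; the fluid density and anatomical distance below are illustrative assumptions, not the study's parameters:

```python
import math

RHO = 1000.0                # fluid density, kg/m^3 (approximate for CSF)
G = 9.81                    # gravitational acceleration, m/s^2
MMHG_PER_PA = 1.0 / 133.322 # pascals to mmHg

def hydrostatic_drop_mmhg(distance_m, tilt_deg):
    """Pressure drop (mmHg) along a fluid column of length distance_m
    when the body is tilted head-up by tilt_deg from supine."""
    return RHO * G * distance_m * math.sin(math.radians(tilt_deg)) * MMHG_PER_PA

# e.g. a hypothetical 30 cm vertical segment at a 45 degree head-up tilt
drop = hydrostatic_drop_mmhg(0.30, 45.0)
```

Model III in effect applies such terms piecewise, with the jugular collapse splitting the venous column into two hydrostatic compartments above a threshold tilt angle.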

  13. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
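    The evidence-weighted averaging rule described above can be sketched numerically; the log-evidences and per-model predictions below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical log-evidences for three candidate models of the environment
log_evidence = np.array([-10.0, -11.0, -14.0])

# Posterior model probabilities under a uniform prior: softmax of log-evidence
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

# Each model's prediction of some behaviorally relevant quantity (hypothetical)
predictions = np.array([0.2, 0.5, 0.9])

# Bayesian model average: evidence-weighted sum of the models' predictions
bma_prediction = float(w @ predictions)
```

Because the evidence penalizes complexity as well as rewarding accuracy, the weights w implement the accuracy-complexity trade-off the abstract refers to.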

  14. Polygonal current models for polycyclic aromatic hydrocarbons and graphene sheets of various shapes.

    PubMed

    Pelloni, Stefano; Lazzeretti, Paolo

    2018-01-05

    Assuming that graphene is an "infinite alternant" polycyclic aromatic hydrocarbon resulting from tessellation of a surface by only six-membered carbon rings, planar fragments of various size and shape (hexagon, triangle, rectangle, and rhombus) have been considered to investigate their response to a magnetic field applied perpendicularly. Allowing for simple polygonal current models, the diatropicity of a series of polycyclic textures has been reliably determined by comparing quantitative indicators: the π-electron contribution to I_B, the magnetic-field-induced current susceptibility of the peripheral circuit, against ξ∥ and σ∥(CM) = -NICS∥(CM), respectively the out-of-plane components of the magnetizability tensor and of the magnetic shielding tensor at the center of mass. Extended numerical tests and the analysis based on the polygonal model demonstrate that (i) ξ∥ and σ∥(CM) yield inadequate and sometimes erroneous measures of diatropicity, as they are heavily flawed by spurious geometrical factors, (ii) I_B values computed by simple polygonal models are valid quantitative indicators of aromaticity on the magnetic criterion, preferable to others presently available whenever the current susceptibility cannot be calculated ab initio as a flux integral, (iii) the hexagonal shape is the most effective at maximizing the strength of π-electron currents over the molecular perimeter, (iv) the edge current strength of triangular and rhombic graphene fragments is usually much smaller than that of hexagonal ones, (v) doping by boron and nitrogen nuclei can regulate and even inhibit peripheral ring currents, and (vi) only for very large rectangular fragments can substantial current strengths be expected. © 2017 Wiley Periodicals, Inc.

  15. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

    under the heading of linear models or linear statistical models.3,4 We have not used this material in this report. Assuming catastrophic failure when...assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF...failure times (unit 1: T1, unit 2: T2, ..., unit n: Tn) and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Donald; Elgqvist, Emma; Santhanagopalan, Shriram

    Manufacturing capacity for lithium-ion batteries (LIBs)--which power many consumer electronics and are increasingly used to power electric vehicles--is heavily concentrated in east Asia. Currently, China, Japan, and Korea collectively host 88% of all LIB cell and 79% of automotive LIB cell manufacturing capacity. Mature supply chains and strong cumulative production experience suggest that most LIB cell production will remain concentrated in Asia. However, other regions--including North America--could be competitive in the growing automotive LIB cell market under certain conditions. To illuminate the factors that drive regional competitiveness in automotive LIB cell production, this study models cell manufacturing cost and minimum sustainable price, and examines development of LIB supply chains and current LIB market conditions. Modeled costs are for large format, 20-Ah stacked pouch cells with lithium-nickel-manganese-cobalt-oxide (NMC) cathodes and graphite anodes suitable for automotive application. Production volume is assumed to be at commercial scale, 600 MWh per year.

  17. Statistical Prediction of Sea Ice Concentration over Arctic

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Jeong, Jee-Hoon; Kim, Baek-Min

    2017-04-01

    In this study, a statistical method that predicts sea ice concentration (SIC) over the Arctic is developed. We first calculate the Season-reliant Empirical Orthogonal Functions (S-EOFs) of monthly Arctic SIC from Nimbus-7 SMMR and DMSP SSM/I-SSMIS passive microwave data, which contain the seasonal cycles (12 months long) of the dominant SIC anomaly patterns. The current SIC state index is then determined by projecting the observed SIC anomalies for the latest 12 months onto the S-EOFs. Assuming the current SIC anomalies follow the spatio-temporal evolution in the S-EOFs, we project the future (up to 12 months) SIC anomalies by multiplying the state index by the corresponding S-EOF and summing over modes. The predictive skill is assessed by hindcast experiments initialized at all months of 1980-2010. When the predictive skill of the statistical model is compared with that of NCEP CFS v2, the statistical model shows higher skill in predicting sea ice concentration and extent.
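    The project-and-extrapolate step described above can be sketched as follows, using synthetic arrays in place of the SMMR/SSM/I fields (mode count and grid size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_space, n_modes = 500, 3            # grid points and retained S-EOF modes (assumed)

# Hypothetical S-EOFs: each mode is a 12-month evolution of a spatial pattern
seof = rng.standard_normal((n_modes, 12, n_space))

# Observed SIC anomalies for the latest 12 months (synthetic stand-in)
obs = rng.standard_normal((12, n_space))

# State index: projection of the observed year onto each S-EOF mode
norm = np.einsum('kms,kms->k', seof, seof)
index = np.einsum('ms,kms->k', obs, seof) / norm

# Forecast for a given lead month: index-weighted sum of the S-EOF maps,
# assuming anomalies keep following the S-EOF seasonal evolution
forecast_month_3 = np.einsum('k,ks->s', index, seof[:, 3, :])
```

The hindcast assessment then repeats this projection for every initialization month and compares the reconstructed anomalies against the withheld observations.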

  18. Cosmic background radiation anisotropies in universes dominated by nonbaryonic dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. R.; Efstathiou, G.

    1984-01-01

    Detailed calculations of the temperature fluctuations in the cosmic background radiation for universes dominated by massive collisionless relics of the big bang are presented. An initially adiabatic constant-curvature perturbation spectrum is assumed. In models with cold dark matter, the simplest hypothesis - that galaxies follow the mass distribution - leads to small-scale anisotropies which exceed current observational limits if omega is less than 0.2 h to the -4/3. Since low values of omega are indicated by dynamical studies of galaxy clustering, cold particle models in which light traces mass are probably incorrect. Reheating of the pregalactic medium is unlikely to modify this conclusion. In cold particle or neutrino-dominated universes with omega = 1, the predictions presented for small-scale and quadrupole anisotropies are below current limits. In all cases, the small-scale fluctuations are predicted to be about 10 percent linearly polarized.

  19. Toroidal Ampere-Faraday Equations Solved Consistently with the CQL3D Fokker-Planck Time-Evolution

    NASA Astrophysics Data System (ADS)

    Harvey, R. W.; Petrov, Yu. V.

    2013-10-01

    A self-consistent, time-dependent toroidal electric field calculation is a key feature of a complete 3D Fokker-Planck kinetic distribution radial transport code for f(v,theta,rho,t). In the present CQL3D finite-difference model, the electric field E(rho,t) is either prescribed, or iteratively adjusted to obtain prescribed toroidal or parallel currents. We discuss first results of an implementation of the Ampere-Faraday equation for the self-consistent toroidal electric field, as applied to the runaway electron production in tokamaks due to rapid reduction of the plasma temperature as occurs in a plasma disruption. Our previous results assuming a constant current density (Lenz' Law) model showed that prompt ``hot-tail runaways'' dominated ``knock-on'' and Dreicer ``drizzle'' runaways; we will examine modifications due to the more complete Ampere-Faraday solution. Work supported by US DOE under DE-FG02-ER54744.

  20. Spectral Analysis of the Wake behind a Helicopter Rotor Hub

    NASA Astrophysics Data System (ADS)

    Petrin, Christopher; Reich, David; Schmitz, Sven; Elbing, Brian

    2016-11-01

    A scaled model of a notional helicopter rotor hub was tested in the 48" Garfield Thomas Water Tunnel at the Applied Research Laboratory, Penn State. LDV and PIV measurements in the far wake consistently showed a six-per-revolution flow structure, in addition to stronger two- and four-per-revolution structures. These six-per-revolution structures persisted into the far field and have no direct geometric counterpart on the hub model. The current study will examine the Reynolds number dependence of these structures and present higher-order statistics of the turbulence within the wake. In addition, current activity using the EFPL Large Water Tunnel at Oklahoma State University will be presented. This effort uses a more canonical configuration to identify the source of these six-per-revolution structures, which are assumed to be a non-linear interaction between the two- and four-per-revolution structures.

  1. Inferences about unobserved causes in human contingency learning.

    PubMed

    Hagmayer, York; Waldmann, Michael R

    2007-03-01

    Estimates of the causal efficacy of an event need to take into account the possible presence and influence of other unobserved causes that might have contributed to the occurrence of the effect. Current theoretical approaches deal differently with this problem. Associative theories assume that at least one unobserved cause is always present. In contrast, causal Bayes net theories (including Power PC theory) hypothesize that unobserved causes may be present or absent. These theories generally assume independence of different causes of the same event, which greatly simplifies modelling learning and inference. In two experiments participants were requested to learn about the causal relation between a single cause and an effect by observing their co-occurrence (Experiment 1) or by actively intervening in the cause (Experiment 2). Participants' assumptions about the presence of an unobserved cause were assessed either after each learning trial or at the end of the learning phase. The results show an interesting dissociation. Whereas there was a tendency to assume interdependence of the causes in the online judgements during learning, the final judgements tended to be more in the direction of an independence assumption. Possible explanations and implications of these findings are discussed.

  2. Saturation current and collection efficiency for ionization chambers in pulsed beams.

    PubMed

    DeBlois, F; Zankowski, C; Podgorsak, E B

    2000-05-01

    Saturation currents and collection efficiencies in ionization chambers exposed to pulsed megavoltage photon and electron beams are determined assuming a linear relationship between 1/I and 1/V in the extreme near-saturation region, with I and V the chamber current and polarizing voltage, respectively. Careful measurements of chamber current against polarizing voltage in the extreme near-saturation region reveal a current rising faster than that predicted by the linear relationship. This excess current, combined with the conventional "two-voltage" technique for determination of collection efficiency, may result in up to a 0.7% overestimate of the saturation current for standard radiation field sizes of 10 × 10 cm2. The measured excess current is attributed to charge multiplication in the chamber air volume and to radiation-induced conductivity in the stem of the chamber (stem effect). These effects may be accounted for by an exponential term used in conjunction with Boag's equation for collection efficiency in pulsed beams. The semiempirical model follows the experimental data well and accounts for the charge recombination as well as the charge multiplication and chamber stem effects.
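    The linear 1/I-versus-1/V extrapolation underlying this analysis can be sketched with synthetic data; the constants below are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

# Assumed near-saturation behavior: 1/I = 1/I_sat + k/(I_sat * V),
# so extrapolating the line to 1/V -> 0 yields the saturation current.
I_sat_true, k = 100.0, 5.0                          # arbitrary units
V = np.array([100.0, 150.0, 200.0, 300.0, 400.0])   # polarizing voltages
I = I_sat_true / (1.0 + k / V)                      # synthetic chamber currents

# Least-squares line through (1/V, 1/I); the intercept is 1/I_sat
slope, intercept = np.polyfit(1.0 / V, 1.0 / I, 1)
I_sat_est = 1.0 / intercept

# Collection efficiency at the highest working voltage
f = I[-1] / I_sat_est
```

The excess current the paper reports shows up as measured 1/I points falling below this fitted line at the highest voltages, which is why the simple two-voltage extrapolation can overestimate the saturation current.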

  3. Effects of Droplet Size on Intrusion of Sub-Surface Oil Spills

    NASA Astrophysics Data System (ADS)

    Adams, Eric; Chan, Godine; Wang, Dayang

    2014-11-01

    We explore effects of droplet size on droplet intrusion and transport in sub-surface oil spills. Negatively buoyant glass beads released continuously into a stratified ambient simulate oil droplets in a rising multiphase plume, and distributions of settled beads are used to infer signatures of surfacing oil. Initial tests used quiescent conditions, while ongoing tests simulate currents by towing the source and a bottom sled. Without current, deposited beads have a Gaussian distribution, with variance increasing with decreasing particle size. Distributions agree with a model assuming first-order particle loss from an intrusion layer of constant thickness and an empirically determined flow rate. With current, deposited beads display a parabolic distribution similar to that expected from a source in uniform flow; we are currently comparing observed distributions with similar analytical models. Because chemical dispersants have been used to reduce oil droplet size, our study provides one measure of their effectiveness. Results are applied to conditions from the 'Deep Spill' field experiment and the recent Deepwater Horizon oil spill, and are being used to provide "inner boundary conditions" for subsequent far-field modeling of these events. This research was made possible by grants from Chevron Energy Technology Co., through the Chevron-MITEI University Partnership Program, and BP/The Gulf of Mexico Research Initiative, GISR.

  4. Simulation of Electromigration Based on Resistor Networks

    NASA Astrophysics Data System (ADS)

    Patrinos, Anthony John

    A two dimensional computer simulation of electromigration based on resistor networks was designed and implemented. The model utilizes a realistic grain structure generated by the Monte Carlo method and takes specific account of the local effects through which electromigration damage progresses. The dynamic evolution of the simulated thin film is governed by the local current and temperature distributions. The current distribution is calculated by superimposing a two dimensional electrical network on the lattice whose nodes correspond to the particles in the lattice and the branches to interparticle bonds. Current is assumed to flow from site to site via nearest neighbor bonds. The current distribution problem is solved by applying Kirchhoff's rules on the resulting electrical network. The calculation of the temperature distribution in the lattice proceeds by discretizing the partial differential equation for heat conduction, with appropriate material parameters chosen for the lattice and its defects. SEReNe (for Simulation of Electromigration using Resistor Networks) was tested by applying it to common situations arising in experiments with real films with satisfactory results. Specifically, the model successfully reproduces the expected grain size, line width and bamboo effects, the lognormal failure time distribution and the relationship between current density exponent and current density. It has also been modified to simulate temperature ramp experiments but with mixed, in this case, results.
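    The nodal-analysis step at the core of such a simulation, applying Kirchhoff's current law on a conductance network, can be sketched as follows; the tiny network and its values are illustrative, not the SEReNe code:

```python
import numpy as np

# Illustrative network: 4 nodes, edges given as conductances g = 1/R.
# Node 0 is held at 1 V, node 3 is grounded; solve KCL for the interior nodes.
edges = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 0.5, (2, 3): 2.0, (1, 3): 0.5}

n = 4
G = np.zeros((n, n))                 # conductance (weighted Laplacian) matrix
for (i, j), g in edges.items():
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

fixed = {0: 1.0, 3: 0.0}             # boundary (electrode) voltages
free = [i for i in range(n) if i not in fixed]

# Partitioned system for the free nodes: G_ff v_f = -G_fb v_b
A = G[np.ix_(free, free)]
b = -G[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
v_free = np.linalg.solve(A, b)

v = np.zeros(n)
for i, val in fixed.items():
    v[i] = val
v[free] = v_free

# Branch currents then follow from Ohm's law: I_ij = g_ij * (v_i - v_j)
```

In the full simulation this solve is repeated as bonds fail, so the current redistributes over the remaining network and drives further electromigration damage.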

  5. An information-motivation-behavioral skills model of adherence to antiretroviral therapy.

    PubMed

    Fisher, Jeffrey D; Fisher, William A; Amico, K Rivet; Harman, Jennifer J

    2006-07-01

    HIV-positive persons who do not maintain consistently high levels of adherence to often complex and toxic highly active antiretroviral therapy (HAART) regimens may experience therapeutic failure and deterioration of health status and may develop multidrug-resistant HIV that can be transmitted to uninfected others. The current analysis conceptualizes social and psychological determinants of adherence to HAART among HIV-positive individuals. The authors propose an information-motivation-behavioral skills (IMB) model of HAART adherence that assumes that adherence-related information, motivation, and behavioral skills are fundamental determinants of adherence to HAART. According to the model, adherence-related information and motivation work through adherence-related behavioral skills to affect adherence to HAART. Empirical support for the IMB model of adherence is presented, and its application in adherence-promotion intervention efforts is discussed.

  6. PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1986-01-01

    The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method due to the available modeling error evaluation techniques developed. The modeling accuracy is evaluated by the model-user in the determination of an approximative boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore in this paper, the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).

  7. Effect of quantum learning model in improving creativity and memory

    NASA Astrophysics Data System (ADS)

    Sujatmika, S.; Hasanah, D.; Hakim, L. L.

    2018-04-01

    Quantum learning is a combination of many interactions that exist during learning. This model can be applied through current, interesting, contextual, and repetitive topics, and by giving students opportunities to demonstrate their abilities. The quantum learning model is based on left-brain and right-brain theory, the triune brain model, visual, auditory, and kinesthetic modalities, games, symbols, and holistic and experiential learning theory. Creativity plays an important role in success in the working world: it offers alternative ways to solve problems or to create something new. Good memory likewise plays a role in the success of learning. Through quantum learning, students use all of their abilities, become interested in learning, and create their own ways of memorizing the concepts being studied. From this idea, the researchers assume that the quantum learning model can improve students' creativity and memory.

  8. How to misinterpret photosynthesis measurements and develop incorrect ecosystem models

    NASA Astrophysics Data System (ADS)

    Prentice, Iain Colin

    2017-04-01

    It is becoming widely accepted that current land ecosystem models (dynamic global vegetation models and land-surface models) rest on shaky foundations and are in need of rebuilding, taking advantage of huge data resources that were hardly conceivable when these models were first developed. It has also become almost a truism that next-generation model development should involve observationalists, experimentalists and modellers working more closely together. What is currently lacking, however, is open discussion of specific problems in the structure of current models, and how they might have arisen. Such a discussion is important if the same mistakes are not to be perpetuated in a new generation of models. I will focus on the central processes governing leaf-level gas exchange, which powers the land carbon and water cycles. I will show that a broad area of confusion exists - as much in the empirical ecophysiological literature as in modelling research - concerning the interpretation of gas-exchange measurements and (especially) their scaling up from the narrow temporal and spatial scales of laboratory measurements to the broad-scale research questions linked to global environmental change. In particular, I will provide examples (drawing on a variety of published and unpublished observations) that illustrate the benefits of taking a "plant-centred" view, showing how consideration of optimal acclimation challenges many (often unstated) assumptions about the relationship of plant and ecosystem processes to environmental variation. (1) Photosynthesis is usually measured at light saturation (implying Rubisco limitation), leading to temperature and CO2 responses that are completely different from those of gross primary production (GPP) under field conditions.
(2) The actual rate of electron transport under field conditions depends strongly on the intrinsic quantum efficiency, which is temperature-independent (within a broad range) and unrelated to the maximum electron transport rate. (3) Because leaf nitrogen (per unit area) correlates with photosynthetic capacity, it is often assumed that the former controls the latter. But this correlation is often weak and causality appears to be the other way round. (4) Ecosystem respiration does not increase during daytime, but the standard methods of flux partitioning assume that it does. The result is a systematic bias in gross primary production "data" derived from flux measurements. (5) Stomatal conductance and assimilation rate are closely coupled. Neglect of this coupling can lead to incorrect interpretations of stomatal behaviour. Consideration of this coupling, however, leads to strongly supported predictions of the ratio of leaf-internal to ambient carbon dioxide. (6) The photosynthetic capacities for carboxylation and electron transport vary spatially and seasonally (which most models neglect) but not systematically with plant functional types (as most models assume). (7) "Down-regulation" of photosynthetic capacity (and even leaf nitrogen) with enhanced carbon dioxide represents optimal acclimation. (8) The fertilization effect of enhanced carbon dioxide is not universally dependent on nutrient supply, and can account for the observed land carbon sink. I will end on an optimistic note: rapid recent developments in formalizing optimality hypotheses, and their translation into explicit, quantitative predictions that can be tested using measurements (available through data synthesis or new experiments and measurement campaigns), offer extraordinary promise for the building of new and more secure foundations for terrestrial ecosystem science.

  9. Multiwavelength Polarization of Rotation-Powered Pulsars

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.; Kalapotharakos, Constantinos

    2017-01-01

    Polarization measurements provide strong constraints on models for emission from rotation-powered pulsars. We present multiwavelength polarization predictions showing that measurements over a range of frequencies can be particularly important for constraining the emission location, radiation mechanisms, and system geometry. The results assume a generic model for emission from the outer magnetosphere and current sheet in which optical to hard X-ray emission is produced by synchrotron radiation (SR) from electron-positron pairs and gamma-ray emission is produced by curvature radiation (CR) or SR from accelerating primary electrons. The magnetic field structure of a force-free magnetosphere is assumed and the phase-resolved and phase-averaged polarization is calculated in the frame of an inertial observer. We find that large position angle (PA) swings and deep depolarization dips occur during the light-curve peaks in all energy bands. For synchrotron emission, the polarization characteristics are strongly dependent on photon emission radius with larger, nearly 180°, PA swings for emission outside the light cylinder (LC), as the line of sight crosses the current sheet. The phase-averaged polarization degree for SR is less than 10% and around 20% for emission starting inside and outside the LC, respectively, while the polarization degree for CR is much larger, up to 40%-60%. Observing a sharp increase in polarization degree and a change in PA at the transition between X-ray and gamma-ray spectral components would indicate that CR is the gamma-ray emission mechanism.

  10. Multiwavelength Polarization of Rotation-powered Pulsars

    NASA Astrophysics Data System (ADS)

    Harding, Alice K.; Kalapotharakos, Constantinos

    2017-05-01

    Polarization measurements provide strong constraints on models for emission from rotation-powered pulsars. We present multiwavelength polarization predictions showing that measurements over a range of frequencies can be particularly important for constraining the emission location, radiation mechanisms, and system geometry. The results assume a generic model for emission from the outer magnetosphere and current sheet in which optical to hard X-ray emission is produced by synchrotron radiation (SR) from electron-positron pairs and γ-ray emission is produced by curvature radiation (CR) or SR from accelerating primary electrons. The magnetic field structure of a force-free magnetosphere is assumed and the phase-resolved and phase-averaged polarization is calculated in the frame of an inertial observer. We find that large position angle (PA) swings and deep depolarization dips occur during the light-curve peaks in all energy bands. For synchrotron emission, the polarization characteristics are strongly dependent on photon emission radius with larger, nearly 180°, PA swings for emission outside the light cylinder (LC) as the line of sight crosses the current sheet. The phase-averaged polarization degree for SR is less than 10% and around 20% for emission starting inside and outside the LC, respectively, while the polarization degree for CR is much larger, up to 40%-60%. Observing a sharp increase in polarization degree and a change in PA at the transition between X-ray and γ-ray spectral components would indicate that CR is the γ-ray emission mechanism.

  11. Multiwavelength Polarization of Rotation-powered Pulsars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Alice K.; Kalapotharakos, Constantinos

    Polarization measurements provide strong constraints on models for emission from rotation-powered pulsars. We present multiwavelength polarization predictions showing that measurements over a range of frequencies can be particularly important for constraining the emission location, radiation mechanisms, and system geometry. The results assume a generic model for emission from the outer magnetosphere and current sheet in which optical to hard X-ray emission is produced by synchrotron radiation (SR) from electron–positron pairs and γ-ray emission is produced by curvature radiation (CR) or SR from accelerating primary electrons. The magnetic field structure of a force-free magnetosphere is assumed and the phase-resolved and phase-averaged polarization is calculated in the frame of an inertial observer. We find that large position angle (PA) swings and deep depolarization dips occur during the light-curve peaks in all energy bands. For synchrotron emission, the polarization characteristics are strongly dependent on photon emission radius with larger, nearly 180°, PA swings for emission outside the light cylinder (LC) as the line of sight crosses the current sheet. The phase-averaged polarization degree for SR is less than 10% and around 20% for emission starting inside and outside the LC, respectively, while the polarization degree for CR is much larger, up to 40%–60%. Observing a sharp increase in polarization degree and a change in PA at the transition between X-ray and γ-ray spectral components would indicate that CR is the γ-ray emission mechanism.

  12. Complexity and demographic explanations of cumulative culture.

    PubMed

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not allow us to discriminate proper from improper definitions of complexity, our robustness analyses call into question the force of recent demographic explanations of particular episodes of cultural change.

  13. GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone

    USGS Publications Warehouse

    Stuart, W.D.

    2001-01-01

    Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
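    The arithmetic behind such recurrence estimates, i.e. the time needed to re-accumulate an 1811-1812-class coseismic slip at the geodetically inferred interseismic rate, can be sketched with illustrative numbers (assumed values, not those of either paper):

```python
# Illustrative recurrence-time calculation (all values assumed):
# recurrence time = coseismic slip / interseismic slip rate.
coseismic_slip_m = 8.0        # hypothetical slip in a great New Madrid event
slip_rate_mm_per_yr = 2.0     # hypothetical interseismic deep-slip rate

recurrence_yr = coseismic_slip_m * 1000.0 / slip_rate_mm_per_yr
```

The abstract's point follows directly from this relation: a model with a smaller slipping area must have a higher slip rate to fit the same GPS velocities, which shortens the inferred recurrence time.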

  14. Past, present and future distributions of an Iberian Endemic, Lepus granatensis: ecological and evolutionary clues from species distribution models.

    PubMed

    Acevedo, Pelayo; Melo-Ferreira, José; Real, Raimundo; Alves, Paulo Célio

    2012-01-01

    The application of species distribution models (SDMs) in ecology and conservation biology is increasing and assuming an important role, mainly because they can be used to hindcast past and predict current and future species distributions. However, the accuracy of SDMs depends on the quality of the data and on appropriate theoretical frameworks. In this study, comprehensive data on the current distribution of the Iberian hare (Lepus granatensis) were used to i) determine the species' ecogeographical constraints, ii) hindcast a climatic model for the last glacial maximum (LGM), relating it to inferences derived from molecular studies, and iii) calibrate a model to assess the species' future distribution trends (up to 2080). Our results showed that the climatic factor (in its pure effect and when it is combined with the land-cover factor) is the most important descriptor of the current distribution of the Iberian hare. In addition, the model's output was a reliable index of the local probability of species occurrence, which is a valuable tool to guide species management decisions and conservation planning. Climatic potential obtained for the LGM was combined with molecular data and the results suggest that several glacial refugia may have existed for the species within the major Iberian refugium. Finally, a high probability of occurrence of the Iberian hare in the current species range and a northward expansion were predicted for the future. Given its current environmental envelope and evolutionary history, we discuss the macroecology of the Iberian hare and its sensitivity to climate change.

  15. Transient Response in a Dendritic Neuron Model for Current Injected at One Branch

    PubMed Central

    Rinzel, John; Rall, Wilfrid

    1974-01-01

    Mathematical expressions are obtained for the response function corresponding to an instantaneous pulse of current injected to a single dendritic branch in a branched dendritic neuron model. The theoretical model assumes passive membrane properties and the equivalent cylinder constraint on branch diameters. The response function when used in a convolution formula enables one to compute the voltage transient at any specified point in the dendritic tree for an arbitrary current injection at a given input location. A particular numerical example, for a brief current injection at a branch terminal, illustrates the attenuation and delay characteristics of the depolarization peak as it spreads throughout the neuron model. In contrast to the severe attenuation of voltage transients from branch input sites to the soma, the fraction of total input charge actually delivered to the soma and other trees is calculated to be about one-half. This fraction is independent of the input time course. Other numerical examples, which compare a branch terminal input site with a soma input site, demonstrate that, for a given transient current injection, the peak depolarization is not proportional to the input resistance at the injection site and, for a given synaptic conductance transient, the effective synaptic driving potential can be significantly reduced, resulting in less synaptic current flow and charge, for a branch input site. Also, for the synaptic case, the two inputs are compared on the basis of the excitatory post-synaptic potential (EPSP) seen at the soma and the total charge delivered to the soma. PMID:4424185

  16. The Plane-parallel Albedo Bias of Liquid Clouds from MODIS Observations

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Cahalan, Robert F.; Platnick, Steven

    2007-01-01

    In our most advanced modeling tools for climate change prediction, namely General Circulation Models (GCMs), the schemes used to calculate the budget of solar and thermal radiation commonly assume that clouds are horizontally homogeneous at scales as large as a few hundred kilometers. However, this assumption, used for convenience, computational speed, and lack of knowledge on cloud small scale variability, leads to erroneous estimates of the radiation budget. This paper provides a global picture of the solar radiation errors at scales of approximately 100 km due to warm (liquid phase) clouds only. To achieve this, we use cloud retrievals from the MODIS instrument on the Terra and Aqua satellites, along with atmospheric and surface information, as input into a GCM-style radiative transfer algorithm. Since the MODIS product contains information on cloud variability below 100 km we can run the radiation algorithm both for the variable and the (assumed) homogeneous clouds. The difference between these calculations for reflected or transmitted solar radiation constitutes the bias that GCMs would commit if they were able to perfectly predict the properties of warm clouds, but then assumed they were homogeneous for radiation calculations. We find that the global average of this bias is approximately 2-3 times larger in terms of energy than the additional amount of thermal energy that would be trapped if we were to double carbon dioxide from current concentrations. We should therefore make a greater effort to predict horizontal cloud variability in GCMs and account for its effects in radiation calculations.

  17. Multi-regime transport model for leaching behavior of heterogeneous porous materials.

    PubMed

    Sanchez, F; Massry, I W; Eighmy, T; Kosson, D S

    2003-01-01

    Utilization of secondary materials in civil engineering applications (e.g. as substitutes for natural aggregates or binder constituents) requires assessment of the physical and environmental properties of the product. Environmental assessment often necessitates evaluation of the potential for constituent release through leaching. Currently, most leaching models used to estimate long-term field performance assume that the species of concern is uniformly dispersed in a homogeneous porous material. However, waste materials are often comprised of distinct components such as coarse or fine aggregates in a cement concrete or waste encapsulated in a stabilized matrix. The specific objectives of the research presented here were to (1) develop a one-dimensional, multi-regime transport model (i.e. MRT model) to describe the release of species from heterogeneous porous materials and, (2) evaluate simple limit cases using the model for species when release is not dependent on pH. Two different idealized model systems were considered: (1) a porous material contaminated with the species of interest and containing inert aggregates and, (2) a porous material containing the contaminant of interest only in the aggregates. The effects of three factors on constituent release were examined: (1) volume fraction of material occupied by the aggregates compared to a homogeneous porous material, (2) aggregate size and, (3) differences in mass transfer rates between the binder and the aggregates. Simulation results confirmed that assuming homogeneous materials to evaluate the release of contaminants from porous waste materials may result in erroneous long-term field performance assessment.

  18. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  19. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  20. Why is it so difficult to determine the yield of indoor cannabis plantations? A case study from the Netherlands.

    PubMed

    Vanhove, Wouter; Maalsté, Nicole; Van Damme, Patrick

    2017-07-01

    Together, the Netherlands and Belgium are the largest indoor cannabis producing countries in Europe. In both countries, the legal prosecution of convicted illicit cannabis growers usually includes recovery of the profits gained. However, it is not easy to estimate those profits reliably, given the wide range of factors that determine indoor cannabis yields and eventual selling prices. In the Netherlands, a reference model has been used since 2005 that assumes a constant yield (g) per plant for a given indoor cannabis plant density. Later, in 2011, a new model was developed in Belgium for yield estimation of Belgian indoor cannabis plantations that assumes a constant yield per m² of growth surface, provided that a number of growth conditions are met. Indoor cannabis plantations in the Netherlands and Belgium share similar technical characteristics. As a result, both of the aforementioned models should produce similar yield estimates for plantations in either country. By means of a real-case study from the Netherlands, we show that the reliability of both models is hampered by a number of flaws and unmet preconditions. The Dutch model is based on a regression equation that makes use of ill-defined plant development stages, assumes a linear plant growth, does not discriminate between different plantation size categories and does not include other important yield determining factors (such as fertilization). The Belgian model addresses some of the latter shortcomings, but its applicability is constrained by a number of pre-conditions including plantation size between 50 and 1000 plants; cultivation in individual pots with peat soil; 600W (electrical power) assimilation lamps; constant temperature between 20°C and 30°C; adequate fertilizer application and plants unaffected by pests and diseases.
    The judiciaries in both the Netherlands and Belgium require robust indoor cannabis yield models for adequate legal prosecution of illicit indoor cannabis growing operations. To that aim, the current models should be optimized, and the validity of their application should be examined case by case. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Reformulated space-charge-limited current model and its application to disordered organic systems

    NASA Astrophysics Data System (ADS)

    Woellner, Cristiano F.; Freire, José A.

    2011-02-01

    We have reformulated a traditional model used to describe the current-voltage dependence of low mobility materials sandwiched between planar electrodes by using the quasi-electrochemical potential as the fundamental variable instead of the local electric field or the local charge carrier density. This allows the material density-of-states to enter explicitly in the equations and dispenses with the need to assume a particular type of contact. The diffusion current is included and as a consequence the current-voltage dependence obtained covers, with increasing bias, the diffusion limited current, the space-charge limited current, and the injection limited current regimes. The generalized Einstein relation and the field and density dependent mobility are naturally incorporated into the formalism; these two points being of particular relevance for disordered organic semiconductors. The reformulated model can be applied to any material where the carrier density and the mobility may be written as a function of the quasi-electrochemical potential. We applied it to the textbook example of a nondegenerate, constant mobility material and showed how a single dimensionless parameter determines the form of the I(V) curve. We obtained integral expressions for the carrier density and for the mobility as a function of the quasi-electrochemical potential for a Gaussianly disordered organic material and found the general form of the I(V) curve for such materials over the full range of bias, showing how the energetic disorder alone can give rise, in the space-charge limited current regime, to an I ∝ V^n dependence with an exponent n larger than 2.
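    For the textbook case of a nondegenerate, constant-mobility material, the space-charge-limited regime of such an I(V) curve reduces to the familiar trap-free Mott-Gurney law with n = 2. A minimal numeric sketch of that limiting case (the material parameters below are illustrative assumptions, not values from the paper):

    ```python
    # Mott-Gurney law for trap-free space-charge-limited current (SCLC):
    #   J = (9/8) * eps * mu * V^2 / L^3
    # Illustrative sketch only; film thickness and mobility are assumed.

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def mott_gurney_current_density(voltage, thickness, mobility, eps_r=3.0):
        """Trap-free SCLC current density (A/m^2) for a film of given thickness (m)."""
        eps = eps_r * EPS0
        return 9.0 / 8.0 * eps * mobility * voltage**2 / thickness**3

    # Quadratic scaling: doubling the bias quadruples the current (n = 2).
    j1 = mott_gurney_current_density(2.0, 100e-9, 1e-9)
    j2 = mott_gurney_current_density(4.0, 100e-9, 1e-9)
    print(j2 / j1)  # -> 4.0
    ```

    The n > 2 exponents discussed in the abstract arise when Gaussian energetic disorder makes the mobility density dependent, which this trap-free sketch deliberately omits.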

  2. Magneto-hydrodynamic modeling of gas discharge switches

    NASA Astrophysics Data System (ADS)

    Doiphode, P.; Sakthivel, N.; Sarkar, P.; Chaturvedi, S.

    2002-12-01

    We have performed one-dimensional, time-dependent magneto-hydrodynamic modeling of fast gas-discharge switches. The model has been applied to both high- and low-pressure switches, involving a cylindrical argon-filled cavity. It is assumed that the discharge is initiated in a small channel near the axis of the cylinder. Joule heating in this channel rapidly raises its temperature and pressure. This drives a radial shock wave that heats and ionizes the surrounding low-temperature region, resulting in progressive expansion of the current channel. Our model is able to reproduce this expansion. However, significant difference of detail is observed, as compared with a simple model reported in the literature. In this paper, we present details of our simulations, a comparison with results from the simple model, and a physical interpretation for these differences. This is a first step towards development of a detailed 2-D model for such switches.

  3. The structure of evaporating and combusting sprays: Measurements and predictions

    NASA Technical Reports Server (NTRS)

    Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.

    1982-01-01

    An apparatus was constructed to provide measurements in open sprays with no zones of recirculation, in order to provide well-defined conditions for use in evaluating spray models. Measurements were completed in a gas jet, in order to test experimental methods, and are currently in progress for nonevaporating sprays. A locally homogeneous flow (LHF) model where interphase transport rates are assumed to be infinitely fast; a separated flow (SF) model which allows for finite interphase transport rates but neglects effects of turbulent fluctuations on drop motion; and a stochastic SF model which considers effects of turbulent fluctuations on drop motion were evaluated using existing data on particle-laden jets. The LHF model generally overestimates rates of particle dispersion while the SF model underestimates dispersion rates. The stochastic SF model yields satisfactory predictions except at high particle mass loadings where effects of turbulence modulation may have caused the model to overestimate turbulence levels.

  4. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Technical Reports Server (NTRS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-01-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  5. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Astrophysics Data System (ADS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  6. Analytical model of a corona discharge from a conical electrode under saturation

    NASA Astrophysics Data System (ADS)

    Boltachev, G. Sh.; Zubarev, N. M.

    2012-11-01

    Exact partial solutions are found for the electric field distribution in the outer region of a stationary unipolar corona discharge from an ideal conical needle in the space-charge-limited current mode with allowance for the electric field dependence of the ion mobility. It is assumed that only the very tip of the cone is responsible for the discharge, i.e., that the ionization zone is a point. The solutions are obtained by joining the spherically symmetric potential distribution in the drift space and the self-similar potential distribution in the space-charge-free region. Such solutions are outside the framework of the conventional Deutsch approximation, according to which the space charge insignificantly influences the shape of equipotential surfaces and electric lines of force. The dependence of the corona discharge saturation current on the apex angle of the conical electrode and the applied potential difference is derived. A simple analytical model is suggested that describes drift in the point-plane electrode geometry under saturation as a superposition of two exact solutions for the field potential. In terms of this model, the angular distribution of the current density over the massive plane electrode is derived, which agrees well with Warburg's empirical law.
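    Warburg's empirical law, against which the derived angular distribution is compared, is commonly quoted as j(θ) ≈ j(0)·cos^m θ with an exponent m near 5, holding out to roughly 60° from the gap axis. A small sketch under that assumption (illustrative values only):

    ```python
    # Warburg's empirical law for the current-density distribution on the
    # plane electrode of a point-plane corona gap: j(theta) ≈ j(0)*cos^m(theta).
    # The exponent m ≈ 5 is the commonly quoted approximation, assumed here.
    import math

    def warburg_current_density(theta_deg, j0=1.0, m=5.0):
        """Current density at angle theta (degrees) from the gap axis."""
        return j0 * math.cos(math.radians(theta_deg)) ** m

    print(round(warburg_current_density(0.0), 3))   # -> 1.0  (on axis)
    print(round(warburg_current_density(60.0), 3))  # -> 0.031
    ```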

  7. Calculating tracer currents through narrow ion channels: Beyond the independent particle model.

    PubMed

    Coalson, Rob D; Jasnow, David

    2018-06-01

    Discrete state models of single-file ion permeation through a narrow ion channel pore are employed to analyze the ratio of forward to backward tracer current. Conditions under which the well-known Ussing formula for this ratio holds are explored in systems where ions do not move independently through the channel. When detailed balance is built into the rate constants in such a way that the Nernst equation is satisfied under equilibrium conditions (equal rates of forward and backward permeation events), it is found that in a model where only one ion can occupy the channel at a time, the Ussing formula is always obeyed for any number of binding sites, reservoir concentrations of the ions, and electric potential difference across the membrane which the ion channel spans, independent of the internal details of the permeation pathway. However, numerical analysis demonstrates that when multiple ions can occupy the channel at once, the nonequilibrium forward/backward tracer flux ratio deviates from the prediction of the Ussing model. Assuming an appropriate effective potential experienced by ions in the channel, we provide explicit formulae for the rate constants in these models. © 2018 IOP Publishing Ltd.
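    The Ussing flux ratio and the Nernst condition invoked above can be illustrated with a short sketch: for independently moving tracer ions the efflux/influx ratio is (c_in/c_out)·exp(zFV_m/RT), which equals one exactly at the Nernst potential. The concentrations and sign conventions below are illustrative assumptions, not values from the paper.

    ```python
    # Ussing flux-ratio for independent ion movement, and the Nernst potential
    # at which the ratio is exactly 1 (no net tracer flux). Illustrative sketch.
    import math

    R, F = 8.314, 96485.0  # gas constant (J/mol/K), Faraday constant (C/mol)

    def nernst_potential(c_out, c_in, z=1, T=298.15):
        """Equilibrium (Nernst) membrane potential in volts."""
        return (R * T) / (z * F) * math.log(c_out / c_in)

    def ussing_ratio(c_out, c_in, Vm, z=1, T=298.15):
        """Predicted efflux/influx ratio for independently moving tracer ions."""
        return (c_in / c_out) * math.exp(z * F * Vm / (R * T))

    # K+-like concentrations (mM): high inside, low outside.
    V_eq = nernst_potential(c_out=5.0, c_in=140.0)
    print(round(ussing_ratio(5.0, 140.0, V_eq), 6))  # -> 1.0
    ```

    The paper's point is that multi-ion occupancy breaks exactly this prediction, even when each rate constant individually respects detailed balance.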

  8. Enhanced dielectric standoff and mechanical failure in field-structured composites

    NASA Astrophysics Data System (ADS)

    Martin, James E.; Tigges, Chris P.; Anderson, Robert A.; Odinek, Judy

    1999-09-01

    We report dielectric breakdown experiments on electric-field-structured composites of high-dielectric-constant BaTiO3 particles in an epoxy resin. These experiments show a significant increase in the dielectric standoff strength perpendicular to the field structuring direction, relative to control samples consisting of randomly dispersed particles. To understand the relation of this observation to microstructure, we apply a simple resistor-short breakdown model to three-dimensional composite structures generated from a dynamical simulation. In this breakdown model the composite material is assumed to conduct primarily through particle contacts, so the simulated structures are mapped onto a resistor network where the center of mass of each particle is a node that is connected to neighboring nodes by resistors of fixed resistance that irreversibly short to perfect conductors when the current reaches a threshold value. This model gives relative breakdown voltages that are in good agreement with experimental results. Finally, we consider a primitive model of the mechanical strength of a field-structured composite material, which is a current-driven, conductor-insulator fuse model. This model leads to a macroscopic fusing behavior and can be related to mechanical failure of the composite.

  9. Everglades Landscape Model: Integrated Assessment of Hydrology, Biogeochemistry, and Biology

    NASA Astrophysics Data System (ADS)

    Fitz, H. C.; Wang, N.; Sklar, F. H.

    2002-05-01

    Water management infrastructure and operations have fragmented the greater Everglades into separate, impounded basins, altering flows and hydropatterns. A significant area of this managed system has experienced anthropogenic eutrophication. This combination of altered hydrology and water quality has interacted to degrade vegetative habitats and other ecological characteristics of the Everglades. One of the modeling tools to be used in developing restoration alternatives is the Everglades Landscape Model (ELM), a process-based, spatially explicit simulation of ecosystem dynamics across a heterogeneous, 10,000 km² region. The model has been calibrated to capture hydrologic and surface water quality dynamics across most of the Everglades landscape over decadal time scales. We evaluated phosphorus loading throughout the Everglades system under two base scenarios. The 1995 base case assumed current management operations, with phosphorus inflow concentrations fixed at their long term, historical average. The 2050 base case assumed future modifications in water and nutrient management, with all managed inflows to the Everglades having reduced phosphorus concentrations. In an example indicator subregion that currently is highly eutrophic, the 31-yr simulations predicted that desirable periphyton and macrophyte communities were maintained under the 2050 base case, whereas in the 1995 base case, periphyton biomass and production decreased to negligible levels and macrophytes became extremely dense. The negative periphyton response in the 1995 base case was due to high phosphorus loads and rapid macrophyte growth that shaded this algal community. Along an existing 11 km eutrophication gradient, the model indicated that the 2050 base case had ecologically significant reductions in phosphorus accumulation compared to the 1995 base case.
Indicator regions (in Everglades National Park) distant from phosphorus inflow points also exhibited reductions in phosphorus accumulation under the 2050 base case, albeit to a lesser extent due to its distance from phosphorus inflows. The ELM fills a critical information need in Everglades management, and has become an accepted tool in evaluating scenarios of potential restoration of the natural system.

  10. Dust Density Distribution and Imaging Analysis of Different Ice Lines in Protoplanetary Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinilla, P.; Pohl, A.; Stammler, S. M.

    Recent high angular resolution observations of protoplanetary disks at different wavelengths have revealed several kinds of structures, including multiple bright and dark rings. Embedded planets are the most common explanation for such structures, but there are alternative models capable of shaping the dust in rings as observed. We assume a disk around a Herbig star and investigate the effect that ice lines have on the dust evolution, following the growth, fragmentation, and dynamics of multiple dust size particles, covering from 1 μm to 2 m sized objects. We use simplified prescriptions of the fragmentation velocity threshold, which is assumed to change radially at the location of one, two, or three ice lines. We assume changes at the radial location of main volatiles, specifically H₂O, CO₂, and NH₃. Radiative transfer calculations are done using the resulting dust density distributions in order to compare with current multiwavelength observations. We find that the structures in the dust density profiles and radial intensities at different wavelengths strongly depend on the disk viscosity. A clear gap of emission can be formed between ice lines and be surrounded by ring-like structures, in particular between the H₂O and CO₂ (or CO). The gaps are expected to be shallower and narrower at millimeter emission than at near-infrared, opposite to model predictions of particle trapping. In our models, the total gas surface density is not expected to show strong variations, in contrast to other gap-forming scenarios such as embedded giant planets or radial variations of the disk viscosity.

  11. Thoughts on Reforming Professional Bureaucracies. Summary.

    ERIC Educational Resources Information Center

    Lynch, Patrick D.

    1978-01-01

    The humanization of bureaucracies is essential in modern society. Traditionally, the training of administrators has emphasized three models of organizations. The first is the productivity model, the second model assumes that fulfilling the needs of organizational members will increase client satisfaction, and the third assumes a stable environment…

  12. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    PubMed Central

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
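    The group LASSO penalty at the heart of such a procedure acts through block soft-thresholding, which either shrinks or entirely removes a SNP's coefficient group (e.g., its additive and dominance effects together). A minimal sketch of that proximal operator, not the LLARRMA-dawg implementation; the numeric values are hypothetical:

    ```python
    # Block soft-thresholding: the proximal operator of lam * ||beta_g||_2,
    # which drops a whole coefficient group (additive + dominance effects of
    # one SNP) when its Euclidean norm falls below lam. Illustrative sketch.
    import math

    def group_soft_threshold(beta_group, lam):
        """Shrink a coefficient group toward zero; zero it out entirely
        when its Euclidean norm is below lam."""
        norm = math.sqrt(sum(b * b for b in beta_group))
        if norm <= lam:
            return [0.0] * len(beta_group)
        scale = 1.0 - lam / norm
        return [scale * b for b in beta_group]

    # A weak SNP's additive/dominance pair is dropped as a unit...
    print(group_soft_threshold([0.3, 0.1], lam=0.5))  # -> [0.0, 0.0]
    # ...while a strong SNP is only shrunk, with both components kept.
    print(group_soft_threshold([3.0, 4.0], lam=0.5))
    ```

    Treating the two effect types as one group is what lets the method select or discard a SNP as a whole rather than its additive part alone.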

  13. On the kinetics of anaerobic power

    PubMed Central

    2012-01-01

    Background: This study investigated two different mathematical models for the kinetics of anaerobic power. Model 1 assumes that the work power is linear with the work rate, while Model 2 assumes a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. In order to test these models, a cross country skier ran with poles on a treadmill at different exercise intensities. The aerobic power, based on the measured oxygen uptake, was used as input to the models, whereas the simulated blood lactate concentration was compared with experimental results. Thereafter, the metabolic rate from phosphocreatine break down was calculated theoretically. Finally, the models were used to compare phosphocreatine break down during continuous and interval exercises. Results: Good similarity was found between experimental and simulated blood lactate concentration during steady state exercise intensities. The measured blood lactate concentrations were lower than simulated for intensities above the lactate threshold, but higher than simulated during recovery after high intensity exercise when the simulated lactate concentration was averaged over the whole lactate space. This fit was improved when the simulated lactate concentration was separated into two compartments; muscles + internal organs and blood. Model 2 gave a better behavior of alactic energy than Model 1 when compared against invasive measurements presented in the literature. During continuous exercise, Model 2 showed that the alactic energy storage decreased with time, whereas Model 1 showed a minimum value when steady state aerobic conditions were achieved. During interval exercise the two models showed similar patterns of alactic energy. Conclusions: The current study provides useful insight on the kinetics of anaerobic power.
Overall, our data indicate that blood lactate levels can be accurately modeled during steady state, and suggests a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. PMID:22830586

  14. The effects of vent location, event scale and time forecasts on pyroclastic density current hazard maps at Campi Flegrei caldera (Italy)

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano

    2017-09-01

    This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.

  15. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
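
    A minimal sketch of the "eligibility adjustment" idea summarized above, assuming a generic TD(lambda) value update in which a hypothetical model-based weight scales the eligibility trace. The toy chain task, parameter values, and names are illustrative, not the authors' two-stage implementation.

    ```python
    import numpy as np

    def td_lambda_update(V, trace, s, r, s_next, alpha=0.1, gamma=0.95,
                         lam=0.9, w_model=0.5):
        """One TD update in which a model-based weight w_model (hypothetical
        'eligibility adjustment' parameter) scales the eligibility trace."""
        delta = r + gamma * V[s_next] - V[s]
        trace *= gamma * lam * w_model   # model-based weighting of the trace
        trace[s] += 1.0
        V += alpha * delta * trace
        return V, trace

    # Toy 3-state chain: 0 -> 1 -> 2, reward 1 on the final transition.
    V = np.zeros(3)
    trace = np.zeros(3)
    for _ in range(200):
        trace[:] = 0.0                       # reset trace each episode
        V, trace = td_lambda_update(V, trace, 0, 0.0, 1)
        V, trace = td_lambda_update(V, trace, 1, 1.0, 2)
    ```

    Setting w_model toward 0 recovers nearly model-free one-step TD credit assignment; values near 1 propagate reward further back, which is the kind of modulation the proposed model captures.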

  16. Effects of Neutral Density on Energetic Ions Produced Near High-Current Hollow Cathodes

    NASA Technical Reports Server (NTRS)

    Kameyama, Ikuya

    1997-01-01

    Energy distributions of ion current from high-current, xenon hollow cathodes, which is essential information for understanding the erosion phenomena observed in high-power ion thrusters, were obtained using an electrostatic energy analyzer (ESA). The effects of ambient pressure and external flow rate introduced immediately downstream of the hollow cathode on ion currents with energies greater than that associated with the cathode-to-anode potential difference were investigated. The results were analyzed to determine the changes in the magnitudes of ion currents to the ESA at various energies. Either increasing the ambient pressure or adding external flow induces an increase in the distribution of ion currents with moderate energies (epsilon less than 25 to 35 eV) and a decrease in the distribution for high energies (epsilon greater than 25 to 35 eV). The magnitude of the current distribution increase in the moderate energy range is greater for a cathode equipped with a toroidal keeper than for one without a keeper, but the distribution in the high energy range does not seem to be affected by a keeper. An MHD model, which has been proposed to describe the energetic-ion production mechanism in hollow cathodes at high discharge currents, was developed to describe these effects. The results show, however, that this model involves no mechanism by which a significant increase of ion current could occur at any energy. It was found, on the other hand, that the potential-hill model of energetic ion production, which assumes the existence of a local maximum of plasma potential, could explain combined increases in the currents of ions with moderate energies and decreases in those of high-energy ions due to increased neutral atom density using a charge-exchange mechanism. The existing, simplified version of the potential-hill model, however, shows poor quantitative agreement with measured ion-current-energy-distribution changes induced by neutral density changes.

  17. A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.

    ERIC Educational Resources Information Center

    Harrop, John W.; Velicer, Wayne F.

    1985-01-01

    Computer-generated data representative of 16 Autoregressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
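
    The comparison of assumed AR orders described above can be sketched as follows, assuming a simple least-squares fit of AR(1) and AR(3) models to a simulated AR(1) series; the simulation parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate an AR(1) series of the kind underlying an interrupted
    # time-series design (phi and n are illustrative choices).
    phi_true, n = 0.6, 2000
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi_true * y[t - 1] + rng.normal()

    def fit_ar(y, p):
        """Least-squares fit of an assumed AR(p) model; returns coefficients."""
        rows = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
        return np.linalg.lstsq(rows, y[p:], rcond=None)[0]

    phi_ar1 = fit_ar(y, 1)   # assumed (1,0,0) model
    phi_ar3 = fit_ar(y, 3)   # assumed (3,0,0) model
    ```

    When the true process is AR(1), the over-parameterized (3,0,0) fit recovers the lag-1 coefficient and drives the extra coefficients toward zero, which is why a fixed higher-order assumption can serve as a workable approximation.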

  18. Optimal low thrust geocentric transfer. [mission analysis computer program

    NASA Technical Reports Server (NTRS)

    Edelbaum, T. N.; Sackett, L. L.; Malchow, H. L.

    1973-01-01

    A computer code which will rapidly calculate time-optimal low thrust transfers is being developed as a mission analysis tool. The final program will apply to NEP or SEP missions and will include a variety of environmental effects. The current program assumes constant acceleration. The oblateness effect and shadowing may be included. Detailed state and costate equations are given for the thrust effect, oblateness effect, and shadowing. A simple but adequate model yields analytical formulas for power degradation due to the Van Allen radiation belts for SEP missions. The program avoids the classical singularities by the use of equinoctial orbital elements. Kryloff-Bogoliuboff averaging is used to facilitate rapid calculation. Results for selected cases using the current program are given.

  19. Search for muon neutrino disappearance due to sterile neutrino oscillations with the MINOS/MINOS+ experiment

    NASA Astrophysics Data System (ADS)

    Todd, J.; Chen, R.; Huang, J.; MINOS Collaboration

  20. A study of weak anisotropy in electron pressure in the tail current sheet

    NASA Technical Reports Server (NTRS)

    Lee, D.-Y.; Voigt, G.-H.

    1995-01-01

    We adopt a magnetotail model with stretched field lines where ion motions are generally nonadiabatic and where it is assumed that the pressure anisotropy resides only in the electron pressure tensor. We show that the magnetic field lines with p(perpendicular) greater than p(parallel) are less stretched than the corresponding field lines in the isotropic model. For p(parallel) greater than p(perpendicular), the magnetic field lines become more and more stretched as the anisotropy approaches the marginal firehose limit, p(parallel) = p(perpendicular) + B(exp 2)/mu(sub 0). We also show that the tail current density is highly enhanced at the firehose limit, a situation that might be subject to a microscopic instability. However, we emphasize that the enhancement in the current density is notable only near the center of the tail current sheet (z = 0). Thus it remains unclear whether any microscopic instability can significantly alter the global magnetic field configuration of the tail. By comparing the radius of the field-line curvature at z = 0 with the particle's gyroradius, we suspect that even the conventional adiabatic description of electrons may become questionable very close to the marginal firehose limit.

  1. Hidden Markov Item Response Theory Models for Responses and Response Times.

    PubMed

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.

  2. Chromatography modelling to describe protein adsorption at bead level.

    PubMed

    Gerontas, Spyridon; Shapiro, Michael S; Bracewell, Daniel G

    2013-04-05

    Chromatographic modelling can be used to describe and further understand the behaviour of biological species during chromatographic separation on adsorption resins. Current modelling approaches assume uniform rate parameters throughout the column. Software and hardware advances now allow us to consider what can be learnt from modelling at bead level, enabling simulation of heterogeneity in bead and packed bed structure due to design or due to changes during operation. In this paper, a model has been developed to simulate protein loading at bead level in 1.5 μl microfluidic columns. This model takes into account the heterogeneity in bead sizes and the spatial variations of the characteristics of a packed bed, such as bed void fraction and dispersion, thus offering a detailed description of the flow field and mass transfer phenomena. Simulations were shown to be in good agreement with published experimental data. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Three-dimensional modeling of the plasma arc in arc welding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, G.; Tsai, H. L.; Hu, J.

    2008-11-15

    Most previous three-dimensional modeling of gas tungsten arc welding (GTAW) and gas metal arc welding (GMAW) focuses on the weld pool dynamics and assumes two-dimensional axisymmetric Gaussian distributions for plasma arc pressure and heat flux. In this article, a three-dimensional plasma arc model is developed, and the distributions of velocity, pressure, temperature, current density, and magnetic field of the plasma arc are calculated by solving the conservation equations of mass, momentum, and energy, as well as part of Maxwell's equations. This three-dimensional model can be used to study the nonaxisymmetric plasma arc caused by external perturbations such as an external magnetic field. It also provides more accurate boundary conditions when modeling the weld pool dynamics. The present work lays a foundation for true three-dimensional comprehensive modeling of GTAW and GMAW including the plasma arc, weld pool, and/or electrode.

  4. Modeling Analysis for NASA GRC Vacuum Facility 5 Upgrade

    NASA Technical Reports Server (NTRS)

    Yim, J. T.; Herman, D. A.; Burt, J. M.

    2013-01-01

    A model of the VF5 test facility at NASA Glenn Research Center was developed using the direct simulation Monte Carlo Hypersonic Aerothermodynamics Particle (HAP) code. The model results were compared to several cold flow and thruster hot fire cases. The main uncertainty in the model is the determination of the effective sticking coefficient -- which sets the pumping effectiveness of the cryopanels and oil diffusion pumps including baffle transmission. An effective sticking coefficient of 0.25 was found to provide generally good agreement with the experimental chamber pressure data. The model, which assumes a cold diffuse inflow, also fared satisfactorily in predicting the pressure distribution during thruster operation. The model was used to assess other chamber configurations to improve the local effective pumping speed near the thruster. A new configuration of the existing cryopumps is found to show more than 2x improvement over the current baseline configuration.
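
    The link between a sticking coefficient and pumping effectiveness can be illustrated with the free-molecular-flow relation S = s · v̄/4 per unit panel area, where v̄ is the mean thermal speed of the gas. The 300 K gas temperature assumed below is illustrative, not a value from the report.

    ```python
    import math

    k_B = 1.380649e-23             # Boltzmann constant, J/K
    m_xe = 131.293 * 1.66054e-27   # xenon atomic mass, kg

    def pumping_speed_per_area(sticking, T_gas=300.0, m=m_xe):
        """Free-molecular pumping speed per unit panel area, S = s * vbar / 4,
        from kinetic theory (T_gas = 300 K is an assumed gas temperature)."""
        vbar = math.sqrt(8.0 * k_B * T_gas / (math.pi * m))
        return sticking * vbar / 4.0

    # Pumping speed per unit area for the fitted sticking coefficient of 0.25,
    # in m^3/s per m^2 of cryopanel.
    S = pumping_speed_per_area(0.25)
    ```

    With these assumptions each square metre of panel pumps roughly 14 m^3/s of room-temperature xenon, which shows why the effective sticking coefficient dominates the chamber pressure prediction.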

  5. Plasma Model V&V of Collisionless Electrostatic Shock

    NASA Astrophysics Data System (ADS)

    Martin, Robert; Le, Hai; Bilyeu, David; Gildea, Stephen

    2014-10-01

    A simple 1D electrostatic collisionless shock was selected as an initial validation and verification test case for a new plasma modeling framework under development at the Air Force Research Laboratory's In-Space Propulsion branch (AFRL/RQRS). Cross verification between PIC, Vlasov, and Fluid plasma models within the framework along with expected theoretical results will be shown. The non-equilibrium velocity distributions (VDF) captured by PIC and Vlasov will be compared to each other and the assumed VDF of the fluid model at selected points. Validation against experimental data from the University of California, Los Angeles double-plasma device will also be presented along with current work in progress at AFRL/RQRS towards reproducing the experimental results using higher fidelity diagnostics to help elucidate differences between model results and between the models and original experiment. DISTRIBUTION A: Approved for public release; unlimited distribution; PA (Public Affairs) Clearance Number 14332.

  6. Atomistic simulations of carbon diffusion and segregation in liquid silicon

    NASA Astrophysics Data System (ADS)

    Luo, Jinping; Alateeqi, Abdullah; Liu, Lijun; Sinno, Talid

    2017-12-01

    The diffusivity of carbon atoms in liquid silicon and their equilibrium distribution between the silicon melt and crystal phases are key, but unfortunately not precisely known parameters for the global models of silicon solidification processes. In this study, we apply a suite of molecular simulation tools, driven by multiple empirical potential models, to compute diffusion and segregation coefficients of carbon at the silicon melting temperature. We generally find good consistency across the potential model predictions, although some exceptions are identified and discussed. We also find good agreement with the range of available experimental measurements of segregation coefficients. However, the carbon diffusion coefficients we compute are significantly lower than the values typically assumed in continuum models of impurity distribution. Overall, we show that currently available empirical potential models may be useful, at least semi-quantitatively, for studying carbon (and possibly other impurity) transport in silicon solidification, especially if a multi-model approach is taken.

  7. Updates on Force Limiting Improvements

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Scharton, Terry

    2013-01-01

    The following conventional force limiting methods currently practiced in deriving force limiting specifications assume one-dimensional translation source and load apparent masses: Simple TDOF model; Semi-empirical force limits; Apparent mass, etc.; Impedance method. Uncorrelated motion of the mounting points for components mounted on panels and correlated, but out-of-phase, motions of the support structures are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications, are discussed.

  8. Depletion region surface effects in electron beam induced current measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haney, Paul M.; Zhitenev, Nikolai B.; Yoon, Heayoung P.

    2016-09-07

    Electron beam induced current (EBIC) is a powerful characterization technique which offers the high spatial resolution needed to study polycrystalline solar cells. Current models of EBIC assume that excitations in the p-n junction depletion region result in perfect charge collection efficiency. However, we find that in CdTe and Si samples prepared by focused ion beam (FIB) milling, there is a reduced and nonuniform EBIC lineshape for excitations in the depletion region. Motivated by this, we present a model of the EBIC response for excitations in the depletion region which includes the effects of surface recombination from both charge-neutral and charged surfaces. For neutral surfaces, we present a simple analytical formula which describes the numerical data well, while the charged surface response depends qualitatively on the location of the surface Fermi level relative to the bulk Fermi level. We find that the experimental data on FIB-prepared Si solar cells are most consistent with a charged surface and discuss the implications for EBIC experiments on polycrystalline materials.

  9. Three-dimensional electrical impedance tomography based on the complete electrode model.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Savolainen, T; Kaipio, J P

    1999-09-01

    In electrical impedance tomography, an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. It is often assumed that the injected currents are confined to the two-dimensional (2-D) electrode plane and the reconstruction is based on 2-D assumptions. However, the currents spread out in three dimensions and, therefore, off-plane structures have a significant effect on the reconstructed images. In this paper we propose a finite element-based method for the reconstruction of three-dimensional resistivity distributions. The proposed method is based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances. Both the forward and the inverse problems are discussed and results from static and dynamic (difference) reconstructions with real measurement data are given. It is shown that in phantom experiments with accurate finite element computations it is possible to obtain static images that are comparable with difference images that are reconstructed from the same object with the empty (saline filled) tank as a reference.

  10. Toroidal Ampere-Faraday Equations Solved Simultaneously with CQL3D Fokker-Planck Time-Evolution

    NASA Astrophysics Data System (ADS)

    Harvey, R. W. (Bob); Petrov, Yu. V. (Yuri); Forest, C. B.; La Haye, R. J.

    2017-10-01

    A self-consistent, time-dependent toroidal electric field calculation is a key feature of a complete 3D Fokker-Planck kinetic distribution radial transport code for f(v,theta,rho,t). We discuss benchmarking and first applications of an implementation of the Ampere-Faraday equation for the self-consistent toroidal electric field, as applied to (1) resistive turn on of applied electron cyclotron current in the DIII-D tokamak giving initial back current adjacent to the direct CD region and having possible NTM stabilization implications, and (2) runaway electron production in tokamaks due to rapid reduction of the plasma temperature as occurs in pellet injection, massive gas injection, or a plasma disruption. Our previous results assuming a constant current density (Lenz' Law) model showed that prompt ``hot-tail runaways'' dominated ``knock-on'' and Dreicer ``drizzle'' runaways; we perform full-radius modeling and examine modifications due to the more complete Ampere-Faraday solution. Presently, the implementation relies on a fixed shape eqdsk, and this limitation will be addressed in future work. Research supported by USDOE FES award ER54744.

  11. Quantifying Transport Between the Tropical and Mid-Latitude Lower Stratosphere

    PubMed

    Volk; Elkins; Fahey; Salawitch; Dutton; Gilligan; Proffitt; Loewenstein; Podolske; Minschwaner; Margitan; Chan

    1996-06-21

    Airborne in situ observations of molecules with a wide range of lifetimes (methane, nitrous oxide, reactive nitrogen, ozone, chlorinated halocarbons, and halon-1211), used in a tropical tracer model, show that mid-latitude air is entrained into the tropical lower stratosphere within about 13.5 months; transport is faster in the reverse direction. Because exchange with the tropics is slower than global photochemical models generally assume, ozone at mid-latitudes appears to be more sensitive to elevated levels of industrial chlorine than is currently predicted. Nevertheless, about 45 percent of air in the tropical ascent region at 21 kilometers is of mid-latitude origin, implying that emissions from supersonic aircraft could reach the middle stratosphere.
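
    Assuming a simple first-order entrainment model (an illustration, not the authors' tracer model), the quoted ~13.5-month timescale and ~45% mid-latitude fraction at 21 km can be related as sketched below.

    ```python
    import math

    tau_in = 13.5          # months, entrainment timescale from the abstract

    def midlatitude_fraction(t_ascent, tau=tau_in):
        """Fraction of mid-latitude air in a tropical parcel after ascending
        for t_ascent months, assuming steady first-order entrainment:
        df/dt = (1 - f) / tau, f(0) = 0."""
        return 1.0 - math.exp(-t_ascent / tau)

    # Ascent time implied by the reported ~45% mid-latitude fraction at 21 km
    # (an illustrative inversion of the same relation).
    t_21km = -tau_in * math.log(1.0 - 0.45)
    ```

    Under these assumptions a parcel reaching 21 km has been ascending for roughly 8 months, consistent with slow tropical upwelling.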

  12. Radiation effects induced in pin photodiodes by 40- and 85-MeV protons

    NASA Technical Reports Server (NTRS)

    Becher, J.; Kernell, R. L.; Reft, C. S.

    1985-01-01

    PIN photodiodes were bombarded with 40- and 85-MeV protons to a fluence of 1.5 × 10^11 p/cm^2, and the resulting change in spectral response in the near infrared was determined. The photocurrent, dark current and pulse amplitude were measured as a function of proton fluence. Changes in these three measured properties are discussed in terms of changes in the diode's spectral response, minority carrier diffusion length and depletion width. A simple model of induced radiation effects is presented which is in good agreement with the experimental results. The model assumes that incident protons produce charged defects within the depletion region simulating donor-type impurities.

  13. Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.

    PubMed

    Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu

    2009-02-06

    We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by the TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.

  14. Pre-conceptual design of the Z-LLE accelerator.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stygar, William A.

    We begin with a model of 20 LTD modules, connected in parallel. We assume each LTD module consists of 10 LTD cavities, connected in series. We assume each cavity includes 20 LTD bricks, in parallel. Each brick is assumed to have a 40-nF capacitance and a 160-nH inductance. We use for this calculation the RLC-circuit model of an LTD system that was developed by Mazarakis and colleagues.
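
    The lumped capacitance and inductance implied by the parallel/series topology described above reduce as follows; this is only the network arithmetic for identical elements, not the Mazarakis RLC circuit model itself.

    ```python
    # 20 bricks in parallel per cavity, 10 cavities in series per module,
    # 20 modules in parallel.
    C_brick, L_brick = 40e-9, 160e-9   # 40 nF, 160 nH per brick

    C_cavity = 20 * C_brick            # parallel: capacitances add      -> 800 nF
    L_cavity = L_brick / 20            # parallel: inductances divide    -> 8 nH

    C_module = C_cavity / 10           # series: capacitances divide     -> 80 nF
    L_module = 10 * L_cavity           # series: inductances add         -> 80 nH

    C_total = 20 * C_module            # modules in parallel             -> 1.6 uF
    L_total = L_module / 20            #                                 -> 4 nH
    ```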

  15. Understanding Breaks in Flare X-Ray Spectra: Evaluation of a Cospatial Collisional Return-current Model

    NASA Astrophysics Data System (ADS)

    Alaoui, Meriem; Holman, Gordon D.

    2017-12-01

    Hard X-ray (HXR) spectral breaks are explained in terms of a one-dimensional model with a cospatial return current. We study 19 flares observed by the Ramaty High Energy Solar Spectroscopic Imager with strong spectral breaks at energies around a few deka-keV, which cannot be explained by isotropic albedo or non-uniform ionization alone. We identify these breaks at the HXR peak time, but we obtain 8 s cadence spectra of the entire impulsive phase. Electrons with an initially power-law distribution and a sharp low-energy cutoff lose energy through return-current losses until they reach the thick target, where they lose their remaining energy through collisions. Our main results are as follows. (1) The return-current collisional thick-target model provides acceptable fits for spectra with strong breaks. (2) Limits on the plasma resistivity are derived from the fitted potential drop and deduced electron-beam flux density, assuming the return current is a drift current in the ambient plasma. These resistivities are typically 2–3 orders of magnitude higher than the Spitzer resistivity at the fitted temperature, and provide a test for the adequacy of classical resistivity and the stability of the return current. (3) Using the upper limit of the low-energy cutoff, the return current is always stable to the generation of ion-acoustic and electrostatic ion-cyclotron instabilities when the electron temperature is nine times lower than the ion temperature. (4) In most cases, the return current is most likely primarily carried by runaway electrons from the tail of the thermal distribution rather than by the bulk drifting thermal electrons. For these cases, anomalous resistivity is not required.

  16. Theoretical model and experimental investigation of current density boundary condition for welding arc study

    NASA Astrophysics Data System (ADS)

    Boutaghane, A.; Bouhadef, K.; Valensi, F.; Pellerin, S.; Benkedda, Y.

    2011-04-01

    This paper presents results of a theoretical and experimental investigation of the welding arc in Gas Tungsten Arc Welding (GTAW) and Gas Metal Arc Welding (GMAW) processes. A theoretical model consisting of the simultaneous resolution of the set of conservation equations for mass, momentum, energy and current, Ohm's law and the Maxwell equations is used to predict temperatures and current density distribution in argon welding arcs. A current density profile had to be assumed over the surface of the cathode as a boundary condition in order to make the theoretical calculations possible. In the stationary GTAW process, this assumption leads to fair agreement with experimental results reported in the literature, with maximum arc temperatures of ~21 000 K. In contrast to the GTAW process, in the GMAW process the electrode is consumable and non-thermionic, and a realistic boundary condition for the current density is lacking. For establishing this crucial boundary condition, which is the current density at the melting anode electrode, an original method is set up to enable the current density to be determined experimentally. A high-speed camera (3000 images/s) is used to obtain the geometrical dimensions of the welding wire used as the anode. The total area of the melting anode covered by the arc plasma being determined, the current density at the anode surface can be calculated. For a 330 A arc, the current density at the melting anode surface is found to be 5 × 10^7 A m^-2 for a 1.2 mm diameter welding electrode.
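
    The final step described above amounts to J = I/A. The sketch below inverts the reported 330 A and 5 × 10^7 A m^-2 figures to recover the implied arc-covered anode area.

    ```python
    def anode_current_density(I_arc, covered_area):
        """Mean current density at the melting anode surface: J = I / A."""
        return I_arc / covered_area

    # Arc-covered area implied by the reported 330 A, 5e7 A/m^2 case:
    A_covered = 330.0 / 5.0e7          # 6.6e-6 m^2, i.e. 6.6 mm^2
    J = anode_current_density(330.0, A_covered)
    ```

    An arc-covered area of about 6.6 mm^2 is several times the 1.2 mm wire cross-section (~1.1 mm^2), consistent with the plasma wetting the enlarged molten droplet rather than just the wire tip.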

  17. A unified engineering model of the first stroke in downward negative lightning

    NASA Astrophysics Data System (ADS)

    Nag, Amitabh; Rakov, Vladimir A.

    2016-03-01

    Each stroke in a negative cloud-to-ground lightning flash is composed of downward leader and upward return stroke processes, which are usually modeled individually. The first stroke leader is stepped and starts with preliminary breakdown (PB) which is often viewed as a separate process. We present the first unified engineering model for computing the electric field produced by a sequence of PB, stepped leader, and return stroke processes, serving to transport negative charge to ground. We assume that a negatively charged channel extends downward in a stepped fashion during both the PB and leader stages. Each step involves a current wave that propagates upward along the newly formed channel section. Once the leader attaches to ground, an upward propagating return stroke neutralizes the charge deposited along the channel. Model-predicted electric fields are in reasonably good agreement with simultaneous measurements at both near (hundreds of meters, electrostatic field component is dominant) and far (tens of kilometers, radiation field component is dominant) distances from the lightning channel. Relations between the features of computed electric field waveforms and model input parameters are examined. It appears that peak currents associated with PB pulses are similar to return stroke peak currents, and the observed variation of electric radiation field peaks produced by leader steps at different heights above ground is influenced by the ground corona space charge.

  18. Effects of illumination functions on the computation of gravity-dependent signal path variation models in primary focus and Cassegrainian VLBI telescopes

    NASA Astrophysics Data System (ADS)

    Abbondanza, Claudio; Sarti, Pierguido

    2010-08-01

    This paper sets the rules for an optimal definition of precise signal path variation (SPV) models, revising and highlighting the deficiencies in the calculations adopted in previous studies and improving the computational approach. Hence, the linear coefficients that define the SPV model are rigorously determined. The equations that are presented depend on the dimensions and the focal lengths of the telescopes as well as on the feed illumination taper. They hold for any primary focus or Cassegrainian very long baseline interferometry (VLBI) telescope. Earlier investigations usually determined the SPV models assuming a uniform illumination of the telescope mirrors. We prove this hypothesis to be over-simplistic by comparing results derived adopting (a) uniform, (b) Gaussian and (c) binomial illumination functions. Numerical computations are developed for AZ-EL mount, 32 m Medicina and Noto (Italy) VLBI telescopes, these latter being the only telescopes which possess thorough information on gravity-dependent deformation patterns. Particularly, assuming a Gaussian illumination function, the SPV in primary focus over the elevation range [0°, 90°] is 10.1 and 7.2 mm, for Medicina and Noto, respectively. With uniform illumination function the maximal path variation for Medicina is 17.6 and 12.7 mm for Noto, thus highlighting the strong dependency on the choice of the illumination function. According to our findings, a revised SPV model is released for Medicina and a model for Noto is presented here for the first time. Currently, no other VLBI telescope possesses SPV models capable of correcting gravity-dependent observation biases.

  19. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in details to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. 
Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
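
    For the special case c = 1, the Cole-Cole decay reduces to a single exponential, and the per-cell fit described above can be sketched as a log-linear least-squares problem; the synthetic values below are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic chargeability decay for one cell: Debye response (the c = 1
    # special case of the Cole-Cole model), m(t) = m0 * exp(-t / tau).
    m0_true, tau_true = 0.12, 0.8           # illustrative values
    t = np.linspace(0.05, 3.0, 40)          # s, assumed TDIP time gates
    m = m0_true * np.exp(-t / tau_true) * (1 + 0.01 * rng.normal(size=t.size))

    # Log-linear least squares: ln m = ln m0 - t / tau.
    slope, intercept = np.polyfit(t, np.log(m), 1)
    tau_fit, m0_fit = -1.0 / slope, np.exp(intercept)
    ```

    For c < 1 the decay is no longer a pure exponential and a nonlinear fit over (m0, tau, c) is needed for each cell, which is the general case the paper addresses.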

  20. Inductive current startup in large tokamaks with expanding minor radius and RF assist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowski, S.K.

    1983-01-01

    Auxiliary RF heating of electrons before and during the current rise phase of a large tokamak, such as the Fusion Engineering Device, is examined as a means of reducing both the initiation loop voltage and resistive flux expenditure during startup. Prior to current initiation, 1 to 2 MW of electron cyclotron resonance heating power at ≈90 GHz is used to create a small volume of high-conductivity plasma (T_e ≈ 100 eV, n_e ≈ 10^19 m^-3) near the upper hybrid resonance (UHR) region. This plasma conditioning permits a small-radius (a_0 ≲ 0.4 m) current channel to be established with a relatively low initial loop voltage (≲25 V, as opposed to ≈100 V without RF assist). During the subsequent plasma expansion and current ramp phase, additional RF power is introduced to reduce volt-second consumption due to plasma resistance. To study the preheating phase, a near-classical particle and energy transport model is developed to estimate the electron heating efficiency in a currentless toroidal plasma. The model assumes that preferential electron heating at the UHR leads to the formation of an ambipolar sheath potential between the neutral plasma and the conducting vacuum vessel and limiter.

  1. Psychopathy, intelligence and conviction history.

    PubMed

    Heinzen, Hanna; Köhler, Denis; Godt, Nils; Geiger, Friedemann; Huchzermeier, Christian

    2011-01-01

    The current study examined the relationship between psychopathy, intelligence and two variables describing the conviction history (length of conviction and number of prior convictions). It was hypothesized that psychopathy factors (interpersonal and antisocial factors assuming a 2-factor model or interpersonal, affective, lifestyle and antisocial factors assuming a 4-factor model) would be related in different ways to IQ scores, length of conviction and number of prior convictions. Psychopathy and IQ were assessed using the PCL:SV and the CFT 20-R respectively. Results indicated no association between interpersonal psychopathy features (Factor 1, two-factor model), IQ and the number of prior convictions but a positive association between Factor 1 and the length of conviction. Antisocial features (Factor 2, two-factor model) were negatively related to IQ and the length of conviction and positively related to the number of prior convictions. Results were further differentiated for the four-factor model of psychopathy. The relationship between IQ and psychopathy features was further assessed by statistically isolating the effects of the two factors of psychopathy. It was found that individuals scoring high on interpersonal features of psychopathy are more intelligent than those scoring high on antisocial features, but less intelligent than those scoring low on both psychopathy features. The results underpin the importance of allocating psychopathic individuals to subgroups on the basis of personality characteristics and criminological features. These subgroups may identify different types of offenders and may be highly valuable for defining treatment needs and risk of future violence. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. An Innovative Interactive Modeling Tool to Analyze Scenario-Based Physician Workforce Supply and Demand.

    PubMed

    Gupta, Saurabh; Black-Schaffer, W Stephen; Crawford, James M; Gross, David; Karcher, Donald S; Kaufman, Jill; Knapman, Doug; Prystowsky, Michael B; Wheeler, Thomas M; Bean, Sarah; Kumar, Paramhans; Sharma, Raghav; Chamoli, Vaibhav; Ghai, Vikrant; Gogia, Vineet; Weintraub, Sally; Cohen, Michael B; Robboy, Stanley J

    2015-01-01

    Effective physician workforce management requires that the various organizations comprising the House of Medicine be able to assess their current and future workforce supply. This information has direct relevance to funding of graduate medical education. We describe a dynamic modeling tool that examines how individual factors and practice variables can be used to measure and forecast the supply and demand for existing and new physician services. The system we describe, while built to analyze the pathologist workforce, is sufficiently broad and robust for use in any medical specialty. Our design provides a computer-based software model populated with data from surveys and best estimates by specialty experts about current and new activities in the scope of practice. The model describes the steps needed and data required for analysis of supply and demand. Our modeling tool allows educators and policy makers, in addition to physician specialty organizations, to assess how various factors may affect demand (and supply) of current and emerging services. Examples of factors evaluated include types of professional services (3 categories with 16 subcategories), service locations, elements related to the Patient Protection and Affordable Care Act, new technologies, aging population, and changing roles in capitated, value-based, and team-based systems of care. The model also helps identify where physicians in a given specialty will likely need to assume new roles, develop new expertise, and become more efficient in practice to accommodate new value-based payment models.

  3. An Innovative Interactive Modeling Tool to Analyze Scenario-Based Physician Workforce Supply and Demand

    PubMed Central

    Gupta, Saurabh; Black-Schaffer, W. Stephen; Crawford, James M.; Gross, David; Karcher, Donald S.; Kaufman, Jill; Knapman, Doug; Prystowsky, Michael B.; Wheeler, Thomas M.; Bean, Sarah; Kumar, Paramhans; Sharma, Raghav; Chamoli, Vaibhav; Ghai, Vikrant; Gogia, Vineet; Weintraub, Sally; Cohen, Michael B.

    2015-01-01

    Effective physician workforce management requires that the various organizations comprising the House of Medicine be able to assess their current and future workforce supply. This information has direct relevance to funding of graduate medical education. We describe a dynamic modeling tool that examines how individual factors and practice variables can be used to measure and forecast the supply and demand for existing and new physician services. The system we describe, while built to analyze the pathologist workforce, is sufficiently broad and robust for use in any medical specialty. Our design provides a computer-based software model populated with data from surveys and best estimates by specialty experts about current and new activities in the scope of practice. The model describes the steps needed and data required for analysis of supply and demand. Our modeling tool allows educators and policy makers, in addition to physician specialty organizations, to assess how various factors may affect demand (and supply) of current and emerging services. Examples of factors evaluated include types of professional services (3 categories with 16 subcategories), service locations, elements related to the Patient Protection and Affordable Care Act, new technologies, aging population, and changing roles in capitated, value-based, and team-based systems of care. The model also helps identify where physicians in a given specialty will likely need to assume new roles, develop new expertise, and become more efficient in practice to accommodate new value-based payment models. PMID:28725751

  4. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
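
    The preferential-sampling idea above, sampling times drawn from an inhomogeneous Poisson process whose intensity tracks effective population size, can be simulated by Lewis-Shedler thinning; the trajectory and proportionality constant below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def thinning(intensity, t_max, lam_max):
    """Sample an inhomogeneous Poisson process on [0, t_max] by thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from a homogeneous process
        if t > t_max:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)                   # accept with prob. intensity(t)/lam_max

beta = 2.0                                    # hypothetical sampling-effort constant
n_e = lambda t: 10.0 + 5.0 * np.sin(t)        # hypothetical seasonal N_e trajectory
samples = thinning(lambda t: beta * n_e(t), t_max=20.0, lam_max=beta * 15.0)
```

    More sequences are drawn when n_e(t) is large, which is exactly the dependence the authors show can bias estimators that assume fixed sampling times.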

  5. DOSE ASSESSMENT OF THE FINAL INVENTORIES IN CENTER SLIT TRENCHES ONE THROUGH FIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collard, L.; Hamm, L.; Smith, F.

    2011-05-02

    In response to a request from Solid Waste Management (SWM), this study evaluates the performance of waste disposed in Slit Trenches 1-5 by calculating exposure doses and concentrations. As of 8/19/2010, Slit Trenches 1-5 have been filled and are closed to future waste disposal in support of an ARRA-funded interim operational cover project. Slit Trenches 6 and 7 are currently in operation and are not addressed within this analysis. Their current inventory limits are based on the 2008 SA and are not being impacted by this study. This analysis considers the location and the timing of waste disposal in Slit Trenches 1-5 throughout their operational life. In addition, the following improvements to the modeling approach have been incorporated into this analysis: (1) Final waste inventories from WITS are used for the base case analysis, where variance in the reported final disposal inventories is addressed through a sensitivity analysis; (2) Updated K_d values are used; (3) Area percentages of non-crushable containers are used in the analysis to determine expected infiltration flows for cases that consider collapse of these containers; (4) An updated representation of ETF carbon column vessels disposed in SLIT3-Unit F is used. Preliminary analyses indicated a problem meeting the groundwater beta-gamma dose limit because of high H-3 and I-129 release from the ETF vessels. The updated model uses results from a recent structural analysis of the ETF vessels indicating that water does not penetrate the vessels for about 130 years and that the vessels remain structurally intact throughout the 1130-year period of assessment; and (5) Operational covers are included with revised installation dates and sets of Slit Trenches that have a common cover. With the exception of the modeling enhancements noted above, the analysis follows the same methodology used in the 2008 PA (WSRC, 2008) and the 2008 SA (Collard and Hamm, 2008).
Infiltration flows through the vadose zone are identical to the flows used in the 2008 PA, except for flows during the operational cover time period. The physical (i.e., non-geochemical) models of the vadose zone and aquifer are identical in most cases to the models used in the 2008 PA. However, the 2008 PA assumed a uniform distribution of waste within each Slit Trench (WITS Location) and assumed that the entire inventory of each trench was disposed of at the time the first Slit Trench was opened. The current analysis considers individual trench excavations (i.e., segments) and groups of segments (i.e., Inventory Groups, also known as WITS Units) within Slit Trenches. Waste disposal is assumed to be spatially uniform in each Inventory Group and is distributed in time increments of six months or less between the time the Inventory Group was opened and closed.

  6. Assimilating the Future for Better Forecasts and Earlier Warnings

    NASA Astrophysics Data System (ADS)

    Du, H.; Wheatcroft, E.; Smith, L. A.

    2016-12-01

    Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single-model ensemble forecasts by means of statistical post-processing. Assuming each model is developed independently or with different primary target variables, each is likely to contain different dynamical strengths and weaknesses. Using statistical post-processing, such information is carried only by the simulations under a single model ensemble: no advantage is taken to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of operationally integrating the dynamical information about the future from each individual model. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
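
    The 40-dimensional Lorenz96 system used as the test bed above is easy to reproduce; a minimal integration with the standard forcing F = 8 and a fourth-order Runge-Kutta step looks like:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Tendency dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)      # rest state x_i = F
x[19] += 0.01              # small perturbation to excite the chaotic dynamics
for _ in range(1000):      # 50 model time units at the customary dt = 0.05
    x = rk4_step(x, 0.05)
```

    In a cross-pollination experiment, several imperfect variants of this model would be run and a data assimilation scheme used to exchange state information among them along the forecast.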

  7. The processing of unexpected positive response outcomes in the mediofrontal cortex.

    PubMed

    Ferdinand, Nicola K; Mecklinger, Axel; Kray, Jutta; Gehring, William J

    2012-08-29

    The human mediofrontal cortex, especially the anterior cingulate cortex, is commonly assumed to contribute to higher cognitive functions like performance monitoring. How exactly this is achieved is currently the subject of lively debate, but there is evidence that an event's valence and its expectancy play important roles. One prominent theory, the reinforcement learning theory of Holroyd and colleagues (2002, 2008), assigns a special role to feedback valence, while the predicted response-outcome (PRO) model of Alexander and Brown (2010, 2011) claims that the mediofrontal cortex is sensitive to unexpected events regardless of their valence. However, paradigms examining this issue have included confounds that fail to separate valence and expectancy. In the present study, we tested the two competing theories of performance monitoring using an experimental task that separates valence and unexpectedness of performance feedback. The feedback-related negativity of the event-related potential, which is commonly assumed to be a reflection of mediofrontal cortex activity, was elicited not only by unexpected negative feedback, but also by unexpected positive feedback. This implies that the mediofrontal cortex is sensitive to the unexpectedness of events in general rather than to their valence, and thereby supports the PRO model.

  8. Manual for a workstation-based generic flight simulation program (LaRCsim), version 1.4

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

    LaRCsim is a set of ANSI C routines that implement a full set of equations of motion for a rigid-body aircraft in atmospheric and low-earth orbital flight, suitable for pilot-in-the-loop simulations on a workstation-class computer. All six rigid-body degrees of freedom are modeled. The modules provided include calculations of the typical aircraft rigid-body simulation variables, earth geodesy, gravity and atmospheric models, and support for several data recording options. Features/limitations of the current version include English units of measure, a 1962 atmosphere model in cubic spline function lookup form, ranging from sea level to 75,000 feet, and a rotating oblate spheroidal earth model, with aircraft C.G. coordinates in both geocentric and geodetic axes. Angular integrations are done using quaternion state variables. Vehicle X-Z symmetry is assumed.
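
    The quaternion angular integration mentioned above follows the standard kinematic relation qdot = 0.5 * Omega(omega) * q. A minimal sketch (scalar-first convention, constant yaw rate, renormalization each step; the details are assumptions for illustration, not taken from the LaRCsim source):

```python
import numpy as np

def quat_deriv(q, w):
    """qdot = 0.5 * Omega(w) q for a scalar-first quaternion, body rates w = (p, q, r)."""
    q0, q1, q2, q3 = q
    p, qr, r = w
    return 0.5 * np.array([
        -q1 * p - q2 * qr - q3 * r,
         q0 * p + q2 * r  - q3 * qr,
         q0 * qr + q3 * p - q1 * r,
         q0 * r  + q1 * qr - q2 * p,
    ])

q = np.array([1.0, 0.0, 0.0, 0.0])   # identity attitude
w = (0.0, 0.0, 0.5)                  # pure yaw at 0.5 rad/s
dt = 1e-3
for _ in range(1000):                # integrate 1 s, renormalizing each step
    q = q + dt * quat_deriv(q, w)
    q = q / np.linalg.norm(q)
# after 1 s of 0.5 rad/s yaw, q ≈ [cos(0.25), 0, 0, sin(0.25)]
```

    Renormalizing after each step keeps the quaternion on the unit sphere, which is the usual reason simulators prefer quaternions over Euler angles for angular integration.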

  9. The heuristic-analytic theory of reasoning: extension and evaluation.

    PubMed

    Evans, Jonathan St B T

    2006-06-01

    An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.

  10. Radiation from a current filament driven by a traveling wave

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Meneghini, R.

    1976-01-01

    Solutions are presented for the electromagnetic fields radiated by an arbitrarily oriented current filament located above a perfectly conducting ground plane and excited by a traveling current wave. Both an approximate solution, valid in the Fraunhofer region of the filament and predicting the radiation terms in the fields, and an exact solution, which predicts both near and far field components of the electromagnetic fields, are presented. Both solutions apply to current waveforms which propagate along the channel and are valid regardless of the actual waveshape. The exact solution is valid only for waves which propagate at the speed of light, whereas the approximate solution is formulated for an arbitrary velocity of propagation. The spectrum (the magnitude of the Fourier transform) of the radiated fields is computed by assuming a compound exponential model for the current waveform. The effects of channel orientation and length, as well as the velocity of propagation of the current waveform and the location of the observer, are discussed. It is shown that both the velocity of propagation and an effective channel length are important in determining the shape of the spectrum.
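
    The compound (double) exponential current model has a closed-form Fourier transform, so its spectrum can be evaluated directly; the parameter values below are typical lightning-channel numbers chosen for illustration, not values from the paper:

```python
import numpy as np

# Double-exponential current model i(t) = I0 * (exp(-a t) - exp(-b t)), t >= 0,
# with illustrative peak amplitude and rise/decay constants
I0, a, b = 30e3, 2e4, 2e5   # A, 1/s, 1/s

def spectrum(f):
    """Magnitude of the analytic Fourier transform of the double exponential."""
    w = 2j * np.pi * f
    return np.abs(I0 * (1.0 / (a + w) - 1.0 / (b + w)))
```

    The transform is flat at low frequency and rolls off at high frequency, which is why the effective channel length and propagation velocity, which set the interference pattern on top of this envelope, shape the radiated spectrum.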

  11. Past, Present and Future Distributions of an Iberian Endemic, Lepus granatensis: Ecological and Evolutionary Clues from Species Distribution Models

    PubMed Central

    Acevedo, Pelayo; Melo-Ferreira, José; Real, Raimundo; Alves, Paulo Célio

    2012-01-01

    The application of species distribution models (SDMs) in ecology and conservation biology is increasing and assuming an important role, mainly because they can be used to hindcast past and predict current and future species distributions. However, the accuracy of SDMs depends on the quality of the data and on appropriate theoretical frameworks. In this study, comprehensive data on the current distribution of the Iberian hare (Lepus granatensis) were used to i) determine the species' ecogeographical constraints, ii) hindcast a climatic model for the last glacial maximum (LGM), relating it to inferences derived from molecular studies, and iii) calibrate a model to assess the species' future distribution trends (up to 2080). Our results showed that the climatic factor (in its pure effect and when it is combined with the land-cover factor) is the most important descriptor of the current distribution of the Iberian hare. In addition, the model's output was a reliable index of the local probability of species occurrence, which is a valuable tool to guide species management decisions and conservation planning. The climatic potential obtained for the LGM was combined with molecular data, and the results suggest that several glacial refugia may have existed for the species within the major Iberian refugium. Finally, a high probability of occurrence of the Iberian hare in the current species range and a northward expansion were predicted for the future. Given its current environmental envelope and evolutionary history, we discuss the macroecology of the Iberian hare and its sensitivity to climate change. PMID:23272115

  12. Measuring Greenland Ice Mass Variation With Gravity Recovery and the Climate Experiment Gravity and GPS

    NASA Technical Reports Server (NTRS)

    Wu, Xiao-Ping

    1999-01-01

    The response of the Greenland ice sheet to climate change could significantly alter sea level. The ice sheet was much thicker at the last glacial maximum. To gain insight into the global change process and the future trend, it is important to evaluate the ice mass variation as a function of time and space. The Gravity Recovery and Climate Experiment (GRACE) mission to fly in 2001 for 5 years will measure gravity changes associated with the current ice variation and the solid earth's response to past variations. Our objective is to assess the separability of different change sources, accuracy and resolution in the mass variation determination by the new gravity data and possible Global Positioning System (GPS) bedrock uplift measurements. We use a reference parameter state that follows a dynamic ice model for current mass variation and a variant of the Tushingham and Peltier ICE-3G deglaciation model for historical deglaciation. The current linear trend is also assumed to have started 5 kyr ago. The Earth model is fixed as preliminary reference Earth model (PREM) with four viscoelastic layers. A discrete Bayesian inverse algorithm is developed employing an isotropic Gaussian a priori covariance function over the ice sheet and time. We use data noise predicted by the University of Texas and JPL for major GRACE error sources. A 2 mm/yr uplift uncertainty is assumed for GPS occupation time of 5 years. We then carry out covariance analysis and inverse simulation using GRACE geoid coefficients up to degree 180 in conjunction with a number of GPS uplift rates. Present-day ice mass variation and historical deglaciation are solved simultaneously over 146 grids of roughly 110 km x 110 km and with 6 time increments of 3 kyr each, along with a common starting epoch of the current trend. 
For present-day ice thickness change, the covariance analysis using GRACE geoid data alone results in a root mean square (RMS) posterior root variance of 2.6 cm/yr, with fairly large a priori uncertainties in the parameters and a Gaussian correlation length of 350 km. The simulated inversion successfully recovers most features in the reference present-day change. The RMS difference between them over the grids is 2.8 cm/yr. The RMS difference becomes 1.1 cm/yr when both are averaged with a half Gaussian wavelength of 150 km. With a fixed Earth model, GRACE alone can separate the geoid signals due to past and current load fairly well. Shown are the reference geoid signatures of direct and elastic effects of the current trend, the viscoelastic effect of the same trend starting from 5 kyr ago, the Post Glacial Rebound (PGR), and the predicted GRACE geoid error. The difference between the reference and inverse modeled total viscoelastic signatures is also shown. Although past and current ice mass variations are allowed the same spatial scale, their geoid signals have different spatial patterns. GPS data can contribute to the ice mass determination as well. Additional information is contained in the original.
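
    The isotropic Gaussian a priori covariance used in the Bayesian inversion can be sketched as follows. The exp(-(d/L)^2) form and the grid below are assumptions for illustration; only the 350 km correlation length and the roughly 110 km cell size come from the abstract:

```python
import numpy as np

def gaussian_cov(coords_km, sigma=1.0, L=350.0):
    """Isotropic Gaussian covariance between grid centres, correlation length L (km)."""
    d = np.linalg.norm(coords_km[:, None, :] - coords_km[None, :, :], axis=-1)
    return sigma ** 2 * np.exp(-(d / L) ** 2)

# Hypothetical 4x4 patch of 110 km x 110 km cells
xy = np.stack(np.meshgrid(np.arange(4) * 110.0, np.arange(4) * 110.0), -1).reshape(-1, 2)
C = gaussian_cov(xy)
```

    A covariance of this kind regularizes the inverse problem by coupling neighbouring cells, which is what lets a smooth ice-change field be recovered from geoid coefficients.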

  13. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

    Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how the choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL min⁻¹ was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. The choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL min⁻¹. A square law was found to be more consistent with physiological flow rates than a cube law. Although the impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
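
    The two scaling laws being compared reduce to a one-line rescaling of a literature mean flow. Only the 245 mL min⁻¹ average comes from the abstract; the reference diameter below is a hypothetical value:

```python
Q_REF = 245.0   # mL/min, literature-average ICA flow rate (from the abstract)
D_REF = 4.5     # mm, assumed reference ICA diameter (hypothetical value)

def scaled_flow(d_mm, exponent=2):
    """exponent=2: fixed velocity (square law); exponent=3: fixed wall shear stress (cube law)."""
    return Q_REF * (d_mm / D_REF) ** exponent
```

    For an inlet wider than the reference, the cube law assigns a larger flow than the square law, which is one way the choice of law propagates into downstream flow-pattern uncertainty.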

  14. Space charge limited current emission for a sharp tip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Y. B., E-mail: zhuyingbin@gmail.com; Ang, L. K., E-mail: ricky-ang@sutd.edu.sg

    In this paper, we formulate a self-consistent model to study the space charge limited current emission from a sharp tip in a dc gap. The tip is assumed to have a radius on the order of tens of nanometers. The electrons are emitted from the tip due to the field emission process. It is found that the localized current density J at the apex of the tip can be much higher than the classical Child-Langmuir law (flat surface) predicts. A scaling of J ∝ V_g^(3/2)/D^m, where V_g is the gap bias, D is the gap size, and m = 1.1-1.2 (depending on the emission area or radius), is proposed. The effects of non-uniform emission and the spatial dependence of work function are presented.
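
    For reference, the classical planar Child-Langmuir law against which the tip result is compared is straightforward to evaluate; the proposed tip scaling only changes the gap exponent from 2 to m ≈ 1.1-1.2 (the 1 kV, 1 mm numbers below are illustrative):

```python
import math

EPS0 = 8.8541878128e-12              # vacuum permittivity, F/m
Q_E, M_E = 1.602176634e-19, 9.1093837015e-31   # electron charge (C) and mass (kg)

def child_langmuir(V, D):
    """Classical 1-D space-charge-limited current density (A/m^2) for a planar gap."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / M_E) * V ** 1.5 / D ** 2

J = child_langmuir(1000.0, 1e-3)     # 1 kV across a 1 mm gap

# Under the proposed tip scaling J ∝ V^(3/2)/D^m with m ≈ 1.15, halving D raises J
# by a factor of 2^m ≈ 2.2 rather than the planar factor of 4.
```

    The weaker gap-size dependence (m < 2) is what allows the localized apex current density to exceed the flat-surface prediction.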

  15. Error Discounting in Probabilistic Category Learning

    PubMed Central

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less responsive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666
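
    A toy delta-rule learner with an attenuating learning rate illustrates the error-discounting idea; this is a schematic sketch, not any of the paper's fitted exemplar-based, rule-based, or recency-based models, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
p_reward = 0.7            # hypothetical feedback probability for one category
v = 0.5                   # learned outcome expectation
lr0, decay = 0.2, 0.01    # initial learning rate and discounting rate

for trial in range(2000):
    outcome = float(rng.uniform() < p_reward)
    err = outcome - v                     # prediction error on this trial
    lr = lr0 / (1.0 + decay * trial)      # error discounting: updates attenuate
    v += lr * err
```

    As the learning rate shrinks, the expectation settles near the true feedback probability instead of chasing each unavoidable error, which is the behavioural signature the experiments probed by shifting feedback probabilities mid-session.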

  16. Assimilating NOAA SST data into BSH operational circulation model for North and Baltic Seas

    NASA Astrophysics Data System (ADS)

    Losa, Svetlana; Schroeter, Jens; Nerger, Lars; Janjic, Tijana; Danilov, Sergey; Janssen, Frank

    A data assimilation (DA) system is developed for the BSH operational circulation model in order to improve forecasts of current velocities, sea surface height, temperature and salinity in the North and Baltic Seas. The assimilated data are NOAA sea surface temperature (SST) data for the period 01.10.07-30.09.08. All data assimilation experiments are based on implementation of one of the so-called statistical DA methods, the Singular Evolutive Interpolated Kalman (SEIK) filter, with different ways of prescribing the assumed model and data error statistics. Results of the experiments will be shown and compared against each other. Hydrographic data from MARNET stations and sea level at a series of tide gauges are used as independent information to validate the data assimilation system. Keywords: Operational Oceanography and forecasting

  17. The Unknown Hydrogen Exosphere: Space Weather Implications

    NASA Astrophysics Data System (ADS)

    Krall, J.; Glocer, A.; Fok, M.-C.; Nossal, S. M.; Huba, J. D.

    2018-03-01

    Recent studies suggest that the hydrogen (H) density in the exosphere and geocorona might differ from previously assumed values by factors as large as 2. We use the SAMI3 (Sami3 is Also a Model of the Ionosphere) and Comprehensive Inner Magnetosphere-Ionosphere models to evaluate scenarios where the hydrogen density is reduced or enhanced, by a factor of 2, relative to values given by commonly used empirical models. We show that the rate of plasmasphere refilling following a geomagnetic storm varies nearly linearly with the hydrogen density. We also show that the ring current associated with a geomagnetic storm decays more rapidly when H is increased. With respect to these two space weather effects, increased exosphere hydrogen density is associated with reduced threats to space assets during and following a geomagnetic storm.

  18. Complexity and Demographic Explanations of Cumulative Culture

    PubMed Central

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing, and favoured by increasing, population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture when formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence does not afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change. PMID:25048625

  19. Multi-layer membrane model for mass transport in a direct ethanol fuel cell using an alkaline anion exchange membrane

    NASA Astrophysics Data System (ADS)

    Bahrami, Hafez; Faghri, Amir

    2012-11-01

    A one-dimensional, isothermal, single-phase model is presented to investigate the mass transport in a direct ethanol fuel cell incorporating an alkaline anion exchange membrane. The electrochemistry is analytically solved and the closed-form solution is provided for two limiting cases assuming Tafel expressions for both oxygen reduction and ethanol oxidation. A multi-layer membrane model is proposed to properly account for the diffusive and electroosmotic transport of ethanol through the membrane. The fundamental differences in fuel crossover for positive and negative electroosmotic drag coefficients are discussed. It is found that ethanol crossover is significantly reduced upon using an alkaline anion exchange membrane instead of a proton exchange membrane, especially at current densities higher than 500 A m
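
    The Tafel expressions assumed for both electrode reactions have the simple exponential form below; the exchange current density and slope parameter are generic illustrative values, not the paper's:

```python
import math

I0 = 1e-4   # exchange current density, A/m^2 (illustrative)
B = 0.05    # Tafel slope parameter, V (illustrative)

def tafel_current(eta):
    """Tafel approximation: current density grows exponentially with overpotential eta (V)."""
    return I0 * math.exp(eta / B)
```

    In the Tafel regime each additional B*ln(10) volts of overpotential multiplies the current density by ten, which is what makes the closed-form limiting-case solutions in the paper tractable.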

  20. A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Smith, Gregory L.

    1989-01-01

    A two dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data, which are clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC), and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data are stationary and analyzing them as such. Another method of analysis is to introduce a random phase shift into the data, as shown by Papoulis, to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed. The period of a PC process must be known in order to produce an accurate spectral representation for the process. The spectral estimate is developed. The bias and variability of the estimate are also discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two dimensional spectral representation. In addition, the method of phase shifting the data is examined.

  1. Nursing workload, patient safety incidents and mortality: an observational study from Finland

    PubMed Central

    Kinnunen, Marina; Saarela, Jan

    2018-01-01

    Objective To investigate whether the daily workload per nurse (Oulu Patient Classification (OPCq)/nurse) as measured by the RAFAELA system correlates with different types of patient safety incidents and with patient mortality, and to compare the results with regressions based on the standard patients/nurse measure. Setting We obtained data from 36 units at four Finnish hospitals. One was a tertiary acute care hospital, and the three others were secondary acute care hospitals. Participants Patients’ nursing intensity (249 123 classifications), nursing resources, patient safety incidents and patient mortality were collected on a daily basis during 1 year, corresponding to 12 475 data points. Associations between OPC/nurse and patient safety incidents or mortality were estimated using unadjusted logistic regression models, and models that adjusted for ward-specific effects and effects of day of the week, holiday and season. Primary and secondary outcome measures Main outcome measures were patient safety incidents and death of a patient. Results When OPC/nurse was above the assumed optimal level, the adjusted odds of a patient safety incident were 1.24 times those at the assumed optimal level (95% CI 1.08 to 1.42), and 0.79 times (95% CI 0.67 to 0.93) when it was below the assumed optimal level. Corresponding estimates for patient mortality were 1.43 (95% CI 1.18 to 1.73) and 0.78 (95% CI 0.60 to 1.00), respectively. Compared with the patients/nurse classification, models estimated on the basis of the RAFAELA classification system generally provided larger effect sizes, greater statistical power and better model fit, although the difference was not very large. Net benefits calculated on the basis of decision analysis did not provide clear evidence on which measure to prefer. Conclusions We have demonstrated an association between daily workload per nurse and patient safety incidents and mortality. Current findings need to be replicated by future studies. PMID:29691240

  2. Communication: Role of explicit water models in the helix folding/unfolding processes

    NASA Astrophysics Data System (ADS)

    Palazzesi, Ferruccio; Salvalaglio, Matteo; Barducci, Alessandro; Parrinello, Michele

    2016-09-01

    In recent years, it has become evident that computer simulations can assume a relevant role in modelling protein dynamical motions thanks to their ability to provide a full atomistic image of the processes under investigation. The ability of current protein force fields to reproduce the correct thermodynamic and kinetic behaviour of a system is thus an essential ingredient for improving our understanding of many relevant biological functions. In this work, employing the latest developments of the metadynamics framework, we compare the ability of state-of-the-art all-atom empirical functions and water models to consistently reproduce the folding and unfolding of a helix turn motif in a model peptide. This theoretical study shows that the choice of water model can influence the thermodynamics and the kinetics of the system under investigation, and for this reason cannot be considered trivial.

  3. Modeling an Iodine Hall Thruster Plume in the Iodine Satellite (ISAT)

    NASA Technical Reports Server (NTRS)

    Choi, Maria

    2016-01-01

    An iodine-operated 200-W Hall thruster plume has been simulated using a hybrid-PIC model to predict the spacecraft surface-plume interaction for spacecraft integration purposes. For validation of the model, the plasma potential, electron temperature, ion current flux, and ion number density of xenon propellant were compared with available measurement data at the nominal operating condition. To simulate iodine plasma, various collision cross sections were found and used in the model. While time-varying atomic iodine species (i.e., I, I+, I2+) information is provided by HPHall simulation at the discharge channel exit, the molecular iodine species (i.e., I2, I2+) are introduced as Maxwellian particles at the channel exit. Simulation results show that xenon and iodine plasma plumes appear to be very similar under the assumptions of the model. Assuming a sticking coefficient of unity, the iodine deposition rate is estimated.

  5. Kron-Branin modelling of ultra-short pulsed signal microelectrode

    NASA Astrophysics Data System (ADS)

    Xu, Zhifei; Ravelo, Blaise; Liu, Yang; Zhao, Lu; Delaroche, Fabien; Vurpillot, Francois

    2018-06-01

    An unconventional circuit model of a microelectrode for ultra-short signal propagation is developed. The proposed model is based on the Tensorial Analysis of Network (TAN) using the Kron-Branin (KB) formalism. The systemic graph topology equivalent to the considered structure is established by taking the branch currents as unknown variables. The TAN mathematical solution is determined after identification of the KB characteristic matrix. The TAN can integrate various physical parameters of the structure. As a proof of concept, via-hole-ended microelectrodes implemented on a Kapton substrate were designed, fabricated and tested. The KB-modelled, simulated and measured S-parameters from 0.1 MHz to 6 GHz are in good agreement. In addition, time-domain analyses with nanosecond-duration pulse signals were carried out to predict the microelectrode signal integrity. The modelled microstrip electrode is typically integrated in atom probe tomography instruments. The proposed KB method is particularly beneficial in computation speed and adaptability to various structures.

  6. Electron acceleration in downward auroral field-aligned currents

    NASA Astrophysics Data System (ADS)

    Cran-McGreehin, Alexandra P.; Wright, Andrew N.

    2005-10-01

    The auroral downward field-aligned current is mainly carried by electrons accelerated up from the ionosphere into the magnetosphere along magnetic field lines. Current densities are typically of the order of a few μA m-2, and the associated electrons are accelerated to energies of several hundred eV up to a few keV. This downward current has been modeled by Temerin and Carlson (1998) using an electron fluid. This paper extends that model by describing the electron populations via distribution functions and modeling all of the F region. We assume a given ion density profile and invoke quasi-neutrality to solve for the potential along the field line. Several important locations and quantities emerge from this model: the ionospheric trapping point, below which the ionospheric population is trapped by an ambipolar electric field; the location of maximum E∥, of the order of a few mV m-1, which lies earthward of the B/n peak; the acceleration region, located around the B/n peak, which normally extends between altitudes of 500 and 3000 km; and the total potential increase along the field line, of the order of a few hundred V up to several kV. The B/n peak is found to be the central factor determining the altitude and magnitude of the accelerating potential required. Indeed, the total potential drop is found to depend solely on the equilibrium properties in the immediate vicinity of the B/n peak.

  7. Decay of equatorial ring current ions and associated aeronomical consequences

    NASA Technical Reports Server (NTRS)

    Fok, M.-C.; Kozyra, J. U.; Nagy, A. F.; Rasmussen, C. E.; Khazanov, G. V.

    1993-01-01

    The decay of the major ion species which constitute the ring current is studied by solving the time evolution of their distribution functions during the recovery phase of a moderate geomagnetic storm. In this work, only equatorially mirroring particles are considered. Particles are assumed to move subject to E x B and gradient drifts. They also experience losses along their drift paths. Two loss mechanisms are considered: charge exchange with neutral hydrogen atoms and Coulomb collisions with thermal plasma in the plasmasphere. Thermal plasma densities are calculated with a plasmaspheric model employing a time-dependent convection electric field model. The drift-loss model successfully reproduces a number of important and observable features in the distribution function. Charge exchange is found to be the major loss mechanism for the ring current ions; however, the important effects of Coulomb collisions on both the ring current and thermal populations are also presented. The model predicts the formation of a low-energy (less than 500 eV) ion population as a result of energy degradation caused by Coulomb collisions of the ring current ions with the plasmaspheric electrons; this population may be one source of the low-energy ions observed during active and quiet periods in the inner magnetosphere. The energy transferred to plasmaspheric electrons through Coulomb collisions with ring current ions is believed to be the energy source for the electron temperature enhancement and the associated 6300 Å (stable auroral red (SAR) arc) emission in the subauroral region. The calculated energy deposition rate is sufficient to produce a subauroral electron temperature enhancement and SAR arc emissions that are consistent with observations of these quantities during moderate magnetic activity levels.

  8. Modeling Geoelectric Fields and Geomagnetically Induced Currents Around New Zealand to Explore GIC in the South Island's Electrical Transmission Network

    NASA Astrophysics Data System (ADS)

    Divett, T.; Ingham, M.; Beggan, C. D.; Richardson, G. S.; Rodger, C. J.; Thomson, A. W. P.; Dalzell, M.

    2017-10-01

    Transformers in New Zealand's South Island electrical transmission network have been impacted by geomagnetically induced currents (GIC) during geomagnetic storms. We explore the impact of GIC on this network by developing a thin-sheet conductance (TSC) model for the region, a geoelectric field model, and a GIC network model. (The TSC model is composed of a thin-sheet conductance map with an underlying layered resistivity structure.) Following modeling approaches successfully used in the United Kingdom and Ireland, we applied a thin-sheet model to calculate the electric field as a function of magnetic field and ground conductance. We developed a TSC model based on magnetotelluric surveys, geology, and bathymetry, modified to account for offshore sediments. Using this representation, the thin-sheet model gave good agreement with measured impedance vectors. Driven by a spatially uniform magnetic field variation, the thin-sheet model produces electric fields dominated by the ocean-land boundary, with effects due to the deep ocean and steep terrain. There is a strong tendency for the electric field to align northwest-southeast, irrespective of the direction of the magnetic field. Applying this electric field to a GIC network model, we show that modeled GIC are dominated by northwest-southeast transmission lines rather than the east-west lines usually assumed to dominate.

  9. Unsteady density-current equations for highly curved terrain

    NASA Technical Reports Server (NTRS)

    Sivakumaran, N. S.; Dressler, R. F.

    1989-01-01

    New nonlinear partial differential equations containing terrain curvature and its rate of change are derived that describe the flow of an atmospheric density current. Unlike the classical hydraulic-type equations for density currents, the new equations are valid for two-dimensional, gradually varied flow over highly curved terrain, hence suitable for computing unsteady (or steady) flows over arbitrary mountain/valley profiles. The model assumes the atmosphere above the density current exerts a known arbitrary variable pressure upon the unknown interface. Later this is specialized to the varying hydrostatic pressure of the atmosphere above. The new equations yield the variable velocity distribution, the interface position, and the pressure distribution that contains a centrifugal component, often significantly larger than its hydrostatic component. These partial differential equations are hyperbolic, and the characteristic equations and characteristic directions are derived. Using these to form a characteristic mesh, a hypothetical unsteady curved-flow problem is calculated, not based upon observed data, merely as an example to illustrate the simplicity of their application to unsteady flows over mountains.

  10. Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization.

    PubMed

    Sun, Christopher L F; Demirtas, Derya; Brooks, Steven C; Morrison, Laurie J; Chan, Timothy C Y

    2016-08-23

    Immediate access to an automated external defibrillator (AED) increases the chance of survival for out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. The goal of this study was to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate whether OHCA coverage would improve compared with deployment based on spatial accessibility alone. This study was a retrospective population-based cohort study using data from the Toronto Regional RescuNET Epistry cardiac arrest database. We identified all nontraumatic public location OHCAs in Toronto, Ontario, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto Paramedic Services. Coverage loss due to limited temporal access was quantified by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed coverage 24 h per day, 7 days per week) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). A spatiotemporal optimization model was then developed that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. The coverage gain between the spatiotemporal model and a spatial-only model was computed by using 10-fold cross-validation. A total of 2,440 nontraumatic public OHCAs and 737 registered AED locations were identified. A total of 451 OHCAs were covered by registered AEDs under assumed coverage 24 h per day, 7 days per week, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved compared with the spatial-only approach (p < 0.001). One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. 
Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
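
    The assumed-versus-actual coverage comparison described above can be sketched as follows. The data structures, availability windows, and toy coordinates are hypothetical illustrations, not the study's geocoded data:

```python
import math
from dataclasses import dataclass

@dataclass
class AED:
    x: float          # projected easting, metres (assumed coordinate system)
    y: float          # projected northing, metres
    open_hour: int    # daily availability window start
    close_hour: int   # daily availability window end

@dataclass
class OHCA:
    x: float
    y: float
    hour: int         # hour of day the arrest occurred

RADIUS = 100.0  # coverage radius in metres, as in the study

def covered(event, aeds, check_time=True):
    """True if some AED lies within RADIUS of the event and, when
    check_time is set, was open at the event's hour."""
    for aed in aeds:
        if math.hypot(event.x - aed.x, event.y - aed.y) <= RADIUS:
            if not check_time or aed.open_hour <= event.hour < aed.close_hour:
                return True
    return False

def coverage_loss(events, aeds):
    """(assumed coverage, actual coverage, relative loss)."""
    assumed = sum(covered(e, aeds, check_time=False) for e in events)
    actual = sum(covered(e, aeds, check_time=True) for e in events)
    return assumed, actual, (assumed - actual) / assumed if assumed else 0.0

# Toy example: one AED open 09:00-17:00; a daytime and a night-time arrest
aeds = [AED(0.0, 0.0, 9, 17)]
events = [OHCA(50.0, 0.0, 12), OHCA(-30.0, 40.0, 2)]
print(coverage_loss(events, aeds))  # → (2, 1, 0.5)
```

    The study's optimization step then chooses AED sites to maximize the actual (time-aware) count rather than the assumed one; the sketch above only reproduces the coverage-loss accounting.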

  11. Where to Combat Shrub Encroachment in Alpine Timberline Ecosystems: Combining Remotely-Sensed Vegetation Information with Species Habitat Modelling

    PubMed Central

    Braunisch, Veronika; Patthey, Patrick; Arlettaz, Raphaël

    2016-01-01

    In many cultural landscapes, the abandonment of traditional grazing leads to encroachment of pastures by woody plants, which reduces habitat heterogeneity and impacts biodiversity typical of semi-open habitats. We developed a framework of mutually interacting spatial models to locate areas where shrub encroachment in Alpine treeline ecosystems deteriorates vulnerable species’ habitat, using black grouse Tetrao tetrix (L.) in the Swiss Alps as a study model. Combining field observations and remote-sensing information we 1) identified and located the six predominant treeline vegetation types; 2) modelled current black grouse breeding habitat as a function thereof so as to derive optimal habitat profiles; 3) simulated from these profiles the theoretical spatial extent of breeding habitat when assuming optimal vegetation conditions throughout; and used the discrepancy between (2) and (3) to 4) locate major aggregations of homogeneous shrub vegetation in otherwise suitable breeding habitat as priority sites for habitat restoration. All six vegetation types (alpine pasture, coniferous forest, Alnus viridis (Chaix), Rhododendron-dominated, Juniperus-dominated and mixed heathland) were predicted with high accuracy (AUC >0.9). Breeding black grouse preferred a heterogeneous mosaic of vegetation types, with none exceeding 50% cover. While 15% of the timberline belt currently offered suitable breeding habitat, twice that fraction (29%) would potentially be suitable when assuming optimal shrub and ground vegetation conditions throughout the study area. Yet, only 10% of this difference was attributed to habitat deterioration by shrub-encroachment of dense heathland (all types 5.2%) and Alnus viridis (4.8%). The presented method provides both a general, large-scale assessment of areas covered by dense shrub vegetation and specific target values and priority areas for habitat restoration related to a selected target organism. 
This facilitates optimizing the spatial allocation of management resources in geographic regions where shrub encroachment represents a major biodiversity conservation issue. PMID:27727325

  12. Indoor PM2.5 exposure in London's domestic stock: Modelling current and future exposures following energy efficient refurbishment

    NASA Astrophysics Data System (ADS)

    Shrubsole, C.; Ridley, I.; Biddulph, P.; Milner, J.; Vardoulakis, S.; Ucci, M.; Wilkinson, P.; Chalabi, Z.; Davies, M.

    2012-12-01

    Simulations using CONTAM (a validated multi-zone indoor air quality (IAQ) model) are employed to predict indoor exposure to PM2.5 in London dwellings in both the present day housing stock and the same stock following energy efficient refurbishments to meet greenhouse gas emissions reduction targets for 2050. We modelled interventions that would contribute to the achievement of these targets by reducing the permeability of the dwellings to 3 m3 m-2 h-1 at 50 Pa, combined with the introduction of mechanical ventilation and heat recovery (MVHR) systems. It is assumed that the current mean outdoor PM2.5 concentration of 13 μg m-3 decreases to 9 μg m-3 by 2050 due to emission control policies. Our primary finding was that installation of (assumed perfectly functioning) MVHR systems with permeability reduction is associated with appreciable reductions in PM2.5 exposure in both smoking and non-smoking dwellings. Modelling of the future scenario for non-smoking dwellings shows a reduction in annual average indoor exposure to PM2.5 of 18.8 μg m-3 (from 28.4 to 9.6 μg m-3) for a typical household member. Also of interest is that a larger reduction of 42.6 μg m-3 (from 60.5 to 17.9 μg m-3) was shown for members exposed primarily to cooking-related particle emissions in the kitchen (cooks). Reductions in envelope permeability without mechanical ventilation produced increases in indoor PM2.5 concentrations: 5.4 μg m-3 for typical household members and 9.8 μg m-3 for cooks. These estimates of changes in PM2.5 exposure are sensitive to assumptions about occupant behaviour, ventilation system usage and the distributions of input variables (±72% for non-smoking and ±107% in smoking residences). However, if realised, they would result in significant health benefits.
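
    A single-zone steady-state mass balance illustrates why tightening the envelope without mechanical ventilation raises indoor concentrations when there is an indoor source. This is a much-simplified stand-in for the multi-zone CONTAM model, and every parameter value below is an illustrative assumption:

```python
def indoor_pm25(c_out, ach, penetration=0.8, deposition=0.2,
                source_ugh=0.0, volume=250.0):
    """Steady-state single-zone PM2.5 mass balance:
    C_in = (P * a * C_out + G/V) / (a + k)

    c_out       : outdoor PM2.5, ug m^-3
    ach         : air changes per hour (a)
    penetration : envelope penetration factor P (assumed)
    deposition  : deposition loss rate k, h^-1 (assumed)
    source_ugh  : indoor emission rate G, ug h^-1 (e.g. cooking; assumed)
    volume      : dwelling volume V, m^3 (assumed)
    """
    return (penetration * ach * c_out + source_ugh / volume) / (ach + deposition)

# Tightening the envelope (lower ACH) with an indoor cooking source:
leaky = indoor_pm25(13.0, ach=1.0, source_ugh=3000.0)
tight = indoor_pm25(13.0, ach=0.3, source_ugh=3000.0)
print(round(leaky, 1), round(tight, 1))  # tight > leaky
```

    With the source present, the tighter dwelling dilutes indoor emissions more slowly, so indoor PM2.5 rises; with no indoor source, the tighter envelope instead shields occupants from outdoor particles. This mirrors the direction of the study's findings, though not its magnitudes.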

  14. Advanced LIGO constraints on neutron star mergers and r-process sites

    DOE PAGES

    Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...

    2017-02-20

    The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M⊙, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. These two distinct fields can only be made consistent with each other by assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.

  15. Accuracy of expressions for the fill factor of a solar cell in terms of open-circuit voltage and ideality factor

    NASA Astrophysics Data System (ADS)

    Leilaeioun, Mehdi; Holman, Zachary C.

    2016-09-01

    An approximate expression proposed by Green predicts the maximum obtainable fill factor (FF) of a solar cell from its open-circuit voltage (Voc). The expression was originally suggested for silicon solar cells that behave according to a single-diode model and, in addition to Voc, it requires an ideality factor as input. It is now commonly applied to silicon cells by assuming a unity ideality factor—even when the cells are not in low injection—as well as to non-silicon cells. Here, we evaluate the accuracy of the expression in several cases. In particular, we calculate the recombination-limited FF and Voc of hypothetical silicon solar cells from simulated lifetime curves, and compare the exact FF to that obtained with the approximate expression using assumed ideality factors. Considering cells with a variety of recombination mechanisms, wafer doping densities, and photogenerated current densities reveals the range of conditions under which the approximate expression can safely be used. We find that the expression is unable to predict FF generally: For a typical silicon solar cell under one-sun illumination, the error is approximately 6% absolute with an assumed ideality factor of 1. Use of the expression should thus be restricted to cells under very low or very high injection.
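
    Green's expression, as commonly stated, uses the normalized open-circuit voltage voc = Voc/(nkT/q) and estimates FF ≈ (voc − ln(voc + 0.72))/(voc + 1). A minimal sketch, assuming a typical silicon cell at 300 K with unity ideality factor:

```python
import math

def fill_factor_green(voc_volts, n=1.0, temp_k=300.0):
    """Green's empirical fill-factor expression.

    voc_volts : open-circuit voltage, V
    n         : diode ideality factor (assuming 1, i.e. low injection)
    temp_k    : cell temperature, K
    """
    kT_q = 1.380649e-23 * temp_k / 1.602176634e-19  # thermal voltage, V
    voc = voc_volts / (n * kT_q)                    # normalized Voc
    # Empirical fit, stated by Green to be accurate for voc > 10
    return (voc - math.log(voc + 0.72)) / (voc + 1.0)

# Typical silicon cell: Voc = 0.70 V, n = 1, 300 K
print(round(fill_factor_green(0.70), 3))  # → 0.846
```

    As the abstract notes, the error enters through the assumed ideality factor: re-evaluating with n between 1 and 2 shifts the predicted FF by several percent absolute, which is exactly the discrepancy the authors quantify.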

  16. Dynamic Multi-Axial Loading Response and Constitutive/Damage Modeling of Titanium and Titanium Alloys

    DTIC Science & Technology

    2006-06-24

    These criteria dealt with the modeling of cubic crystals and assume the same yield stress in tension and compression. Some anisotropic models have been proposed and used in the literature for HCP polycrystals. Principal Investigator: Akhtar S. Khan

  17. The effect of precipitation on measuring sea surface salinity from space

    NASA Astrophysics Data System (ADS)

    Jin, Xuchen; Pan, Delu; He, Xianqiang; Wang, Difeng; Zhu, Qiankun; Gong, Fang

    2017-10-01

    The sea surface salinity (SSS) can be measured from space by using L-band (1.4 GHz) microwave radiometers. The L-band has been chosen for the sensitivity of its brightness temperature to changes in salinity. However, SSS remote sensing is still challenging due to the low sensitivity of brightness temperature to SSS variation: for vertical polarization, the sensitivity is about 0.4 to 0.8 K/psu depending on incidence angle and sea surface temperature; for horizontal polarization, it is about 0.2 to 0.6 K/psu. This means that radiometric measurements must be accurate to better than 1 K even at the best sensitivity of brightness temperature to SSS. Therefore, in order to retrieve SSS, the brightness temperature measured at the top of atmosphere (TOA) needs to be corrected for many sources of error. One main geophysical source of error is the atmosphere. Currently, the atmospheric effect at L-band is usually corrected with absorption and emission models, which estimate the radiation absorbed and emitted by the atmosphere. However, the radiation scattered by precipitation, which can be significant under heavy precipitation, is neglected in these models. In this paper, a vector radiative transfer model for coupled atmosphere and ocean systems with a rough surface is developed to simulate the brightness temperature at the TOA under different precipitation conditions. The model is based on the adding-doubling method, which includes oceanic emission and reflection, and atmospheric absorption and scattering. For the ocean system with a rough surface, an empirical emission model established by Gabarro and the isotropic Cox-Munk wave model, including a shadowing effect, are used to simulate the emission and reflection of the sea surface. 
    The atmospheric attenuation is divided into two parts. For the rain layer, a Marshall-Palmer drop-size distribution is used and the scattering properties of the hydrometeors are calculated by Mie theory (the scattering hydrometeors are assumed to be spherical). For the other atmospheric layers, which are assumed to be clear sky, Liebe's millimeter wave propagation model (MPM93) is used to calculate the absorption coefficients of oxygen, water vapor, and cloud droplets. To simulate the change of brightness temperature caused by different rain rates (0-50 mm/h), we assume a 26-layer precipitation structure corresponding to NCEP FNL data. Our radiative transfer simulations show that the brightness temperature at the TOA can be influenced significantly by heavy precipitation. The results indicate that rain at L-band at an incidence angle of 42.5° introduces a positive bias: when the rain rate rises to 50 mm/h, the brightness temperature increases approach 0.6 K and 0.8 K for horizontally and vertically polarized brightness temperature, respectively. Thus, in the case of heavy precipitation, the current absorption and emission model is not accurate enough to correct the atmospheric effect, and a radiative transfer model that accounts for scattering should be used.
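
    The Marshall-Palmer distribution used for the rain layer has the textbook form N(D) = N0 exp(−ΛD), with intercept N0 = 8000 mm⁻¹ m⁻³ and slope Λ = 4.1 R^−0.21 mm⁻¹ for rain rate R in mm/h. A minimal sketch using the standard closed-form integral for liquid water content (this is the drop-size input only, not the paper's full Mie/adding-doubling calculation):

```python
import math

N0 = 8.0e3  # intercept parameter, drops mm^-1 m^-3 (Marshall & Palmer, 1948)

def slope(rain_rate):
    """Slope parameter Lambda in mm^-1 for rain rate R in mm/h."""
    return 4.1 * rain_rate ** -0.21

def drop_number_density(d_mm, rain_rate):
    """Drops per m^3 per mm of diameter at diameter d_mm."""
    return N0 * math.exp(-slope(rain_rate) * d_mm)

def liquid_water_content(rain_rate):
    """Rain water content in g m^-3.

    W = (pi/6) rho_w * integral of D^3 N(D) dD = pi * rho_w * N0 / Lambda^4,
    with rho_w = 1e-3 g mm^-3 for liquid water.
    """
    lam = slope(rain_rate)
    return math.pi * 1.0e-3 * N0 / lam ** 4

# Heavy rain, as at the upper end of the simulated range:
print(liquid_water_content(50.0))
```

    At 50 mm/h this gives a water content of a few g m⁻³, which is what feeds the Mie scattering calculation for the spherical hydrometeors assumed in the rain layer.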

  18. Modelling the possible interaction between edge-driven convection and the Canary Islands mantle plume

    NASA Astrophysics Data System (ADS)

    Negredo, A. M.; Rodríguez-González, J.; Fullea, J.; Van Hunen, J.

    2017-12-01

    The close location of many hotspots to the edges of cratonic lithosphere has led to the hypothesis that these hotspots could be explained by small-scale mantle convection at the edge of cratons (Edge Driven Convection, EDC). The Canary Volcanic Province hotspot represents a paradigmatic example of this situation due to its close location to the NW edge of the African Craton. Geochemical evidence, prominent low seismic velocity anomalies in the upper and lower mantle, and the rough NE-SW age progression of volcanic centers consistently point to a deep-seated mantle plume as the origin of the Canary Volcanic Province. It has been hypothesized that the plume material could be affected by upper mantle convection caused by the thermal contrast between the thin oceanic lithosphere and the thick (cold) African craton. Deflection of upwelling blobs by convection currents would be responsible for the broader and more irregular pattern of volcanism in the Canary Province compared to the Madeira Province. In this study we design a model setup inspired by this scenario to investigate the consequences of possible interaction between ascending mantle plumes and EDC. The Finite Element code ASPECT is used to solve convection in a 2D box. The compositional field and melt fraction distribution are also computed. Free slip along all boundaries and constant temperature at the top and bottom boundaries are assumed. The initial temperature distribution assumes a small long-wavelength perturbation. The viscosity structure is based on a thick cratonic lithosphere progressively varying to a thin, or initially absent, oceanic lithosphere. The effects of assuming different rheologies, as well as steep or gradual changes in lithospheric thickness, are tested. Modelling results show that a very thin oceanic lithosphere (< 30 km) is needed to generate partial melting by EDC. In this case partial melting can occur as far as 700 km away from the edge of the craton. 
The size of EDC cells is relatively small (diameter about 300 km) for lithosphere/asthenosphere viscosity contrasts of 1000. In contrast, models assuming temperature-dependent viscosity and large viscosity variations evolve to large-scale (upper mantle) convection cells, with upwelling of hot material being enhanced by cold downwellings at the edge of cratonic lithosphere.
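    A quick order-of-magnitude check of why such cells can develop at all: with typical upper-mantle values (all of which are assumed here, not taken from the study), the Rayleigh number of the mantle next to a craton edge sits far above the critical value of order 10^3.

```python
# Illustrative back-of-the-envelope check, not part of the study: the
# Rayleigh number with assumed typical mantle parameter values.
def rayleigh_number(rho=3300.0, g=9.81, alpha=3e-5, delta_t=1400.0,
                    d=660e3, kappa=1e-6, eta=1e21):
    """Ra = rho * g * alpha * delta_T * d**3 / (kappa * eta)."""
    return rho * g * alpha * delta_t * d**3 / (kappa * eta)

print(f"Ra ~ {rayleigh_number():.1e}")
```

    With these assumed values Ra is of order 10^5, comfortably supercritical, which is consistent with convective instabilities forming at a step in lithospheric thickness.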

  19. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE PAGES

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...

    2017-09-07

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
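    The contrast at issue, constant versus composition-dependent diffusivity, can be sketched numerically. The scheme and the linear D(c) below are hedged stand-ins (not the paper's correction): an explicit finite-difference solution of 1-D diffusion where the coefficient grows with the local fraction of an assumed plasticising component.

```python
import numpy as np

# Hedged numerical sketch, not the paper's scheme: explicit finite
# differences for 1-D diffusion with composition-dependent D(c).
def step(c, d_of_c, dx, dt):
    d = d_of_c(c)
    d_face = 0.5 * (d[1:] + d[:-1])          # D evaluated at cell faces
    flux = -d_face * np.diff(c) / dx
    c_new = c.copy()
    c_new[1:-1] -= dt / dx * np.diff(flux)   # interior update; ends held fixed
    return c_new

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
c = np.where(x < 0.5, 1.0, 0.0)              # sharp initial composition front
d_var = lambda c: 1e-3 * (0.1 + c)           # assumed composition dependence
dt = 0.05 * dx**2 / 1.1e-3                   # stable explicit time step
for _ in range(2000):
    c = step(c, d_var, dx, dt)
```

    Because D is largest where the plasticising component is abundant, the front smears asymmetrically, the behavior a constant-D analytical solution cannot capture.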

  20. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  1. Modeling Creep Effects within SiC/SiC Turbine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, J. A.; Lang, J.

    2008-01-01

    Anticipating the implementation of advanced SiC/SiC ceramic composites into the hot section components of future gas turbine engines, the primary objective of this on-going study is to develop physics-based analytical and finite-element modeling tools to predict the effects of constituent creep on SiC/SiC component service life. A second objective is to understand how to possibly select and manipulate constituent materials, processes, and geometries in order to minimize these effects. In initial studies aimed at SiC/SiC components experiencing through-thickness stress gradients, creep models were developed that allowed an understanding of detrimental residual stress effects that can develop globally within the component walls. It was assumed that the SiC/SiC composites behaved as isotropic visco-elastic materials with temperature-dependent creep behavior as experimentally measured in-plane in the fiber direction of advanced thin-walled 2D SiC/SiC panels. The creep models and their key results are discussed assuming state-of-the-art SiC/SiC materials within a simple cylindrical thin-walled tubular structure, which is currently being employed to model creep-related effects for turbine airfoil leading edges subjected to through-thickness thermal stress gradients. Improvements in the creep models are also presented which focus on constituent behavior with more realistic non-linear stress dependencies in order to predict such key creep-related SiC/SiC properties as time-dependent matrix stress, constituent creep and content effects on composite creep rates and rupture times, and stresses on fiber and matrix during and after creep.
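    The "non-linear stress dependencies" mentioned above generically take a Norton power-law form with Arrhenius temperature dependence. A minimal sketch, with A, n, and Q as placeholder values rather than measured SiC/SiC constants:

```python
import math

# Generic creep-law sketch, not the study's constituent model.
def creep_rate(stress_mpa, temp_k, A=1e-10, n=2.0, Q=400e3, R=8.314):
    """Steady-state creep rate: eps_dot = A * sigma**n * exp(-Q / (R * T))."""
    return A * stress_mpa ** n * math.exp(-Q / (R * temp_k))
```

    At n = 2, doubling the stress quadruples the rate, while temperature enters exponentially through the activation term; a residual-stress analysis of the kind described stacks such laws per constituent (fiber, matrix) across the component wall.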

  2. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
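    The hybrid L1/L2-regularized inversion step can be sketched with an accelerated proximal-gradient (FISTA) solver. Everything below is a toy stand-in: the design matrix G, the sparse "volume change" model, and the regularization weights are not the paper's Green's functions or tuning.

```python
import numpy as np

# Hedged sketch of a hybrid L1/L2 ("sparsity-promoting") linear inversion.
def fista(G, d, lam1=0.1, lam2=0.1, iters=1000):
    L = np.linalg.norm(G, 2) ** 2 + lam2        # Lipschitz constant of smooth part
    m = np.zeros(G.shape[1])
    y, t = m.copy(), 1.0
    for _ in range(iters):
        grad = G.T @ (G @ y - d) + lam2 * y     # gradient of 0.5||Gm-d||^2 + 0.5*lam2*||m||^2
        z = y - grad / L
        m_new = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # L1 prox (soft threshold)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = m_new + ((t - 1.0) / t_new) * (m_new - m)               # Nesterov momentum
        m, t = m_new, t_new
    return m

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 100))                  # 40 observations, 100 model cells
m_true = np.zeros(100)
m_true[[10, 50]] = 2.0                          # two localized "melt pockets"
d = G @ m_true
m_est = fista(G, d)
```

    The L1 term drives most cells to (near) zero while the L2 term keeps the problem strongly convex; in this toy run the two true anomalies dominate the recovered model, which is the spirit of treating magma bodies as well-localized outliers in the background crust.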

  3. 46 CFR 111.52-3 - Systems below 1500 kilowatts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...
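    The rule quoted above reduces to a one-line calculation. Note that the excerpt truncates exactly what is aggregated, so the parameter name below is a placeholder, not the regulation's full definition.

```python
# One-line form of 46 CFR 111.52-3 as excerpted above: the assumed maximum
# short-circuit current of a DC system below 1500 kW is 10 times an
# aggregate rated current (the excerpt elides the full aggregation basis).
def assumed_dc_short_circuit(aggregate_rated_amps: float) -> float:
    return 10.0 * aggregate_rated_amps

print(assumed_dc_short_circuit(1200.0))  # 12000.0
```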

  4. 46 CFR 111.52-3 - Systems below 1500 kilowatts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...

  5. Towards a Compositional SPIN

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2006-01-01

    This paper discusses our initial experience with introducing automated assume-guarantee verification based on learning in the SPIN tool. We believe that compositional verification techniques such as assume-guarantee reasoning could complement the state-reduction techniques that SPIN already supports, thus increasing the size of systems that SPIN can handle. We present a "light-weight" approach to evaluating the benefits of learning-based assume-guarantee reasoning in the context of SPIN: we turn our previous implementation of learning for the LTSA tool into a main program that externally invokes SPIN to provide the model checking-related answers. Despite its performance overheads (which mandate a future implementation within SPIN itself), this approach provides accurate information about the savings in memory. We have experimented with several versions of learning-based assume-guarantee reasoning, including a novel heuristic introduced here for generating component assumptions when their environment is unavailable. We illustrate the benefits of learning-based assume-guarantee reasoning in SPIN through the example of a resource arbiter for a spacecraft. Keywords: assume-guarantee reasoning, model checking, learning.
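    The asymmetric assume-guarantee rule behind this approach can be illustrated in miniature (this is a toy, not SPIN or LTSA code). Over a shared alphabet, each component is modeled as a finite set of allowed traces, so parallel composition is set intersection and "satisfies" is containment; the example names are hypothetical.

```python
# Rule: if M1 ∩ A ⊆ P and M2 ⊆ A, then M1 ∩ M2 ⊆ P.
def check_assume_guarantee(m1, m2, a, p):
    premise1 = (m1 & a) <= p   # ⟨A⟩ M1 ⟨P⟩
    premise2 = m2 <= a         # ⟨true⟩ M2 ⟨A⟩
    return premise1 and premise2

# Hypothetical arbiter-style example over two traces.
m1 = {"req,grant", "req,deny"}
m2 = {"req,grant"}
a = {"req,grant"}              # candidate assumption (would be learned)
p = {"req,grant"}              # property: requests are granted
ok = check_assume_guarantee(m1, m2, a, p)
```

    The payoff mirrored here is that neither premise ever builds the full product M1 ∩ M2, which is where the memory savings in the compositional approach come from.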

  6. a Physical Parameterization of Snow Albedo for Use in Climate Models.

    NASA Astrophysics Data System (ADS)

    Marshall, Susan Elaine

    The albedo of a natural snowcover is highly variable, ranging from 90 percent for clean, new snow to 30 percent for old, dirty snow. This range in albedo represents a difference in surface energy absorption of 10 to 70 percent of incident solar radiation. Most general circulation models (GCMs) fail to calculate the surface snow albedo accurately, yet the results of these models are sensitive to the assumed value of the snow albedo. This study replaces the current simple empirical parameterizations of snow albedo with a physically-based parameterization which is accurate (within +/- 3% of theoretical estimates) yet efficient to compute. The parameterization is designed as a FORTRAN subroutine (called SNOALB) which can be easily implemented into model code. The subroutine requires less than 0.02 seconds of computer time (CRAY X-MP) per call and adds only one new parameter to the model calculations, the snow grain size. The snow grain size can be calculated according to one of the two methods offered in this thesis. All other input variables to the subroutine are available from a climate model. The subroutine calculates a visible, near-infrared and solar (0.2-5 μm) snow albedo and offers a choice of two wavelengths (0.7 and 0.9 μm) at which the solar spectrum is separated into the visible and near-infrared components. The parameterization is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, version 1 (CCM1), and the results of a five-year, seasonal cycle, fixed hydrology experiment are compared to the current model snow albedo parameterization. The results show the SNOALB albedos to be comparable to the old CCM1 snow albedos for current climate conditions, with generally higher visible and lower near-infrared snow albedos using the new subroutine.
However, this parameterization offers a greater predictability for climate change experiments outside the range of current snow conditions because it is physically-based and not tuned to current empirical results.
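    The grain-size dependence at the heart of such a scheme can be illustrated with a toy relation. This is NOT the SNOALB parameterization: radiative-transfer theory predicts albedo falling off roughly with the square root of grain radius, more strongly in the near-infrared than in the visible, and the coefficients below are assumed purely for illustration.

```python
import math

# Toy grain-size-dependent snow albedo; coefficients are made up.
def toy_snow_albedo(grain_radius_um, vis_max=0.98, nir_max=0.85,
                    k_vis=0.003, k_nir=0.012):
    r = math.sqrt(grain_radius_um)
    visible = max(vis_max - k_vis * r, 0.0)
    near_ir = max(nir_max - k_nir * r, 0.0)
    return visible, near_ir

vis, nir = toy_snow_albedo(100.0)   # fresh snow, ~100 micron grains
```

    Even this sketch reproduces the qualitative pattern reported above: visible albedo stays high while the near-infrared albedo drops as grains coarsen.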

  7. Modeling of Word Translation: Activation Flow from Concepts to Lexical Items

    ERIC Educational Resources Information Center

    Roelofs, Ardi; Dijkstra, Ton; Gerakaki, Svetlana

    2013-01-01

    Whereas most theoretical and computational models assume a continuous flow of activation from concepts to lexical items in spoken word production, one prominent model assumes that the mapping of concepts onto words happens in a discrete fashion (Bloem & La Heij, 2003). Semantic facilitation of context pictures on word translation has been taken to…

  8. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    DOE PAGES

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.; ...

    2017-04-28

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.
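    The "single-parameter hygroscopicity framework" referred to above is, in common usage, the kappa-Koehler relation of Petters and Kreidenweis (2007). A minimal sketch of its water-activity expression for a dry particle grown to a wet diameter:

```python
# kappa-Koehler water activity for a particle of dry diameter d_dry
# grown to wet diameter d_wet (diameters in meters, kappa dimensionless).
def water_activity(d_wet, d_dry, kappa):
    """a_w = (d_wet^3 - d_dry^3) / (d_wet^3 - d_dry^3 * (1 - kappa))"""
    return (d_wet**3 - d_dry**3) / (d_wet**3 - d_dry**3 * (1.0 - kappa))

aw = water_activity(d_wet=150e-9, d_dry=100e-9, kappa=0.1)
```

    A single kappa compresses solubility, phase state, and their RH dependence into one number, which is exactly the simplification the study shows can bias the modeled aerosol climate effect.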

  9. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    NASA Astrophysics Data System (ADS)

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.; Ma, J.; Song, M.; Partridge, D. G.; Kirkevâg, A.; Leong, Y.; Hu, W. W.; Taylor, N. F.; Lambe, A.; Cerully, K.; Bougiatioti, A.; Liu, P.; Krejci, R.; Petäjä, T.; Percival, C.; Davidovits, P.; Worsnop, D. R.; Ekman, A. M. L.; Nenes, A.; Martin, S.; Jimenez, J. L.; Collins, D. R.; Topping, D. O.; Bertram, A. K.; Zuend, A.; Virtanen, A.; Riipinen, I.

    2017-05-01

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  10. Population Density and Moment-based Approaches to Modeling Domain Calcium-mediated Inactivation of L-type Calcium Channels.

    PubMed

    Wang, Xiao; Hardcastle, Kiah; Weinberg, Seth H; Smith, Gregory D

    2016-03-01

    We present a population density and moment-based description of the stochastic dynamics of domain Ca²⁺-mediated inactivation of L-type Ca²⁺ channels. Our approach accounts for the effect of heterogeneity of local Ca²⁺ signals on whole cell Ca²⁺ currents; however, in contrast with prior work, e.g., Sherman et al. (Biophys J 58(4):985-995, 1990), we do not assume that Ca²⁺ domain formation and collapse are fast compared to channel gating. We demonstrate the population density and moment-based modeling approaches using a 12-state Markov chain model of an L-type Ca²⁺ channel introduced by Greenstein and Winslow (Biophys J 83(6):2918-2945, 2002). Simulated whole cell voltage clamp responses yield an inactivation function for the whole cell Ca²⁺ current that agrees with the traditional approach when domain dynamics are fast. We analyze the voltage-dependence of Ca²⁺ inactivation that may occur via slow heterogeneous domain Ca²⁺ …
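    The key modeling point, finite-rate rather than instantaneous domain dynamics, can be shown with a Monte Carlo toy far simpler than the 12-state Greenstein-Winslow model. Each channel here is open (O) or Ca-inactivated (I), its O to I rate scales with a per-channel domain Ca that builds up while the channel is open and decays with a finite time constant; all rate values are made up for illustration.

```python
import numpy as np

# Toy ensemble of channels with finite-rate domain Ca (illustrative only).
rng = np.random.default_rng(1)
n, dt, steps = 5000, 0.01, 2000
is_open = np.ones(n, dtype=bool)              # all channels start open
ca = np.zeros(n)                              # per-channel domain Ca
tau_d, ca_src, k_inact = 0.5, 1.0, 2.0
for _ in range(steps):
    ca += dt * (ca_src * is_open - ca / tau_d)    # domain Ca NOT assumed instantaneous
    p_inact = 1.0 - np.exp(-dt * k_inact * ca)    # Ca-dependent O -> I step
    is_open &= rng.random(n) >= p_inact
frac_open = is_open.mean()                    # ensemble "whole-cell" gating variable
```

    Because each channel carries its own domain Ca, the ensemble captures the heterogeneity of local signals that a single mean-field domain would miss.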

  11. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.

    2015-08-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.

  12. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate.

    PubMed

    Rastak, N; Pajunoja, A; Acosta Navarro, J C; Ma, J; Song, M; Partridge, D G; Kirkevåg, A; Leong, Y; Hu, W W; Taylor, N F; Lambe, A; Cerully, K; Bougiatioti, A; Liu, P; Krejci, R; Petäjä, T; Percival, C; Davidovits, P; Worsnop, D R; Ekman, A M L; Nenes, A; Martin, S; Jimenez, J L; Collins, D R; Topping, D O; Bertram, A K; Zuend, A; Virtanen, A; Riipinen, I

    2017-05-28

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  13. Microphysical explanation of the RH‐dependent water affinity of biogenic organic aerosol and its importance for climate

    PubMed Central

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.; Ma, J.; Song, M.; Partridge, D. G.; Kirkevåg, A.; Leong, Y.; Hu, W. W.; Taylor, N. F.; Lambe, A.; Cerully, K.; Bougiatioti, A.; Liu, P.; Krejci, R.; Petäjä, T.; Percival, C.; Davidovits, P.; Worsnop, D. R.; Ekman, A. M. L.; Nenes, A.; Martin, S.; Jimenez, J. L.; Collins, D. R.; Topping, D.O.; Bertram, A. K.; Zuend, A.; Virtanen, A.

    2017-01-01

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH‐dependent SOA water‐uptake with solubility and phase separation; (2) show that laboratory data on IP‐ and MT‐SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single‐parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources. PMID:28781391

  14. Modeling and analysis of biomagnetic blood Carreau fluid flow through a stenosis artery with magnetic heat transfer: A transient study.

    PubMed

    Abdollahzadeh Jamalabadi, Mohammad Yaghoub; Daqiqshirazi, Mohammadreza; Nasiri, Hossein; Safaei, Mohammad Reza; Nguyen, Truong Khang

    2018-01-01

    We present a numerical investigation of tapered arteries that addresses the transient simulation of non-Newtonian bio-magnetic fluid dynamics (BFD) of blood through a stenosis artery in the presence of a transverse magnetic field. The current model is consistent with ferro-hydrodynamic (FHD) and magneto-hydrodynamic (MHD) principles. In the present work, blood in small arteries is analyzed using the Carreau-Yasuda model. The arterial wall is assumed to be fixed with cosine geometry for the stenosis. A parametric study was conducted to reveal the effects of the stenosis intensity and the Hartmann number on a wide range of flow parameters, such as the flow velocity, temperature, and wall shear stress. Current findings are in good agreement with recent findings in previous research studies. The results show that wall temperature control can keep the blood in its ideal blood temperature range (below 40°C) and that a severe pressure drop occurs for blockages of more than 60 percent. Additionally, with an increase in the Hartmann number, a velocity drop in the blood vessel is experienced.
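    For reference, the Carreau-Yasuda viscosity law used above has the standard form below. The parameter values are ones commonly quoted for blood in the literature and should be treated as assumed here, not as this paper's fit.

```python
# Carreau-Yasuda viscosity law (Pa*s) as a function of shear rate (1/s).
def carreau_yasuda(shear_rate, eta0=0.056, eta_inf=0.00345,
                   lam=1.902, a=1.25, n=0.22):
    """eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*g)**a) ** ((n - 1) / a)"""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)
```

    The law is shear-thinning: viscosity approaches eta0 at vanishing shear and eta_inf at very high shear, which is why blood in narrow, high-shear stenotic regions behaves very differently from a Newtonian fluid.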

  15. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  16. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M₀) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M₀ ∝ A², between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M₀-A scaling relations for strike-slip earthquakes.
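    The three regimes discussed above can be sketched as a continuous piecewise power law in log-log form: self-similar small events (M₀ ∝ A^1.5), the transition regime (M₀ ∝ A²), and the very large W-model regime (M₀ ∝ A). The crossover areas and prefactor below are arbitrary illustrative choices, not the values calibrated for Japan.

```python
import math

# Hedged piecewise moment-area scaling sketch (log10 of moment vs area).
def log10_moment(area_km2, a1=400.0, a2=4000.0, c=16.0):
    la, l1, l2 = math.log10(area_km2), math.log10(a1), math.log10(a2)
    if la <= l1:                                   # self-similar regime
        return c + 1.5 * la
    if la <= l2:                                   # transition regime
        return c + 1.5 * l1 + 2.0 * (la - l1)
    return c + 1.5 * l1 + 2.0 * (l2 - l1) + 1.0 * (la - l2)   # W-model regime
```

    The log-log slopes per decade of area are 1.5, 2, and 1 in the three regimes, with the segments matched at the crossovers so the relation stays continuous.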

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stygar, W.A.; Spielman, R.B.; Allshouse, G.O.

    The 36-module Z accelerator was designed to drive z-pinch loads for weapon-physics and inertial-confinement-fusion experiments, and to serve as a testing facility for pulsed-power research required to develop higher-current drivers. The authors have designed and tested a 10-nH 1.5-m-radius vacuum section for the Z accelerator. The vacuum section consists of four vacuum flares, four conical 1.3-m-radius magnetically-insulated transmission lines, a 7.6-cm-radius 12-post double-post-hole convolute which connects the four outer MITLs in parallel, and a 5-cm-long inner MITL which connects the output of the convolute to a z-pinch load. IVORY and ELECTRO calculations were performed to minimize the inductance of the vacuum flares with the constraint that there be no significant electron emission from the insulator-stack grading rings. Iterative TLCODE calculations were performed to minimize the inductance of the outer MITLs with the constraint that the MITL electron-flow-current fraction be ≤ 7% at peak current. The TLCODE simulations assume a 2.5 cm/μs MITL-cathode-plasma expansion velocity. The design limits the electron dose to the outer-MITL anodes to 50 J/g to prevent the formation of an anode plasma. The TLCODE results were confirmed by SCREAMER, TRIFL, TWOQUICK, IVORY, and LASNEX simulations. For the TLCODE, SCREAMER, and TRIFL calculations, the authors assume that after magnetic insulation is established, the electron-flow current launched in the outer MITLs is lost at the convolute. This assumption has been validated by 3-D QUICKSILVER simulations for load impedances ≤ 0.36 ohms. LASNEX calculations suggest that ohmic resistance of the pinch and conduction-current-induced energy loss to the MITL electrodes can be neglected in Z power-flow modeling that is accurate to first order. To date, the Z vacuum section has been tested on 100 shots. These shots have demonstrated that it can deliver a 100-ns rise-time 20-MA current pulse to the baseline z-pinch load.

  18. Asymmetric nanowire SQUID: Linear current-phase relation, stochastic switching, and symmetries

    NASA Astrophysics Data System (ADS)

    Murphy, A.; Bezryadin, A.

    2017-09-01

    We study nanostructures based on two ultrathin superconducting nanowires connected in parallel to form a superconducting quantum interference device (SQUID). The measured function of the critical current versus magnetic field, IC(B), is multivalued, asymmetric, and its maxima and minima are shifted from the usual integer and half integer flux quantum points. We also propose a low-temperature-limit model which generates accurate fits to the IC(B) functions and provides verifiable predictions. The key assumption of our model is that each wire is characterized by a sample-specific critical phase ϕC defined as the phase difference at which the supercurrent in the wire is the maximum. For our nanowires ϕC is much greater than the usual π/2, which makes a qualitative difference in the behavior of the SQUID. The nanowire current-phase relation is assumed linear, since the wires are much longer than the coherence length. The model explains single-valuedness regions where only one vorticity value nv is stable. Also, it predicts regions where multiple vorticity values are stable because the Little-Parks (LP) diamonds, which describe the region of stability for each winding number nv in the current-field diagram, can overlap. We also observe and explain regions in which the standard deviation of the switching current is independent of the magnetic field. We develop a technique that allows a reliable detection of hidden phase slips and use it to determine the boundaries of the LP diamonds even at low currents where IC(B) is not directly measurable.
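    The model's key assumption lends itself to a small numerical sketch (parameter values below are illustrative, not the fitted ones): each wire i has a linear current-phase relation I_i = Ic_i * phi_i / phic_i, valid for |phi_i| ≤ phic_i, and the loop imposes phi_1 - phi_2 + 2πf = 2πn, with f the applied flux in flux quanta and n the vorticity.

```python
import numpy as np

# Critical current of a two-wire SQUID with linear current-phase relations:
# maximize total supercurrent over allowed phases and vorticities.
def critical_current(f, ic=(1.0, 0.7), phic=(4.0, 6.0), nmax=3):
    best = 0.0
    for n in range(-nmax, nmax + 1):
        for p1 in np.linspace(-phic[0], phic[0], 2001):
            p2 = p1 + 2.0 * np.pi * f - 2.0 * np.pi * n
            if abs(p2) <= phic[1]:                      # wire 2 stays superconducting
                best = max(best, ic[0] * p1 / phic[0] + ic[1] * p2 / phic[1])
    return best
```

    With critical phases far above π/2, as assumed above, the maximum of IC versus flux sits away from integer flux quanta (near f ≈ 0.3 for these assumed parameters), echoing the shifted extrema reported in the abstract.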

  19. Predicting long-term carbon sequestration in response to CO₂ enrichment: How and why do current ecosystem models differ?

    DOE PAGES

    Walker, Anthony P.; Zaehle, Sönke; Medlyn, Belinda E.; ...

    2015-04-27

    Large uncertainty exists in model projections of the land carbon (C) sink response to increasing atmospheric CO₂. Free-Air CO₂ Enrichment (FACE) experiments lasting a decade or more have investigated ecosystem responses to a step change in atmospheric CO₂ concentration. To interpret FACE results in the context of gradual increases in atmospheric CO₂ over decades to centuries, we used a suite of seven models to simulate the Duke and Oak Ridge FACE experiments extended for 300 years of CO₂ enrichment. We also determine key modeling assumptions that drive divergent projections of terrestrial C uptake and evaluate whether these assumptions can be constrained by experimental evidence. All models simulated increased terrestrial C pools resulting from CO₂ enrichment, though there was substantial variability in quasi-equilibrium C sequestration and rates of change. In two of two models that assume that plant nitrogen (N) uptake is solely a function of soil N supply, the net primary production response to elevated CO₂ became progressively N limited. In four of five models that assume that N uptake is a function of both soil N supply and plant N demand, elevated CO₂ led to reduced ecosystem N losses and thus progressively relaxed nitrogen limitation. Many allocation assumptions resulted in increased wood allocation relative to leaves and roots, which reduced the vegetation turnover rate and increased C sequestration. Additionally, self-thinning assumptions had a substantial impact on C sequestration in two models. As a result, accurate representation of N process dynamics (in particular N uptake), allocation, and forest self-thinning is key to minimizing uncertainty in projections of future C sequestration in response to elevated atmospheric CO₂.

  20. Integrating count and detection–nondetection data to model population dynamics

    USGS Publications Warehouse

    Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell

    2017-01-01

    There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.
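
    The observation processes described above can be sketched in a few lines: site abundance evolves through survival and gains, counts arise from a shared per-individual detection probability, and detection-nondetection simply records whether the count is nonzero. The function and parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import random

def simulate_site(n0, survival, gains_max, p_det, years, rng):
    """Illustrative sketch of the dynamic population model: abundance
    changes via survival and gains; both data types share one
    per-individual detection probability."""
    n = n0
    records = []
    for _ in range(years):
        # Count data: each individual detected independently with p_det
        count = sum(rng.random() < p_det for _ in range(n))
        # Detection-nondetection data: 1 if any individual was detected
        detected = 1 if count > 0 else 0
        records.append((n, count, detected))
        # Dynamics: survivors plus gains from reproduction/immigration
        survivors = sum(rng.random() < survival for _ in range(n))
        n = survivors + rng.randint(0, gains_max)
    return records

rng = random.Random(42)
recs = simulate_site(n0=10, survival=0.8, gains_max=3, p_det=0.4, years=5, rng=rng)
```

    Because both observation types derive from the same detection process, the detection-nondetection record carries a coarsened version of the information in the counts, which is what makes the two data streams combinable in one likelihood.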

  1. Integrating count and detection-nondetection data to model population dynamics.

    PubMed

    Zipkin, Elise F; Rossman, Sam; Yackulic, Charles B; Wiens, J David; Thorson, James T; Davis, Raymond J; Grant, Evan H Campbell

    2017-06-01

    There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture-recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection-nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection-nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection-nondetection data (1995-2014) with newly collected count data (2015-2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance. 
© 2017 by the Ecological Society of America.

  2. Certification of a hybrid parameter model of the fully flexible Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1995-01-01

    The development of high fidelity models of mechanical systems with flexible components is in flux. Many working models of these devices assume the elastic motion is small and can be superimposed on the overall rigid body motion. A drawback of this modeling technique is that the linear modal model of the device must be regenerated whenever the elastic motion departs sufficiently from the base rigid motion. An advantage is that it uses NASTRAN modal data, which is the NASA standard means of modal information exchange. A disadvantage of the linear modeling is that it fails to accurately represent large motion of the system unless constant modal updates are performed. In this study, which is a continuation of a project started last year, the drawback of the currently used modal snapshot modeling technique is addressed in a rigorous fashion by novel and easily applied means.

  3. Impedance analysis of cultured cells: a mean-field electrical response model for electric cell-substrate impedance sensing technique.

    PubMed

    Urdapilleta, E; Bellotti, M; Bonetto, F J

    2006-10-01

    In this paper we present a model to describe the electrical properties of a confluent cell monolayer cultured on gold microelectrodes to be used with electric cell-substrate impedance sensing technique. This model was developed from microscopic considerations (distributed effects), and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology. The model has only three adjustable parameters. This model and other models currently used for data analysis are compared with data we obtained from electrical measurements of confluent monolayers of Madin-Darby Canine Kidney cells. One important parameter is the cell-substrate height and we found that estimates of this magnitude strongly differ depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.

  4. Flux-pinning-induced interfacial shearing and transverse normal stress in a superconducting coated conductor long strip

    NASA Astrophysics Data System (ADS)

    Jing, Ze; Yong, Huadong; Zhou, Youhe

    2012-08-01

    In this paper, a theoretical model is proposed to analyze the transverse normal stress and interfacial shearing stress induced by the electromagnetic force in a superconducting coated conductor. The plane strain approach is used and a singular integral equation is derived. By assuming that the critical current density is independent of the magnetic field and that the superconducting film is infinitely thin, the interfacial shearing stress and normal stress in the film are evaluated during increase and decrease of the transport current, respectively. The calculation results are discussed and compared for conductors with different substrates and geometries. The results indicate that a coated conductor with a stiffer substrate and larger width experiences larger interfacial shearing stress and smaller normal stress in the film.

  5. Global food security under climate change

    PubMed Central

    Schmidhuber, Josef; Tubiello, Francesco N.

    2007-01-01

    This article reviews the potential impacts of climate change on food security. It is found that of the four main elements of food security, i.e., availability, stability, utilization, and access, only the first is routinely addressed in simulation studies. To this end, published results indicate that the impacts of climate change are significant but span a wide projected range (between 5 million and 170 million additional people at risk of hunger by 2080) that depends strongly on assumed socio-economic development. The likely impacts of climate change on the other important dimensions of food security are discussed qualitatively, indicating the potential for further negative impacts beyond those currently assessed with models. Finally, strengths and weaknesses of current assessment studies are discussed, suggesting improvements and proposing avenues for new analyses. PMID:18077404

  6. Game relativity: how context influences strategic decision making.

    PubMed

    Vlaev, Ivo; Chater, Nick

    2006-01-01

    Existing models of strategic decision making typically assume that only the attributes of the currently played game need be considered when reaching a decision. The results presented in this article demonstrate that the so-called "co-operativeness" of previously played prisoner's dilemma games influences choices and predictions in the current prisoner's dilemma game, which suggests that games are not considered independently. These effects involved reinforcement-based assimilation to the previous choices and also a perceptual contrast of the present game with preceding games, depending on the range and the rank of their co-operativeness. A. Parducci's (1965) range frequency theory and H. Helson's (1964) adaptation level theory are plausible theories of relative judgment of magnitude information that could account for these context effects. ((c) 2006 APA, all rights reserved).

  7. Variability and reliability analysis in self-assembled multichannel carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    Hu, Zhaoying; Tulevski, George S.; Hannon, James B.; Afzali, Ali; Liehr, Michael; Park, Hongsik

    2015-06-01

    Carbon nanotubes (CNTs) have been widely studied as a channel material of scaled transistors for high-speed and low-power logic applications. In order to have sufficient drive current, it is widely assumed that CNT-based logic devices will have multiple CNTs in each channel. Understanding the effects of the number of CNTs on device performance can aid in the design of CNT field-effect transistors (CNTFETs). We have fabricated multi-CNT-channel CNTFETs with an 80-nm channel length using precise self-assembly methods. We describe compact statistical models and Monte Carlo simulations to analyze failure probability and the variability of the on-state current and threshold voltage. The results show that multichannel CNTFETs are more resilient to process variation and random environmental fluctuations than single-CNT devices.
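
    A toy Monte Carlo makes the resilience claim above concrete: with several CNTs per channel, the chance that every CNT in a device fails drops geometrically, and averaging over the working CNTs shrinks the relative spread of the on-state current. All parameter values below (failure probability, per-CNT current statistics) are illustrative assumptions, not the paper's compact-model parameters.

```python
import random
import statistics

def device_on_current(n_cnts, p_fail, i_mean, i_sigma, rng):
    """On-state current of one multi-CNT device: failed CNTs contribute
    nothing; each working CNT contributes a normally distributed current
    (clipped at zero)."""
    return sum(max(0.0, rng.gauss(i_mean, i_sigma))
               for _ in range(n_cnts) if rng.random() > p_fail)

def monte_carlo(n_cnts, trials=20000, p_fail=0.1, i_mean=1.0, i_sigma=0.3):
    """Estimate device failure probability (zero current) and the
    relative spread of the on-current over working devices."""
    rng = random.Random(0)
    currents = [device_on_current(n_cnts, p_fail, i_mean, i_sigma, rng)
                for _ in range(trials)]
    working = [c for c in currents if c > 0]
    fail_prob = 1 - len(working) / trials
    rel_spread = statistics.stdev(working) / statistics.mean(working)
    return fail_prob, rel_spread

f1, s1 = monte_carlo(n_cnts=1)
f6, s6 = monte_carlo(n_cnts=6)
# More CNTs per channel: lower failure probability and smaller relative spread
```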

  8. 46 CFR 111.52-3 - Systems below 1500 kilowatts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-GENERAL REQUIREMENTS Calculation of Short-Circuit Currents § 111.52-3 Systems below 1500 kilowatts. The following short-circuit assumptions must be made for a system with an aggregate generating capacity below... maximum short-circuit current of a direct current system must be assumed to be 10 times the aggregate...
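
    The rule excerpt above reduces to a one-line calculation. The sketch below encodes only the stated assumption for a direct current system, 10 times the aggregate rated generator current; the elided regulation text adds further contributions (e.g. from connected motors) that are not modeled here, and the generator ratings are hypothetical.

```python
def assumed_max_dc_short_circuit(generator_rated_currents_amps):
    """Assumed maximum short-circuit current of a DC system per the
    excerpt: 10 times the aggregate rated generator current. The elided
    portions of 46 CFR 111.52-3 are not modeled."""
    return 10 * sum(generator_rated_currents_amps)

# Hypothetical plant: two 400 A generators
i_sc = assumed_max_dc_short_circuit([400, 400])  # 8000 A
```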

  9. Finite frequency current noise in the Holstein model

    NASA Astrophysics Data System (ADS)

    Stadler, P.; Rastelli, G.; Belzig, W.

    2018-05-01

    We investigate the effects of local vibrational excitations on the nonsymmetrized current noise S(ω) of a nanojunction. For this purpose, we analyze a simple model, the Holstein model, in which the junction is described by a single electronic level coupled to two metallic leads and to a single vibrational mode. Using the Keldysh Green's function technique, we calculate the nonsymmetrized current noise to leading order in the charge-vibration interaction. For the noise associated with this interaction, we identify distinct terms corresponding to the mean-field noise and the vertex correction. The mean-field result can be further divided into an elastic correction to the noise and an inelastic correction, the latter related to energy exchange with the vibration. To illustrate the general behavior of the noise induced by the charge-vibration interaction, we consider two limiting cases. In the first, we assume strong coupling of the dot to the leads with an energy-independent transmission, whereas in the second we assume weak tunneling coupling between the dot and the leads such that transport occurs through a sharp resonant level. We find that the noise associated with the vibration-charge interaction shows a complex pattern as a function of the frequency ω and of the transmission function or of the dot's energy level. Several transitions from enhancement to suppression of the noise occur in different regions, which are determined, in particular, by the vibrational frequency. Remarkably, in the regime of an energy-independent transmission, the zero-order elastic noise vanishes at perfect transmission and at positive frequency, whereas the noise related to the charge-vibration interaction remains finite, enabling the analysis of the purely vibration-induced current noise.

  10. Dynamics of the Antarctic Circumpolar Current. Evidence for Topographic Effects from Altimeter Data and Numerical Model Output

    NASA Technical Reports Server (NTRS)

    Gille, Sarah T.

    1995-01-01

    Geosat altimeter data and numerical model output are used to examine the circulation and dynamics of the Antarctic Circumpolar Current (ACC). The mean sea surface height across the ACC has been reconstructed from height variability measured by the altimeter, without assuming prior knowledge of the geoid. The results indicate locations for the Subantarctic and Polar Fronts which are consistent with in situ observations and indicate that the fronts are substantially steered by bathymetry. Detailed examination of spatial and temporal variability indicates a spatial decorrelation scale of 85 km and a temporal e-folding scale of 34 days. Empirical Orthogonal Function analysis suggests that the scales of motion are relatively short, occurring on 1000 km length scales rather than basin or global scales. The momentum balance of the ACC has been investigated using output from a high resolution primitive equation model in combination with altimeter data. In the Semtner-Chervin quarter-degree general circulation model, topographic form stress is the dominant process balancing the surface wind forcing. In stream coordinates, the dominant effect transporting momentum across the ACC is biharmonic friction. Potential vorticity is considered on Montgomery streamlines in the model output and along surface streamlines in model and altimeter data.

  11. Source analysis of MEG activities during sleep (abstract)

    NASA Astrophysics Data System (ADS)

    Ueno, S.; Iramina, K.

    1991-04-01

    The present study focuses on magnetic fields of brain activities during sleep, in particular on K-complexes, vertex waves, and sleep spindles in human subjects. We analyzed these waveforms based on both topographic EEG (electroencephalographic) maps and magnetic field measurements, called MEGs (magnetoencephalograms). The components of the magnetic fields perpendicular to the surface of the head were measured using a dc SQUID magnetometer with a second-derivative gradiometer. In our computer simulation, the head is assumed to be a homogeneous spherical volume conductor, with electric sources of brain activity modeled as current dipoles. Comparison of computer simulations with the measured data, particularly the MEG, suggests that the source of K-complexes can be modeled by two current dipoles. A source for the vertex wave is modeled by a single current dipole oriented along the body axis, pointing out of the head. By again measuring simultaneous MEG and EEG signals, it is possible to uniquely determine the orientation of this dipole, particularly when it is tilted slightly off-axis. In sleep stage 2, fast waves of magnetic fields consistently appeared, but EEG spindles appeared intermittently. The results suggest that there exist sources which are undetectable by electrical measurement but are detectable by magnetic field measurement. Such a source can be described by a pair of current dipoles with opposite orientations.

  12. Associations of Bar and Restaurant Smoking Bans With Smoking Behavior in the CARDIA Study: A 25-Year Study.

    PubMed

    Mayne, Stephanie L; Auchincloss, Amy H; Tabb, Loni Philip; Stehr, Mark; Shikany, James M; Schreiner, Pamela J; Widome, Rachel; Gordon-Larsen, Penny

    2018-06-01

    Indoor smoking bans have often been associated with reductions in smoking prevalence. However, few studies have evaluated their association with within-person changes in smoking behaviors. We linked longitudinal data from 5,105 adults aged 18-30 years at baseline from the Coronary Artery Risk Development in Young Adults (CARDIA) Study (1985-2011) to state, county, and local policies mandating 100% smoke-free bars and restaurants by census tract. We used fixed-effects models to examine the association of smoking bans with within-person change in current smoking risk, smoking intensity (smoking ≥10 cigarettes/day on average vs. <10 cigarettes/day), and quitting attempts, using both linear and nonlinear adjustment for secular trends. In models assuming a linear secular trend, smoking bans were associated with a decline in current smoking risk and smoking intensity and an increased likelihood of a quitting attempt. The association with current smoking was greatest among participants with a bachelor's degree or higher. In models with a nonlinear secular trend, pooled results were attenuated (confidence intervals included the null), but effect modification results were largely unchanged. Findings suggest that smoking ban associations may be difficult to disentangle from other tobacco control interventions and emphasize the importance of evaluating equity throughout policy implementation.

  13. Inelastic cotunneling with energy-dependent contact transmission

    NASA Astrophysics Data System (ADS)

    Blok, S.; Agundez Mojarro, R. R.; Maduro, L. A.; Blaauboer, M.; Van Der Molen, S. J.

    2017-03-01

    We investigate inelastic cotunneling in a model system where the charging island is connected to the leads through molecules with energy-dependent transmission functions. To study this problem, we propose two different approaches. The first is a pragmatic approach that assumes Lorentzian-like transmission functions that determine the transmission probability to the island. Using this model, we calculate current versus voltage (IV) curves for increasing resonance level positions of the molecule. We find that shifting the resonance energy of the molecule away from the Fermi energy of the contacts leads to a decreased current at low bias, but as bias increases, this difference decreases and eventually inverts. This is markedly different from IV behavior outside the cotunneling regime. The second approach involves multiple cotunneling, where the molecules are also considered to be in the Coulomb blockade regime. We find here that when E_c ≫ eV, k_B T, the IV behavior approaches the original cotunneling behavior proposed by Averin and Nazarov [Phys. Rev. Lett. 65, 2446-2449 (1990)].

  14. Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.

    PubMed

    Guo, Tianjiao; Englehardt, James D; Fallon, Howard J

      While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.

  15. Assessment of the sustainability of bushmeat hunting based on dynamic bioeconomic models.

    PubMed

    Ling, S; Milner-Gulland, E J

    2006-08-01

    Open-access hunting is a dynamic system in which individual hunters respond to changes in system variables such as costs of hunting and prices obtained for their catch. Sustainability indices used by conservationists ignore these human processes and focus only on the biological sustainability of current offtake levels. This focus implicitly assumes that offtake is constant, says little about the actual sustainability of the system, and fails to provide any basis for predicting the impact of most feasible management interventions. A bioeconomic approach overcomes these limitations by explicitly integrating both the biological and human components of the system. We present a graphical representation of a simple bioeconomic model of bushmeat hunting and use it to demonstrate the importance of considering system dynamics when assessing sustainability. Our results show that commonly used static sustainability indices are often misleading. The best method to assess hunting sustainability is situation dependent, but characterizing supply and demand curves, even crudely, has greater potential than current approaches to provide robust predictions in the medium term.
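
    The contrast the authors draw, static offtake indices versus a bioeconomic equilibrium, can be illustrated with the standard Gordon-Schaefer open-access model (a textbook stand-in, not necessarily the paper's own formulation; all parameter values are hypothetical). Under open access, effort expands until profit is zero, so the equilibrium stock depends on price and cost rather than on today's offtake:

```python
def open_access_equilibrium(r, K, q, price, cost):
    """Bionomic equilibrium of the Gordon-Schaefer model: effort grows
    until revenue equals cost (price * q * x = cost per unit effort),
    and the stock settles where biological growth balances harvest."""
    x_eq = cost / (price * q)          # zero-profit stock level
    e_eq = r * (1 - x_eq / K) / q      # effort where growth = harvest
    harvest_eq = q * e_eq * x_eq       # equilibrium offtake
    return x_eq, e_eq, harvest_eq

# Hypothetical parameters: growth rate r, carrying capacity K, catchability q
x_eq, e_eq, h_eq = open_access_equilibrium(r=0.5, K=1000.0, q=0.002,
                                           price=5.0, cost=2.0)
# A fall in hunting cost lowers the equilibrium stock even if current
# offtake looks "sustainable" by a static index.
```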

  16. Implications of Higgs’ universality for physics beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Goldman, T.; Stephenson, G. J.

    2017-06-01

    We emulate Cabibbo by assuming a kind of universality for fermion mass terms in the Standard Model. We show that this is consistent with all current data and with the concept that deviations from what we term Higgs’ universality are due to corrections from currently unknown physics of nonetheless conventional form. The application to quarks is straightforward, while the application to leptons makes use of the recognition that Dark Matter can provide the “sterile” neutrinos needed for the seesaw mechanism. Requiring agreement with neutrino oscillation results leads to the prediction that the mass eigenstates of the sterile neutrinos are separated by quadratically larger ratios than for the charged fermions. Using consistency with the global fit to LSND-like, short-baseline oscillations to determine the scale of the lowest mass sterile neutrino strongly suggests that the recently observed astrophysical 3.55 keV γ-ray line is also consistent with the mass expected for the second most massive sterile neutrino in our analysis.

  17. Barrier inhomogeneities and electronic transport of Pt contacts to relatively highly doped n-type 4H-SiC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lingqin (lqhuang@jsnu.edu.cn); Wang, Dejun (dwang121@dlut.edu.cn)

    The barrier characteristics of Pt contacts to relatively highly doped (~1 × 10^18 cm^-3) 4H-SiC were investigated using current-voltage (I-V) and capacitance-voltage (C-V) measurements in the temperature range of 160-573 K. The barrier height and ideality factor estimated from the I-V characteristics based on the thermionic emission model are anomalously temperature-dependent, which can be explained by assuming the presence of a double Gaussian distribution (GD) of inhomogeneous barrier heights. However, in the low temperature region (160-323 K), the mean barrier height obtained from the GD is lower than the actual mean value from the C-V measurement. The values of barrier height determined from the thermionic field emission model are consistent with those from the C-V measurements, which suggests that the current transport process could be modified by electron tunneling at low temperatures.
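
    A standard result for a Gaussian distribution of barrier heights (the Werner-Guttler analysis, not spelled out in the abstract) is that the apparent I-V barrier height is lowered by sigma^2 / (2 k T), which produces exactly the anomalous temperature dependence described above. A minimal sketch with illustrative numbers, not the paper's fitted parameters:

```python
K_B = 8.617e-5  # Boltzmann constant, eV/K

def apparent_barrier_height(phi_mean_ev, sigma_ev, temp_k):
    """Apparent (I-V) barrier height for a Gaussian distribution of
    barrier heights: phi_ap = phi_mean - sigma^2 / (2 k T), in eV.
    Illustrative parameters only."""
    return phi_mean_ev - sigma_ev**2 / (2 * K_B * temp_k)

# Illustrative parameters: mean barrier 1.4 eV, standard deviation 0.12 eV
low_t = apparent_barrier_height(1.4, 0.12, 160)
high_t = apparent_barrier_height(1.4, 0.12, 573)
# The apparent barrier height rises toward the mean as temperature increases,
# mimicking an "anomalous" temperature dependence of the extracted barrier.
```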

  18. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  19. Measurement of the νμ charged-current quasielastic cross section on carbon with the ND280 detector at T2K

    NASA Astrophysics Data System (ADS)

    Abe, K.; Adam, J.; Aihara, H.; Akiri, T.; Andreopoulos, C.; Aoki, S.; Ariga, A.; Assylbekov, S.; Autiero, D.; Barbi, M.; Barker, G. J.; Barr, G.; Bartet-Friburg, P.; Bass, M.; Batkiewicz, M.; Bay, F.; Berardi, V.; Berger, B. E.; Berkman, S.; Bhadra, S.; Blaszczyk, F. d. M.; Blondel, A.; Bojechko, C.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Calland, R. G.; Caravaca Rodríguez, J.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Coleman, S. J.; Collazuol, G.; Connolly, K.; Cremonesi, L.; Dabrowska, A.; De Rosa, G.; Danko, I.; Das, R.; Davis, S.; de Perio, P.; De Rosa, G.; Dealtry, T.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duboyski, T.; Duffy, K.; Dumarchez, J.; Dytman, S.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Escudero, L.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S.; Giganti, C.; Gilje, K.; Goeldi, D.; Golan, T.; Gonin, M.; Grant, N.; Gudin, D.; Hadley, D. R.; Haegel, L.; Haesler, A.; Haigh, M. D.; Hamilton, P.; Hansen, D.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hearty, C.; Helmer, R. L.; Hierholzer, M.; Hignight, J.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hirota, S.; Holeczek, J.; Horikawa, S.; Huang, K.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ieva, M.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Iyogi, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. 
C.; Kajita, T.; Kakuno, H.; Kameda, J.; Kanazawa, Y.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kilinski, A.; Kim, J.; King, S.; Kisiel, J.; Kitching, P.; Kobayashi, T.; Koch, L.; Kolaceke, A.; Koga, T.; Konaka, A.; Kopylov, A.; Kormos, L. L.; Korzenev, A.; Koshio, Y.; Kropp, W.; Kubo, H.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Larkin, E.; Laveder, M.; Lawe, M.; Lazos, M.; Lindner, T.; Lister, C.; Litchfield, R. P.; Longhin, A.; Lopez, J. P.; Ludovici, L.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Mijakowski, P.; Miller, C. A.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Mueller, Th. A.; Murakami, A.; Murdoch, M.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Payne, D.; Perevozchikov, O.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Plonski, P.; Poplawska, E.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Przewlocki, P.; Quilain, B.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. A. M.; Redij, A.; Reeves, M.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. A.; Rojas, P.; Rondio, E.; Roth, S.; Rubbia, A.; Ruterbories, D.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schoppmann, S.; Schwehr, J. 
D.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Short, S.; Shustrov, Y.; Sinclair, P.; Smith, B.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Still, B.; Stewart, T.; Suda, Y.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tanaka, H. K.; Tanaka, H. A.; Tanaka, M. M.; Terhorst, D.; Terri, R.; Thompson, L. F.; Thorley, A.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vallari, Z.; Vasseur, G.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Williamson, Z.; Wilson, J. R.; Wilson, R. J.; Wongjirad, T.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoo, J.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration

    2015-12-01

    This paper reports a measurement by the T2K experiment of the νμ charged-current quasielastic (CCQE) cross section on a carbon target with the off-axis detector, based on the observed distribution of muon momentum (pμ) and angle with respect to the incident neutrino beam (θμ). The flux-integrated CCQE cross section was measured to be ⟨σ⟩ = (0.83 ± 0.12) × 10^-38 cm^2. The energy dependence of the CCQE cross section is also reported. The axial mass, M_A^QE, of the dipole axial form factor was extracted assuming the Smith-Moniz CCQE model with a relativistic Fermi gas nuclear model. Using the absolute (shape-only) pμ-cos θμ distribution, the effective M_A^QE parameter was measured to be 1.26 +0.21/-0.18 GeV/c^2 (1.43 +0.28/-0.22 GeV/c^2).

  20. Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1991-01-01

    Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can be a result of the form of assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again used to compare with measurements. Initial comparisons are encouraging.

  1. The Effects of High Density on the X-ray Spectrum Reflected from Accretion Discs Around Black Holes

    NASA Technical Reports Server (NTRS)

    Garcia, Javier A.; Fabian, Andrew C.; Kallman, Timothy R.; Dauser, Thomas; Parker, Michael L.; McClintock, Jeffrey E.; Steiner, James F.; Wilms, Jorn

    2016-01-01

    Current models of the spectrum of X-rays reflected from accretion discs around black holes and other compact objects are commonly calculated assuming that the density of the disc atmosphere is constant within several Thomson depths from the irradiated surface. An important simplifying assumption of these models is that the ionization structure of the gas is completely specified by a single, fixed value of the ionization parameter (xi), which is the ratio of the incident flux to the gas density. The density is typically fixed at n(sub e) = 10(exp 15) per cu cm. Motivated by observations, we consider higher densities in the calculation of the reflected spectrum. We show by computing model spectra for n(sub e) approximately greater than 10(exp 17) per cu cm that high-density effects significantly modify reflection spectra. The main effect is to boost the thermal continuum at energies approximately less than 2 keV. We discuss the implications of these results for interpreting observations of both active galactic nuclei and black hole binaries. We also discuss the limitations of our models imposed by the quality of the atomic data currently available.
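
The ionization parameter named in the record above is commonly defined as xi = 4*pi*F_x / n_e. A minimal sketch (the 4π convention and all numbers are assumptions for illustration, not the paper's code) shows why a single fixed xi leaves the density a free parameter:

```python
import math

# Hypothetical sketch: the ionization parameter xi = 4*pi*F_x / n_e
# (cgs units, erg cm s^-1), with F_x the incident flux and n_e the density.
# The flux and density values below are assumed for illustration only.
def ionization_parameter(flux_cgs: float, n_e: float) -> float:
    return 4.0 * math.pi * flux_cgs / n_e

# At fixed xi, raising the density from 1e15 to 1e17 cm^-3 requires 100x the
# incident flux, so constant-xi models cannot distinguish the two regimes.
xi_lo = ionization_parameter(1e13, 1e15)
xi_hi = ionization_parameter(1e15, 1e17)
```

This is exactly the degeneracy the abstract points to: spectra computed at the same xi but different n_e are not identical once high-density effects are included.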

  2. A Bayesian context fear learning algorithm/automaton

    PubMed Central

    Krasne, Franklin B.; Cushman, Jesse D.; Fanselow, Michael S.

    2015-01-01

    Contextual fear conditioning is thought to involve the synaptic plasticity-dependent establishment in hippocampus of representations of to-be-conditioned contexts which can then become associated with USs in the amygdala. A conceptual and computational model of this process is proposed in which contextual attributes are assumed to be sampled serially and randomly during contextual exposures. Given this assumption, moment-to-moment information about such attributes will often be quite different from one exposure to another and, in particular, between exposures during which representations are created, exposures during which conditioning occurs, and during recall sessions. This presents challenges to current conceptual models of hippocampal function. In order to meet these challenges, our model's hippocampus was made to operate in different modes during representation creation and recall, and non-hippocampal machinery was constructed that controlled these hippocampal modes. This machinery uses a comparison between contextual information currently observed and information associated with existing hippocampal representations of familiar contexts to compute the Bayesian Weight of Evidence that the current context is (or is not) a known one, and it uses this value to assess the appropriateness of creation or recall modes. The model predicts a number of known phenomena such as the immediate shock deficit, spurious fear conditioning to contexts that are absent but similar to actually present ones, and modulation of conditioning by pre-familiarization with contexts. It also predicts a number of as yet unknown phenomena. PMID:26074792
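
The Bayesian Weight of Evidence described above can be illustrated with a minimal log-likelihood-ratio accumulator. This is our construction under assumed attribute probabilities, not the authors' model code.

```python
import math

# Hypothetical sketch of a weight-of-evidence accumulator: contextual
# attributes are sampled serially, and each observed attribute contributes
# log P(attr | known context) - log P(attr | novel context). A positive total
# favors "this is a known context" (recall mode); a negative total favors
# "novel" (representation-creation mode). All probabilities are assumed.
def weight_of_evidence(observed, p_known, p_novel):
    return sum(math.log(p_known[a] / p_novel[a]) for a in observed)

p_known = {"odor": 0.8, "texture": 0.7, "shape": 0.1}
p_novel = {"odor": 0.2, "texture": 0.2, "shape": 0.2}

# Sampling attributes shared with the stored representation pushes the
# evidence toward "familiar", which would trigger recall mode in the model.
woe = weight_of_evidence(["odor", "texture"], p_known, p_novel)
```

Because attributes are sampled randomly, different exposures yield different observed subsets and hence different evidence totals, which is what drives the mode-switching machinery in the model.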

  3. Modeling biofilms with dual extracellular electron transfer mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renslow, Ryan S.; Babauta, Jerome T.; Kuprat, Andrew P.

    2013-11-28

    Electrochemically active biofilms have a unique form of respiration in which they utilize solid external materials as their terminal electron acceptor for metabolism. Currently, two primary mechanisms have been identified for long-range extracellular electron transfer (EET): a diffusion- and a conduction-based mechanism. Evidence in the literature suggests that some biofilms, particularly Shewanella oneidensis, produce components requisite for both mechanisms. In this study, a generic model is presented that incorporates both diffusion- and conduction-based mechanisms and allows electrochemically active biofilms to utilize both simultaneously. The model was applied to Shewanella oneidensis and Geobacter sulfurreducens biofilms using experimentally generated data found in the literature. Our simulation results showed that 1) biofilms having both mechanisms available, especially if they can interact, may have metabolic advantage over biofilms that can use only a single mechanism; 2) the thickness of Geobacter sulfurreducens biofilms is likely not limited by conductivity; 3) accurate intrabiofilm diffusion coefficient values are critical for current generation predictions; and 4) the local biofilm potential and redox potential are two distinct measurements and cannot be assumed to have identical values. Finally, we determined that cyclic and squarewave voltammetry are currently not good tools to determine the specific percentage of extracellular electron transfer mechanisms used by biofilms. The developed model will be a critical tool in designing experiments to explain EET mechanisms.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renslow, Ryan S.; Babauta, Jerome T.; Kuprat, Andrew P.

    Electrochemically active biofilms have a unique form of respiration in which they utilize solid external materials as terminal electron acceptors for their metabolism. Currently, two primary mechanisms have been identified for long-range extracellular electron transfer (EET): a diffusion- and a conduction-based mechanism. Evidence in the literature suggests that some biofilms, particularly Shewanella oneidensis, produce the requisite components for both mechanisms. In this study, a generic model is presented that incorporates the diffusion- and the conduction-based mechanisms and allows electrochemically active biofilms to utilize both simultaneously. The model was applied to S. oneidensis and Geobacter sulfurreducens biofilms using experimentally generated data found in the literature. Our simulation results show that 1) biofilms having both mechanisms available, especially if they can interact, may have a metabolic advantage over biofilms that can use only a single mechanism; 2) the thickness of G. sulfurreducens biofilms is likely not limited by conductivity; 3) accurate intrabiofilm diffusion coefficient values are critical for current generation predictions; and 4) the local biofilm potential and redox potential are two distinct parameters and cannot be assumed to have identical values. Finally, we determined that simulated cyclic and squarewave voltammetry based on our model are currently not capable of determining the specific percentages of extracellular electron transfer mechanisms in a biofilm. The developed model will be a critical tool for designing experiments to explain EET mechanisms.

  5. Beyond an Assumed Mother-Child Symbiosis in Nutritional Guidelines: The Everyday Reasoning behind Complementary Feeding Decisions

    ERIC Educational Resources Information Center

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte

    2014-01-01

    Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…

  6. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Purba, Victor; Jafarpour, Saber

    Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, an LCL filter at the point of common coupling, and a control architecture that includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, a current controller, and a PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit, and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.
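
The linear-scaling aggregation described above can be sketched as follows. This is a minimal illustration of the lumped-parameter idea under the stated scaling assumption; the field names and component values are our own, not the authors' code.

```python
from dataclasses import dataclass

# Hypothetical sketch: for N identical parallel inverters whose filter
# components scale linearly with power rating, the parallel combination
# collapses to one equivalent LCL filter with series elements divided by N
# and the shunt capacitance multiplied by N. Values below are assumed.
@dataclass
class LCLFilter:
    L1: float  # inverter-side inductance (H)
    R1: float  # inverter-side series resistance (ohm)
    C: float   # shunt filter capacitance (F)
    L2: float  # grid-side inductance (H)
    R2: float  # grid-side series resistance (ohm)

def aggregate(filt: LCLFilter, n: int) -> LCLFilter:
    """Equivalent lumped filter for n identical inverters in parallel."""
    return LCLFilter(filt.L1 / n, filt.R1 / n, filt.C * n, filt.L2 / n, filt.R2 / n)

base = LCLFilter(L1=1e-3, R1=0.1, C=20e-6, L2=0.5e-3, R2=0.05)
eq = aggregate(base, 10)  # one aggregated unit standing in for 10 inverters
```

The aggregated unit keeps the same structure (still an LCL filter), which is the "structure-preserving" property: controller states aggregate the same way, so the equivalent model has the state count of a single inverter.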

  7. DSM disorders and their criteria: how should they inter-relate?

    PubMed

    Kendler, K S

    2017-09-01

    While the changes in psychiatric diagnosis introduced by the Diagnostic and Statistical Manual, third edition (DSM-III) have had major benefits for the field of psychiatry, the reification of its diagnostic criteria and the widespread adoption of diagnostic literalism have been problematic. I argue that, at root, these developments can be best understood by contrasting two approaches to the relationship between DSM disorders and their criteria. In a constitutive relationship, criteria definitively define the disorder: having a disorder is nothing more than meeting the criteria. In an indexical relationship, the criteria are fallible indices of a disorder understood as a hypothetical, tentative diagnostic construct. I trace the origins of the constitutive model to the philosophical theory of operationalism. I then examine a range of historical and empirical results that favor the indexical over the constitutive position, including (i) evidence that individual criteria for DSM-III were selected from a broader pool of possible symptoms/signs, (ii) the fact that revisions of DSM have implicitly assumed an indexical criteria-disorder relationship, (iii) the observation that the indexical position allows DSM criteria to be wrong and to misdiagnose patients, whereas such a result is incoherent, and implausibly so, under a constitutive model, (iv) the indexical criteria-scale relationship we assume for many personality and symptom measures commonly used in psychiatric practice and research, and (v) empirical studies suggesting similar performance for DSM and non-DSM symptoms for major depression. I then review four reasons for the rise of the constitutive position: (i) the 'official' nature of the DSM criteria, (ii) the strong investment psychiatry has had in the DSM manual and its widespread use and success, (iii) the lack of a clear pathophysiology for our disorders, and (iv) the absence of informative diagnostic signs of minimal clinical importance. I conclude that the constitutive position is premature and reflects a conceptual error: it assumes a definitiveness and a literalism about the nature of our criteria that is far beyond our current knowledge. The indexical position, with its tentativeness and modesty, accurately reflects the current state of our field.

  8. Mate-sampling costs and sexy sons.

    PubMed

    Kokko, H; Booksmythe, I; Jennions, M D

    2015-01-01

    Costly female mating preferences for purely Fisherian male traits (i.e. sexual ornaments that are genetically uncorrelated with inherent viability) are not expected to persist at equilibrium. The indirect benefit of producing 'sexy sons' (Fisher process) disappears: in some models, the male trait becomes fixed; in others, a range of male trait values persist, but a larger trait confers no net fitness advantage because it lowers survival. Insufficient indirect selection to counter the direct cost of producing fewer offspring means that preferences are lost. The only well-cited exception assumes biased mutation on male traits. The above findings generally assume constant direct selection against female preferences (i.e. fixed costs). We show that if mate-sampling costs are instead derived based on an explicit account of how females acquire mates, an initially costly mating preference can coevolve with a male trait so that both persist in the presence or absence of biased mutation. Our models predict that empirically detecting selection at equilibrium will be difficult, even if selection was responsible for the location of the current equilibrium. In general, it appears useful to integrate mate sampling theory with models of genetic consequences of mating preferences: being explicit about the process by which individuals select mates can alter equilibria. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  9. Applying the Theory of Work Adjustment to Latino Immigrant Workers: An Exploratory Study.

    PubMed

    Eggerth, Donald E; Flynn, Michael A

    2012-02-01

    Blustein mapped career decision making onto Maslow's model of motivation and personality and concluded that most models of career development assume opportunities and decision-making latitude that do not exist for many individuals from low income or otherwise disadvantaged backgrounds. Consequently, Blustein argued that these models may be of limited utility for such individuals. Blustein challenged researchers to reevaluate current career development approaches, particularly those assuming a static world of work, from a perspective allowing for changing circumstances and recognizing career choice can be limited by access to opportunities, personal obligations, and social barriers. This article represents an exploratory effort to determine if the theory of work adjustment (TWA) might meaningfully be used to describe the work experiences of Latino immigrant workers, a group living with severe constraints and having very limited employment opportunities. It is argued that there is significant conceptual convergence between Maslow's hierarchy of needs and the work reinforcers of TWA. The results of an exploratory, qualitative study with a sample of 10 Latino immigrants are also presented. These immigrants participated in key informant interviews concerning their work experiences both in the United States and in their home countries. The findings support Blustein's contention that such workers will be most focused on basic survival needs and suggest that TWA reinforcers are descriptive of important aspects of how Latino immigrant workers conceptualize their jobs.

  10. Applying the Theory of Work Adjustment to Latino Immigrant Workers: An Exploratory Study

    PubMed Central

    Eggerth, Donald E.; Flynn, Michael A.

    2015-01-01

    Blustein mapped career decision making onto Maslow’s model of motivation and personality and concluded that most models of career development assume opportunities and decision-making latitude that do not exist for many individuals from low income or otherwise disadvantaged backgrounds. Consequently, Blustein argued that these models may be of limited utility for such individuals. Blustein challenged researchers to reevaluate current career development approaches, particularly those assuming a static world of work, from a perspective allowing for changing circumstances and recognizing career choice can be limited by access to opportunities, personal obligations, and social barriers. This article represents an exploratory effort to determine if the theory of work adjustment (TWA) might meaningfully be used to describe the work experiences of Latino immigrant workers, a group living with severe constraints and having very limited employment opportunities. It is argued that there is significant conceptual convergence between Maslow’s hierarchy of needs and the work reinforcers of TWA. The results of an exploratory, qualitative study with a sample of 10 Latino immigrants are also presented. These immigrants participated in key informant interviews concerning their work experiences both in the United States and in their home countries. The findings support Blustein’s contention that such workers will be most focused on basic survival needs and suggest that TWA reinforcers are descriptive of important aspects of how Latino immigrant workers conceptualize their jobs. PMID:26345693

  11. Superconducting dark energy

    NASA Astrophysics Data System (ADS)

    Liang, Shi-Dong; Harko, Tiberiu

    2015-04-01

    Based on the analogy with superconductor physics, we consider a scalar-vector-tensor gravitational model in which the dark energy action is described by a gauge invariant electromagnetic type functional. By assuming that the ground state of the dark energy is in the form of a condensate with the U(1) symmetry spontaneously broken, the gauge invariant electromagnetic dark energy can be described in terms of the combination of a vector field and a scalar field (the latter corresponding to the Goldstone boson). The gravitational field equations are obtained by also assuming the possibility of a nonminimal coupling between the cosmological mass current and the superconducting dark energy. The cosmological implications of the dark energy model are investigated for a Friedmann-Robertson-Walker homogeneous and isotropic geometry for two particular choices of the electromagnetic type potential, corresponding to a pure electric type field, and to a pure magnetic field, respectively. The time evolutions of the scale factor, matter energy density and deceleration parameter are obtained for both cases, and it is shown that in the presence of the superconducting dark energy the Universe ends its evolution in an exponentially accelerating vacuum de Sitter state. By using the formalism of the irreversible thermodynamic processes for open systems, we interpret the generalized conservation equations in the superconducting dark energy model as describing matter creation. The particle production rates, the creation pressure and the entropy evolution are explicitly obtained.

  12. Meridional overturning circulations driven by surface wind and buoyancy forcing

    NASA Astrophysics Data System (ADS)

    Bell, M. J.

    2016-02-01

    A conceptual picture of the Meridional Overturning Circulation (MOC) is developed using 2- and 3-layer models governed by the planetary geostrophic equations and simple global geometries. The picture has four main elements. First, cold water driven to the surface in the South Atlantic north of Drake passage by Ekman upwelling is transformed into warmer water by heat input at the surface from the atmosphere. Second, the model's boundary conditions constrain the depths of the isopycnal layers to be almost flat along the eastern boundaries of the ocean. This results in, third, warm water reaching high latitudes in the northern hemisphere where it is transformed into cold water by surface heat loss. Finally, it is assumed that western boundary currents are able to close the circulations. The results from a set of numerical experiments for the upwelling limb in the Southern Hemisphere are summarised in a simple conceptual schematic. Analytical solutions have been found for the down-welling limb assuming the wind stress in the Northern Hemisphere is negligible. Expressions for the depth of the isopycnal interface on the eastern boundary and the strength of the MOC obtained by combining these solutions in a 2-layer model are generally consistent with and complementary to those obtained by Gnanadesikan (1999). The MOC in two basins, one of which has a strong halocline, is also discussed.

  13. Alternative model of space-charge-limited thermionic current flow through a plasma

    NASA Astrophysics Data System (ADS)

    Campanell, M. D.

    2018-04-01

    It is widely assumed that thermionic current flow through a plasma is limited by a "space-charge-limited" (SCL) cathode sheath that consumes the hot cathode's negative bias and accelerates upstream ions into the cathode. Here, we formulate a fundamentally different current-limited mode. In the "inverse" mode, the potentials of both electrodes are above the plasma potential, so that the plasma ions are confined. The bias is consumed by the anode sheath. There is no potential gradient in the neutral plasma region from resistivity or presheath. The inverse cathode sheath pulls some thermoelectrons back to the cathode, thereby limiting the circuit current. Thermoelectrons entering the zero-field plasma region that undergo collisions may also be sent back to the cathode, further attenuating the circuit current. In planar geometry, the plasma density is shown to vary linearly across the electrode gap. A continuum kinetic planar plasma diode simulation model is set up to compare the properties of current modes with classical, conventional SCL, and inverse cathode sheaths. SCL modes can exist only if charge-exchange collisions are turned off in the potential well of the virtual cathode to prevent ion trapping. With the collisions, the current-limited equilibrium must be inverse. Inverse operating modes should therefore be present or possible in many plasma devices that rely on hot cathodes. Evidence from past experiments is discussed. The inverse mode may offer opportunities to minimize sputtering and power consumption that were not previously explored due to the common assumption of SCL sheaths.

  14. A Test of Bayesian Observer Models of Processing in the Eriksen Flanker Task

    ERIC Educational Resources Information Center

    White, Corey N.; Brown, Scott; Ratcliff, Roger

    2012-01-01

    Two Bayesian observer models were recently proposed to account for data from the Eriksen flanker task, in which flanking items interfere with processing of a central target. One model assumes that interference stems from a perceptual bias to process nearby items as if they are compatible, and the other assumes that the interference is due to…

  15. STEADY-STATE MODEL OF SOLAR WIND ELECTRONS REVISITED

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Peter H.; Kim, Sunjung; Choe, G. S., E-mail: yoonp@umd.edu

    2015-10-20

    In a recent paper, Kim et al. put forth a steady-state model for the solar wind electrons. The model assumed local equilibrium between the halo electrons, characterized by an intermediate energy range, and the whistler-range fluctuations. The basic wave–particle interaction is assumed to be the cyclotron resonance. Similarly, it was assumed that a dynamical steady state is established between the highly energetic superhalo electrons and high-frequency Langmuir fluctuations. Comparisons with the measured solar wind electron velocity distribution function (VDF) during quiet times were also made, and reasonable agreements were obtained. In such a model, however, only the steady-state solution for the Fokker–Planck type of electron particle kinetic equation was considered. The present paper complements the previous analysis by considering both the steady-state particle and wave kinetic equations. It is shown that the model halo and superhalo electron VDFs, as well as the assumed wave intensity spectra for the whistler and Langmuir fluctuations, approximately satisfy the quasi-linear wave kinetic equations, thus further validating the local equilibrium model constructed in the paper by Kim et al.

  16. Kinetic Simulations of Current-Sheet Formation and Reconnection at a Magnetic X Line

    NASA Technical Reports Server (NTRS)

    Black, C.; Antiochos, S. K.; Hesse, M.; Karpen, J. T.; DeVore, C. R.; Kuznetsova, M. M.; Zenitani, S.

    2011-01-01

    The integration of kinetic effects into macroscopic numerical models is currently of great interest to the plasma physics community, particularly in the context of magnetic reconnection. We are examining the formation and reconnection of current sheets in a simple, two-dimensional X-line configuration using high resolution particle-in-cell (PIC) simulations. The initial potential magnetic field is perturbed by thermal pressure introduced into the particle distribution far from the X line. The relaxation of this added stress leads to the development of a current sheet, which reconnects for imposed stress of sufficient strength. We compare the evolution and final state of our PIC simulations with magnetohydrodynamic simulations assuming both uniform and localized resistivities, and with force-free magnetic-field equilibria in which the amount of reconnection across the X line can be constrained to be zero (ideal evolution) or optimal (minimum final magnetic energy). We will discuss implications of our results for reconnection onset and cessation at kinetic scales in dynamically formed current sheets, such as those occurring in the terrestrial magnetotail and solar corona.

  17. Observations of ionospheric convection vortices - Signatures of momentum transfer

    NASA Technical Reports Server (NTRS)

    Mchenry, M. A.; Clauer, C. R.; Friis-Christensen, E.; Kelly, J. D.

    1988-01-01

    Several classes of traveling vortices in the dayside ionospheric flow have been detected and tracked using the Greenland magnetometer chain. One class observed during quiet times consists of a continuous series of vortices moving generally antisunward for several hours at a time. Assuming each vortex to be the convection pattern produced by a small field aligned current moving across the ionosphere, the amount of field aligned current was found by fitting a modeled ground magnetic signature to measurements from the chain of magnetometers. The calculated field aligned current is seen to be steady for each vortex, and neighboring vortices have currents of opposite sign. Low altitude DMSP observations indicate the vortices are on field lines which map to the inner edge of the low latitude boundary layer. Because the vortices are conjugate to the boundary layer, repeat in a regular fashion and travel antisunward, it is argued that this class of vortices is caused by surface waves at the magnetopause. No strong correlations between field aligned current strength and solar wind density, velocity, or Bz are found.

  18. Validation of a Numerical Program for Analyzing Kinetic Energy Potential in the Bangka Strait, North Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Rompas, P. T. D.; Taunaumang, H.; Sangari, F. J.

    2018-02-01

    The paper presents validation of a numerical program that computes the distribution of marine current velocities in the Bangka strait and its kinetic energy potential, expressed as the distribution of available power per unit area. The numerical program uses a RANS model in which the vertical pressure distribution is assumed to be hydrostatic. The 2D and 3D numerical results were compared with measurements taken at low and high tide current conditions, and no significant differences were found. The kinetic energy potential, as available power per unit area in the Bangka strait, is 0.97-2.2 kW/m2 at low tide currents and 1.02-2.1 kW/m2 at high tide currents. The results show that installing marine current turbines for a power plant in the Bangka strait, North Sulawesi, Indonesia, is feasible.
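
The available power per unit area quoted above is consistent with the standard kinetic energy flux of a current. A minimal sketch (the formula is the standard one for kinetic power density; the velocities and seawater density below are assumed illustrative values, not the paper's data):

```python
# Hypothetical sketch: kinetic power available per unit swept area of a
# marine current, P/A = 0.5 * rho * v^3, with seawater density
# rho ~ 1025 kg/m^3. Velocities are assumed for illustration.
def power_per_area(v_mps: float, rho: float = 1025.0) -> float:
    """Kinetic power density in W/m^2 for current speed v in m/s."""
    return 0.5 * rho * v_mps ** 3

# Speeds of roughly 1.2-1.6 m/s give on the order of 1-2 kW/m^2,
# the same order of magnitude as the values reported for the strait.
p_low = power_per_area(1.24)
p_high = power_per_area(1.6)
```

Because power scales with the cube of velocity, modest differences between low and high tide current speeds translate into the spread of power densities reported.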

  19. In Pursuit of Improving Airburst and Ground Damage Predictions: Recent Advances in Multi-Body Aerodynamic Testing and Computational Tools Validation

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian

    2017-01-01

    An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds and the spread of fragments is needed for a true predictive capability. Current models use heuristic arguments and assumptions such as pancaking or point source explosive energy release at pre-determined altitude or an assumed fragmentation spread rate to predict airburst damage. A multi-year collaboration between German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.

  20. New learning and unlearning: strangers or accomplices in threat memory attenuation?

    PubMed Central

    Clem, Roger L.; Schiller, Daniela

    2016-01-01

    To achieve greatest efficacy, therapies for attenuating fear and anxiety should preclude the re-emergence of emotional responses. Of relevance to this aim, preclinical models of threat memory reduction are considered to engage one of two discrete neural processes: either establishment of a new behavioral response that competes with, and thereby temporarily interferes with expression of, an intact threat memory (new learning), or one which modifies and thereby disrupts an intact threat memory (unlearning). We contend that a strict dichotomy of new learning and unlearning does not provide a compelling explanation for current data. Instead, we suggest the evidence warrants consideration of alternative models that assume cooperation rather than competition between formation of new cellular traces and the modification of preexisting ones. PMID:27079843

  1. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.
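
The "empirically derived nonlinear thermal responses" mentioned above are typically unimodal trait curves rather than constants or lines. A minimal sketch (the Brière functional form is a standard choice in this literature, but the parameter values here are assumptions for illustration, not the paper's fitted traits):

```python
import math

# Hypothetical sketch of a unimodal (Briere-1) thermal response:
# zero outside (T0, Tm), rising then falling inside. Parameters are assumed.
def briere(T: float, c: float, T0: float, Tm: float) -> float:
    if T <= T0 or T >= Tm:
        return 0.0
    return c * T * (T - T0) * math.sqrt(Tm - T)

# Combining several unimodal trait responses multiplicatively (a crude stand-in
# for a full transmission model) yields a joint response with its own optimum,
# found here by grid search, and a sharp collapse near the upper thermal limits.
temps = [t * 0.1 for t in range(100, 400)]  # 10.0 ... 39.9 degrees C
r = [briere(T, 1.0, 12.0, 35.0) * briere(T, 1.0, 10.0, 33.0) for T in temps]
T_opt = temps[r.index(max(r))]
```

The qualitative point mirrors the abstract: with nonlinear trait responses, the transmission optimum and upper limit emerge from the composition of the curves rather than being imposed, and transmission drops steeply above the optimum.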

  2. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.

  3. Mad cows and computer models: the U.S. response to BSE.

    PubMed

    Ackerman, Frank; Johnecheck, Wendy A

    2008-01-01

    The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.

  4. Sample size and power calculations for detecting changes in malaria transmission using antibody seroconversion rate.

    PubMed

    Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris

    2015-12-30

    Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR in relation to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required to detect more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
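
    The reverse catalytic model behind the calculator can be written down compactly. The sketch below uses invented parameters (SCR dropping from 0.1 to 0.02 per year at a change point 10 years before sampling, seroreversion rate 0.01), not the paper's values, and shows how survey data would be simulated.

```python
import math, random

def seroprev(age, scr1, scr2, srr, tau):
    """Expected seroprevalence at a given age under a reverse catalytic
    model with seroconversion rate scr and seroreversion rate srr, where
    the SCR dropped from scr1 to scr2 tau years before sampling."""
    def step(p0, scr, dt):
        # solution of dp/dt = scr*(1 - p) - srr*p, starting at p0, over dt
        eq = scr / (scr + srr)
        return eq + (p0 - eq) * math.exp(-(scr + srr) * dt)
    if age <= tau:                   # lived entirely under the new rate
        return step(0.0, scr2, age)
    p = step(0.0, scr1, age - tau)   # old rate first, then new rate
    return step(p, scr2, tau)

# Simulate one hypothetical cross-sectional survey of 500 individuals.
random.seed(1)
ages = [random.uniform(1, 60) for _ in range(500)]
data = [(a, random.random() < seroprev(a, 0.1, 0.02, 0.01, 10)) for a in ages]
```

    Repeating such simulations under stable vs. reduced SCR, and counting how often the reduction is detected, gives the power curves the calculator approximates.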

  5. Cost-effectiveness of rotavirus vaccination in Peru.

    PubMed

    Clark, Andrew D; Walker, Damian G; Mosqueira, N Rocio; Penny, Mary E; Lanata, Claudio F; Fox-Rushby, Julia; Sanderson, Colin F B

    2009-11-01

    There are plans to introduce the oral rotavirus vaccine Rotarix (GlaxoSmithKline), 1 of 2 recently developed vaccines against rotavirus, in Peru. We modeled the cost-effectiveness of adding a rotavirus vaccine to the Peruvian immunization program under 3 scenarios for the timing of vaccination: (1) strictly according to schedule, at 2 and 4 months of age (on time); (2) distributed around the target ages in the same way as the actual timings in the program (flexible); and (3) flexible but assuming vaccination is not initiated for infants >12 weeks of age (restricted). We assumed an introductory price of US $7.50 per dose, and varied the annual rate of price decrease in sensitivity analyses. The discounted cost per disability-adjusted life-year averted for restricted, flexible, and on-time schedules was $621, $615, and $581, respectively. For each of the 3 scenarios, the percentage reduction in deaths due to rotavirus infection was 53%, 66%, and 69%, respectively. The cost per disability-adjusted life-year averted for alternative "what-if" scenarios ranged from $229 (assuming a 1-dose schedule, administered on time) to $1491 (assuming a 2-dose schedule, with half the baseline vaccine efficacy rates and a restricted timing policy). On the basis of current World Health Organization guidelines, rotavirus vaccination represents a highly cost-effective intervention in Peru. Withholding the vaccine from children who present for their first dose after 12 weeks of age would reduce the number of deaths averted by approximately 20%. A single dose may be more cost-effective than 2 doses, but more evidence on the protection conferred by a single dose is required.
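
    The headline figures are ratios of discounted net costs to discounted DALYs averted. A toy calculation showing only the mechanics; the cost and DALY streams below are invented placeholders, not the study's inputs.

```python
# Back-of-envelope cost-effectiveness ratio: discounted net cost per
# discounted DALY averted. All numbers are illustrative placeholders.

def discounted(stream, rate=0.03):
    """Present value of an annual stream at the given discount rate."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

vaccine_cost   = [1_000_000] * 10    # annual programme cost (USD)
treatment_save = [200_000] * 10      # annual treatment costs averted
dalys_averted  = [1_500] * 10        # annual DALYs averted

net_cost = discounted(vaccine_cost) - discounted(treatment_save)
cost_per_daly = net_cost / discounted(dalys_averted)
```

    With uniform streams the discount factors cancel, but in a real evaluation costs are front-loaded and health gains accrue later, which is why discounting matters to the reported ratios.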

  6. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    DOE PAGES

    Pizzocri, D.; Cappia, F.; Luzzi, L.; ...

    2017-01-31

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
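
    A minimal sketch of the kind of relation described, with the average grain size decaying exponentially in local effective burnup toward a fully restructured asymptote; d0, d_hbs and bu0 below are invented, not the fitted parameters.

```python
import math

# Illustrative exponential grain-size reduction with local effective
# burnup, decaying from the as-fabricated size d0 toward a fully
# restructured HBS size d_hbs. Parameter values are hypothetical.

def grain_size(bu, d0=10.0, d_hbs=0.25, bu0=20.0):
    """Average grain diameter (um) vs local effective burnup (GWd/tHM)."""
    return d_hbs + (d0 - d_hbs) * math.exp(-bu / bu0)
```

    In a model of this shape, the fraction of grain-boundary area created by restructuring can be tied directly to the fraction of intra-granular gas depleted, which is the coupling the abstract describes.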

  7. Estimating statistical power for open-enrollment group treatment trials.

    PubMed

    Morgan-Lopez, Antonio A; Saavedra, Lissette M; Hien, Denise A; Fals-Stewart, William

    2011-01-01

    Modeling turnover in group membership has been identified as a key barrier contributing to a disconnect between the manner in which behavioral treatment is conducted (open-enrollment groups) and the designs of substance abuse treatment trials (closed-enrollment groups, individual therapy). Latent class pattern mixture models (LCPMMs) are emerging tools for modeling data from open-enrollment groups with membership turnover in recently proposed treatment trials. The current article illustrates an approach to conducting power analyses for open-enrollment designs based on the Monte Carlo simulation of LCPMM models using parameters derived from published data from a randomized controlled trial comparing Seeking Safety to a Community Care condition for women presenting with comorbid posttraumatic stress disorder and substance use disorders. The example addresses discrepancies between the analysis framework assumed in power analyses of many recently proposed open-enrollment trials and the proposed use of LCPMM for data analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
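
    The simulation-based logic transfers to any design: generate many data sets under an assumed effect size, analyze each as planned, and report the rejection rate. Below is a generic two-arm sketch with a simple z-test standing in for the LCPMM analysis, which in practice requires specialized mixture-model software; all settings are illustrative.

```python
import math, random, statistics

# Monte Carlo power analysis in miniature: simulate -> test -> count
# significant results. A stand-in for the LCPMM-based simulations
# described above, not the authors' actual analysis model.

def simulate_power(n_per_arm, effect, sd=1.0, n_sims=2000, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        b = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        se = math.sqrt(statistics.variance(a) / n_per_arm
                       + statistics.variance(b) / n_per_arm)
        z = (statistics.mean(b) - statistics.mean(a)) / se
        hits += abs(z) > 1.96        # two-sided test at alpha = 0.05
    return hits / n_sims
```

    The key point of the article is that the simulation model must match the intended analysis model: estimating power with a simpler framework than the LCPMM actually used for analysis can misstate the required sample size.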

  8. Modeling the system dynamics for nutrient removal in an innovative septic tank media filter.

    PubMed

    Xuan, Zhemin; Chang, Ni-Bin; Wanielista, Martin

    2012-05-01

    A next generation septic tank media filter to replace or enhance the current on-site wastewater treatment drainfields was proposed in this study. Unit operations with known treatment efficiencies, flow-pattern identification, and system dynamics modeling were cohesively combined to prove the concept of a newly developed media filter. A multicompartmental model addressing system dynamics and feedbacks, based on our assumed microbiological processes accounting for aerobic, anoxic, and anaerobic conditions in the media filter, was constructed and calibrated with the aid of in situ measurements and an understanding of the flow patterns. The calibrated system dynamics model was then applied in a sensitivity analysis under changing inflow conditions, based on the rates of nitrification and denitrification characterized through field-scale testing. This advancement may contribute to the design of such drainfield media filters in household septic tank systems in the future.
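
    As a sketch of the system-dynamics style of model described (not the authors' multicompartmental formulation), here is a two-pool nitrogen chain with first-order nitrification and denitrification, integrated by forward Euler; the rate constants are hypothetical.

```python
# Minimal compartment model: NH4 -> NO3 (nitrification) -> N2
# (denitrification), both first order, forward-Euler integration.
# Rates and initial concentrations are invented for illustration.

def simulate(nh4=50.0, no3=5.0, k_nit=0.3, k_denit=0.15, dt=0.01, t_end=30.0):
    t = 0.0
    while t < t_end:
        nitrif = k_nit * nh4       # NH4 oxidized per unit time
        denit = k_denit * no3      # NO3 reduced per unit time
        nh4 += dt * (-nitrif)
        no3 += dt * (nitrif - denit)
        t += dt
    return nh4, no3

nh4_end, no3_end = simulate()
```

    A calibrated version of such a model, with additional compartments and feedbacks, can then be perturbed with changing inflows to perform the kind of sensitivity analysis the abstract describes.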

  9. Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.; et al.

    2001-01-01

    It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on interannual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time-scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.

  10. Assessment of the Risk of Ebola Importation to Australia

    PubMed Central

    Cope, Robert C.; Cassey, Phillip; Hugo, Graeme J.; Ross, Joshua V.

    2014-01-01

    Objectives: To assess the risk of Ebola importation to Australia during the first six months of 2015, based upon the current outbreak in West Africa. Methodology: We assessed the risk under two distinct scenarios: (i) assuming that significant numbers of cases of Ebola remain confined to Guinea, Liberia and Sierra Leone, and using historic passenger arrival data into Australia; and (ii) assuming potential secondary spread based upon international flight data. A model appropriate to each scenario is developed, and parameterised using passenger arrival card or international flight data, and World Health Organisation case data from West Africa. These models were constructed based on WHO Ebola outbreak data as at 17 October 2014 and 3 December 2014. An assessment of the risk under each scenario is reported. On 27 October 2014 the Australian Government announced a policy change whereby visas from affected countries would be refused or cancelled, and the predicted effect of this change is reported. Results: The current probability of at least one case entering Australia by 1 July 2015, having travelled directly from West Africa with historic passenger arrival rates into Australia, is 0.34. Under the new Australian Government policy of restricting visas from affected countries (as of 27 October 2014), the probability of at least one case entering Australia by 1 July 2015 is reduced to 0.16. The probability of at least one case entering Australia by 1 July 2015 via an outbreak from a secondary source country is approximately 0.12. Conclusions: Our models suggest that if the transmission of Ebola remains unchanged, it is possible that a case will enter Australia within the first six months of 2015, either directly from West Africa (even when current visa restrictions are considered), or via secondary outbreaks elsewhere. Government and medical authorities should be prepared to respond to this eventuality.
Control measures within West Africa over recent months have contributed to a reduction in projected risk of a case entering Australia. A significant further reduction of the rate at which Ebola is proliferating in West Africa, and control of the disease if and when it proliferates elsewhere, will continue to result in substantially lower risk of the disease entering Australia. PMID:25685627
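
    If importations are treated as a Poisson process, headline probabilities of this kind follow from the expected number of arrivals over the window, ν, via P(at least one) = 1 − e^(−ν). The rates below are back-solved to reproduce the quoted figures, not taken from the paper's passenger-data model.

```python
import math

# Poisson importation risk: with expected arrivals nu over the window,
# P(at least one case) = 1 - exp(-nu). The nu values below are chosen
# only to reproduce the headline probabilities quoted above.

def p_at_least_one(expected_arrivals):
    return 1.0 - math.exp(-expected_arrivals)

p_baseline = p_at_least_one(0.416)   # ~0.34, historic arrival rates
p_visa_ban = p_at_least_one(0.174)   # ~0.16, restricted visas
```

    The saturating form explains why halving the arrival rate roughly halves the risk only while ν is small; at higher rates the probability compresses toward 1.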

  11. A reaction limited in vivo dissolution model for the study of drug absorption: Towards a new paradigm for the biopharmaceutic classification of drugs.

    PubMed

    Macheras, Panos; Iliadis, Athanassios; Melagraki, Georgia

    2018-05-30

    The aim of this work is to develop a gastrointestinal (GI) drug absorption model based on a reaction limited model of dissolution and consider its impact on the biopharmaceutic classification of drugs. Estimates for the fraction of dose absorbed as a function of dose, solubility, reaction/dissolution rate constant and the stoichiometry of drug-GI fluids reaction/dissolution were derived by numerical solution of the model equations. The undissolved drug dose and the reaction/dissolution rate constant drive the dissolution rate and determine the extent of absorption when high, constant drug permeability throughout the gastrointestinal tract is assumed. Dose is an important element of drug-GI fluids reaction/dissolution, while solubility exclusively acts as an upper limit for drug concentrations in the lumen. The 3D plots of fraction of dose absorbed as a function of dose and reaction/dissolution rate constant for highly soluble and low-solubility drugs for different "stoichiometries" (0.7, 1.0, 2.0) of the drug reaction/dissolution with the GI fluids revealed that a high extent of absorption was found assuming a high drug-reaction/dissolution rate constant and high drug solubility. The model equations were used to simulate in vivo supersaturation and precipitation phenomena. The model developed provides the theoretical basis for the interpretation of the extent of a drug's absorption on the basis of the parameters associated with the drug-GI fluids reaction/dissolution. A new paradigm emerges for the biopharmaceutic classification of drugs, namely, a model-independent biopharmaceutic classification scheme of four drug categories based on either the fulfillment or not of the current dissolution criteria and the high or low % drug metabolism. Copyright © 2018. Published by Elsevier B.V.
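
    A generic reaction-limited dissolution scheme with first-order absorption can be integrated in a few lines. This is a sketch in the spirit of the model described, not the authors' equations; all parameters (k, stoichiometry a, ka, Cs, V, dose) are invented.

```python
# Forward-Euler integration of a generic reaction-limited dissolution
# model with first-order absorption from the lumen. Illustrative only.

def fraction_absorbed(dose=100.0, k=0.5, a=1.0, ka=0.1, Cs=1.0, V=250.0,
                      dt=0.01, t_end=24.0):
    m = dose        # undissolved amount
    c = 0.0         # dissolved concentration in the lumen
    absorbed = 0.0
    t = 0.0
    while t < t_end:
        # reaction-limited dissolution, shut off as c approaches Cs
        diss = k * (m ** a) * max(0.0, 1.0 - c / Cs) if m > 0 else 0.0
        absorb = ka * c * V                    # first-order absorption
        m = max(0.0, m - dt * diss)
        c = max(0.0, c + dt * (diss - absorb) / V)
        absorbed += dt * absorb
        t += dt
    return absorbed / dose
```

    Note how, as in the abstract, solubility Cs enters only as an upper bound on lumen concentration, while the rate constant and the undissolved amount drive the dissolution rate itself.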

  12. Proposal for Professional Development Schools.

    ERIC Educational Resources Information Center

    Rajuan, Maureen

    This paper discusses the reluctance of Israeli inservice teachers to assume the role of mentor to student teachers in their classrooms, proposing an alternative Professional Development School (PDS) model as a starting point for rethinking ways to recruit teachers into this role. In this model, two student teachers assume full responsibility for 1…

  13. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
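
    Placing a gamma prior on the subject parameter of a Poisson score model yields a negative binomial marginal, which is the closed form behind extensions of this kind. A sketch with illustrative shape/rate and test-difficulty values (not from the article):

```python
import math

# Multiplicative Poisson model with a gamma prior on the subject
# parameter: X | lam ~ Poisson(theta * lam), lam ~ Gamma(alpha, rate beta)
# gives a negative binomial marginal. Values below are illustrative.

def nb_pmf(x, alpha, beta, theta):
    p = beta / (beta + theta)                 # "success" probability
    log_coef = (math.lgamma(x + alpha) - math.lgamma(alpha)
                - math.lgamma(x + 1))         # log of the NB coefficient
    return math.exp(log_coef + alpha * math.log(p) + x * math.log1p(-p))

total = sum(nb_pmf(x, alpha=3.0, beta=1.5, theta=2.0) for x in range(200))
```

    The marginal mean is alpha * theta / beta (here 4.0), so between-subject variation inflates the variance above the Poisson value, which is what the extra prior layer buys the model.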

  14. Neighboring and Urbanism: Commonality versus Friendship.

    ERIC Educational Resources Information Center

    Silverman, Carol J.

    1986-01-01

    Examines a dimension of neighboring that need not assume friendship as the role model. When the model assumes only a sense of connectedness as defining neighboring, then the residential correlation, shown in many studies between urbanism and neighboring, disappears. Theories of neighboring, study variables, methods, and analysis are discussed.…

  15. Hyperbolic Rendezvous at Mars: Risk Assessments and Mitigation Strategies

    NASA Technical Reports Server (NTRS)

    Jedrey, Ricky; Landau, Damon; Whitley, Ryan

    2015-01-01

    Given the current interest in the use of flyby trajectories for human Mars exploration, a key requirement is the capability to execute hyperbolic rendezvous. Hyperbolic rendezvous is used to transport crew from a Mars centered orbit, to a transiting Earth bound habitat that does a flyby. Representative cases are taken from future potential missions of this type, and a thorough sensitivity analysis of the hyperbolic rendezvous phase is performed. This includes early engine cutoff, missed burn times, and burn misalignment. A finite burn engine model is applied that assumes the hyperbolic rendezvous phase is done with at least two burns.

  16. Gas bubble formation in the cytoplasm of a fermenting yeast.

    PubMed

    Swart, Chantel W; Dithebe, Khumisho; Pohl, Carolina H; Swart, Hendrik C; Coetsee, Elizabeth; van Wyk, Pieter W J; Swarts, Jannie C; Lodolo, Elizabeth J; Kock, Johan L F

    2012-11-01

    Current paradigms assume that gas bubbles cannot be formed within yeasts although these workhorses of the baking and brewing industries vigorously produce and release CO2 gas. We show that yeasts produce gas bubbles that fill a significant part of the cell. The missing link between intracellular CO2 production by glycolysis and eventual CO2 release from cells has therefore been resolved. Yeasts may serve as a model to study CO2 behavior under pressurized conditions that may impact on fermentation biotechnology. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  17. Charge transport properties of poly(dA)-poly(dT) DNA in variation of backbone disorder and amplitude of base-pair twisting motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahmi, Kinanti Aldilla, E-mail: kinanti.aldilla@ui.ac.id; Yudiarsah, Efta

    By using a tight-binding Hamiltonian model, the charge transport properties of poly(dA)-poly(dT) DNA are studied under variation of the backbone disorder and of the amplitude of the base-pair twisting motion. The DNA chain used is a 32-base-pair-long poly(dA)-poly(dT) molecule, connected to an electrode at each end. The influence of the environment on charge transport in DNA is modeled as variation of the backbone disorder. The twisting motion amplitude is taken into account by assuming that the twisting angle follows a Gaussian distribution with zero mean and a standard deviation proportional to the square root of temperature and inversely proportional to the twisting motion frequency. The base-pair twisting motion influences both the onsite energy of the bases and the electron hopping constant between bases. The charge transport properties are studied by calculating the current, using the Landauer-Buttiker formula, from transmission probabilities calculated by the transfer matrix method. The results show that as the backbone disorder increases, the maximum current decreases. By decreasing the twisting motion frequency, the current increases rapidly at low voltage, but increases more slowly at higher voltage. The threshold voltage can increase or decrease with increasing backbone disorder and increasing twisting frequency.
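
    A stripped-down analogue of this calculation is a single-orbital 1D tight-binding chain between ideal leads, with transmission obtained by the transfer-matrix method. The sketch below omits the twisting dynamics and backbone sites of the actual DNA model, and its parameters are illustrative.

```python
import cmath, random

def transmission(onsite, E, t=1.0):
    """Transmission probability at energy E through a 1D tight-binding
    chain with the given on-site energies, embedded between ideal leads
    (on-site energy 0, hopping t)."""
    k = cmath.acos(-E / (2.0 * t))          # lead dispersion: E = -2 t cos k
    # Accumulate the transfer matrix: (psi_{n+1}, psi_n) = M_n (psi_n, psi_{n-1})
    P = [[1.0, 0.0], [0.0, 1.0]]
    for eps in onsite:
        M = [[(eps - E) / t, -1.0], [1.0, 0.0]]
        P = [[M[0][0]*P[0][0] + M[0][1]*P[1][0], M[0][0]*P[0][1] + M[0][1]*P[1][1]],
             [M[1][0]*P[0][0] + M[1][1]*P[1][0], M[1][0]*P[0][1] + M[1][1]*P[1][1]]]
    n = len(onsite)
    ep, em = cmath.exp(1j * k), cmath.exp(-1j * k)
    a = [P[0][0]*ep + P[0][1], P[1][0]*ep + P[1][1]]   # P @ (e^{ik}, 1): incident
    b = [P[0][0]*em + P[0][1], P[1][0]*em + P[1][1]]   # P @ (e^{-ik}, 1): reflected
    c = [cmath.exp(1j*k*(n + 1)), cmath.exp(1j*k*n)]   # transmitted plane wave
    tau = (a[0]*b[1] - a[1]*b[0]) / (c[0]*b[1] - b[0]*c[1])  # a + r b = tau c
    return abs(tau) ** 2

clean = transmission([0.0] * 32, E=0.5)    # ordered 32-site chain: ballistic
random.seed(0)
noisy = transmission([random.uniform(-1.0, 1.0) for _ in range(32)], E=0.5)
```

    The ordered chain transmits perfectly within the band, while random on-site energies (a crude stand-in for backbone disorder) suppress transmission, the same qualitative trend as the decreasing maximum current reported above.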

  18. Algebraic motion of vertically displacing plasmas

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Amitava; Pfefferle, David; Hirvijoki, Eero

    2017-10-01

    The vertical displacement of tokamak plasmas is modelled during the non-linear phase by a free-moving current-carrying rod coupled to a set of fixed conducting wires and a cylindrical conducting shell. The models capture the leading term in a Taylor expansion of the Green's function for the interaction between the plasma column and the vacuum vessel. The plasma is assumed not to vary during the vertical displacement event (VDE), such that it behaves as a rigid body. In the limit of perfectly conducting structures, the plasma is prevented from coming into contact with the wall by steep effective potential barriers set up by the eddy currents, and will hence oscillate at Alfvénic frequencies about a given force-free position. In addition to damping these oscillations, resistivity allows the column to drift towards the vessel on slow flux-penetration timescales. The initial exponential motion of the plasma, i.e. the resistive vertical instability, is succeeded by a non-linear sinking behaviour, which is shown analytically to be algebraic and decelerative. The acceleration of the plasma column often observed in experiments is thus conjectured to originate from an early sharing of toroidal current between the core, the halo plasma and the wall, or from the thermal quench dynamics precipitating loss of plasma current.

  19. Multiscale Modeling of Carbon Fiber Reinforced Polymer (CFRP) for Integrated Computational Materials Engineering Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Jiaying; Liang, Biao; Zhang, Weizhao

    In this work, a multiscale modeling framework for CFRP is introduced to study the hierarchical structure of CFRP. Four distinct scales are defined: nanoscale, microscale, mesoscale, and macroscale. Information at lower scales can be passed to higher scales, which is beneficial for studying the effect of constituents on the macroscale part's mechanical properties. This bottom-up modeling approach enables better understanding of CFRP from the finest details. The current study focuses on the microscale and mesoscale. Representative volume elements (RVEs) are used at the microscale and mesoscale to model the material's properties. At the microscale, a unidirectional CFRP (UD) RVE is used to study the properties of the UD ply. The UD RVE can be modeled with different volume fractions to account for non-uniform fiber distribution in a CFRP part. Such consideration is important in modeling uncertainties at the microscale level. Currently, we identified the volume fraction as the only uncertainty parameter in the UD RVE. To measure effective material properties of the UD RVE, periodic boundary conditions (PBC) are applied to ensure convergence of the obtained properties. The properties of the UD ply are directly used in mesoscale woven RVE modeling, where each yarn is assumed to have the same properties as the UD ply. Within the woven RVE, there are many potential uncertainty parameters to consider for a physical model of CFRP. Currently, we consider fiber misalignment within yarns and the angle between warp and weft yarns. PBC are applied to the woven RVE to calculate its effective material properties. The effects of these uncertainties are investigated quantitatively by a Gaussian process. Preliminary results of the UD and woven studies are analyzed for efficacy of the RVE modeling. This work is considered the foundation for future multiscale modeling framework development for the ICME project.
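
    At the microscale, the simplest volume-fraction-based estimates are the Voigt and Reuss rules of mixtures. The sketch below propagates uncertainty in fiber volume fraction through them; the moduli are typical textbook numbers, and there is no claim that the study used these closed forms in place of its RVE computations.

```python
import random

# Micromechanics sketch: longitudinal (Voigt) and transverse (Reuss)
# rule-of-mixtures estimates for a UD ply, with fiber volume fraction Vf
# as the uncertain input. Moduli in GPa; values are illustrative.

def ud_moduli(Vf, Ef=230.0, Em=3.5):
    E1 = Vf * Ef + (1 - Vf) * Em            # longitudinal (rule of mixtures)
    E2 = 1.0 / (Vf / Ef + (1 - Vf) / Em)    # transverse (inverse rule)
    return E1, E2

# Propagate a hypothetical Vf distribution through the microscale model.
random.seed(0)
samples = [ud_moduli(random.gauss(0.55, 0.03)) for _ in range(1000)]
```

    The resulting spread in ply moduli is the kind of microscale uncertainty that, in the framework above, would be passed up to the woven RVE and surrogate-modeled with a Gaussian process.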

  20. An approach to consider behavioral plasticity as a source of uncertainty when forecasting species' response to climate change

    PubMed Central

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2015-01-01

    The rapid ecological shifts that are occurring due to climate change present major challenges for managers and policymakers and, therefore, are one of the main concerns for environmental modelers and evolutionary biologists. Species distribution models (SDM) are appropriate tools for assessing the relationship between species distribution and environmental conditions, so being customarily used to forecast the biogeographical response of species to climate change. A serious limitation of species distribution models when forecasting the effects of climate change is that they normally assume that species behavior and climatic tolerances will remain constant through time. In this study, we propose a new methodology, based on fuzzy logic, useful for incorporating the potential capacity of species to adapt to new conditions into species distribution models. Our results demonstrate that it is possible to include different behavioral responses of species when predicting the effects of climate change on species distribution. Favorability models offered in this study show two extremes: one considering that the species will not modify its present behavior, and another assuming that the species will take full advantage of the possibilities offered by an increase in environmental favorability. This methodology may mean a more realistic approach to the assessment of the consequences of global change on species' distribution and conservation. Overlooking the potential of species' phenotypical plasticity may under- or overestimate the predicted response of species to changes in environmental drivers and its effects on species distribution. Using this approach, we could reinforce the science behind conservation planning in the current situation of rapid climate change. PMID:26120426

  1. An approach to consider behavioral plasticity as a source of uncertainty when forecasting species' response to climate change.

    PubMed

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2015-06-01

    The rapid ecological shifts that are occurring due to climate change present major challenges for managers and policymakers and, therefore, are one of the main concerns for environmental modelers and evolutionary biologists. Species distribution models (SDM) are appropriate tools for assessing the relationship between species distribution and environmental conditions, so being customarily used to forecast the biogeographical response of species to climate change. A serious limitation of species distribution models when forecasting the effects of climate change is that they normally assume that species behavior and climatic tolerances will remain constant through time. In this study, we propose a new methodology, based on fuzzy logic, useful for incorporating the potential capacity of species to adapt to new conditions into species distribution models. Our results demonstrate that it is possible to include different behavioral responses of species when predicting the effects of climate change on species distribution. Favorability models offered in this study show two extremes: one considering that the species will not modify its present behavior, and another assuming that the species will take full advantage of the possibilities offered by an increase in environmental favorability. This methodology may mean a more realistic approach to the assessment of the consequences of global change on species' distribution and conservation. Overlooking the potential of species' phenotypical plasticity may under- or overestimate the predicted response of species to changes in environmental drivers and its effects on species distribution. Using this approach, we could reinforce the science behind conservation planning in the current situation of rapid climate change.
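
    One simple reading of the two extremes described is to bracket each forecast between a no-plasticity and a full-plasticity value using fuzzy conjunction (min) and disjunction (max) of current and projected favorability. The actual method is richer than this, and the values below are invented.

```python
# Bracketing a forecast between behavioral extremes: per cell, the
# pessimistic bound assumes no plasticity (fuzzy min of current and
# projected favorability), the optimistic bound assumes the species
# fully exploits improved conditions (fuzzy max). Illustrative only.

def plasticity_bracket(f_current, f_future):
    """Return (pessimistic, optimistic) favorability per cell."""
    return [(min(c, f), max(c, f)) for c, f in zip(f_current, f_future)]

cells = plasticity_bracket([0.2, 0.7, 0.5], [0.6, 0.4, 0.5])
```

    Reporting the interval rather than a single surface makes the behavioral assumption explicit, which is the point the abstract argues for.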

  2. Simulation of variation of apparent resistivity in resistivity surveys using finite difference modelling with Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Aguirre, E. E.; Karchewski, B.

    2017-12-01

    DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known, as the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results costing time and money. The present study examines the extent to which random variations in electrical properties (i.e. electrical conductivity) affect potential difference readings and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a Finite Difference (FD) approximation of an appropriate simplification of Maxwell's equations implemented in MATLAB. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance, and were assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance in potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of standard deviation in the material properties (<10% of mean), we observed a linear correlation between variance of resistivity and variance in apparent resistivity.
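
    A drastically simplified surrogate of this experiment replaces the 2-D finite-difference forward model with a one-line series-layer average, yet still exhibits the mapping from log-normal resistivity variance to apparent-resistivity variance. All settings below are illustrative.

```python
import random, statistics

# Monte Carlo surrogate: draw log-normally distributed layer
# resistivities and take the apparent resistivity of a stack of
# equal-thickness layers (series current path), i.e. their mean.
# The study solves a 2-D FD model instead of this toy forward model.

random.seed(42)
mu, sigma = 0.0, 0.1            # log-space parameters (~10% spread)
n_layers, n_trials = 50, 2000

apparent = []
for _ in range(n_trials):
    layers = [random.lognormvariate(mu, sigma) for _ in range(n_layers)]
    apparent.append(sum(layers) / n_layers)   # thickness-weighted mean

spread_out = statistics.stdev(apparent)       # apparent-resistivity spread
```

    As in the study's low-variance regime, small input spreads map roughly linearly to output spreads here, with the averaging of the forward model shrinking the variance relative to that of a single node.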

  3. A Volume-Fraction Based Two-Phase Constitutive Model for Blood

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Rui; Massoudi, Mehrdad; Hund, S.J.

    2008-06-01

    Mechanically-induced blood trauma such as hemolysis and thrombosis often occurs at microscopic channels, steps and crevices within cardiovascular devices. A predictive mathematical model based on a broad understanding of hemodynamics at the micro scale is needed to mitigate these effects, and is the motivation of this research project. Platelet transport and surface deposition is important in thrombosis. Microfluidic experiments have previously revealed a significant impact of red blood cell (RBC)-plasma phase separation on platelet transport [5], whereby localized platelet concentration can be enhanced due to a non-uniform distribution of RBCs in blood flow through a capillary tube and sudden expansion. However, current platelet deposition models either totally ignored RBCs in the fluid by assuming a zero sample hematocrit or treated them as being evenly distributed. As a result, those models often underestimated platelet advection and deposition to certain areas [2]. The current study aims to develop a two-phase blood constitutive model that can predict phase separation in a RBC-plasma mixture at the micro scale. The model is based on a sophisticated theory known as the theory of interacting continua, i.e., mixture theory. The volume fraction is treated as a field variable in this model, which allows the prediction of concentration as well as velocity profiles of both the RBC and plasma phases. The results will be used as the input of successive platelet deposition models.

  4. Magma transport and metasomatism in the mantle: a critical review of current geochemical models

    USGS Publications Warehouse

    Nielson, J.E.; Wilshire, H.G.

    1993-01-01

    Conflicting geochemical models of metasomatic interactions between mantle peridotite and melt all assume that mantle reactions reflect chromatographic processes. Examination of field, petrological, and compositional data suggests that the hypothesis of chromatographic fractionation based on the supposition of large-scale percolative processes needs review and revision. Well-constrained rock and mineral data from xenoliths indicate that many elements that behave incompatibly in equilibrium crystallization processes are absorbed immediately when melts emerge from conduits into depleted peridotite. After reacting to equilibrium with the peridotite, melt that percolates away from the conduit is largely depleted of incompatible elements. Continued addition of melts extends the zone of equilibrium farther from the conduit. Such a process resembles ion-exchange chromatography for H2O purification, rather than the model of chromatographic species separation. -from Authors

  5. Effects on the CMB from compactification before inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontou, Eleni-Alexandra; Blanco-Pillado, Jose J.; Hertzberg, Mark P.

    2017-04-01

    Many theories beyond the Standard Model include extra dimensions, though these have yet to be directly observed. In this work we consider the possibility of a compactification mechanism which both allows extra dimensions and is compatible with current observations. This compactification is predicted to leave a signature on the CMB by altering the amplitude of the low l multipoles, dependent on the amount of inflation. Recently discovered CMB anomalies at low multipoles may be evidence for this. In our model we assume the spacetime is the product of a four-dimensional spacetime and flat extra dimensions. Before the compactification, the four-dimensional spacetime and the extra dimensions can each be either expanding or contracting independently. Taking into account physical constraints, we explore the observational consequences and the plausibility of these different models.

  6. Novel dark matter phenomenology at colliders

    NASA Astrophysics Data System (ADS)

    Wardlow, Kyle Patrick

    While a suitable candidate particle for dark matter (DM) has yet to be discovered, one may be found by experiments currently investigating physics at the weak scale. If discovered at that energy scale, dark matter will likely be producible in significant quantities at colliders like the LHC, allowing the properties of the dark matter, and the underlying physical model characterizing it, to be precisely determined. I assume that the dark matter is produced as one of the decay products of a new massive resonance related to physics beyond the Standard Model and, using the energy distributions of the associated visible decay products, develop techniques for determining the symmetry protecting these potential dark matter candidates from decaying into lighter Standard Model (SM) particles and for simultaneously measuring the masses of both the dark matter candidate and the particle from which it decays.

  7. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
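The dictionary-based projection described in this record can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, the use of a QR factorization for the local basis, and the synthetic shapes are all assumptions; the idea shown is only the core step of building a local model-error basis from the K-nearest stored (approximate, detailed) run pairs and projecting it out of the residual.

```python
import numpy as np

def correct_residual(residual, params, dict_params, dict_errors, k=5):
    """Remove the model-error component of a data residual using a local
    basis built from the K-nearest dictionary entries (illustrative sketch).

    dict_params : (n_entries, n_params) parameter sets stored in the dictionary
    dict_errors : (n_data, n_entries) model-error realizations, one column per
                  entry (detailed minus approximate forward run)
    """
    # Distances from the current parameter set to every stored entry
    d = np.linalg.norm(dict_params - params, axis=1)
    neighbors = np.argsort(d)[:k]
    # Local model-error basis from the K nearest entries
    B = dict_errors[:, neighbors]
    # Orthonormalize via QR, then project the residual onto the basis
    Q, _ = np.linalg.qr(B)
    model_error_part = Q @ (Q.T @ residual)
    # What remains is attributed to the other (e.g. observational) error sources
    return residual - model_error_part
```

If the residual lies entirely in the span of the neighboring error realizations, the corrected residual is (numerically) zero, which is what allows the likelihood to be evaluated without the model-error bias.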

  8. Influence of internal current and pacing current on pacemaker longevity.

    PubMed

    Schuchert, A; Kuck, K H

    1994-01-01

    The effects of lower pulse amplitude on battery current and pacemaker longevity were studied by comparing the new, small-sized VVI pacemaker, Minix 8341, with the former model, Pasys 8329. Battery current was telemetrically measured at 0.8, 1.6, 2.5, and 5.0 V pulse amplitude and 0.05, 0.25, 0.5, and 1.0 msec pulse duration. Internal current was assumed to be equal to the battery current at 0.8 V and 0.05 msec. Pacing current was calculated by subtracting internal current from battery current. The Minix pacemaker had a significantly lower battery current because of a lower internal current (Minix: 4.1 +/- 0.1 microA; Pasys: 16.1 +/- 0.1 microA); the pacing current of both units was similar. At 0.5 msec pulse duration, programming from 5.0 to 2.5 V pulse amplitude resulted in a greater relative reduction of battery current in the newer pacemaker (51% vs 25%). Projected longevity of each pacemaker was 7.9 years at 5.0 V and 0.5 msec. Programming from 5.0 to 2.5 V extended the projected longevity by 2.3 years (Pasys) and by 7.1 years (Minix). Longevity was only negligibly longer after programming to 1.6 V. Extension of pacemaker longevity can be achieved by programming to 2.5 V or less if the connected pacemakers need a low internal current for their circuitry.
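The arithmetic behind this record is simple enough to state directly. The 4.1 microA internal current is taken from the abstract; the battery-current figures below are hypothetical placeholders chosen only to reproduce the reported 51% relative reduction, since the abstract does not give the measured battery currents themselves.

```python
def pacing_current(battery_uA, internal_uA):
    # Pacing current = total battery drain minus the circuitry's own drain
    return battery_uA - internal_uA

minix_internal = 4.1   # microA, from the abstract (Minix 8341)
battery_5v = 20.0      # hypothetical battery current at 5.0 V / 0.5 msec
battery_2p5v = 9.8     # hypothetical battery current at 2.5 V / 0.5 msec

pacing_5v = pacing_current(battery_5v, minix_internal)
relative_reduction = 1 - battery_2p5v / battery_5v  # fraction of battery drain saved
```

Because the internal current is a fixed overhead, a device with a small internal current converts almost all of a programmed amplitude reduction into battery savings, which is why the relative reduction was larger for the newer unit.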

  9. Three region steam drum model for a nuclear power plant simulator (BRENDA). Technical report 1 Oct 80-May 81. [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slovik, G.C.

    1981-08-01

    A new three region steam drum model has been developed. This model differs from previous work in that it assumes the existence of three regions within the steam drum: a steam region, a mid region (assumed to be under saturation conditions at steady state), and a bottom region (having a mixed mean subcooled enthalpy).

  10. Field Aligned Currents Derived from Pressure Profiles Obtained from TWINS ENA Images

    NASA Astrophysics Data System (ADS)

    Wood, K.; Perez, J. D.; McComas, D. J.; Goldstein, J.; Valek, P. W.

    2015-12-01

    Field aligned currents (FACs) that flow from the Earth's magnetosphere into the ionosphere are an important coupling mechanism in the interaction of the solar wind with the Earth's magnetosphere. Assuming pressure balance along with charge conservation yields an expression for the FACs in terms of plasma pressure gradients and pressure anisotropy. The Two Wide-Angle Imaging Neutral Atom Spectrometers (TWINS) mission, the first stereoscopic ENA magnetospheric imager, provides global images of the inner magnetosphere from which ion pressure distributions and pressure anisotropies can be obtained. Following the formulations in Heinemann [1990] and using results from TWINS observations, we calculate the distribution of field aligned currents for the 17-18 March 2015 geomagnetic storm in which extended ionospheric precipitation was observed. Initial results for the field aligned currents will be generated assuming an isotropic pitch angle distribution. Global maps of field aligned currents during the main and recovery phase of the storm will be presented. Heinemann, H. (1990), Representations of Currents and Magnetic Fields in Anisotropic Magnetohydrostatic Plasma, J. Geophys. Res., 95, 7789.

  11. 76 FR 78706 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-19

    ... ``sleep timer'') and then reactivate the re-COA feature.\\7\\ All timers will be reset if a new complex.... Assume the sleep timer is set at 60 minutes. Assume the current market calculated based on the derived... the resting order has been subject to COA 2 times since it was booked in COB, the 60 minute sleep...

  12. A cosmological exclusion plot: towards model-independent constraints on modified gravity from current and future growth rate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Amendola, Luca, E-mail: laura.taddei@fis.unipr.it, E-mail: l.amendola@thphys.uni-heidelberg.de

    Most cosmological constraints on modified gravity are obtained assuming that the cosmic evolution was standard ΛCDM in the past and that the present matter density and power spectrum normalization are the same as in a ΛCDM model. Here we examine how the constraints change when these assumptions are lifted. We focus in particular on the parameter Y (also called G_eff) that quantifies the deviation from the Poisson equation. This parameter can be estimated by comparison with the model-independent growth rate quantity fσ_8(z) obtained through redshift distortions. We reduce the model dependency in evaluating Y by marginalizing over σ_8 and over the initial conditions, and by absorbing the degenerate parameter Ω_m,0 into Y. We use all currently available values of fσ_8(z). We find that the combination Ŷ = YΩ_m,0, assumed constant in the observed redshift range, can be constrained only very weakly by current data: Ŷ = 0.28 (+0.35/−0.23) at 68% c.l. We also forecast the precision of a future estimation of Ŷ in a Euclid-like redshift survey. We find that the future constraints will reduce the uncertainty substantially, to Ŷ = 0.30 (+0.08/−0.09) at 68% c.l., but the relative error on Ŷ around the fiducial remains quite high, of the order of 30%. The main reason for these weak constraints is that Ŷ is strongly degenerate with the initial conditions, so that large or small values of Ŷ can be compensated by choosing non-standard initial values of the derivative of the matter density contrast. Finally, we produce a forecast of a cosmological exclusion plot on the Yukawa strength and range parameters, which complements similar plots on laboratory scales but explores scales and epochs reachable only with large-scale galaxy surveys.
We find that future data can constrain the Yukawa strength to within 3% of the Newtonian one if the range is around a few megaparsecs. In the particular case of f(R) models, we find that the Yukawa range will be constrained to be larger than 80 Mpc/h or smaller than 2 Mpc/h (95% c.l.), regardless of the specific f(R) model.

  13. A dual-loop model of the human controller in single-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A dual loop model of the human controller in single axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back that portion of the controlled element output rate which is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays while the latter is assumed to be sensed in kinesthetic fashion. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model will vary with such things as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.

  14. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
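The finite-mixture structure this record describes can be sketched concretely. The density below is a minimal illustration, not the authors' GPT implementation: the two-state detect/guess tree, the parameter values, and the function names are hypothetical; the point is only that tree branch probabilities serve as mixture weights over state-specific Gaussian components for a continuous variable such as a response time.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density, used as the continuous component distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gpt_density(t, branch_probs, mus, sigmas):
    """Mixture density of a continuous observable: weights come from the
    processing-tree branch probabilities, components are Gaussians with
    state-specific (or shared) parameters."""
    return sum(p * normal_pdf(t, m, s)
               for p, m, s in zip(branch_probs, mus, sigmas))

# Hypothetical two-state tree: a "detect" state reached with probability d
# (fast, tight response times) and a "guess" state with probability 1 - d
# (slower, more variable response times).
d = 0.7
density_at_t = gpt_density(0.9, [d, 1 - d], [0.8, 1.2], [0.1, 0.3])
```

Parameter estimation then maximizes the product of such densities over trials, jointly with the discrete response frequencies, which is what ties the continuous data back to the tree parameters.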

  15. To predict the niche, model colonization and extinction

    USGS Publications Warehouse

    Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky

    2015-01-01

    Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.
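The dynamic occupancy idea in this record rests on a standard recursion, sketched below. This is a generic illustration, not the authors' estimation code: in their setting the colonization and extinction rates would themselves be functions of environmental covariates rather than the fixed constants used here.

```python
def occupancy_trajectory(psi0, gamma, epsilon, steps):
    """Iterate the dynamic-occupancy recursion
        psi_{t+1} = psi_t * (1 - epsilon) + (1 - psi_t) * gamma
    where gamma is the local colonization rate and epsilon the local
    extinction rate. Returns occupancy at t = 0 .. steps."""
    psi = psi0
    traj = [psi]
    for _ in range(steps):
        psi = psi * (1 - epsilon) + (1 - psi) * gamma
        traj.append(psi)
    return traj

# During an invasion, occupancy starts far below the equilibrium value
# gamma / (gamma + epsilon), so snapshots taken at different times imply
# different apparent occupancy-environment relationships.
transient = occupancy_trajectory(psi0=0.05, gamma=0.2, epsilon=0.1, steps=200)
```

Because the same time-invariant rates produce very different occupancy patterns early versus late in the transient, fitting a static distribution model to a snapshot conflates the niche with the stage of the invasion.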

  16. The feasibility of a public-private long-term care financing plan.

    PubMed

    Arling, G; Hagan, S; Buhaug, H

    1992-08-01

    In this study, the feasibility of a public-private long-term care (LTC) financing plan that would combine private LTC insurance with special Medicaid eligibility requirements was assessed. The plan would also raise the Medicaid asset limit from the current $2,000 to the value of an individual's insurance benefits. After using benefits the individual could enroll in Medicaid. Thus, insurance would substitute for asset spend-down, protecting individuals against catastrophic costs. This financing plan was analyzed through a computer model that simulated lifetime LTC use for a middle-income age cohort beginning at 65 years of age. LTC payments from Medicaid, personal income and assets, Medicare, and insurance were projected by the model. Assuming that LTC use and costs would not grow beyond current projections, the proposed plan would provide asset protection for the cohort without increasing Medicaid expenditures. In contrast, private insurance alone, with no change in Medicaid eligibility, would offer only limited asset protection. The results must be qualified, however, because even a modest increase in LTC cost growth or use of care (beyond current projections) could result in substantially higher Medicaid expenditures. Also, private insurance might increase personal LTC expenditures because of the added cost of insuring.

  17. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method.

    PubMed

    Norris, Peter M; da Silva, Arlindo M

    2016-07-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
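The Metropolis step at the heart of the approach in this record can be shown in miniature. The sampler below is a generic random-walk Metropolis on a scalar, not the paper's cloud-assimilation implementation (which targets a multi-layer moisture state with skewed-triangle marginals); it illustrates only the acceptance rule that lets the chain jump into regions, such as non-zero cloud probability, that gradient-based methods starting from a clear background cannot reach.

```python
import math
import random

def metropolis(log_post, x0, prop_sd, n_steps, seed=0):
    """Random-walk Metropolis sampler (illustrative sketch).
    Proposals x' = x + N(0, prop_sd) are accepted with probability
    min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, prop_sd)
        # Accept/reject on the log scale; rejected proposals repeat x
        if math.log(rng.random()) < log_post(xp) - log_post(x):
            x = xp
        samples.append(x)
    return samples

# Sampling a standard normal posterior as a smoke test of the rule
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, prop_sd=1.0, n_steps=20000)
```

A multiple-try variant, also discussed in the article, generates several proposals per step and selects among them, improving mixing at the cost of extra posterior evaluations.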

  18. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    NASA Technical Reports Server (NTRS)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  19. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    PubMed Central

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847

  20. Model of convection mass transfer in titanium alloy at low energy high current electron beam action

    NASA Astrophysics Data System (ADS)

    Sarychev, V. D.; Granovskii, A. Yu; Nevskii, S. A.; Konovalov, S. V.; Gromov, V. E.

    2017-01-01

    A convection mixing model is proposed for low-energy high-current electron beam treatment of titanium alloys pre-processed by heterogeneous plasma flows generated via explosion of carbon tape and TiB2 powder. The model is based on the assumption that vortices in the molten layer are formed due to treatment by concentrated energy flows. These vortices evolve as a result of thermocapillary convection arising from the temperature gradient. Calculating the temperature gradient and penetration depth required solving the heat problem with surface evaporation taken into account. Instead of a direct heat source, however, the boundary conditions at phase transitions were changed in the thermal conductivity equation, assuming the evaporated material takes part in the heat exchange. The data on penetration depth and temperature distribution are used in the thermocapillary model, which comprises the Navier-Stokes and convective heat transfer equations together with boundary conditions that include the outflow of evaporated material. Solving these equations by the finite element method indicated the formation of a multi-vortex structure during electron-beam treatment and its expansion into new zones of the material. As a result, strengthening particles are found at depths many times greater than their penetration depth under a purely diffusive mechanism.
