Sample records for tangentially geostrophic assumptions

  1. A generalized quasi-geostrophic core flow formalism

    NASA Astrophysics Data System (ADS)

    Amit, H.; Coutelier, M.

    2016-12-01

    The quasi-geostrophic formalism provides a theoretical coupling between toroidal and poloidal core flows. By enforcing an impermeable core-mantle boundary, conservation of mass and a linear variation of the axial flow along an axial column, this coupling can be written as ∇_h · u_h = (c tan θ / R) u_θ, where u_h is the tangential velocity at the top of the core, θ is co-latitude, R is the core radius and c = 2 (Amit and Olson, 2004; Amit and Pais, 2013). We extend this theory and develop this expression for different profiles of the axial flow. Our results show that the same expression holds, but the value of c may vary depending on the profile of the axial flow, including c = 1 as in the tangential geostrophy formalism. These results may therefore provide new constraints on quasi-geostrophic core flow inversions from geomagnetic secular variation.
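The divergence relation above is simple enough to evaluate numerically; a minimal sketch (the function name and the Earth-like core radius are illustrative assumptions, not from the paper):

```python
import math

def horizontal_divergence(u_theta, theta, R=3.48e6, c=2.0):
    """Horizontal divergence implied by the columnar-flow coupling
    div_h(u_h) = (c * tan(theta) / R) * u_theta,
    with theta the co-latitude in radians and R the core radius in metres.
    c = 2 corresponds to a linear axial-flow profile; c = 1 recovers
    tangential geostrophy."""
    return c * math.tan(theta) * u_theta / R

# A 1e-4 m/s southward flow at 45 degrees co-latitude:
div_qg = horizontal_divergence(1e-4, math.radians(45.0), c=2.0)
div_tg = horizontal_divergence(1e-4, math.radians(45.0), c=1.0)
```

Only the constant c changes between the two formalisms, which is why the same secular-variation inversion machinery can accommodate both.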

  2. Quasi-geostrophic dynamo theory

    NASA Astrophysics Data System (ADS)

    Calkins, Michael A.

    2018-03-01

    The asymptotic theory of rapidly rotating, convection-driven dynamos in a plane layer is discussed. A key characteristic of these quasi-geostrophic dynamos is that the Lorentz force is comparable in magnitude to the ageostrophic component of the Coriolis force, rather than the leading order component that yields geostrophy. This characteristic is consistent with both observations of planetary dynamos and numerical dynamo investigations, where the traditional Elsasser number Λ_T = O(1). Thus, while numerical dynamo simulations currently cannot access the strongly turbulent flows that are thought to be characteristic of planetary interiors, it is argued that they are in the appropriate geostrophically balanced regime provided that inertial and viscous forces are both small relative to the leading order Coriolis force. Four distinct quasi-geostrophic dynamo regimes are discussed, with each regime characterized by a unique magnetic to kinetic energy density ratio and differing dynamics. The axial torque due to the Lorentz force is shown to be asymptotically small for such quasi-geostrophic dynamos, suggesting that 'Taylor's constraint' represents an ambiguous measure of the primary force balance in a rapidly rotating dynamo.
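The traditional Elsasser number referred to above has a closed form, Λ_T = σB²/(ρΩ); a quick order-of-magnitude sketch (the Earth-core values are illustrative assumptions):

```python
def elsasser(sigma, B, rho, Omega):
    """Traditional Elsasser number: ratio of Lorentz to Coriolis forces."""
    return sigma * B**2 / (rho * Omega)

# Illustrative Earth-core values (order of magnitude only):
Lambda_T = elsasser(sigma=1e6,      # electrical conductivity, S/m
                    B=3e-3,         # RMS magnetic field strength, T
                    rho=1e4,        # fluid density, kg/m^3
                    Omega=7.29e-5)  # rotation rate, rad/s
# Lambda_T comes out O(1)-O(10), the regime discussed in the abstract.
```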

  3. Nonlinear Theory of The Geostrophic Adjustment

    NASA Astrophysics Data System (ADS)

    Zeitlin, V.

    Nonlinear geostrophic adjustment and splitting of the fast and slow dynamical variables are analysed in the framework of multi-layer and continuously stratified primitive equations by means of the multi-scale perturbation theory in the Rossby number applied to localized initial disturbances. Two basic dynamical regimes, the quasi-geostrophic (QG) and the frontal geostrophic (FG), with small and large deviations of the isopycnal surfaces, respectively, are considered and differences in the corresponding adjustment scenarios are displayed. Decoupling of the fast component of the flow is proven up to the third order in Rossby number, and long-time corrections to the standard balanced QG and FG models are found. Peculiarities of splitting in the FG regime due to the quasi-inertial oscillations are displayed, and Schrödinger-like modulation equations for the envelope of these latter are derived.

  4. Derivation of Inviscid Quasi-geostrophic Equation from Rotational Compressible Magnetohydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang

    2018-04-01

    In this paper, we consider compressible models of magnetohydrodynamic flows, which give rise to a variety of mathematical problems in many areas. We rigorously derive a quasi-geostrophic equation governed by the magnetic field from rotational compressible magnetohydrodynamic flows with well-prepared initial data. This is the first derivation of a quasi-geostrophic equation governed by the magnetic field, and the tool is the relative entropy method. This paper covers two results: the existence of a unique local strong solution of the quasi-geostrophic equation with good regularity, and the derivation of the quasi-geostrophic equation itself.

  5. On the coupled evolution of oceanic internal waves and quasi-geostrophic flow

    NASA Astrophysics Data System (ADS)

    Wagner, Gregory LeClaire

    Oceanic motion outside thin boundary layers is primarily a mixture of quasi-geostrophic flow and internal waves with either near-inertial frequencies or the frequency of the semidiurnal lunar tide. This dissertation seeks a deeper understanding of waves and flow through reduced models that isolate their nonlinear and coupled evolution from the Boussinesq equations. Three physical-space models are developed: an equation that describes quasi-geostrophic evolution in an arbitrary and prescribed field of hydrostatic internal waves; a three-component model that couples quasi-geostrophic flow to both near-inertial waves and the near-inertial second harmonic; and a model for the slow evolution of hydrostatic internal tides in quasi-geostrophic flow of near-arbitrary scale. This slow internal tide equation opens the path to a coupled model for the energetic interaction of quasi-geostrophic flow and oceanic internal tides. Four results emerge. First, the wave-averaged quasi-geostrophic equation reveals that finite-amplitude waves give rise to a mean flow that advects quasi-geostrophic potential vorticity. Second is the definition of a new material invariant: Available Potential Vorticity, or APV. APV isolates the part of Ertel potential vorticity available for balanced-flow evolution in Eulerian frames and proves necessary in separating waves from quasi-geostrophic flow. The third result, hashed out for near-inertial waves and quasi-geostrophic flow, is that wave-flow interaction leads to energy exchange even under conditions of weak nonlinearity. For storm-forced oceanic near-inertial waves the interaction often energizes waves at the expense of flow. We call this extraction of balanced quasi-geostrophic energy 'stimulated generation' since it requires externally-forced rather than spontaneously-generated waves. The fourth result is that quasi-geostrophic flow can encourage or 'catalyze' a nonlinear interaction between a near-inertial wave field and its second harmonic.

  6. Shallow Water Quasi-Geostrophic Theory on the Sphere

    NASA Astrophysics Data System (ADS)

    Schubert, Wayne H.; Taft, Richard K.; Silvers, Levi G.

    2009-02-01

    Quasi-geostrophic theory forms the basis for much of our understanding of mid-latitude atmospheric dynamics. The theory is typically presented in either its f-plane form or its β-plane form. However, for many applications, including diagnostic use in global climate modeling, a fully spherical version would be most useful. Such a global theory does in fact exist and has for many years, but few in the scientific community seem to have ever been aware of it. In the context of shallow water dynamics, it is shown that the spherical version of quasi-geostrophic theory is easily derived (re-derived) based on a partitioning of the flow between nondivergent and irrotational components, as opposed to a partitioning between geostrophic and ageostrophic components. In this way, the invertibility principle is expressed as a relation between the streamfunction and the potential vorticity, rather than between the geopotential and the potential vorticity. This global theory is then extended by showing that the invertibility principle can be solved analytically using spheroidal harmonic transforms, an advancement that greatly improves the usefulness of this "forgotten" theory. When the governing equation for the time evolution of the potential vorticity is linearized about a state of rest, a simple Rossby-Haurwitz wave dispersion relation is derived and examined. These waves have a horizontal structure described by spheroidal harmonics, and the Rossby-Haurwitz wave frequencies are given in terms of the eigenvalues of the spheroidal harmonic operator. Except for sectoral harmonics with low zonal wavenumber, the quasi-geostrophic Rossby-Haurwitz frequencies agree very well with those calculated from the primitive equations. One of the many possible applications of spherical quasi-geostrophic theory is to the study of quasi-geostrophic turbulence on the sphere. In this context, the theory is used to derive an anisotropic Rhines barrier in three-dimensional wavenumber space.
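In the nondivergent limit the spheroidal-harmonic eigenvalues reduce to the familiar n(n+1), recovering the classical Rossby-Haurwitz dispersion relation; a sketch of that baseline (not the paper's spheroidal generalization, whose eigenvalues differ):

```python
import math

OMEGA = 7.292e-5        # Earth's rotation rate, rad/s
SIDEREAL_DAY = 86164.0  # s

def rossby_haurwitz_frequency(m, n, Omega=OMEGA):
    """Classical nondivergent Rossby-Haurwitz frequency on the sphere,
    omega = -2*Omega*m / (n*(n+1)) for the spherical harmonic Y_n^m;
    the paper's spheroidal-harmonic eigenvalues replace n*(n+1)."""
    return -2.0 * Omega * m / (n * (n + 1))

# Gravest symmetric zonal-wavenumber-1 mode (m=1, n=3): westward, ~6-day period
omega = rossby_haurwitz_frequency(m=1, n=3)
period_days = 2.0 * math.pi / abs(omega) / SIDEREAL_DAY
```

Negative omega means westward phase propagation for all m > 0, as expected for Rossby-Haurwitz waves.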

  7. On the consequences of strong stable stratification at the top of earth's outer core

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1990-01-01

    The consequences of strong stable stratification at the top of the earth's fluid outer core are considered, concentrating on the generation of the geomagnetic secular variation. It is assumed that the core near the core-mantle boundary is both strongly stably stratified and free of Lorentz forces: it is found that this set of assumptions severely limits the class of possible motions, none of which is compatible with the geomagnetic secular variation. Relaxing either assumption is adequate: tangentially geostrophic flows are consistent with the secular variation if the assumption that the core is strongly stably stratified is relaxed (while retaining the assumption that Lorentz forces are negligible); purely toroidal flows may explain the secular variation if Lorentz forces are included.

  8. Nonlinear interaction of a fast magnetogasdynamic shock with a tangential discontinuity

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1973-01-01

    A basic problem, which is of considerable interest in geoastrophysical applications of magnetogasdynamics, is the nonlinear interaction of a fast shock (S sub f) with a tangential discontinuity (T). The problem is treated for an arbitrary S sub f interacting with an arbitrary T under the assumption that in the frame of reference in which S sub f and T are at rest, the flow is superfast on both sides of T, and that a steady flow develops. As a result of the nonlinear analysis, a flow pattern is obtained consisting of the incident discontinuities S sub f 1 and T2, a transmitted fast shock S sub f 3, the modified tangential discontinuity T4, and a reflected fast shock S sub f 5 or fast rarefaction wave R sub f 5. The results are discussed in terms of seven significant similarity parameters. In addition, special cases, such as changes in magnetic field direction only, or changes in density or velocity shear only, are discussed.

  9. Do uniform tangential interfacial stresses enhance adhesion?

    NASA Astrophysics Data System (ADS)

    Menga, Nicola; Carbone, Giuseppe; Dini, Daniele

    2018-03-01

    We present theoretical arguments, based on linear elasticity and thermodynamics, to show that interfacial tangential stresses in sliding adhesive soft contacts may lead to a significant increase of the effective energy of adhesion. A sizable expansion of the contact area is predicted in conditions corresponding to such scenario. These results are easily explained and are valid under the assumptions that: (i) sliding at the interface does not lead to any loss of adhesive interaction and (ii) spatial fluctuations of frictional stresses can be considered negligible. Our results are seemingly supported by existing experiments, and show that frictional stresses may lead to an increase of the effective energy of adhesion depending on which conditions are established at the interface of contacting bodies in the presence of adhesive forces.
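For orientation, the classical JKR theory exhibits the same qualitative effect: raising the effective energy of adhesion enlarges the contact. A sketch using the standard JKR relation (this is textbook background, not the authors' model; all numbers are illustrative):

```python
import math

def jkr_contact_radius(F, R, E_star, w):
    """JKR contact radius a for a sphere of radius R pressed with load F
    against a half-space of reduced modulus E_star and work of adhesion w:
    a^3 = (3R / 4E*) * (F + 3*pi*w*R + sqrt(6*pi*w*R*F + (3*pi*w*R)**2))."""
    t = 3.0 * math.pi * w * R
    a3 = (3.0 * R / (4.0 * E_star)) * (F + t + math.sqrt(6.0 * math.pi * w * R * F + t * t))
    return a3 ** (1.0 / 3.0)

# Doubling the effective energy of adhesion enlarges the contact:
a0 = jkr_contact_radius(F=1e-3, R=1e-3, E_star=1e6, w=0.05)
a1 = jkr_contact_radius(F=1e-3, R=1e-3, E_star=1e6, w=0.10)
```

In the paper's scenario, uniform tangential stresses effectively raise w, so the predicted expansion of the contact area follows the same trend.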

  10. A Variational Formalism for the Radiative Transfer Equation and a Geostrophic, Hydrostatic Atmosphere: Prelude to Model 3

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1991-01-01

    The second step in development of MODEL III is summarized. It combines the four radiative transfer equations of the first step with the equations for a geostrophic and hydrostatic atmosphere. This step is intended to bring radiance into a three-dimensional balance with wind, height, and temperature. The use of the geostrophic approximation in place of the full set of primitive equations allows for an easier evaluation of how the inclusion of the radiative transfer equation increases the complexity of the variational equations. Seven different variational formulations were developed for the geostrophic, hydrostatic, and radiative transfer equations. The first derivation was too complex to yield solutions that were physically meaningful. For the remaining six derivations, the variational method gave the same physical interpretation (the observed brightness temperatures could provide no meaningful input to a geostrophic, hydrostatic balance), at least within the problem-solving methodology used in these studies. The variational method is presented and the Euler-Lagrange equations rederived for the geostrophic, hydrostatic, and radiative transfer equations.

  11. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
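Spectral nudging of the kind described can be sketched in a few lines for a 1-D periodic field: relax only the wavenumbers at or below a cutoff toward the driver and leave the small scales free (illustrative only; the study uses a two-layer QG model, and the cutoff and relaxation time here are arbitrary):

```python
import numpy as np

def spectral_nudge(model_state, driver_state, dt, tau, k_cut):
    """Relax the large-scale (|k| <= k_cut) Fourier modes of a periodic 1-D
    field toward the driver; small-scale modes evolve freely."""
    fm = np.fft.fft(model_state)
    fd = np.fft.fft(driver_state)
    k = np.abs(np.fft.fftfreq(model_state.size, d=1.0 / model_state.size))
    mask = k <= k_cut                                  # large scales only
    fm[mask] += -dt / tau * (fm[mask] - fd[mask])
    return np.real(np.fft.ifft(fm))

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
driver = np.sin(x)                                     # large-scale "truth"
model = np.sin(x) + 0.5 * np.sin(8 * x) + 0.3          # bias + small-scale detail
nudged = spectral_nudge(model, driver, dt=1.0, tau=2.0, k_cut=2)
```

The large-scale bias (the 0.3 offset) is pulled halfway toward the driver, while the wavenumber-8 detail passes through untouched, which is the point of nudging spectrally rather than indiscriminately.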

  12. On the Impact of Sea Level Fingerprints on the Estimation of the Meridional Geostrophic Transport in the Atlantic Basin

    NASA Astrophysics Data System (ADS)

    Hsu, C. W.; Velicogna, I.

    2017-12-01

    The mid-ocean geostrophic transport accounts for more than half of the seasonal and inter-annual variability in the Atlantic meridional overturning circulation (AMOC), based on in-situ measurements from the RAPID MOC/MOCHA array since 2004. Here, we demonstrate that mid-ocean geostrophic transport estimates derived from ocean bottom pressure (OBP) are affected by the sea level fingerprint (SLF), a variation of the equi-geopotential height (relative sea level) due to rapid mass unloading of the entire Earth system, in particular from glaciers and ice sheets. This potential height change, although it alters the OBP, should not be included in the derivation of the mid-ocean geostrophic transport. This "pseudo" geostrophic transport due to the SLF is in phase with the seasonal and interannual signal in the upper mid-ocean geostrophic transport. The east-west SLF gradient across the Atlantic basin could be mistaken for a north-south geostrophic transport, inflating its seasonal variability by 54% and its inter-annual variability by 20%. This study demonstrates for the first time the importance of this pseudo transport in both the annual and interannual signals by comparing the SLF with in-situ observations from the RAPID MOC/MOCHA array. The pseudo transport needs to be taken into account if OBP measurements and remote sensing are used to derive the mid-ocean geostrophic transport.
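The sensitivity of the transport estimate to boundary pressure follows from zonally integrating the geostrophic balance f v = (1/ρ) ∂p/∂x; a sketch (the 10 Pa perturbation, latitude, and density below are illustrative assumptions, not values from the study):

```python
def zonally_integrated_v(p_east, p_west, f=6.5e-5, rho=1025.0):
    """Zonally integrated meridional geostrophic velocity per unit depth
    (m^2/s), from pressures at the eastern and western basin boundaries:
    integral of v dx = (p_east - p_west) / (rho * f).
    f defaults to roughly the RAPID array latitude (~26.5 N)."""
    return (p_east - p_west) / (rho * f)

# A spurious 10 Pa east-west bottom-pressure difference from the sea level
# fingerprint maps directly into a pseudo transport per unit depth:
pseudo = zonally_integrated_v(p_east=10.0, p_west=0.0)
# integrated over a few-km-deep basin this is a non-negligible fraction of a Sverdrup
```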

  13. Wave Response during Hydrostatic and Geostrophic Adjustment. Part I: Transient Dynamics.

    NASA Astrophysics Data System (ADS)

    Chagnon, Jeffrey M.; Bannon, Peter R.

    2005-05-01

    The adjustment of a compressible, stably stratified atmosphere to sources of hydrostatic and geostrophic imbalance is investigated using a linear model. Imbalance is produced by prescribed, time-dependent injections of mass, heat, or momentum that model those processes considered "external" to the scales of motion on which the linearization and other model assumptions are justifiable. Solutions are demonstrated in response to a localized warming characteristic of small isolated clouds, larger thunderstorms, and convective systems. For a semi-infinite atmosphere, solutions consist of a set of vertical modes of continuously varying wavenumber, each of which contains time dependencies classified as steady, acoustic wave, and buoyancy wave contributions. Additionally, a rigid lower-boundary condition implies the existence of a discrete mode, the Lamb mode, containing only a steady and acoustic wave contribution. The forced solutions are generalized in terms of a temporal Green's function, which represents the response to an instantaneous injection. The response to an instantaneous warming with geometry representative of a small, isolated cloud takes place in two stages. Within the first few minutes, acoustic and Lamb waves accomplish an expansion of the heated region. Within the first quarter-hour, nonhydrostatic buoyancy waves accomplish an upward displacement inside of the heated region with inflow below, outflow above, and weak subsidence on the periphery, all mainly accomplished by the lowest vertical wavenumber modes, which have the largest horizontal group speed. More complicated transient patterns of inflow aloft and outflow along the lower boundary are accomplished by higher vertical wavenumber modes. Among these is an outwardly propagating rotor along the lower boundary that effectively displaces the low-level inflow upward and outward. A warming of 20 min duration with geometry representative of a large thunderstorm generates only a weak acoustic

  14. Conserved pattern of tangential neuronal migration during forebrain development.

    PubMed

    Métin, Christine; Alvarez, Chantal; Moudoux, David; Vitalis, Tania; Pieau, Claude; Molnár, Zoltán

    2007-08-01

    Origin, timing and direction of neuronal migration during brain development determine the distinct organization of adult structures. Changes in these processes might have driven the evolution of the forebrain in vertebrates. GABAergic neurons originate from the ganglionic eminence in mammals and migrate tangentially to the cortex. We are interested in differences and similarities in tangential migration patterns across corresponding telencephalic territories in mammals and reptiles. Using morphological criteria and expression patterns of Darpp-32, Tbr1, Nkx2.1 and Pax6 genes, we show in slice cultures of turtle embryos that early cohorts of tangentially migrating cells are released from the medial ganglionic eminence between stages 14 and 18. Additional populations migrate tangentially from the dorsal subpallium. Large cohorts of tangentially migrating neurons originate ventral to the dorsal ventricular ridge at stage 14 and from the lateral ganglionic eminence from stage 15. Release of GABAergic cells from these regions was investigated further in explant cultures. Tangential migration in turtle proceeds in a fashion similar to mammals. In chimeric slice culture and in ovo graft experiments, the tangentially migrating cells behaved according to the host environment - turtle cells responded to the available cues in mouse slices and mouse cells assumed characteristic migratory routes in turtle brains, indicating highly conserved embryonic signals between these distant species. Our study contributes to the evaluation of theories on the origin of the dorsal cortex and indicates that tangential migration is universal in mammals and sauropsids.

  15. Computational analysis of forebody tangential slot blowing

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Agosta-Greenman, Roxana M.; Rizk, Yehia M.; Schiff, Lewis B.; Cummings, Russell M.

    1994-01-01

    An overview of the computational effort to analyze forebody tangential slot blowing is presented. Tangential slot blowing generates side force and yawing moment which may be used to control an aircraft flying at high angles of attack. Two different geometries are used in the analysis: (1) the High Alpha Research Vehicle; and (2) a generic chined forebody. Computations using the isolated F/A-18 forebody are obtained at full-scale wind tunnel test conditions for direct comparison with available experimental data. The effects of over- and under-blowing on force and moment production are analyzed. Time-accurate solutions using the isolated forebody are obtained to study the force onset timelag of tangential slot blowing. Computations using the generic chined forebody are obtained at experimental wind tunnel conditions, and the results compared with available experimental data. This computational analysis complements the experimental results and provides a detailed understanding of the effects of tangential slot blowing on the flow field about simple and complex geometries.

  16. Obliquity dependence of the tangential YORP

    NASA Astrophysics Data System (ADS)

    Ševeček, P.; Golubov, O.; Scheeres, D. J.; Krugly, Yu. N.

    2016-08-01

    Context. The tangential Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect is a thermophysical effect that can alter the rotation rate of asteroids; it is distinct from the so-called normal YORP effect, but to date has only been studied for asteroids with zero obliquity. Aims: We aim to study the tangential YORP force produced by spherical boulders on the surface of an asteroid with arbitrary obliquity. Methods: A finite element method is used to simulate heat conductivity inside a boulder and to find the recoil force experienced by it. Then an ellipsoidal asteroid uniformly covered by such boulders is considered and the torque is numerically integrated over its surface. Results: Tangential YORP is found to operate at non-zero obliquities and to decrease by a factor of two with increasing obliquity.

  17. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the « global » version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a « global » high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with a basic interpolation between the fields, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
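The indiscriminate nudging term itself is a simple linear relaxation toward the driving fields; a toy sketch of one explicit step (illustrative only, not the study's QG model):

```python
def nudged_step(u, tendency, u_driver, dt, tau):
    """One explicit time step with an indiscriminate (grid-point) nudging term:
    du/dt = tendency - (u - u_driver) / tau.
    Small tau pins the model to the driver (suppressing small scales); large
    tau lets errors grow; the study finds the optimum near the predictability
    time."""
    return u + dt * (tendency - (u - u_driver) / tau)

# With no internal dynamics, the departure from the driver decays roughly
# as exp(-t/tau):
u = 1.0
for _ in range(100):
    u = nudged_step(u, tendency=0.0, u_driver=0.0, dt=0.1, tau=2.0)
```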

  18. A Theory For The Variability of The Baroclinic Quasi-geostrophic Wind Driven Circulation.

    NASA Astrophysics Data System (ADS)

    Ben Jelloul, M.; Huck, T.

    We propose a theory of the wind-driven circulation based on the large-scale (i.e. small Burger number) quasi-geostrophic assumptions retained in the Rhines and Young (1982) classical study of the steady baroclinic flow. We therefore use multiple time scales and asymptotic expansions to separate the steady and the time-dependent components of the flow. The barotropic flow is given by the Sverdrup balance. At first order in Burger number, the baroclinic flow can be decomposed into two parts. A steady contribution ensures no flow in the deep layer, which is at rest in the absence of dissipative processes. Since baroclinic instability is inhibited at large scale, a spectrum of neutral modes also arises. These are of three types: classical Rossby basin modes deformed through advection by the barotropic flow, recirculating modes localized in the recirculation gyre, and blocked modes corresponding to closed potential vorticity contours. At next order in Burger number, amplitude equations for the baroclinic modes are derived. If dissipative processes are included at this order, the system adjusts towards the Rhines and Young solution with a homogenized potential vorticity pool.
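The Sverdrup balance that fixes the barotropic flow can be sketched directly; the wind-stress curl, density, and β values below are illustrative:

```python
def sverdrup_transport(curl_tau, rho=1025.0, beta=2e-11):
    """Depth-integrated meridional Sverdrup transport per unit longitude
    (m^2/s): V = curl(tau) / (rho * beta), with curl(tau) the vertical
    component of the wind-stress curl in N/m^3."""
    return curl_tau / (rho * beta)

# Negative wind-stress curl over a subtropical gyre drives equatorward flow:
V = sverdrup_transport(curl_tau=-1e-7)
# multiplied over a ~5000 km basin width, |V| ~ 5 m^2/s gives ~25 Sv
```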

  19. Greater Role of Geostrophic Currents on Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    NASA Astrophysics Data System (ADS)

    Steele, M.; Zhong, W.; Zhang, J.; Zhao, J.

    2017-12-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and to cause an Ekman divergence that counteracts wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in biological processes in these regions.

  20. Greater Role of Geostrophic Currents in Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    NASA Astrophysics Data System (ADS)

    Zhong, Wenli; Steele, Michael; Zhang, Jinlun; Zhao, Jinping

    2018-01-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and moderate the wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in physical and biological processes in these regions.
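The mechanism, the geostrophic current entering the ice-ocean stress, can be illustrated with a quadratic drag law in one dimension (the drag coefficient and velocities are illustrative assumptions, not values from the study):

```python
def ice_ocean_stress(u_ice, u_ocean, rho_o=1025.0, C_d=5.5e-3):
    """Quadratic ice-ocean stress on the ocean (N/m^2, 1-D sketch), using the
    ice velocity relative to the surface ocean current."""
    du = u_ice - u_ocean
    return rho_o * C_d * abs(du) * du

# Ignoring the geostrophic current overestimates the stress when ice and
# ocean drift the same way, as during Beaufort Gyre spin-up:
tau_no_geo = ice_ocean_stress(u_ice=0.10, u_ocean=0.0)   # geostrophic current ignored
tau_geo = ice_ocean_stress(u_ice=0.10, u_ocean=0.05)     # geostrophic current included
```

With the relative velocity halved, the quadratic stress drops by a factor of four, which is the sense of the weakened Ekman convergence described above.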

  1. Bi-tangential hybrid IMRT for sparing the shoulder in whole breast irradiation.

    PubMed

    Farace, P; Deidda, M A; Iamundo de Cumis, I; Iamundo de Curtis, I; Deiana, E; Farigu, R; Lay, G; Porru, S

    2013-11-01

    A bi-tangential technique is proposed to reduce undesired doses to the shoulder produced by standard tangential irradiation. A total of 6 patients affected by shoulder pain and reduced functional capacity after whole-breast irradiation were retrospectively analysed. The standard tangential plan used for treatment was compared with (1) a single bi-tangential plan where, to spare the shoulder, the lateral open tangent was split into two half-beams at isocentre, with the superior portion rotated by 10-20° medially with respect to the standard lateral beam; (2) a double bi-tangential plan, where both the tangential open beams were split. The planning target volume (PTV) coverage and the dose to the portion of muscles and axilla included in the standard tangential beams were compared. PTV95% of the standard plan (91.9 ± 3.8) was not significantly different from that of the single bi-tangential plan (91.8 ± 3.4); a small but significant (p < 0.01) decrease was observed with the double bi-tangential plan (90.1 ± 3.7). A marked dose reduction to the muscle was produced by the single bi-tangential plan around 30-40 Gy. The application of the double bi-tangential technique further reduced the volume receiving around 20 Gy, but did not markedly affect the higher doses. The dose to the axilla was reduced both in the single and the double bi-tangential plans. The single bi-tangential technique would have been able to reduce the dose to shoulder and axilla, without compromising target coverage. This simple technique is valuable for irradiation after axillary lymph node dissection or in patients without dissection due to negative or low-volume sentinel lymph node disease.

  2. Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field

    NASA Astrophysics Data System (ADS)

    Chavanne, C. P.; Klein, P.

    2016-02-01

    A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before, since it takes into account the stratification within the surface mixed layer, which is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (the mixed layer) and an infinitely deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. The parameterization of diabatic vertical velocities is based on their restoring effect on the thermal-wind balance, which is perturbed by turbulent vertical mixing of momentum and buoyancy. The model's skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities are significantly improved in the mixed layer for the new model compared to the classical surface quasi-geostrophic model, reaching 0.9 near the surface.
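For comparison, the classical surface quasi-geostrophic inversion that the new model generalizes can be sketched in one horizontal dimension: with zero interior PV, each Fourier mode of the surface buoyancy decays exponentially with depth on the scale f/(N|k|). Constant N and f throughout the column are the classical assumptions that the new model relaxes in the mixed layer; all parameter values here are illustrative:

```python
import numpy as np

def sqg_streamfunction(b_surface, z, N=1e-2, f=1e-4, L=1e5):
    """Classical 1-D SQG inversion: with zero interior PV, each Fourier mode
    of the surface buoyancy b (where b = f * dpsi/dz at z = 0) gives
    psi_hat(k, z) = b_hat(k) / (N*|k|) * exp(N*|k|*z / f) for z <= 0."""
    n = b_surface.size
    bh = np.fft.fft(b_surface)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kk = np.abs(k)
    ph = np.zeros(n, dtype=complex)
    nz = kk > 0                       # the horizontal mean is unconstrained
    ph[nz] = bh[nz] / (N * kk[nz]) * np.exp(N * kk[nz] * z / f)
    return np.real(np.fft.ifft(ph))

x = np.linspace(0.0, 1e5, 128, endpoint=False)
b_s = 1e-3 * np.sin(2.0 * np.pi * x / 1e5)        # surface buoyancy anomaly
psi_surf = sqg_streamfunction(b_s, z=0.0)
psi_500 = sqg_streamfunction(b_s, z=-500.0)       # strongly attenuated at depth
```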

  3. Properties of Tangential and Cyclic Polygons: An Application of Circulant Matrices

    ERIC Educational Resources Information Center

    Leung, Allen; Lopez-Real, Francis

    2003-01-01

    In this paper, the properties of tangential and cyclic polygons proposed by Lopez-Real are proved rigorously using the theory of circulant matrices. In particular, the concepts of slippable tangential polygons and conformable cyclic polygons are defined. It is shown that an n-sided tangential (or cyclic) polygon P[subscript n] with n even is…

  4. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model (LAM) is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model, which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. The effect of large-scale nudging is then studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
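
    The divergence-rate diagnostic described above can be sketched in a few lines. Here the Lorenz-63 system stands in for the quasi-geostrophic model (an assumption purely for illustration), and the leading Lyapunov exponent is estimated by repeatedly renormalizing the separation of two initially close trajectories:

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustrative stand-in)."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def lyapunov(n_steps=40000, d0=1e-8, dt=0.005):
    """Estimate the leading Lyapunov exponent from trajectory divergence."""
    a = np.array([1.0, 1.0, 20.0])          # reference initial state
    b = a + np.array([d0, 0.0, 0.0])        # perturbed initial state
    total = 0.0
    for _ in range(n_steps):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
        d = np.linalg.norm(b - a)
        total += np.log(d / d0)
        b = a + (b - a) * (d0 / d)          # renormalize the separation
    return total / (n_steps * dt)

print(f"estimated leading Lyapunov exponent: {lyapunov():.2f}")
```

    A positive exponent quantifies the error growth described in the abstract; its inverse sets the predictability time against which the optimal nudging time is compared.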

  5. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically based on a high-precision method developed for solving fourth-order differential equations. It is established that there is an eigenvalue in the spectrum of eigenvalues that corresponds to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  6. Simulation of a tangential soft x-ray imaging system.

    PubMed

    Battaglia, D J; Shafer, M W; Unterberg, E A; Bell, R E; Hillis, D L; LeBlanc, B P; Maingi, R; Sabbagh, S; Stratton, B C

    2010-10-01

    Tangentially viewing soft x-ray (SXR) cameras are capable of detecting nonaxisymmetric plasma structures in magnetically confined plasmas. They are particularly useful for studying stationary perturbations or phenomena that occur on a timescale faster than the plasma rotation period. Tangential SXR camera diagnostics are planned for the DIII-D and NSTX tokamaks to elucidate the static edge magnetic structure during the application of 3D perturbations. To support the design of the proposed diagnostics, a synthetic diagnostic model was developed using the CHIANTI database to estimate the SXR emission. The model is shown to be in good agreement with measurements from an existing tangential SXR camera diagnostic on NSTX.

  7. Tangential synthetic jets for separation control

    NASA Astrophysics Data System (ADS)

    Esmaeili Monir, H.; Tadjfar, M.; Bakhtian, A.

    2014-02-01

    A numerical study of separation control has been made to investigate the aerodynamic characteristics of a NACA23012 airfoil with a tangential synthetic jet. Simulations are carried out at a chord Reynolds number of Re = 2.19×10^6. The present approach relies on solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. The turbulence model used in the present computation is the Spalart-Allmaras one-equation model. All computations are performed with a finite-volume-based code. Stall characteristics are significantly improved by controlling the formation of separation vortices in the flow. We placed the synthetic jet at 12% chord, xj = 0.12c, where we expected the separation to occur. Two distinct jet oscillating frequencies, Fj+ = 0.159 and Fj+ = 1, were considered. We studied the effect of the blowing ratio, Vj/U∞, which was varied from 0 to 5. The inclined angle of the synthetic jet was varied from αj = 0° up to αj = 83°. For the non-zero inclined angles, a local maximum in the aerodynamic performance, Cl/Cd, of 6.89 was found at an inclined angle of about 43°. In the present method, by means of creating a dent on the airfoil, linear momentum is transferred to the flow system in the direction tangential to the airfoil surface. Thus the absolute maximum of 11.19 was found for the tangential synthetic jet at a jet inclined angle of 0°. The mechanisms involved for a tangential jet appear to behave linearly, as multiplying the activation frequency of the jet by a factor produces the same multiplication factor in the resulting frequency in the flow. However, the mechanisms involved in the non-zero inclined angle cases behave nonlinearly when the activation frequency is multiplied.

  8. Use of surface drifters to increase resolution and accuracy of oceanic geostrophic circulation mapped from satellite only (altimetry and gravimetry)

    NASA Astrophysics Data System (ADS)

    Mulet, Sandrine; Rio, Marie-Hélène; Etienne, Hélène

    2017-04-01

    Strong improvements have been made in our knowledge of the surface ocean geostrophic circulation thanks to satellite observations. For instance, the use of the latest GOCE (Gravity field and steady-state Ocean Circulation Explorer) geoid model with altimetry data gives a good estimate of the mean oceanic circulation at spatial scales down to 125 km. However, surface drifters are essential to resolve smaller scales; it is thus mandatory to carefully process drifter data and then to combine these different data sources. In this framework, the global 1/4° CNES-CLS13 Mean Dynamic Topography (MDT) and the associated mean geostrophic currents have been computed (Rio et al., 2014). First, a satellite-only MDT was computed from altimetric and gravimetric data. Then, an important part of the work was to pre-process the drifter data to extract only the geostrophic component, in order to be consistent with the physical content of the satellite-only MDT. This step includes estimating and removing the Ekman current and wind slippage. Finally, the drifters and the satellite-only MDT were combined. Similar approaches are used regionally to go further toward higher resolution, for instance in the Agulhas Current or along the Brazilian coast. A case study in the Gulf of Mexico also uses drifters in the same way to improve the weekly geostrophic current estimate.

  9. The tangential velocity of M31: CLUES from constrained simulations

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent

    2016-07-01

    Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment on our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of v_tan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, which overlook the environmental element.

  10. Geostrophic balance with a full Coriolis Force: implications for low latitude studies

    NASA Technical Reports Server (NTRS)

    Juarez, M. de la Torre

    2002-01-01

    In its standard form, geostrophic balance uses a partial representation of the Coriolis force. The resulting formulation has a singularity at the equator and violates mass and momentum conservation. When the horizontal projection of the planetary rotation vector is considered, the singularity at the equator disappears, continuity can be preserved, and quasigeostrophy can be formulated at planetary scale.
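
    The equatorial singularity of the standard form can be made concrete with a short sketch: for a fixed sea surface slope, the geostrophic speed u_g = -(g/f) d(eta)/dy diverges as f = 2Ω sin(lat) vanishes at the equator. The slope value below is invented for illustration.

```python
import numpy as np

Omega = 7.292e-5           # Earth's rotation rate, rad/s
g = 9.81                   # gravity, m/s^2
deta_dy = 1e-6             # assumed SSH slope: 10 cm per 100 km

for lat in (45.0, 10.0, 2.0, 0.5):
    f = 2 * Omega * np.sin(np.radians(lat))   # standard Coriolis parameter
    u_g = -g * deta_dy / f                    # standard geostrophic velocity
    print(f"lat {lat:4.1f} deg:  f = {f:.2e} 1/s,  u_g = {u_g:8.3f} m/s")
```

    The same slope that implies a ~0.1 m/s current at 45° implies an unphysical multi-m/s current within a degree of the equator, which is the singularity the full-Coriolis formulation removes.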

  11. Nonlinear Cascades of Surface Oceanic Geostrophic Kinetic Energy in the Frequency Domain

    DTIC Science & Technology

    2012-09-01

    ... kinetic energy in wavenumber k space for surface ocean geostrophic flows have been computed from satellite altimetry data of sea surface height (Scott ...). The low-pass filter cutoff is 0.65 k_N, where k_N corresponds to the Nyquist scale. The filter is applied to q̂_1 and q̂_2, the Fourier transforms of q_1 and q_2, at every time step

  12. A Unified Model of Geostrophic Adjustment and Frontogenesis

    NASA Astrophysics Data System (ADS)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  13. Kinematic validation of a quasi-geostrophic model for the fast dynamics in the Earth's outer core

    NASA Astrophysics Data System (ADS)

    Maffei, S.; Jackson, A.

    2017-09-01

    We derive a quasi-geostrophic (QG) system of equations suitable for the description of the Earth's core dynamics on interannual to decadal timescales. Over these timescales, rotation is assumed to be the dominant force and fluid motions are strongly invariant along the direction parallel to the rotation axis. The diffusion-free, QG system derived here is similar to the one derived in Canet et al. but the projection of the governing equations on the equatorial disc is handled via vertical integration and mass conservation is applied to the velocity field. Here we carefully analyse the properties of the resulting equations and we validate them neglecting the action of the Lorentz force in the momentum equation. We derive a novel analytical solution describing the evolution of the magnetic field under these assumptions in the presence of a purely azimuthal flow and an alternative formulation that allows us to numerically solve the evolution equations with a finite element method. The excellent agreement we found with the analytical solution proves that numerical integration of the QG system is possible and that it preserves important physical properties of the magnetic field. Implementation of magnetic diffusion is also briefly considered.

  14. Tangential gunshot wound with MagSafe ammunition.

    PubMed

    Rapkiewicz, Amy V; Tamburri, Robert; Basoa, Mark E; Catanese, Charles A

    2005-09-01

    MagSafe ammunition is a type of unconventional prefragmented ammunition. A fatal tangential gunshot wound involving MagSafe ammunition is presented. The ammunition and wound characteristics are discussed.

  15. Large Eddy Simulations of a Bottom Boundary Layer Under a Shallow Geostrophic Front

    NASA Astrophysics Data System (ADS)

    Bateman, S. P.; Simeonov, J.; Calantoni, J.

    2017-12-01

    The unstratified surf zone and the stratified shelf waters are often separated by dynamic fronts that can strongly impact the character of the Ekman bottom boundary layer. Here, we use large eddy simulations to study the turbulent bottom boundary layer associated with a geostrophic current on a stratified shelf of uniform depth. The simulations are initialized with a spatially uniform vertical shear that is in geostrophic balance with a pressure gradient due to a linear horizontal temperature variation. Superposed on the temperature front is a stable vertical temperature gradient. As turbulence develops near the bottom, the turbulence-induced mixing gradually erodes the initial uniform temperature stratification and a well-mixed layer grows in height until the turbulence becomes fully developed. The simulations provide the spatial distribution of the turbulent dissipation and the Reynolds stresses in the fully developed boundary layer. We vary the initial linear stratification and investigate its effect on the height of the bottom boundary layer and the turbulence statistics. The results are compared to previous models and simulations of stratified bottom Ekman layers.

  16. Frontal Generation of Waves: A Geostrophic Adjustment Interpretation of The Observations

    NASA Astrophysics Data System (ADS)

    Blumen, W.; Lundquist, J. K.

    Data were collected during the stable boundary layer observational field program, the Cooperative Atmosphere-Surface Exchange Study 1999 (CASES-99), carried out in southeastern Kansas, USA, during October 1999. These data reveal that on at least two occasions, 16 and 22 October, the passage of surface cold fronts was associated with the initiation of gravity-inertia waves. The periods of these waves ranged from about 4 minutes for gravity waves, relatively unaffected by the Earth's rotation, to about 20 hours for inertial oscillations, characterized by the Coriolis frequency f. Boundary layer radar wind profilers at locations surrounding the main observational site provided wind data through the boundary layer and above. A 60 m tower at the main site carried high-frequency temperature, wind, humidity and pressure sensors distributed at various levels along the vertical. These data were used to identify the frontal passages and the wave characteristics. The wind profiler data were used to identify the inertial oscillations. These data indicate that as time progresses following the frontal passages, the postfrontal energy levels return to prefrontal levels, and inertial oscillations represent the dominant observed frequency. A linear model is developed and solved to provide evidence that a geostrophic adjustment process occurs during the postfrontal period of each frontal passage. The solution obtained shows that the higher-frequency waves disperse their energy rapidly, leaving the lower-frequency inertial oscillation, which is characterized by a zero group velocity, at the site of its initiation. The observations reveal that the adjustment to this state occurs within a time span of about 8 hours for each frontal event. This time span is consistent with the model solution using parameter values based on observational data. The present model also provides a means to estimate how much of the initial energy is distributed to waves.
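
    The ~20 hour inertial oscillation period quoted above follows directly from T = 2π/f with f = 2Ω sin(latitude); a quick check at an assumed site latitude of 37°N (approximate for southeastern Kansas, not stated in the abstract):

```python
import numpy as np

Omega = 7.292e-5                         # Earth's rotation rate, rad/s
lat = 37.0                               # deg, assumed CASES-99 site latitude
f = 2 * Omega * np.sin(np.radians(lat))  # Coriolis frequency, 1/s
T_hours = 2 * np.pi / f / 3600.0         # inertial period in hours
print(f"f = {f:.2e} 1/s, inertial period = {T_hours:.1f} h")
```

    This gives a period just under 20 h, consistent with the longest wave periods reported.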

  17. Analysis of Tangential Slot Blowing on F/A-18 Isolated Forebody

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Rizk, Yehia M.; Schiff, Lewis B.

    1995-01-01

    The generation of significant side forces and yawing moments on an F/A-18 fuselage through tangential slot blowing is analyzed using computational fluid dynamics. The effects of freestream Mach number, jet exit conditions, jet length, and jet location are studied. The effects of over- and underblowing on force and moment production are analyzed. Non-time-accurate solutions are obtained to determine the steady-state side forces, yawing moments, and surface pressure distributions generated by tangential slot blowing. Time-accurate solutions are obtained to study the force onset time lag of tangential slot blowing. Comparisons with available experimental data from full-scale and subscale wind-tunnel tests are made. This computational analysis complements the experimental results and provides a detailed understanding of the effects of tangential slot blowing on the flowfield about the isolated F/A-18 forebody. Additionally, it extends the slot-blowing database to transonic maneuvering Mach numbers.

  18. Sea level anomaly on the Patagonian continental shelf: Trends, annual patterns and geostrophic flows

    PubMed Central

    Saraceno, M.; Piola, A. R.; Strub, P. T.

    2016-01-01

    We study the annual patterns and linear trend of satellite sea level anomaly (SLA) over the southwest South Atlantic continental shelf (SWACS) between 54°S and 36°S. Results show that south of 42°S the thermal steric effect explains nearly 100% of the annual amplitude of the SLA, while north of 42°S it explains less than 60%. This difference is due to the halosteric contribution. The annual wind variability plays a minor role over the whole continental shelf. The temporal linear trend in SLA ranges between 1 and 5 mm/yr (95% confidence level). The largest linear trends are found north of 39°S, at 42°S and at 50°S. We propose that in the northern region the large positive linear trends are associated with local changes in the density field caused by advective effects in response to a southward displacement of the South Atlantic High. The causes of the relatively large SLA trends in the two southern coastal regions are discussed as a function of meridional wind stress and river discharge. Finally, we combined the annual cycle of SLA with the mean dynamic topography to estimate the absolute geostrophic velocities. This approach provides the first comprehensive description of the seasonal component of SWACS circulation based on satellite observations. The general circulation of the SWACS is northeastward, with stronger/weaker geostrophic currents in austral summer/winter. At all latitudes, geostrophic velocities are larger (up to 20 cm/s) close to the shelf-break and decrease toward the coast. This spatio-temporal pattern is more intense north of 45°S. PMID:27840784
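
    The final step, combining MDT and SLA into absolute dynamic topography (ADT) and differentiating it geostrophically, can be sketched as follows. The 1-D meridional section, grid, and amplitudes are invented; only the relation u = -(g/f) d(ADT)/dy reflects the method described.

```python
import numpy as np

g = 9.81                    # gravity, m/s^2
f = -1.0e-4                 # Coriolis parameter, 1/s (negative: Southern Hemisphere)
dy = 25e3                   # meridional grid spacing, m

y = np.arange(0, 1000e3, dy)
mdt = 0.5 * np.tanh((y - 500e3) / 100e3)     # invented mean dynamic topography, m
sla = 0.05 * np.sin(2 * np.pi * y / 500e3)   # invented seasonal SLA, m

adt = mdt + sla                              # absolute dynamic topography
u = -(g / f) * np.gradient(adt, dy)          # zonal absolute geostrophic velocity
print(f"max |u| = {np.max(np.abs(u)):.2f} m/s")
```

    Adding the seasonal SLA to the time-mean topography is what turns a mean current map into the seasonally varying circulation the abstract describes.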

  19. The vertical structure of tangential winds in tropical cyclones: Observations, theory, and numerical simulations

    NASA Astrophysics Data System (ADS)

    Stern, Daniel P.

    The vertical structure of the tangential wind field in tropical cyclones is investigated through observations, theory, and numerical simulations. First, a dataset of Doppler radar wind swaths obtained from NOAA/AOML/HRD is used to create azimuthal mean tangential wind fields for 7 storms on 17 different days. Three conventional wisdoms of vertical structure are reexamined: the outward slope of the Radius of Maximum Winds (RMW) decreases with increasing intensity, the slope increases with the size of the RMW, and the RMW is a surface of constant absolute angular momentum (M). The slopes of the RMW and of M surfaces are objectively determined. The slopes are found to increase linearly with the size of the low-level RMW, and to be independent of the intensity of the storm. While the RMW is approximately an M surface, M systematically decreases with height along the RMW. The steady-state analytical theory of Emanuel (1986) is shown to make specific predictions regarding the vertical structure of tropical cyclones. It is found that in this model, the slope of the RMW is a linear function of its size and is independent of intensity, and that the RMW is almost exactly an M surface. A simple time-dependent model which is governed by the same assumptions as the analytical theory yields the same results. Idealized hurricane simulations are conducted using the Weather Research and Forecasting (WRF) model. The assumptions of Emanuel's theory, slantwise moist neutrality and thermal wind balance, are both found to be violated. Nevertheless, the vertical structure of the wind field itself is generally well predicted by the theory. The percentage rate at which the winds decay with height is found to be nearly independent of both size and intensity, in agreement with observations and theory. Deviations from this decay profile are shown to be due to gradient wind imbalance. The slope of the RMW increases linearly with its size, but is systematically too large compared to
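
    The absolute angular momentum surfaces discussed above are defined by M = r*v + f*r^2/2 for tangential wind v at radius r. The sketch below, with invented radii, winds, and Coriolis parameter rather than values from the radar dataset, shows the wind implied at an outward-sloping RMW if M were exactly conserved along it:

```python
import numpy as np

f = 5e-5                      # Coriolis parameter, 1/s (assumed tropical latitude)

def abs_ang_momentum(r, v):
    """Absolute angular momentum M = r*v + f*r^2/2, in m^2/s."""
    return r * v + 0.5 * f * r**2

# Hypothetical low-level RMW: 50 m/s at r = 30 km.
M_low = abs_ang_momentum(30e3, 50.0)

# If the RMW slopes outward to 40 km aloft and M were conserved,
# the tangential wind there would be v = (M - f*r^2/2) / r.
r_up = 40e3
v_up = (M_low - 0.5 * f * r_up**2) / r_up
print(f"M_low = {M_low:.2e} m^2/s, momentum-conserving wind at 40 km: {v_up:.1f} m/s")
```

    The systematic decrease of M with height along the RMW reported in the abstract means the actual upper-level wind falls below this momentum-conserving value.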

  20. Vibrotactile Compliance Feedback for Tangential Force Interaction.

    PubMed

    Heo, Seongkook; Lee, Geehyuk

    2017-01-01

    This paper presents a method to generate a haptic illusion of compliance using a vibrotactile actuator when a tangential force is applied to a rigid surface. The novel method builds on a conceptual compliance model where a physical object moves on a textured surface in response to a tangential force. The method plays vibration patterns simulating friction-induced vibrations as an applied tangential force changes. We built a prototype consisting of a two-dimensional tangential force sensor and a surface transducer to test the effectiveness of the model. Participants in user experiments with the prototype perceived the rigid surface of the prototype as a moving, rubber-like plate. The main findings of the experiments are: 1) the perceived stiffness of a simulated material can be controlled by controlling the force-playback transfer function, 2) its perceptual properties such as softness and pleasantness can be controlled by changing friction grain parameters, and 3) the use of the vibrotactile compliance feedback reduces participants' workload including physical demand and frustration while performing a force repetition task.

  1. Upwelling Response to Hurricane Isaac in Geostrophic Oceanic Vortices

    NASA Astrophysics Data System (ADS)

    Jaimes, B.; Shay, L. K.; Brewster, J. K.; Schuster, R.

    2013-05-01

    As a tropical cyclone (TC) moves over the ocean, the cyclonic curl of the wind stress produces a region of upwelling waters under the TC center that is compensated by downwelling waters in regions outside the center. Direct measurements conducted during Hurricane Rita and recent numerical studies indicate that this is not necessarily the case when TCs move over geostrophic oceanic features, where the background relative vorticity impacts wind-driven horizontal current divergence and the upwelling velocity. Modulation of the upwelling response in these energetic oceanic regimes impacts vertical mixing across the oceanic mixed layer base, air-sea fluxes into the atmosphere, and ultimately storm intensity. As part of the NOAA Intensity Forecasting Experiment, an experiment was conducted during the passage of TC Isaac over the energetic geostrophic eddy field in the Gulf of Mexico in August 2012. Expendable bathythermographs, current profilers, and conductivity-temperature-depth probes were deployed in Isaac from NOAA WP-3D aircraft during four in-storm flights to measure oceanic variability and its impact on TC-driven upwelling and surface fluxes of heat and momentum. During intensification to hurricane strength, the cyclonic curl of the wind stress of Isaac extended over a region more than 300 km in diameter (4 to 5 times the radius of maximum winds). Isaac's center moved over a cold cyclonic feature, while its right and left sides moved over warm anticyclones. Contrasting upwelling and downwelling regimes developed inside the region of cyclonic wind stress curl. Positive (upwelling) and negative (downwelling) vertical displacements of 40 and 60 m, respectively, were measured inside this region, which are 3 to 4 times larger than the vertical displacements predicted for a quiescent ocean based on scaling arguments. Oceanic mixed layer (OML) currents of 0.2 to 0.7 m s-1 were measured, which are about 50% smaller than the

  2. Effects of Geostrophic Kinetic Energy on the Distribution of Mesopelagic Fish Larvae in the Southern Gulf of California in Summer/Fall Stratified Seasons.

    PubMed

    Contreras-Catala, Fernando; Sánchez-Velasco, Laura; Beier, Emilio; Godínez, Victor M; Barton, Eric D; Santamaría-Del-Angel, Eduardo

    2016-01-01

    Effects of the geostrophic kinetic energy flux on the three-dimensional distribution of fish larvae of mesopelagic species (Vinciguerria lucetia, Diogenichthys laternatus, Benthosema panamense and Triphoturus mexicanus) in the southern Gulf of California during the summer and fall seasons of stronger stratification were analyzed. The greatest larval abundance was found at sampling stations in geostrophic kinetic energy-poor areas (<7.5 J/m3), where the distribution of the dominant species tended to be stratified. Larvae of V. lucetia (average abundance of 318 larvae/10m2) and B. panamense (174 larvae/10m2) were mostly located in and above the pycnocline (typically ~40 m depth). In contrast, larvae of D. laternatus (60 larvae/10m2) were mainly located in and below the pycnocline. On the other hand, at sampling stations in geostrophic kinetic energy-rich areas (>21 J/m3), where mesoscale eddies were present, the larvae of the dominant species had low abundance and were spread more evenly through the water column, in spite of the water column stratification. For example, in a cyclonic eddy, V. lucetia larvae (34 larvae/10m2) extended their distribution at least to the sampling limit of 200 m depth, below the pycnocline, while D. laternatus larvae (29 larvae/10m2) were found right up to the surface, both probably as a consequence of mixing and secondary circulation in the eddy. Results showed that the level of the geostrophic kinetic energy flux affects the abundance and the three-dimensional distribution of mesopelagic fish larvae during the seasons of stronger stratification, indicating that areas with low geostrophic kinetic energy may be advantageous for the feeding and development of mesopelagic fish larvae because of greater water column stability.
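
    The energy-density thresholds quoted above translate into flow speeds via the kinetic energy density E = 0.5*rho*(u^2 + v^2) in J/m^3, assuming a seawater density of about 1025 kg/m^3 (a value not stated in the abstract):

```python
import numpy as np

rho = 1025.0   # assumed seawater density, kg/m^3

def speed_from_energy(E):
    """Geostrophic flow speed (m/s) for a kinetic energy density E (J/m^3)."""
    return np.sqrt(2.0 * E / rho)

for E in (7.5, 21.0):   # the abstract's "energy-poor" and "energy-rich" thresholds
    print(f"E = {E:5.1f} J/m^3  ->  speed = {speed_from_energy(E):.2f} m/s")
```

    Under this assumption the two thresholds correspond to geostrophic speeds of roughly 0.12 and 0.20 m/s.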

  3. Effects of Geostrophic Kinetic Energy on the Distribution of Mesopelagic Fish Larvae in the Southern Gulf of California in Summer/Fall Stratified Seasons

    PubMed Central

    Contreras-Catala, Fernando; Beier, Emilio; Godínez, Victor M.; Barton, Eric D.; Santamaría-del-Angel, Eduardo

    2016-01-01

    Effects of the geostrophic kinetic energy flux on the three-dimensional distribution of fish larvae of mesopelagic species (Vinciguerria lucetia, Diogenichthys laternatus, Benthosema panamense and Triphoturus mexicanus) in the southern Gulf of California during the summer and fall seasons of stronger stratification were analyzed. The greatest larval abundance was found at sampling stations in geostrophic kinetic energy-poor areas (<7.5 J/m3), where the distribution of the dominant species tended to be stratified. Larvae of V. lucetia (average abundance of 318 larvae/10m2) and B. panamense (174 larvae/10m2) were mostly located in and above the pycnocline (typically ~40 m depth). In contrast, larvae of D. laternatus (60 larvae/10m2) were mainly located in and below the pycnocline. On the other hand, at sampling stations in geostrophic kinetic energy-rich areas (>21 J/m3), where mesoscale eddies were present, the larvae of the dominant species had low abundance and were spread more evenly through the water column, in spite of the water column stratification. For example, in a cyclonic eddy, V. lucetia larvae (34 larvae/10m2) extended their distribution at least to the sampling limit of 200 m depth, below the pycnocline, while D. laternatus larvae (29 larvae/10m2) were found right up to the surface, both probably as a consequence of mixing and secondary circulation in the eddy. Results showed that the level of the geostrophic kinetic energy flux affects the abundance and the three-dimensional distribution of mesopelagic fish larvae during the seasons of stronger stratification, indicating that areas with low geostrophic kinetic energy may be advantageous for the feeding and development of mesopelagic fish larvae because of greater water column stability. PMID:27760185

  4. Forebody tangential blowing for control at high angles of attack

    NASA Technical Reports Server (NTRS)

    Kroo, I.; Rock, S.; Roberts, L.

    1991-01-01

    A feasibility study was conducted to determine whether the use of tangential leading-edge blowing over the forebody could provide effective and practical control of the F-18 HARV aircraft at high angles of attack. A simplified model of the F-18 configuration using a vortex-lattice method was developed to obtain a better understanding of basic aerodynamic coupling effects and the influence of forebody circulation on lifting-surface behavior. The effect of tangential blowing was estimated using existing wind tunnel data on normal forebody blowing and analytical studies of tangential blowing over conical forebodies. Incorporation of forebody blowing into the flight control system was investigated by adding this additional yaw-control and side-force-generating actuator to the existing F-18 HARV simulation model. A control law was synthesized using LQG design methods to schedule blowing rates as a function of vehicle sideslip, angle of attack, and roll and yaw rates.

  5. Effects of magnetic drift tangential to magnetic surfaces on neoclassical transport in non-axisymmetric plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuoka, Seikichi, E-mail: matsuoka@rist.or.jp; Satake, Shinsuke; Kanno, Ryutaro

    2015-07-15

    In evaluating neoclassical transport by radially local simulations, the magnetic drift tangential to a flux surface is usually ignored in order to keep phase-space volume conservation. In this paper, the effect of the tangential magnetic drift on the local neoclassical transport is investigated. To retain the effect of the tangential magnetic drift in the local treatment of neoclassical transport, a new local formulation for the drift kinetic simulation is developed. The compressibility of the phase-space volume caused by the tangential magnetic drift is regarded as a source term for the drift kinetic equation, which is solved by using a two-weight δf Monte Carlo method for non-Hamiltonian systems [G. Hu and J. A. Krommes, Phys. Plasmas 1, 863 (1994)]. It is demonstrated that the effect of the drift is negligible for the neoclassical transport in tokamaks. In non-axisymmetric systems, however, the tangential magnetic drift substantially changes the dependence of the neoclassical transport on the radial electric field E_r. The peaked behavior of the neoclassical radial fluxes around E_r = 0 observed in conventional local neoclassical transport simulations is removed by taking the tangential magnetic drift into account.

  6. Dynamics of fingertip contact during the onset of tangential slip

    PubMed Central

    Delhaye, Benoit; Lefèvre, Philippe; Thonnard, Jean-Louis

    2014-01-01

    Through highly precise perceptual and sensorimotor activities, the human tactile system continuously acquires information about the environment. Mechanical interactions between the skin at the point of contact and a touched surface serve as the source of this tactile information. Using a dedicated custom robotic platform, we imaged skin deformation at the contact area between the finger and a flat surface during the onset of tangential sliding movements in four different directions (proximal, distal, radial and ulnar) and with varying normal forces and tangential speeds. This simple tactile event revealed complex mechanics. We observed a reduction of the contact area with increasing tangential force and propose that this phenomenon is explained by nonlinear stiffening of the skin. The deformation's shape and amplitude were highly dependent on stimulation direction. We conclude that the complex, but highly patterned and reproducible, deformations measured in this study are a potential source of information for the central nervous system and that further mechanical measurements are needed to better understand tactile perceptual and motor performance. PMID:25253033

  7. Numerical analysis of tangential slot blowing on a generic chined forebody

    NASA Technical Reports Server (NTRS)

    Agosta, Roxana M.

    1994-01-01

    A numerical study is performed to investigate the effects of tangential slot blowing on a generic chined forebody. The Reynolds-averaged, thin-layer, Navier-Stokes equations are solved to obtain the high-angle-of-attack viscous flow field about a generic chined forebody. Tangential slot blowing is investigated as a means of forebody flow control to generate side force and yawing moment on the forebody. The effects of jet mass flow ratios, angle of attack, and blowing slot location in the axial and circumferential directions are studied. The computed results are compared with available wind tunnel experimental data. The solutions with and without blowing are also analyzed using helicity density contours, surface flow patterns, and off-surface instantaneous streamlines. The results of this analysis provide details of the flow field about the generic chined forebody, as well as show that tangential slot blowing can be used as a means of forebody flow control to generate side force and yawing moment.

  8. Ascl1 promotes tangential migration and confines migratory routes by induction of Ephb2 in the telencephalon

    PubMed Central

    Liu, Yuan-Hsuan; Tsai, Jin-Wu; Chen, Jia-Long; Yang, Wan-Shan; Chang, Pei-Ching; Cheng, Pei-Lin; Turner, David L.; Yanagawa, Yuchio; Wang, Tsu-Wei; Yu, Jenn-Yah

    2017-01-01

    During development, cortical interneurons generated from the ventral telencephalon migrate tangentially into the dorsal telencephalon. Although Achaete-scute family bHLH transcription factor 1 (Ascl1) plays important roles in the developing telencephalon, whether Ascl1 regulates tangential migration remains unclear. Here, we found that Ascl1 promoted tangential migration along the ventricular zone/subventricular zone (VZ/SVZ) and intermediate zone (IZ) of the dorsal telencephalon. Distal-less homeobox 2 (Dlx2) acted downstream of Ascl1 in promoting tangential migration along the VZ/SVZ but not IZ. We further identified Eph receptor B2 (Ephb2) as a direct target of Ascl1. Knockdown of EphB2 disrupted the separation of the VZ/SVZ and IZ migratory routes. Ephrin-A5, a ligand of EphB2, was sufficient to repel both Ascl1-expressing cells in vitro and tangentially migrating cortical interneurons in vivo. Together, our results demonstrate that Ascl1 induces expression of Dlx2 and Ephb2 to maintain distinct tangential migratory routes in the dorsal telencephalon. PMID:28276447

  9. Absolute Geostrophic Velocity Inverted from World Ocean Atlas 2013 (WOAV13) with the P-Vector Method

    DTIC Science & Technology

    2015-11-01

    The WOAV13 dataset comprises 3D global gridded climatological fields of absolute geostrophic velocity inverted from World Ocean Atlas 2013 (WOA13) temperature and salinity fields using the P-vector method. It provides a climatological velocity field that is... Dataset Identifier: gov.noaa.nodc:0121576. Creator: NOAP Lab, Department of Oceanography, Naval Postgraduate School, Monterey, CA.

  10. Poincare oscillations and geostrophic adjustment in a rotating paraboloid

    NASA Astrophysics Data System (ADS)

    Kalashnik, M.; Kakhiani, V.; Patarashvili, K.; Tsakadze, S.

    2009-10-01

    Free liquid oscillations (Poincare oscillations) in a rotating paraboloid are investigated theoretically and experimentally. Within the framework of shallow-water theory, taking the centrifugal force into account, expressions for the free oscillation frequencies are obtained and corrections to the frequencies related to the finite depth of the liquid are found. It is shown that in the rotating liquid, apart from the wave modes of free oscillations, a stationary vortex mode is also generated, that is, a process of geostrophic adjustment takes place. Solutions of the shallow-water equations which describe the wave dynamics of the adjustment process are presented. In the experiments performed, the wave and vortex modes were excited by removing a previously immersed hemisphere from the central part of the paraboloid. Good agreement between theory and experiment was obtained.

  11. Absolute geostrophic currents over the SR02 section south of Africa in December 2009

    NASA Astrophysics Data System (ADS)

    Tarakanov, Roman

    2017-04-01

    The structure of the absolute geostrophic currents is investigated on the basis of CTD-, SADCP- and LADCP-data over the hydrographic section occupied south of Africa from the Cape of Good Hope to 57° S along the Prime Meridian, and on the basis of satellite data on absolute dynamic topography (ADT) produced by Ssalto/Duacs and distributed by Aviso, with support from Cnes (http://www.aviso.altimetry.fr/duacs/). The section thus crossed the subtropical zone (at the junction of the subtropical gyres of the Indian and Atlantic oceans), the Antarctic Circumpolar Current (ACC), and terminated at the northern periphery of the Weddell Gyre. A total of 87 stations were occupied with CTD- and LADCP-profiling over the entire water column. The distance between stations was 20 nautical miles. Absolute geostrophic currents were calculated between each pair of CTD stations with a barotropic correction based on two methods: on SADCP data and on ADT at these stations. The subtropical part of the section crossed a large segment of the Agulhas meander, already separated from the current and disintegrating into individual eddies. In addition, smaller, recently formed cyclones and anticyclones of the Agulhas Current were also observed in this zone. These structural elements of the upper layer of the ocean currents do not penetrate deeper than 1000-1500 m. Oppositely directed barotropic currents with velocities up to 30 cm/s were observed below these depths, extending to the ocean bottom. Such large velocities agree well with the bottom-tracking data of the Lowered ADCP; these were the only reliable LADCP results because of the high transparency of the deep waters of the subtropical zone. The total transport of absolute geostrophic currents in the section is estimated as 144 and 179 Sv to the east, based on the SADCP and ADT barotropic correction, respectively. A transport of 4 (2) Sv to the east was observed on the northern periphery of the Weddell Gyre, 187 (182) Sv to
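    The two-step estimate described here, thermal-wind shear between a CTD station pair plus a barotropic correction, can be sketched as follows. All numbers (the dynamic-height profile and the reference depth-mean velocity) are invented placeholders, not survey values:

```python
import numpy as np

g, f = 9.81, -1.2e-4            # gravity; Coriolis parameter (Southern Hemisphere)
dx = 20 * 1852.0                # 20 nautical miles between stations, in metres

z = np.arange(0.0, 4000.0, 10.0)        # depth grid
# Toy dynamic-height difference between the two stations (m^2 s^-2):
dPhi = 1.5 * np.exp(-z / 800.0)

# Relative geostrophic velocity, referenced to zero at the deepest level:
v_rel = (dPhi - dPhi[-1]) / (f * dx)

# Barotropic correction: shift the whole profile so its depth mean equals a
# reference velocity taken from SADCP (or from the altimetric ADT at the surface).
v_ref = 0.05                            # m/s, assumed reference depth-mean velocity
v_abs = v_rel + (v_ref - v_rel.mean())

print(v_abs.mean())                     # equals v_ref by construction
```

    The same shift can instead be chosen so that the surface value matches the altimetric ADT, which is the second correction method used in the abstract.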

  12. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10^4 and Ro ~ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  13. Radial vorticity constraint in core flow modeling

    NASA Astrophysics Data System (ADS)

    Asari, S.; Lesur, V.

    2011-11-01

    We present a new method for estimating core surface flows by relaxing the tangentially geostrophic (TG) constraint. Ageostrophic flows are allowed if they are consistent with the radial component of the vorticity equation under assumptions of the magnetostrophic force balance and an insulating mantle. We thus derive a tangentially magnetostrophic (TM) constraint for flows in the spherical harmonic domain and implement it in a least squares inversion of GRIMM-2, a recently proposed core field model, for temporally continuous core flow models (2000.0-2010.0). Comparing the flows calculated using the TG and TM constraints, we show that the number of degrees of freedom for the poloidal flows is notably increased by admitting ageostrophic flows compatible with the TM constraint. We find a significantly improved fit to the GRIMM-2 secular variation (SV) by including zonal poloidal flow in TM flow models. Correlations between the predicted and observed length-of-day variations are equally good under the TG and TM constraints. In addition, we estimate flow models by imposing the TM constraint together with other dynamical constraints: either purely toroidal (PT) flow or helical flow constraint. For the PT case we cannot find any flow which explains the observed SV, while for the helical case the SV can be fitted. The poor compatibility between the TM and PT constraints seems to arise from the absence of zonal poloidal flows. The PT flow assumption is likely to be negated when the radial magnetostrophic vorticity balance is taken into account, even if otherwise consistent with magnetic observations.

  14. Design of tangential viewing phase contrast imaging for turbulence measurements in JT-60SA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, K., E-mail: ktanaka@nifs.ac.jp; Department of Advanced Energy Engineering, Kyushu University, Kasuga, Fukuoka 816-8580; Coda, S.

    2016-11-15

    A tangential viewing phase contrast imaging system is being designed for the JT-60SA tokamak to investigate microturbulence. In order to obtain localized information on the turbulence, a spatial-filtering technique is applied, based on magnetic shearing. The tangential viewing geometry enhances the radial localization. The probing laser beam is injected tangentially and traverses the entire plasma region including both low and high field sides. The spatial resolution for an Internal Transport Barrier discharge is estimated at 30%–70% of the minor radius at k = 5 cm^-1, which is the typical expected wave number of ion-scale turbulence such as the ion temperature gradient/trapped electron modes.

  15. Normal and Tangential Momentum Accommodation for Earth Satellite Conditions

    NASA Technical Reports Server (NTRS)

    Knechtel, Earl D.; Pitts, William C.

    1973-01-01

    Momentum accommodation was determined experimentally for gas-surface interactions simulating in a practical way those of near-earth satellites. Throughout the ranges of gas energies and incidence angles of interest for near-earth-satellite conditions, two components of force were measured by means of a vacuum microbalance to determine the normal and tangential momentum-accommodation coefficients for nitrogen ions on technical-quality aluminum surfaces. For these experimental conditions, the electrodynamics of ion neutralization near the surface indicate that results for nitrogen ions should differ relatively little from those for nitrogen molecules, which comprise the largest component of momentum flux for near-earth satellites. The experimental results indicated that both normal and tangential momentum-accommodation coefficients varied widely with energy, tending to be relatively well accommodated at the higher energies, but becoming progressively less accommodated as the energy was reduced to and below that for earth-satellite speeds. Both coefficients also varied greatly with incidence angle, the normal momentum becoming less accommodated as the incidence angle became more glancing, whereas the tangential momentum generally became more fully accommodated. For each momentum coefficient, an empirical correlation function was obtained which closely approximated the experimental results over the ranges of energy and incidence angle. Most of the observed variations of momentum accommodation with energy and incidence angle were qualitatively indicated by a calculation using a three-dimensional model that simulated the target surface by a one-dimensional attractive potential and hard-sphere reflectors.
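    For orientation, the normal and tangential momentum-accommodation coefficients have standard definitions in terms of the incident, reflected, and wall-temperature momentum fluxes; the numbers below are a made-up, fully accommodated case, not measurements from this study:

```python
def accommodation_coefficients(p_i, p_r, p_w, tau_i, tau_r):
    """Standard definitions of the momentum-accommodation coefficients.

    p_i, p_r     : incident / reflected normal momentum flux
    p_w          : normal momentum flux for re-emission at the wall temperature
    tau_i, tau_r : incident / reflected tangential momentum flux
    """
    sigma_n = (p_i - p_r) / (p_i - p_w)   # normal accommodation
    sigma_t = (tau_i - tau_r) / tau_i     # tangential accommodation
    return sigma_n, sigma_t

# Full accommodation: the gas leaves at wall conditions (p_r = p_w, tau_r = 0),
# so both coefficients equal 1; specular reflection would give 0 for both.
print(accommodation_coefficients(1.0, 0.3, 0.3, 1.0, 0.0))  # -> (1.0, 1.0)
```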

  16. Tangential System of Thomson Scattering for Tokamak T-15

    NASA Astrophysics Data System (ADS)

    Asadulin, G. M.; Bel'bas, I. S.; Gorshkov, A. V.

    2017-12-01

    Two systems of Thomson scattering diagnostics, with vertical and tangential probing, are used in the D-shaped plasma cross section in tokamak T-15. The tangential system allows measuring plasma temperature and density profiles along the major radius of the tokamak. This paper presents the tangential system project. The system is based on a Nd:YAG laser with wavelength of 1064 nm, pulse energy of 3 J, pulse duration of 10 ns, and repetition rate of 100 Hz. The chosen geometry allows collecting light from ten uniformly spaced points. Optimization of the registration system has been accomplished. The collected light will be transmitted through an optical fiber bundle with diameter of 3 mm and quartz fibers (numerical aperture is 0.22). Six-channel polychromators based on high-contrast interference filters have been chosen as spectral equipment. The radiation will be registered by avalanche photodiodes. The technique of electron temperature and density measurement is described, and estimation of its accuracy is carried out. The proposed system allows measuring the electron temperature with accuracy not worse than 10% within the range of 50 eV to 10 keV on the pinch edge over the internal contour, from 20 eV to 9 keV in the plasma central region, and from 2 eV to 400 eV on the pinch edge over the outer contour. The estimation is made for electron density of not less than 2.6 × 1013 cm-3.

  17. Langevin Dynamics, Large Deviations and Instantons for the Quasi-Geostrophic Model and Two-Dimensional Euler Equations

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2014-09-01

    We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.

  18. Mapping sub-surface geostrophic currents from altimetry and a fleet of gliders

    NASA Astrophysics Data System (ADS)

    Alvarez, A.; Chiggiato, J.; Schroeder, K.

    2013-04-01

    Integrating the observations gathered by different platforms into a unique physical picture of the environment is a fundamental aspect of networked ocean observing systems. These are constituted by a spatially distributed set of sensors and platforms that simultaneously monitor a given ocean region. Remote sensing from satellites is an integral part of present ocean observing systems. Due to their autonomy, mobility and controllability, underwater gliders are envisioned to play a significant role in the development of networked ocean observatories. Exploiting the synergy between remote sensing and underwater gliders is expected to result in a better characterization of the marine environment than using these observational sources individually. This study investigates a methodology to estimate the three-dimensional distribution of geostrophic currents by merging satellite altimetry and in situ samples gathered by a fleet of Slocum gliders. Specifically, the approach computes the volumetric, three-dimensional distribution of absolute dynamic height (ADH) that minimizes the total energy of the system while remaining close to the in situ observations and matching the absolute dynamic topography (ADT) observed from satellite at the sea surface. A three-dimensional finite element technique is employed to solve the minimization problem. The methodology is validated using the dataset collected during the field experiment called Rapid Environmental Picture-2010 (REP-10) carried out by the NATO Undersea Research Center-NURC during August 2010. A marine region offshore La Spezia (northwest coast of Italy) was sampled by a fleet of three coastal Slocum gliders. Results indicate that the geostrophic current field estimated from gliders and altimetry significantly improves the estimates obtained using only the data gathered by the glider fleet.

  19. Optimization of Tangential Mass Injection for Minimizing Flow Separation in a Scramjet Inlet

    DTIC Science & Technology

    1991-12-01

    Aerospace Engineering, Vol. 11, No. 8, August 1991, p. 23. 26. Heppenheimer, Thomas A. Lecture notes from Hypersonic Technologies seminar. University... AFIT/GAE/ENY/91D-2. AD-A243 868. OPTIMIZATION OF TANGENTIAL MASS INJECTION FOR MINIMIZING FLOW SEPARATION IN A SCRAMJET INLET. THESIS Presented to the Faculty of the School of Engineering of the

  20. Tangential migratory pathways of subpallial origin in the embryonic telencephalon of sharks: evolutionary implications.

    PubMed

    Quintana-Urzainqui, Idoia; Rodríguez-Moldes, Isabel; Mazan, Sylvie; Candal, Eva

    2015-09-01

    Tangential neuronal migration occurs along different axes from the axis demarcated by radial glia and it is thought to have evolved as a mechanism to increase the diversity of cell types in brain areas, which in turn resulted in increased complexity of functional networks. In the telencephalon of amniotes, different embryonic tangential pathways have been characterized. However, little is known about the exact routes of migrations in basal vertebrates. Cartilaginous fishes occupy a key phylogenetic position to assess the ancestral condition of vertebrate brain organization. In order to identify putative subpallial-derived tangential migratory pathways in the telencephalon of sharks, we performed a detailed analysis of the distribution pattern of GAD and Dlx2, two reliable markers of tangentially migrating interneurons of subpallial origin in the developing forebrain. We propose the existence of five tangential routes directed toward different telencephalic regions. We conclude that four of the five routes might have emerged in the common ancestor of jawed vertebrates. We have paid special attention to the characterization of the proposed migratory pathway directed towards the olfactory bulbs. Our results suggest that it may be equivalent to the "rostral migratory stream" of mammals and led us to propose a hypothesis about its evolution. The analysis of the final destinations of two other streams allowed us to identify the putative dorsal and medial pallium of sharks, the regions from which the neocortex and hippocampus might have, respectively, evolved. Derived features were also reported and served to explain some distinctive traits in the morphology of the telencephalon of cartilaginous fishes.

  1. Nudging Satellite Altimeter Data Into Quasi-Geostrophic Ocean Models

    NASA Astrophysics Data System (ADS)

    Verron, Jacques

    1992-05-01

    This paper discusses the efficiency of several variants of the nudging technique (derived from the technique of the same name developed by meteorologists) for assimilating altimeter data into numerical ocean models based on quasi-geostrophic formulation. Assimilation experiments are performed with data simulated in the nominal sampling conditions of the Topex-Poseidon satellite mission. Under experimental conditions it is found that nudging on the altimetric sea level is as efficient as nudging on the vorticity (second derivative in space of the dynamic topography), the technique used thus far in studies of this type. The use of altimetric residuals only, instead of the total altimetric sea level signal, is also explored. The critical importance of having an adequate reference mean sea level is largely confirmed. Finally, the possibility of nudging only the signal of sea level tendency (i.e., the successive time differences of the sea level height) is examined. Apart from the barotropic mode, results are not very successful compared with those obtained by assimilating the residuals.
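    The nudging idea itself is compact: a relaxation term proportional to the observation-minus-model misfit is added to the model tendency. The following toy sketch (an invented two-state model with arbitrary gain and observation interval, not the quasi-geostrophic setup of the paper) shows the accumulated error of a nudged run falling below that of a free run:

```python
import numpy as np

dt, n = 0.01, 4000
K = 2.0                        # nudging (relaxation) coefficient, chosen arbitrarily

def tendency(x, damping):
    # Damped oscillator written as a 2-state system [position, velocity].
    return np.array([x[1], -x[0] - damping * x[1]])

truth  = np.array([1.0, 0.0])
free   = np.array([0.5, 0.0])  # wrong initial condition and wrong damping
nudged = free.copy()

err_free = err_nudged = 0.0
for i in range(n):
    truth += dt * tendency(truth, 0.10)
    free  += dt * tendency(free, 0.15)
    inc = dt * tendency(nudged, 0.15)
    if i % 10 == 0:                       # an observation is available this step
        inc += dt * K * (truth - nudged)  # relax the model toward the observation
    nudged += inc
    err_free   += np.sum((free - truth) ** 2)
    err_nudged += np.sum((nudged - truth) ** 2)

print(err_nudged < err_free)   # the nudged run tracks the truth more closely
```

    In the paper's setting the observed quantity is the altimetric sea level (or its vorticity or tendency) rather than the full state, which is what distinguishes the variants compared above.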

  2. Tangential migration of glutamatergic neurons and cortical patterning during development: Lessons from Cajal-Retzius cells.

    PubMed

    Barber, Melissa; Pierani, Alessandra

    2016-08-01

    Tangential migration is a mode of cell movement which, in the developing cerebral cortex, is defined by displacement parallel to the ventricular surface and orthogonal to the radial glial fibers. This mode of long-range migration is a strategy by which distinct neuronal classes generated from spatially and molecularly distinct origins can integrate to form appropriate neural circuits within the cortical plate. While it was previously believed that only GABAergic cortical interneurons migrate tangentially from their origins in the subpallial ganglionic eminences to integrate in the cortical plate, it is now known that transient populations of glutamatergic neurons also adopt this mode of migration. These include Cajal-Retzius cells (CRs), subplate neurons (SPs), and cortical plate transient neurons (CPTs), which have crucial roles in orchestrating the radial and tangential development of the embryonic cerebral cortex in a non-cell-autonomous manner. While CRs have been extensively studied, it is only in the last decade that the molecular mechanisms governing their tangential migration have begun to be elucidated. To date, the mechanisms of SP and CPT tangential migration remain unknown. We therefore review the known signaling pathways which regulate parameters of CR migration, including their motility, contact-redistribution and adhesion to the pial surface, and discuss how CR migration may regulate their signaling activity in a spatial and temporal manner. © 2015 Wiley Periodicals, Inc. Develop Neurobiol 76: 847-881, 2016.

  3. Multiple zonal jets and convective heat transport barriers in a quasi-geostrophic model of planetary cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-10-01

    We study rapidly rotating Boussinesq convection driven by internal heating in a full sphere. We use a numerical model based on the quasi-geostrophic approximation for the velocity field, whereas the temperature field is 3-D. This approximation allows us to perform simulations for Ekman numbers down to 10^-8, Prandtl numbers relevant for liquid metals (~10^-1) and Reynolds numbers up to 3 × 10^4. Persistent zonal flows composed of multiple jets form as a result of the mixing of potential vorticity. For the largest Rayleigh numbers computed, the zonal velocity is larger than the convective velocity despite the presence of boundary friction. The convective structures and the zonal jets widen when the thermal forcing increases. Prograde and retrograde zonal jets are dynamically different: in the prograde jets (which correspond to weak potential vorticity gradients) the convection transports heat efficiently and the mean temperature tends to be homogenized; by contrast, in the cores of the retrograde jets (which correspond to steep gradients of potential vorticity) the dynamics is dominated by the propagation of Rossby waves, resulting in the formation of steep mean temperature gradients and the dominance of conduction in the heat transfer process. Consequently, in quasi-geostrophic systems, the width of the retrograde zonal jets controls the efficiency of the heat transfer.

  4. Kalker's algorithm Fastsim solves tangential contact problems with slip-dependent friction and friction anisotropy

    NASA Astrophysics Data System (ADS)

    Piotrowski, J.

    2010-07-01

    This paper presents two extensions of Kalker's algorithm Fastsim of the simplified theory of rolling contact. The first extension solves tangential contact problems with a coefficient of friction that depends on slip velocity. Two friction laws have been considered: with and without recuperation of the static friction. According to the tribological hypothesis of shear failure for metallic bodies, the friction law without recuperation of static friction is the more suitable of the two for wheel-rail contact. Sample results present local quantities inside the contact area (division into slip and adhesion zones, traction) as well as global ones (creep forces as functions of creepages and rolling velocity). For a coefficient of friction diminishing with slip, the creep forces decay after reaching their maximum and depend on the rolling velocity. The second extension solves tangential contact problems with friction anisotropy characterised by a convex set of permissible tangential tractions. The effect of the anisotropy is shown on examples of rolling without spin and in the presence of pure spin for the elliptical set. The friction anisotropy influences tangential tractions and creep forces. Sample results present local and global quantities. Both extensions are described in the same language of formulation and may be merged into one joint algorithm.

  5. The quality assessment of radial and tangential neutron radiography beamlines of TRR

    NASA Astrophysics Data System (ADS)

    Choopan Dastjerdi, M. H.; Movafeghi, A.; Khalafi, H.; Kasesaz, Y.

    2017-07-01

    To achieve a quality neutron radiographic image in a relatively short exposure time, the neutron radiography beam must be of good quality and relatively high neutron flux. Characterization of a neutron radiography beam, such as determination of the image quality and the neutron flux, is vital for producing quality radiographic images and also provides a means to compare the quality of different neutron radiography facilities. This paper provides a characterization of the radial and tangential neutron radiography beamlines at the Tehran research reactor. This work includes determination of each facility's category according to the American Society for Testing and Materials (ASTM) standards, and also uses gold foils to determine the neutron beam flux. The radial neutron beam is a Category I neutron radiography facility, the highest possible quality level according to the ASTM. The tangential beam is a Category IV neutron radiography facility. Gold foil activation experiments show that the measured neutron flux for the radial beamline with length-to-diameter ratio (L/D) = 150 is 6.1 × 10^6 n cm^-2 s^-1 and for the tangential beamline with (L/D) = 115 is 2.4 × 10^4 n cm^-2 s^-1.
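    The gold-foil flux measurement mentioned above rests on the activation relation A = φσN(1 − e^(−λt)). The round-trip sketch below uses standard Au-197/Au-198 nuclear constants but an assumed foil atom count and irradiation time (not the TRR experiment's values); it simply verifies that inverting the relation recovers the flux:

```python
import math

sigma = 98.65e-24               # Au-197 thermal (n,gamma) cross section, cm^2
half_life = 2.695 * 24 * 3600   # Au-198 half-life, s
lam = math.log(2) / half_life   # decay constant
N = 1.0e20                      # number of gold atoms in the foil (assumed)
t_irr = 3600.0                  # irradiation time, s (assumed)

phi_true = 6.1e6                # n cm^-2 s^-1 (radial-beam value from the text)

# Forward: activity induced by irradiating the foil in the beam.
A = phi_true * sigma * N * (1 - math.exp(-lam * t_irr))   # Bq

# Inverse: recover the flux from the "measured" activity.
phi = A / (sigma * N * (1 - math.exp(-lam * t_irr)))
print(abs(phi - phi_true) < 1e-6 * phi_true)              # True: exact round trip
```

    In practice the measured activity must also be corrected for decay between irradiation and counting and for detector efficiency, which this sketch omits.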

  6. Forbidden tangential orbit transfers between intersecting Keplerian orbits

    NASA Technical Reports Server (NTRS)

    Burns, Rowland E.

    1990-01-01

    The classical problem of tangential impulse transfer between coplanar Keplerian orbits is addressed. A completely analytic solution which does not rely on sequential calculation is obtained and this solution is used to demonstrate that certain initially chosen angles can produce singularities in the parameters of the transfer orbit. A necessary and sufficient condition for such singularities is that the initial and final orbits intersect.
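    For the simplest non-intersecting case (two coplanar circular orbits), the bookkeeping of tangential impulses reduces to differences of vis-viva speeds; the intersecting-orbit geometry and its singularities analyzed in the paper are not captured by this sketch. In canonical units (mu = 1):

```python
import math

def tangential_transfer_dv(r1, r2, mu=1.0):
    """Total Delta-v for a two-impulse tangential (Hohmann-type) transfer
    between coplanar circular orbits of radii r1 and r2."""
    a_t = 0.5 * (r1 + r2)                        # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                      # circular speed at r1
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a_t))  # vis-viva speed on transfer at r1
    v2 = math.sqrt(mu / r2)
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a_t))   # vis-viva speed on transfer at r2
    # Tangential burns change only the speed, so each impulse is a speed difference.
    return abs(v_peri - v1) + abs(v2 - v_apo)

print(round(tangential_transfer_dv(1.0, 2.0), 4))
```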

  7. Generalization of the quasi-geostrophic Eliassen-Palm flux to include eddy forcing of condensation heating

    NASA Technical Reports Server (NTRS)

    Stone, P. H.; Salustri, G.

    1984-01-01

    A modified Eulerian form of the Eliassen-Palm flux which includes the effect of eddy forcing on condensation heating is defined. With the two-dimensional vector flux in the meridional plane which is a function of the zonal mean eddy fluxes replaced by the modified flux, both the Eliassen-Palm theorem and a modified but more general form of the nonacceleration theorem for quasi-geostrophic motion still hold. Calculations of the divergence of the modified flux and of the eddy forcing of the moisture field are presented.
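    For reference, the dry quasi-geostrophic Eliassen-Palm flux and the transformed-Eulerian-mean zonal momentum balance underlying the theorem take the standard form (beta-plane, with rho_0 the reference density, f_0 the Coriolis parameter, theta_0 the background potential temperature, overbars zonal means and primes eddy departures, and v-bar-star the residual meridional circulation):

```latex
\mathbf{F} \;=\; \rho_0\left(-\,\overline{u'v'}\,,\;\; \frac{f_0\,\overline{v'\theta'}}{\partial\theta_0/\partial z}\right),
\qquad
\frac{\partial \bar{u}}{\partial t} \;-\; f_0\,\bar{v}^{*} \;=\; \frac{1}{\rho_0}\,\nabla\cdot\mathbf{F}.
```

    When the divergence of F vanishes, the eddies exert no net force on the zonal-mean flow (nonacceleration); the modification described in the abstract alters the vertical (heat-flux) component so that both results survive when the eddies also force condensation heating.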

  8. Comparison of histologic margin status in low-grade cutaneous and subcutaneous canine mast cell tumours examined by radial and tangential sections.

    PubMed

    Dores, C B; Milovancev, M; Russell, D S

    2018-03-01

    Radial sections are widely used to estimate adequacy of excision in canine cutaneous mast cell tumours (MCTs); however, this sectioning technique estimates only a small fraction of total margin circumference. This study aimed to compare histologic margin status in grade II/low grade MCTs sectioned using both radial and tangential sectioning techniques. A total of 43 circumferential margins were evaluated from 21 different tumours. Margins were first sectioned radially, followed by tangential sections. Tissues were examined by routine histopathology. Tangential margin status differed in 10 of 43 (23.3%) margins compared with their initial status on radial section. Of 39 margins, 9 (23.1%) categorized as histologic tumour-free margin (HTFM) >0 mm were positive on tangential sectioning. Tangential sections detected a significantly higher proportion of positive margins relative to radial sections (exact 2-tailed P-value = .0215). The HTFM was significantly longer in negative tangential margins than positive tangential margins (mean 10.1 vs 3.2 mm; P = .0008). A receiver operating characteristic curve comparing HTFM and tangentially negative margins found an area under the curve of 0.83 (95% confidence interval: 0.71-0.96). Although correct classification peaked at the sixth cut-point of HTFM ≥1 mm, radial sections still incorrectly classified 50% of margins as lacking tumour cells. Radial sections had 100% specificity for predicting negative tangential margins at a cut-point of 10.9 mm. These data indicate that for low grade MCTs, HTFMs >0 mm should not be considered completely excised, particularly when HTFM is <10.9 mm. This will inform future studies that use HTFM and overall excisional status as dependent variables in multivariable prognostic models. © 2017 John Wiley & Sons Ltd.
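    The cut-point logic reported here can be made concrete with a toy calculation on invented data (the values below are not the study's): predict a positive tangential margin whenever the radial HTFM falls below a threshold, then tally sensitivity and specificity against the tangential result.

```python
import numpy as np

# Hypothetical margins: radial HTFM in mm, and whether tangential
# sectioning found tumour cells (True = tangentially positive).
htfm     = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 11.0, 12.0, 15.0])
tang_pos = np.array([True, True, True, False, True, False, False, False, False, False])

def classify(cut):
    # Predict "positive" (incompletely excised) when HTFM is below the cut-point.
    pred_pos = htfm < cut
    tp = np.sum(pred_pos & tang_pos)       # true positives
    tn = np.sum(~pred_pos & ~tang_pos)     # true negatives
    sens = tp / np.sum(tang_pos)
    spec = tn / np.sum(~tang_pos)
    return sens, spec

for cut in (1.0, 10.9):
    print(cut, classify(cut))
```

    In this invented sample, no tangentially positive margin lies above the larger cut-point, mirroring the study's finding that radial sections only reliably predict negative tangential margins at HTFM of 10.9 mm or more.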

  9. A role for intermediate radial glia in the tangential expansion of the mammalian cerebral cortex.

    PubMed

    Reillo, Isabel; de Juan Romero, Camino; García-Cabezas, Miguel Ángel; Borrell, Víctor

    2011-07-01

    The cerebral cortex of large mammals undergoes massive surface area expansion and folding during development. Specific mechanisms to orchestrate the growth of the cortex in surface area rather than in thickness are likely to exist, but they have not been identified. Analyzing multiple species, we have identified a specialized type of progenitor cell that is exclusive to mammals with a folded cerebral cortex, which we named intermediate radial glia cell (IRGC). IRGCs express Pax6 but not Tbr2, have a radial fiber contacting the pial surface but not the ventricular surface, and are found in both the inner subventricular zone and outer subventricular zone (OSVZ). We find that IRGCs are massively generated in the OSVZ, thus augmenting the numbers of radial fibers. Fanning out of this expanding radial fiber scaffold promotes the tangential dispersion of radially migrating neurons, allowing for the growth in surface area of the cortical sheet. Accordingly, the tangential expansion of particular cortical regions was preceded by high proliferation in the underlying OSVZ, whereas the experimental reduction of IRGCs impaired the tangential dispersion of neurons and resulted in a smaller cortical surface. Thus, the generation of IRGCs plays a key role in the tangential expansion of the mammalian cerebral cortex.

  10. Turbulent convection in geostrophic circulation with wind and buoyancy forcing

    NASA Astrophysics Data System (ADS)

    Sohail, Taimoor; Gayen, Bishakhdatta; Hogg, Andy

    2017-11-01

    We conduct a direct numerical simulation of geostrophic circulation forced by surface wind and buoyancy to model a circumpolar ocean. The imposed buoyancy forcing (represented by Rayleigh number) drives a zonal current and supports small-scale convection in the buoyancy destabilizing region. In addition, we observe eddy activity which transports heat southward, supporting a large amount of heat uptake. Increasing wind stress enhances the meridional buoyancy gradient, triggering more eddy activity inside the boundary layer. Therefore, heat uptake increases with higher wind stress. The majority of dissipation is confined within the surface boundary layer, while mixing is dominant inside the convective plume and the buoyancy destabilizing region of the domain. The relative strength of the mixing and dissipation in the system can be expressed by mixing efficiency. This study finds that mixing is much greater than viscous dissipation, resulting in higher values of mixing efficiency than previously used. Supported by Australian Research Council Grant DP140103706.
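
The "mixing efficiency" invoked above is commonly defined as the fraction of total irreversible energy loss that goes into mixing, eta = M / (M + epsilon), with epsilon the viscous dissipation; this definition and the numbers below are a hedged illustration, since the abstract does not state the paper's exact formula.

```python
def mixing_efficiency(mixing, dissipation):
    # eta = M / (M + epsilon): approaches 1 when mixing dominates dissipation
    return mixing / (mixing + dissipation)

# Illustrative values: mixing three times the dissipation
eta = mixing_efficiency(3.0, 1.0)
```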

  11. Further studies on cortical tangential migration in wild type and Pax-6 mutant mice.

    PubMed

    Jiménez, D; López-Mascaraque, L; de Carlos, J A; Valverde, F

    2002-01-01

    In this study we present new data concerning the tangential migration from the medial and lateral ganglionic eminences (MGE and LGE) to the cerebral cortex during development. We have used Calbindin as a marker to follow the itinerary of tangentially migrating cells during early developmental stages in wild-type and Pax-6 homozygous mutant mice. In the wild-type mice, at early developmental stages, migrating cells advance through the intermediate zone (IZ) and preplate (PP). At more advanced stages, migrating cells were present in the subplate (SP) and cortical plate (CP), reaching the entire developing cerebral cortex. We found that in the homozygous mutant mice (Pax-6(Sey-Neu)/Pax-6(Sey-Neu)) this tangential migration is severely affected at early developmental stages: migrating cells were absent from the IZ and were found there only some days later, suggesting that in the mutant mice there is a temporal delay in tangential migration. We have also defined some possible mechanisms to explain certain migratory routes from the basal telencephalon to the cerebral cortex. We describe two factors that we consider essential for normal migration: the first is the cell adhesion molecule PSA-NCAM, whose role in other migratory systems is well known; the second is Robo-2, whose expression delimits a channel for the passage of migratory cells from the basal telencephalon to the cerebral cortex.

  12. Helicity, geostrophic balance and mixing in rotating stratified turbulence: a multi-scale problem

    NASA Astrophysics Data System (ADS)

    Pouquet, A.; Marino, R.; Mininni, P.; Rorai, C.; Rosenberg, D. L.

    2012-12-01

    Interactions between winds and waves play important roles in planetary and oceanic boundary layers, affecting momentum, heat and CO2 transport. In the abyssal Southern Ocean at mid-latitudes, this may result in a mixed layer that is too shallow in climate models, thereby affecting the overall evolution because of poor handling of wave breaking, as in Kelvin-Helmholtz instabilities: gravity waves couple nonlinearly on slow time scales and undergo steepening through resonant interactions, or due to the presence of shear. In the oceans, sub-mesoscale frontogenesis and significant departures from quasi-geostrophy can be seen as turbulence intensifies. The ensuing anomalous vertical dispersion may not be simply modeled by a random walk, due to intermittent structures, wave propagation and their interactions. Conversely, the energy and seeds required for such intermittent events to occur, say in the stable planetary boundary layer, may come from the wave field that is perturbed, or from winds and the effect of topography. Under the assumption of stationarity, weak nonlinearities, dissipation and forcing, one obtains large-scale geostrophic balance linking the pressure gradient, gravity and the Coriolis force. The role of helicity (velocity-vorticity correlations) has not received as much attention outside the realm of astrophysics, where it arises in the growth of large-scale magnetic fields. However, it is measured routinely in the atmosphere to gauge the likelihood that supercell convective storms will strengthen, and it may be a factor in the formation of hurricanes. In this context, we examine the transition from a wave-dominated regime to an isotropic small-scale turbulent one in rotating flows with helical forcing. Using a direct numerical simulation (DNS) on a 3072^3 grid with Rossby and
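
The large-scale geostrophic balance mentioned above, with the pressure gradient balancing the Coriolis force, gives u_g = -(1/(rho f)) dp/dy and v_g = (1/(rho f)) dp/dx on an f-plane. The sketch below is a generic illustration with assumed values, not the paper's setup.

```python
def geostrophic_wind(dpdx, dpdy, rho=1.2, f=1.0e-4):
    """Geostrophic velocity (m/s) from horizontal pressure gradients (Pa/m).

    rho: air density (kg/m^3); f: Coriolis parameter (1/s), mid-latitude value.
    """
    u_g = -dpdy / (rho * f)
    v_g = dpdx / (rho * f)
    return u_g, v_g

# A purely meridional pressure gradient drives a purely zonal flow:
u, v = geostrophic_wind(0.0, 3.0e-3)
```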

  13. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
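
A minimal sketch of the optimal-interpolation step underlying these analyses, assuming a Gaussian background-error correlation with decorrelation scale L (the parameter the study varies); the grid, observation values and error variances below are illustrative, not the survey data.

```python
import numpy as np

def oi_analysis(x_grid, background, obs_idx, obs_values, L, obs_err_var):
    """OI update x_a = x_b + K (y - H x_b) with Gaussian background covariance."""
    d = x_grid[:, None] - x_grid[None, :]
    B = np.exp(-(d / L) ** 2)                     # background error covariance
    H = np.zeros((len(obs_idx), len(x_grid)))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0     # observe grid points directly
    R = obs_err_var * np.eye(len(obs_idx))        # observation error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return background + K @ (obs_values - H @ background)

x = np.linspace(0.0, 150.0, 31)                   # km, like the 150-km domain
bg = np.zeros_like(x)                             # zero background stream function
xa = oi_analysis(x, bg, np.array([10, 20]), np.array([1.0, -0.5]),
                 L=30.0, obs_err_var=0.1)
```

The analysis pulls toward each observation at its location and relaxes back to the background over the decorrelation scale, which is why the choice of L shapes the smoothness of the analyses.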

  14. Effects of Tangential Edge Constraints on the Postbuckling Behavior of Flat and Curved Panels Subjected to Thermal and Mechanical Loads

    NASA Technical Reports Server (NTRS)

    Lin, W.; Librescu, L.; Nemeth, M. P.; Starnes, J. H. , Jr.

    1994-01-01

    A parametric study of the effects of tangential edge constraints on the postbuckling response of flat and shallow curved panels subjected to thermal and mechanical loads is presented. The mechanical loads investigated are uniform compressive edge loads and transverse lateral pressure. The temperature fields considered are associated with spatially nonuniform heating over the panels, and a linear through-the-thickness temperature gradient. The structural model is based on a higher-order transverse-shear-deformation theory of shallow shells that incorporates the effects of geometric nonlinearities, initial geometric imperfections, and tangential edge motion constraints. Results are presented for three-layer sandwich panels made from transversely isotropic materials. Simply supported panels are considered in which the tangential motion of the unloaded edges is either unrestrained, partially restrained, or fully restrained. These results focus on the effects of the tangential edge restraint on the postbuckling response. The results of this study indicate that tangentially restraining the edges of a curved panel can make the panel insensitive to initial geometric imperfections in some cases.

  15. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.

    PubMed

    Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne

    2016-04-01

    We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy, for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans. Copyright © 2016. Published by Elsevier Ireland Ltd.

  16. A vectorized Poisson solver over a spherical shell and its application to the quasi-geostrophic omega-equation

    NASA Technical Reports Server (NTRS)

    Mullenmeister, Paul

    1988-01-01

    The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.
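
As a much simpler stand-in for the solver theme here, the sketch below applies plain Jacobi relaxation to the 1-D Poisson problem u'' = f with homogeneous Dirichlet conditions; the paper's two-parameter Chebyshev solver on a spherical shell is far more elaborate, so this only illustrates the structure of such an iterative solve.

```python
import math

def jacobi_poisson_1d(f, n=17, iterations=5000):
    """Solve u'' = f on (0, 1), u(0) = u(1) = 0, by Jacobi relaxation."""
    h = 1.0 / (n - 1)
    u = [0.0] * n
    for _ in range(iterations):
        # Each sweep builds the new iterate from the previous one
        u = [0.0] + [0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
                     for i in range(1, n - 1)] + [0.0]
    return u

n = 17
xs = [i / (n - 1) for i in range(n)]
f = [-math.pi ** 2 * math.sin(math.pi * x) for x in xs]  # exact u = sin(pi x)
u = jacobi_poisson_1d(f, n)
```

Chebyshev acceleration, as in the paper, replaces the fixed Jacobi sweep with iteration-dependent relaxation parameters to converge far faster on the same structure.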

  17. Experimental investigation of magnetically actuated separation using tangential microfluidic channels and magnetic nanoparticles.

    PubMed

    Munir, Ahsan; Zhu, Zanzan; Wang, Jianlong; Zhou, Hong Susan

    2014-06-01

    A novel continuous switching/separation scheme for magnetic nanoparticles (MNPs) in a sub-microlitre fluid volume surrounded by a neodymium permanent magnet is studied in this work using tangential microfluidic channels. Polydimethylsiloxane tangential microchannels are fabricated using a novel micromoulding technique that requires no clean room and can be carried out at much lower cost and in less time. Negligible switching of MNPs is seen in the absence of a magnetic field, whereas 90% switching is observed in its presence. The flow rate of the MNP solution had a dramatic impact on separation performance. An optimum value of the flow rate is found that provides effective MNP separation at a much faster rate. Separation performance is also investigated for a mixture containing non-magnetic polystyrene particles and MNPs. It is found that MNPs preferentially move from the lower microchannel to the upper microchannel, resulting in efficient separation. The proof-of-concept experiments performed in this work demonstrate that microfluidic bioseparation can be efficiently achieved using functionalised MNPs together with tangential microchannels, appropriate magnetic field strength and optimum flow rates. This work verifies that a simple low-cost magnetic switching scheme can be of great potential utility for the separation and detection of biomolecules in microfluidic lab-on-a-chip systems.

  18. Droplet condensation on superhydrophobic surfaces with enhanced dewetting under a tangential AC electric field

    NASA Astrophysics Data System (ADS)

    Yan, Xinzhu; Li, Jian; Li, Licheng; Huang, Zhengyong; Wang, Feipeng; Wei, Yuan

    2016-10-01

    In this Letter, the dewetting behavior of superhydrophobic condensing surfaces under a tangential AC electric field is reported. The surface coverage of condensed droplets exhibits only a negligible increase with time, and the jumping frequency of droplets is enhanced. The AC electric field drives a dynamic transition of droplets from stretch to recoil, and the resulting counterforce propels droplet jumping. The considerable horizontal component of the jumping velocity facilitates droplet departure from superhydrophobic surfaces. Both the amplitude and the frequency of the AC voltage are important factors for droplet departure and the dewetting effect. The tangential electric field thus provides a unique and easily implementable approach to enhancing droplet removal from superhydrophobic condensing surfaces.

  19. Meninges control tangential migration of hem-derived Cajal-Retzius cells via CXCL12/CXCR4 signaling.

    PubMed

    Borrell, Víctor; Marín, Oscar

    2006-10-01

    Cajal-Retzius cells are critical in the development of the cerebral cortex, but little is known about the mechanisms controlling their development. Three focal sources of Cajal-Retzius cells have been identified in mice (the cortical hem, the ventral pallium and the septum), from which they migrate tangentially to populate the cortical surface. Using a variety of tissue culture assays and in vivo manipulations, we demonstrate that the tangential migration of cortical hem-derived Cajal-Retzius cells is controlled by the meninges. We show that the meningeal membranes are a necessary and sufficient substrate for the tangential migration of Cajal-Retzius cells. We also show that the chemokine CXCL12 secreted by the meninges enhances the dispersion of Cajal-Retzius cells along the cortical surface, while retaining them within the marginal zone in a CXCR4-dependent manner. Thus, the meningeal membranes are fundamental in the development of Cajal-Retzius cells and, hence, in the normal development of the cerebral cortex.

  20. Long-term variabilities of meridional geostrophic volume transport in the North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Yuan, D.; Dewar, W. K.

    2016-02-01

    The meridional geostrophic volume transport (MGVT) of the ocean plays a very important role in the climatic water mass and heat balance because of the ocean's large heat capacity, which enables it to store the large amount of radiation received in summer and to release it in winter. Better understanding of the role of the oceans in climate variability is essential to assess the likely range of future climate fluctuations. In the last century the North Pacific Ocean experienced considerable climate variability, especially on decadal time scales. Some studies have shown that the North Pacific Ocean is the origin of North Pacific multidecadal variability (Latif and Barnett, 1994; Barnett et al., 1999). These fluctuations were associated with large anomalies in sea level, temperature, storminess and rainfall; the heat transport and other extremes are changing as well. If the MGVT of the ocean is well determined, it can be used as a test of the validity of numerical global climate models. In this paper, we investigate the long-term variability of the MGVT in the North Pacific Ocean based on 55 years of global ocean heat and salt content data (Levitus et al., 2012). Very clear inter-decadal variations can be seen in the tropical, subtropical and subpolar regions of the North Pacific Ocean. There are very consistent variations between the MGVT anomalies and the inter-decadal Pacific oscillation (IPO) index in the tropical gyre, with the cold phase of the IPO corresponding to negative MGVT anomalies and the warm phase to positive MGVT anomalies. The subtropical gyre shows more complex variations, and the subpolar gyre shows a negative MGVT anomaly before the late 1970s and a positive anomaly after that time. The geostrophic velocities of the North Pacific Ocean show significantly different anomalies during the two IPO cold phases of 1955-1976 and 1999 to present, which suggests a different mechanism for the two cold phases. The long-term variations of Sverdrup transport compare well

  1. Automated planning of tangential breast intensity-modulated radiotherapy using heuristic optimization.

    PubMed

    Purdie, Thomas G; Dinniwell, Robert E; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B

    2011-10-01

    To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. A total of 158 patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle(3)) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. The mean time to generate a complete treatment plan was 6 min 50 s ± 1 min 12 s. Of the automated plans, 157 of 158 (99%) were deemed clinically acceptable, and 138 of 158 (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, the automated plans were overall dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as input. We anticipate the tools will improve patient access to high-quality IMRT by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice. Crown Copyright © 2011. Published by Elsevier Inc.

  2. Maximum entropy production principle for geostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Sommeria, J.; Bouchet, F.; Chavanis, P. H.

    2003-04-01

    In 2D turbulence, complex stirring leads to the formation of steady organized states once fine-scale fluctuations have been filtered out. This self-organization can be explained in terms of statistical equilibrium for vorticity, as the most likely outcome of vorticity parcel rearrangements under the constraints of the conservation laws. A mixing entropy describing the vorticity rearrangements is introduced. Extension to the shallow water system has been proposed by Chavanis P.H. and Sommeria J. (2002), Phys. Rev. E. Generalization to multi-layer geostrophic flows is formally straightforward. Outside equilibrium, eddy fluxes should drive the system toward equilibrium, in the spirit of nonequilibrium linear thermodynamics. This can be formalized in terms of a principle of maximum entropy production (MEP), as shown by Robert and Sommeria (1991), Phys. Rev. Lett. 69. A parameterization of eddy fluxes is then obtained, involving an eddy diffusivity plus a drift term acting at larger scale. These two terms balance each other at equilibrium, resulting in a nontrivial steady flow, which is the mean state of the statistical equilibrium. Applications of this eddy parameterization will be presented in the context of oceanic circulation and Jupiter's Great Red Spot. Quantitative tests, obtained by comparison with direct numerical simulations, will be discussed. Kinetic models, inspired by plasma physics, provide a more precise description of the relaxation toward equilibrium, as shown by Chavanis P.H. 2000 ``Quasilinear theory of the 2D Euler equation'', Phys. Rev. Lett. 84. This approach provides relaxation equations with a form similar to the MEP, but not identical. In conclusion, the MEP captures the right trends of the system, but its precise justification remains elusive.

  3. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments, and to other, more complex, turbulent systems.

  4. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves

    NASA Astrophysics Data System (ADS)

    Taff, Laurence

    2013-03-01

    The objective was, for the first time in a century, to scientifically analyze the ``rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination--none of them carried through before--of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five. The most must be made of these. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density, resting on the positive-slope, linear rise of its tangential speed of revolution curve, and hence--for the first time--a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data, thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific-method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown.
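
A hedged illustration of the kind of mass estimate implied by a linearly rising tangential speed curve (solid-body rotation, hence roughly constant density): a circular-orbit balance gives M(r) = v(r)^2 r / G. The numbers below are generic, not the paper's.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(v_m_per_s, r_m):
    """Mass enclosed within radius r from circular-orbit balance v^2/r = GM/r^2."""
    return v_m_per_s ** 2 * r_m / G

kpc = 3.086e19                         # metres per kiloparsec
M = enclosed_mass(1.0e5, 1.0 * kpc)    # e.g. 100 km/s at 1 kpc, ~2e9 solar masses
```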

  5. A study of the adequacy of quasi-geostrophic dynamics for modeling the effect of frontal cyclones on the larger scale flow

    NASA Technical Reports Server (NTRS)

    Mudrick, S.

    1985-01-01

    The validity of quasi-geostrophic (QG) dynamics was tested by comparison with primitive equation (PE) dynamics for modeling the effect of cyclone waves on the larger scale flow. The formation of frontal cyclones and the dynamics of occluded frontogenesis were studied. Surface friction runs with the PE model and the wavelength of maximum instability are described. A fine-resolution PE simulation of a polar low is also described.

  6. Eddy Vertical Structure Observed by Deepgliders: Evidence for the Enstrophy Inertial Range Cascade in Geostrophic Turbulence

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2016-12-01

    Full water column temperature and salinity profiles and estimates of average current collected with Deepgliders were used to analyze the vertical structure of mesoscale features in the western North Atlantic Ocean. Fortnightly repeat surveys over a 58 km by 58 km region centered at the Bermuda Atlantic Time Series (BATS) site southeast of Bermuda were carried out for 3 and 9 months in successive years. In addition, a section from Bermuda along Line W across the Gulf Stream to the New England Continental Slope and a pair of sections from Bermuda to the Bahamas were carried out. Absolute geostrophic current estimates constructed from these measurements and projected onto flat-bottom resting-ocean dynamic modes for the regions indicate nearly equal kinetic energy in the barotropic mode and the first baroclinic mode. An empirical orthogonal mode decomposition of dynamic mode amplitudes demonstrates strong coupling of the barotropic and first baroclinic modes, a result resembling those reported for the Polymode experiment three decades ago. Higher baroclinic modes are largely independent of one another. Energy in baroclinic modes varies in inverse proportion to mode number cubed, a result predicted for an enstrophy inertial range cascade of geostrophic turbulence, believed to be newly detected by these observations. This (mode number)^-3 dependence is found at BATS and across the Gulf Stream and Sargasso Sea. On two occasions, submesoscale anticyclones were detected at BATS whose vertical structure closely resembled the second baroclinic mode. Anomalously cold and fresh water within their cores (by as much as 3.5°C and 0.5 in salinity) suggests they were of subpolar (likely Labrador Sea) origin. These provided temporary perturbations to the vertical mode number energy spectrum.
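
The quoted inverse-cube energy dependence on mode number can be tested by a least-squares fit of log E_n against log n; the sketch below does this on synthetic data obeying an exact power law (the helper `loglog_slope` and the data are illustrative, not the glider spectra).

```python
import math

def loglog_slope(ns, energies):
    """Least-squares slope of log(E) vs log(n): the fitted spectral exponent."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in energies]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

ns = [2, 3, 4, 5, 6, 7, 8]          # baroclinic mode numbers
E = [1.0 * n ** -3 for n in ns]     # synthetic n^-3 spectrum
slope = loglog_slope(ns, E)
```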

  7. Computational analysis of forebody tangential slot blowing on the high alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Gee, Ken

    1994-01-01

    Current and future fighter aircraft can maneuver in the high-angle-of-attack flight regime while flying at low subsonic and transonic freestream Mach numbers. However, at any flight speed, the ability of the vertical tails to generate yawing moment is limited in high-angle-of-attack flight. Thus, any system designed to provide the pilot with additional side force and yawing moment must work in both low subsonic and transonic flight. However, previous investigations of the effectiveness of forebody tangential slot blowing in generating the desired control forces and moments have been limited to the low subsonic freestream flow regime. In order to investigate the effectiveness of tangential slot blowing in transonic flight, a computational fluid dynamics analysis was carried out during the grant period. Computational solutions were obtained at three different freestream Mach numbers and at various jet mass flow ratios. All results were obtained using the isolated F/A-18 forebody grid geometry at 30.3 degrees angle of attack. One goal of the research was to determine the effect of freestream Mach number on the effectiveness of forebody tangential slot blowing in generating yawing moment. The second part of the research studied the force onset time lag associated with blowing. The time required for the yawing moment to reach a steady-state value from the onset of blowing may have an impact on the implementation of a pneumatic system on a flight vehicle.

  8. A Single Mode Study of a Quasi-Geostrophic Convection-Driven Dynamo Model

    NASA Astrophysics Data System (ADS)

    Plumley, M.; Calkins, M. A.; Julien, K. A.; Tobias, S.

    2017-12-01

    Planetary magnetic fields are thought to be the product of hydromagnetic dynamo action. For Earth, this process occurs within the convecting, turbulent and rapidly rotating outer core, where the dynamics are characterized by low Rossby, low magnetic Prandtl and high Rayleigh numbers. Progress in studying dynamos has been limited by current computing capabilities and the difficulties in replicating the extreme values that define this setting. Asymptotic models that embrace these extreme parameter values and enforce the dominant balance of geostrophy provide an option for the study of convective flows with actual relevance to geophysics. The quasi-geostrophic dynamo model (QGDM) is a multiscale, fully-nonlinear Cartesian dynamo model that is valid in the asymptotic limit of low Rossby number. We investigate the QGDM using a simplified class of solutions that consist of a single horizontal wavenumber which enforces a horizontal structure on the solutions. This single mode study is used to explore multiscale time stepping techniques and analyze the influence of the magnetic field on convection.

  9. Quasi-geostrophic free mode models of long-lived Jovian eddies: Forcing mechanisms and crucial observational tests

    NASA Technical Reports Server (NTRS)

    Read, P. L.

    1986-01-01

    Observations of Jupiter's and Saturn's long-lived eddies, such as Jupiter's Great Red Spot and White Ovals, are compared with laboratory experiments and corresponding numerical simulations of free thermal convection in a rotating fluid subject to horizontal differential heating and cooling. Difficulties in determining the essential processes maintaining and dissipating stable eddies on the basis of global energy budget studies are discussed; such difficulties do not arise in considerations of the flow's potential vorticity budget. On Jupiter, diabatically forced and transient eddy-driven flows differ primarily in the implied role of transient eddies in transporting potential vorticity across closed geostrophic streamlines in the time mean.

  10. Rationale and Application of Tangential Scanning to Industrial Inspection of Hardwood Logs

    Treesearch

    Nand K. Gupta; Daniel L. Schmoldt; Bruce Isaacson

    1998-01-01

    Industrial computed tomography (CT) inspection of hardwood logs has some unique requirements not found in other CT applications. Sawmill operations demand that large volumes of wood be scanned quickly at high spatial resolution for extended duty cycles. Current CT scanning geometries and commercial systems have both technical and economic limitations. Tangential...

  11. Injector Element which Maintains a Constant Mean Spray Angle and Optimum Pressure Drop During Throttling by Varying the Geometry of Tangential Inlets

    NASA Technical Reports Server (NTRS)

    Trinh, Huu P. (Inventor); Myers, William Neill (Inventor)

    2014-01-01

    A method for determining the optimum inlet geometry of a liquid rocket engine swirl injector includes obtaining a throttleable level phase value, volume flow rate, chamber pressure, liquid propellant density, inlet injector pressure, desired target spray angle and desired target optimum delta pressure value between an inlet and a chamber for a plurality of engine stages. The tangential inlet area for each throttleable stage is calculated. The correlation between the tangential inlet areas and delta pressure values is used to calculate the spring displacement and variable inlet geometry. An injector designed using the method includes a plurality of geometrically calculated tangential inlets in an injection tube; an injection tube cap with a plurality of inlet slots slidably engages the injection tube. A pressure differential across the injector element causes the cap to slide along the injection tube and variably align the inlet slots with the tangential inlets.

  12. Radial and tangential gravity rates from GRACE in areas of glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Kurtenbach, Enrico; Kusche, Jürgen; Vermeersen, Bert

    2011-11-01

    In areas dominated by Glacial Isostatic Adjustment (GIA), the free-air gravity anomaly rate can be converted to uplift rate to good approximation by using a simple spectral relation. We provide quantitative comparisons between gravity rates derived from monthly gravity field solutions (GFZ Potsdam, CSR Texas, IGG Bonn) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission with uplift rates measured by GPS in these areas. The band-limited gravity data from the GRACE satellite mission can be brought to very good agreement with the point data from GPS by using scaling factors derived from a GIA model (the root-mean-square of differences is 0.55 mm yr-1 for a maximum uplift rate signal of 10 mm yr-1). The root-mean-square of the differences between GRACE derived uplift rates and GPS derived uplift rates decreases with increasing GRACE time period to a level below the uncertainty that is expected from GRACE observations, GPS measurements and the conversion from gravity rate to uplift rate. With the current length of time-series (more than 8 yr) applying filters and a hydrology correction to the GRACE data does not reduce the root-mean-square of differences significantly. The smallest root-mean-square was obtained with the GFZ solution in Fennoscandia and with the CSR solution in North America. With radial gravity rates in excellent agreement with GPS uplift rates, more information on the GIA process can be extracted from GRACE gravity field solutions in the form of tangential gravity rates, which are equivalent to a rate of change in the deflection of the vertical scaled by the magnitude of gravity rate vector. Tangential gravity rates derived from GRACE point towards the centre of the previously glaciated area, and are largest in a location close to the centre of the former ice sheet. 
Forward modelling showed that present day tangential gravity rates have maximum sensitivity between the centre and edge of the former ice sheet, while radial gravity
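The comparison at the heart of this record, GRACE-derived rates against GPS point measurements via a root-mean-square of differences, reduces to simple arithmetic. A minimal sketch; the station values below are hypothetical, chosen only to illustrate the metric (the abstract reports an RMS of 0.55 mm/yr against a 10 mm/yr maximum uplift signal):

```python
import numpy as np

def rms_difference(grace_rates_mm_yr, gps_rates_mm_yr):
    """Root-mean-square of the pointwise differences between GRACE-derived
    and GPS-derived uplift rates (mm/yr)."""
    d = np.asarray(grace_rates_mm_yr) - np.asarray(gps_rates_mm_yr)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical uplift rates at three stations (mm/yr), for illustration only
grace = [9.1, 6.8, 3.2]
gps = [9.6, 6.2, 3.5]
print(round(rms_difference(grace, gps), 3))  # 0.483
```

The abstract's finding is that this number shrinks toward the expected observational uncertainty as the GRACE time series lengthens.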

  13. Hetonic quartets in a two-layer quasi-geostrophic flow: V-states and stability

    NASA Astrophysics Data System (ADS)

    Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.

    2018-05-01

    We investigate families of finite core vortex quartets in mutual equilibrium in a two-layer quasi-geostrophic flow. The finite core solutions stem from known solutions for discrete (singular) vortex quartets. Two vortices lie in the top layer and two vortices lie in the bottom layer. Two vortices have a positive potential vorticity anomaly, while the two others have negative potential vorticity anomaly. The vortex configurations are therefore related to the baroclinic dipoles known in the literature as hetons. Two main branches of solutions exist depending on the arrangement of the vortices: the translating zigzag-shaped hetonic quartets and the rotating zigzag-shaped hetonic quartets. By addressing their linear stability, we show that while the rotating quartets can be unstable over a large range of the parameter space, most translating quartets are stable. This has implications on the longevity of such vortex equilibria in the oceans.

  14. Antecedent Avian Immunity Limits Tangential Transmission of West Nile Virus to Humans

    PubMed Central

    Kwan, Jennifer L.; Kluh, Susanne; Reisen, William K.

    2012-01-01

Background West Nile virus (WNV) is a mosquito-borne flavivirus maintained and amplified among birds and tangentially transmitted to humans and horses, which may develop terminal neuroinvasive disease. Outbreaks typically have a three-year pattern of silent introduction, rapid amplification and subsidence, followed by intermittent recrudescence. Our hypothesis that amplification to outbreak levels is contingent upon antecedent seroprevalence within maintenance host populations was tested by tracking WNV transmission in Los Angeles, California from 2003 through 2011. Methods Prevalence of antibodies against WNV was monitored weekly in House Finches and House Sparrows. Tangential or spillover transmission was measured by seroconversions in sentinel chickens and by the number of West Nile neuroinvasive disease (WNND) cases reported to the Los Angeles County Department of Public Health. Results Elevated seroprevalence in these avian populations was associated with the subsidence of outbreaks and with the antecedent dampening of amplification during succeeding years. Dilution of seroprevalence by recruitment resulted in the progressive loss of herd immunity following the 2004 outbreak, leading to recrudescence during 2008 and 2011. WNV appeared to be a significant cause of death in these avian species, because the survivorship of antibody-positive birds significantly exceeded that of antibody-negative birds. Cross-correlation analysis showed that seroprevalence was negatively correlated prior to the onset of human cases and then positively correlated, peaking at 4–6 weeks after the onset of tangential transmission. Antecedent seroprevalence during winter (Jan–Mar) was negatively correlated with the number of WNND cases during the succeeding summer (Jul–Sep). Conclusions Herd immunity levels below 10% within after-hatching-year avian maintenance host populations during the antecedent late winter and spring period were followed on three occasions by outbreaks of WNND
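The cross-correlation analysis described above, relating a seroprevalence series to a case-count series at varying leads and lags, can be sketched with a simple lagged Pearson correlation. The function and toy series below are illustrative, not the study's data or code:

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) against y(t + lag); a positive lag
    compares y `lag` samples after x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

# Toy series in which y simply repeats x one sample later:
x = [1, 3, 2, 5, 4, 6, 8, 7]
y = [0, 1, 3, 2, 5, 4, 6, 8]
print(lagged_corr(x, y, 1))  # 1.0
```

Scanning `lag` over a window and locating the extrema of the correlation is what yields statements like "peaking at 4–6 weeks after the onset of tangential transmission."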

  15. Can the starpatch on Xi Bootis A be explained by using tangential flows?

    NASA Technical Reports Server (NTRS)

    Toner, Clifford G.; Labonte, Barry J.

    1991-01-01

It is demonstrated that a modification of the starpatch model of Toner and Gray (1988), using tangential flows instead of an enhanced granulation velocity dispersion within the patch, is very successful at reproducing both the observed line asymmetry and the line broadening variations observed in the G8 dwarf Xi Boo A. Areal coverage of 10 ± 3 percent of the visible disk, latitude 30 ± 4 deg, mean brightness 0.85 ± 0.05 relative to the 'quiet' photosphere, mean tangential flow velocities of 8.0 ± 1.5 km/s, and dispersions about the mean of 8.0 ± 2.0 km/s are inferred for the patch. A feature at a latitude of about 30 deg is inferred which covers about 10 percent of the visible disk and is 10-20 percent fainter than the rest of the photosphere. It is inferred that 70-80 percent of the patch is penumbra.

  16. Use of the quasi-geostrophic dynamical framework to reconstruct the 3-D ocean state in a high-resolution realistic simulation of North Atlantic.

    NASA Astrophysics Data System (ADS)

    Fresnay, Simon; Ponte, Aurélien

    2017-04-01

The quasi-geostrophic (QG) framework has been, is, and will remain for years to come a cornerstone method linking observations with estimates of the ocean circulation and state. We have used the QG framework to reconstruct dynamical variables of the 3-D ocean in a state-of-the-art high-resolution (1/60 deg, 300 vertical levels) numerical simulation of the North Atlantic (NATL60). The work was carried out in 3 boxes of the simulation: Gulf Stream, Azores and Reykjanes Ridge. In a first part, general diagnostics describing the eddying dynamics were performed; they show that the QG scaling holds in general at depths distant from the mixed layer and bathymetric gradients. Correlations with observable surface variables (e.g. temperature, sea level) were computed, and estimates of quasi-geostrophic potential vorticity (QGPV) were reconstructed by means of regression laws. It is shown that the reconstruction of QGPV exhibits valuable skill over a restricted scale range, mainly when sea level is used as the regression variable. Additional discussion is given, based on the flow balanced with the QGPV. This work is part of the DIMUP project, which aims to improve our ability to operationally estimate the ocean state.

  17. Tangential velocity measurement using interferometric MTI radar

    DOEpatents

    Doerry, Armin W.; Mileshosky, Brian P.; Bickel, Douglas L.

    2006-01-03

    Radar systems use time delay measurements between a transmitted signal and its echo to calculate range to a target. Ranges that change with time cause a Doppler offset in phase and frequency of the echo. Consequently, the closing velocity between target and radar can be measured by measuring the Doppler offset of the echo. The closing velocity is also known as radial velocity, or line-of-sight velocity. Doppler frequency is measured in a pulse-Doppler radar as a linear phase shift over a set of radar pulses during some Coherent Processing Interval (CPI). An Interferometric Moving Target Indicator (MTI) radar can be used to measure the tangential velocity component of a moving target. Multiple baselines, along with the conventional radial velocity measurement, allow estimating the true 3-D velocity of a target.
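The radial-velocity side of the measurement described above follows from the two-way Doppler relation v_r = λ f_d / 2 (the tangential component is the patent's interferometric contribution and is not covered here). A minimal sketch; the carrier frequency and Doppler shift are illustrative values, not from the patent:

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(carrier_hz, doppler_hz):
    """Closing (radial) velocity in m/s implied by a Doppler offset
    `doppler_hz` measured by a monostatic radar at carrier frequency
    `carrier_hz`. The factor 1/2 accounts for the two-way path."""
    wavelength = C / carrier_hz
    return wavelength * doppler_hz / 2.0

# A 10 GHz radar observing a 1 kHz Doppler shift: roughly 15 m/s closing speed
print(radial_velocity(10e9, 1000.0))
```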

  18. Tangential Field Changes in the Great Flare of 1990 May 24.

    PubMed

    Cameron; Sammis

    1999-11-01

    We examine the great (solar) flare of 1990 May 24 that occurred in active region NOAA 6063. The Big Bear Solar Observatory videomagnetograph Stokes V and I images show a change in the longitudinal field before and after the flare. Since the flare occurred near the limb, the change reflects a rearrangement of the tangential components of the magnetic field. These observations lack the 180 degrees ambiguity that characterizes vector magnetograms.

  19. Asymmetry of Radial and Symmetry of Tangential Neuronal Migration Pathways in Developing Human Fetal Brains

    PubMed Central

    Miyazaki, Yuta; Song, Jae W.; Takahashi, Emi

    2016-01-01

    The radial and tangential neural migration pathways are two major neuronal migration streams in humans that are critical during corticogenesis. Corticogenesis is a complex process of neuronal proliferation that is followed by neuronal migration and the formation of axonal connections. Existing histological assessments of these two neuronal migration pathways have limitations inherent to microscopic studies and are confined to small anatomic regions of interest (ROIs). Thus, little evidence is available about their three-dimensional (3-D) fiber pathways and development throughout the entire brain. In this study, we imaged and analyzed radial and tangential migration pathways in the whole human brain using high-angular resolution diffusion MR imaging (HARDI) tractography. We imaged ten fixed, postmortem fetal (17 gestational weeks (GW), 18 GW, 19 GW, three 20 GW, three 21 GW and 22 GW) and eight in vivo newborn (two 30 GW, 34 GW, 35 GW and four 40 GW) brains with no neurological/pathological conditions. We statistically compared the volume of the left and right radial and tangential migration pathways, and the volume of the radial migration pathways of the anterior and posterior regions of the brain. In specimens 22 GW or younger, the volume of radial migration pathways of the left hemisphere was significantly larger than that of the right hemisphere. The volume of posterior radial migration pathways was also larger when compared to the anterior pathways in specimens 22 GW or younger. In contrast, no significant differences were observed in the radial migration pathways of brains older than 22 GW. Moreover, our study did not identify any significant differences in volumetric laterality in the tangential migration pathways. 
These results suggest that these two neuronal migration pathways develop and regress differently, and radial neuronal migration varies regionally based on hemispheric and anterior-posterior laterality, potentially explaining regional differences in

  20. Drag reduction and thrust generation by tangential surface motion in flow past a cylinder

    NASA Astrophysics Data System (ADS)

    Mao, Xuerui; Pearson, Emily

    2018-03-01

Sensitivity of drag to tangential surface motion is calculated in flow past a circular cylinder in both two- and three-dimensional conditions at Reynolds number Re ≤ 1000. The magnitude of the sensitivity maximises in the region slightly upstream of the separation points, where the contour lines of spanwise vorticity are normal to the cylinder surface. A control to reduce drag can be obtained by (negatively) scaling the sensitivity. The high correlation of sensitivities of controlled and uncontrolled flow indicates that the scaled sensitivity is a good approximation of the nonlinear optimal control. It is validated through direct numerical simulations that the linear range of the steady control is much higher than that of the unsteady control, which synchronises the vortex shedding and induces lock-in effects. The steady control injects angular momentum into the separating boundary layer, stabilises the flow and increases the base pressure significantly. At Re = 100, when the maximum tangential motion reaches 50% of the free-stream velocity, the vortex shedding, boundary-layer separation and recirculation bubbles are eliminated and the drag is reduced by 32%. When the maximum tangential motion reaches 2.5 times the free-stream velocity, thrust is generated and the power savings ratio, defined as the ratio of the reduced drag power to the control input power, reaches 19.6. The mechanism of drag reduction is attributed to the change of the radial gradient of spanwise vorticity, ∂ζ̂/∂r, and the subsequent accelerated pressure recovery from the uncontrolled separation points to the rear stagnation point.
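The power savings ratio quoted above has a direct arithmetic form: reduced drag power (drag reduction times free-stream velocity) divided by control input power. A sketch with made-up numbers, not the paper's values:

```python
def power_savings_ratio(drag_uncontrolled_n, drag_controlled_n,
                        u_infinity_m_s, control_power_w):
    """Ratio of the reduced drag power to the control input power.
    Drag forces in newtons, velocity in m/s, power in watts."""
    reduced_drag_power = (drag_uncontrolled_n - drag_controlled_n) * u_infinity_m_s
    return reduced_drag_power / control_power_w

# Illustrative: removing 4 N of drag at 2 m/s for 4 W of actuation power
print(power_savings_ratio(10.0, 6.0, 2.0, 4.0))  # 2.0
```

A ratio above 1, as in the paper's reported 19.6, means the control recovers more propulsive power than it consumes.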

  1. DEMONSTRATION OF SORBENT INJECTION TECHNOLOGY ON A TANGENTIALLY COAL-FIRED UTILITY BOILER (YORKTOWN LIMB DEMONSTRATION)

    EPA Science Inventory

The report summarizes activities conducted and results achieved in an EPA-sponsored program to demonstrate Limestone Injection Multistage Burner (LIMB) technology on a tangentially fired coal-burning utility boiler, Virginia Power's 180-MWe Yorktown Unit No. 2. This successfully d...

  2. Absence of splash singularities for surface quasi-geostrophic sharp fronts and the Muskat problem.

    PubMed

    Gancedo, Francisco; Strain, Robert M

    2014-01-14

    In this paper, for both the sharp front surface quasi-geostrophic equation and the Muskat problem, we rule out the "splash singularity" blow-up scenario; in other words, we prove that the contours evolving from either of these systems cannot intersect at a single point while the free boundary remains smooth. Splash singularities have been shown to hold for the free boundary incompressible Euler equation in the form of the water waves contour evolution problem. Our result confirms the numerical simulations in earlier work, in which it was shown that the curvature blows up because the contours collapse at a point. Here, we prove that maintaining control of the curvature will remove the possibility of pointwise interphase collapse. Another conclusion that we provide is a better understanding of earlier work in which squirt singularities are ruled out; in this case, a positive volume of fluid between the contours cannot be ejected in finite time.

  3. Absence of splash singularities for surface quasi-geostrophic sharp fronts and the Muskat problem

    PubMed Central

    Gancedo, Francisco; Strain, Robert M.

    2014-01-01

    In this paper, for both the sharp front surface quasi-geostrophic equation and the Muskat problem, we rule out the “splash singularity” blow-up scenario; in other words, we prove that the contours evolving from either of these systems cannot intersect at a single point while the free boundary remains smooth. Splash singularities have been shown to hold for the free boundary incompressible Euler equation in the form of the water waves contour evolution problem. Our result confirms the numerical simulations in earlier work, in which it was shown that the curvature blows up because the contours collapse at a point. Here, we prove that maintaining control of the curvature will remove the possibility of pointwise interphase collapse. Another conclusion that we provide is a better understanding of earlier work in which squirt singularities are ruled out; in this case, a positive volume of fluid between the contours cannot be ejected in finite time. PMID:24347645

  4. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2008-09-01

    QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
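The leapfrog-plus-Robert-filter scheme mentioned in this abstract can be illustrated on a scalar ODE. The filter coefficient and test equation below are assumptions for demonstration, not QUAGMIRE's actual settings:

```python
def leapfrog_robert(f, y0, dt, nsteps, alpha=0.1):
    """Leapfrog time stepping with a Robert (Asselin) filter, applied to
    dy/dt = f(y). The filter damps the spurious computational mode that
    plain leapfrog supports."""
    y_prev = y0
    y_curr = y0 + dt * f(y0)  # start-up step: forward Euler
    for _ in range(nsteps - 1):
        y_next = y_prev + 2.0 * dt * f(y_curr)              # leapfrog
        y_curr += alpha * (y_next - 2.0 * y_curr + y_prev)  # Robert filter
        y_prev, y_curr = y_curr, y_next
    return y_curr

# Exponential decay dy/dt = -y from y(0) = 1 to t = 1; exact answer is exp(-1)
y1 = leapfrog_robert(lambda y: -y, 1.0, 0.001, 1000)
print(y1)  # close to 0.368
```

Without the filter, leapfrog applied to pure decay lets the computational mode grow; the filter suppresses it at the cost of a slight damping of the physical mode.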

  5. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2009-02-01

    QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.

  6. Measurement of seismometer orientation using the tangential P-wave receiver function based on harmonic decomposition

    NASA Astrophysics Data System (ADS)

    Lim, Hobin; Kim, YoungHee; Song, Teh-Ru Alex; Shen, Xuzhang

    2018-03-01

Accurate determination of the seismometer orientation is a prerequisite for seismic studies including, but not limited to, seismic anisotropy. While borehole seismometers on land produce seismic waveform data relatively free of human-induced noise, they can have the drawback of an uncertain orientation. This study calculates a harmonic decomposition of teleseismic receiver functions from the P and PP phases and determines the orientation of a seismometer by minimizing the constant term in a harmonic expansion of the tangential receiver functions in backazimuth near and at 0 s. The method normalizes the effect of seismic sources and determines the orientation of a seismometer without having to assume an isotropic medium. Compared to the method of minimizing the amplitude of the mean of the tangential receiver functions near and at 0 s, this method yields more accurate orientations in cases where the backazimuthal coverage of earthquake sources (even in the case of ocean bottom seismometers) is uneven and incomplete. We apply this method to data from the Korean seismic network (52 broad-band velocity seismometers, 30 of which are borehole sensors) to estimate the sensor orientation over the period 2005-2016. We also track temporal changes in the sensor orientation through changes in the polarity and amplitude of the tangential receiver function. Six borehole stations are confirmed to have experienced a significant orientation change (10°-180°) over the 10 yr period. We demonstrate the usefulness of our method by estimating the orientation of ocean bottom sensors, which are known to have high noise levels during their relatively short deployment periods.
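The core idea, that a misoriented sensor leaks radial energy onto the tangential component as a backazimuth-independent (constant harmonic) term at 0 s, can be caricatured with a toy grid search. Everything below (function names, the synthetic amplitudes, the single-term cost) is an assumption for illustration; the paper's actual method operates on full receiver functions and their harmonic decomposition:

```python
import numpy as np

def orientation_correction(t_amp, r_amp, grid_deg=np.arange(-90.0, 90.0, 0.5)):
    """Trial rotation angle (degrees) that minimises the mean, i.e. the
    constant harmonic term, of the rotated tangential 0-s amplitudes."""
    t = np.asarray(t_amp, dtype=float)
    r = np.asarray(r_amp, dtype=float)
    best_theta, best_cost = 0.0, np.inf
    for theta in grid_deg:
        c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
        t_rotated = c * t - s * r       # rotate horizontals by the trial angle
        cost = abs(np.mean(t_rotated))  # backazimuth-independent term
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return float(best_theta)

# Synthetic station misoriented by 12 degrees: radial energy leaks onto the
# tangential component uniformly in backazimuth.
baz = np.arange(0.0, 360.0, 30.0)
true_amp = 1.0 + 0.1 * np.cos(np.radians(baz))   # 0-s radial amplitudes
t_obs = np.sin(np.radians(12.0)) * true_amp      # leaked tangential part
r_obs = np.cos(np.radians(12.0)) * true_amp
print(orientation_correction(t_obs, r_obs))  # 12.0
```

Averaging over backazimuth is what makes the estimate robust to uneven source coverage, which is the abstract's stated advantage.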

  7. Roll-Yaw control at high angle of attack by forebody tangential blowing

    NASA Technical Reports Server (NTRS)

    Pedreiro, N.; Rock, S. M.; Celik, Z. Z.; Roberts, L.

    1995-01-01

    The feasibility of using forebody tangential blowing to control the roll-yaw motion of a wind tunnel model is experimentally demonstrated. An unsteady model of the aerodynamics is developed based on the fundamental physics of the flow. Data from dynamic experiments is used to validate the aerodynamic model. A unique apparatus is designed and built that allows the wind tunnel model two degrees of freedom, roll and yaw. Dynamic experiments conducted at 45 degrees angle of attack reveal the system to be unstable. The natural motion is divergent. The aerodynamic model is incorporated into the equations of motion of the system and used for the design of closed loop control laws that make the system stable. These laws are proven through dynamic experiments in the wind tunnel using blowing as the only actuator. It is shown that asymmetric blowing is a highly non-linear effector that can be linearized by superimposing symmetric blowing. The effects of forebody tangential blowing and roll and yaw angles on the flow structure are determined through flow visualization experiments. The transient response of roll and yaw moments to a step input blowing are determined. Differences on the roll and yaw moment dependence on blowing are explained based on the physics of the phenomena.

  8. Roll-yaw control at high angle of attack by forebody tangential blowing

    NASA Technical Reports Server (NTRS)

    Pedreiro, N.; Rock, S. M.; Celik, Z. Z.; Roberts, L.

    1995-01-01

    The feasibility of using forebody tangential blowing to control the roll-yaw motion of a wind tunnel model is experimentally demonstrated. An unsteady model of the aerodynamics is developed based on the fundamental physics of the flow. Data from dynamic experiments is used to validate the aerodynamic model. A unique apparatus is designed and built that allows the wind tunnel model two degrees of freedom, roll and yaw. Dynamic experiments conducted at 45 degrees angle of attack reveal the system to be unstable. The natural motion is divergent. The aerodynamic model is incorporated into the equations of motion of the system and used for the design of closed loop control laws that make the system stable. These laws are proven through dynamic experiments in the wind tunnel using blowing as the only actuator. It is shown that asymmetric blowing is a highly non-linear effector that can be linearized by superimposing symmetric blowing. The effects of forebody tangential blowing and roll and yaw angles on the flow structure are determined through flow visualization experiments. The transient response of roll and yaw moments to a step input blowing are determined. Differences on the roll and yaw moment dependence on blowing are explained based on the physics of the phenomena.

  9. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, tangential solution of the wheel-rail contact is gained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size and it is combined by a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can be an alternative for tangential solution of the wheel-rail contact in simulation packages.

  10. A study of the adequacy of quasi-geostrophic dynamics for modeling the effect of frontal cyclones on the larger scale flow

    NASA Technical Reports Server (NTRS)

    Mudrick, Stephen

    1987-01-01

    The evolution of individual cyclone waves is studied in order to see how well quasi-geostrophic (QG) dynamics can simulate the behavior of primitive equations (PE) dynamics. This work is an extension of a similar study (Mudrick, 1982); emphasis is placed here on adding a frontal zone and other more diverse features to the basic states used. In addition, sets of PE integrations, with and without friction, are used to study the formation of surface occluded fronts within the evolving cyclones. Results of the study are summarized at the beginning of the report.

  11. Adult Learning Assumptions

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  12. INTRINSIC CURVATURE: A MARKER OF MILLIMETER-SCALE TANGENTIAL CORTICO-CORTICAL CONNECTIVITY?

    PubMed Central

    RONAN, LISA; PIENAAR, RUDOLPH; WILLIAMS, GUY; BULLMORE, ED; CROW, TIM J.; ROBERTS, NEIL; JONES, PETER B.; SUCKLING, JOHN; FLETCHER, PAUL C.

    2012-01-01

    In this paper, we draw a link between cortical intrinsic curvature and the distributions of tangential connection lengths. We suggest that differential rates of surface expansion not only lead to intrinsic curvature of the cortical sheet, but also to differential inter-neuronal spacing. We propose that there follows a consequential change in the profile of neuronal connections: specifically an enhancement of the tendency towards proportionately more short connections. Thus, the degree of cortical intrinsic curvature may have implications for short-range connectivity. PMID:21956929

  13. On type B cyclogenesis in a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Grotjahn, Richard

    2005-01-01

    A quasi-geostrophic (QG) model is used to approximate some aspects of 'type B' cyclogenesis as described in an observational paper that appeared several decades earlier in this journal. Though often cited, that earlier work has some ambiguity that has propagated into subsequent analyses. The novel aspects examined here include allowing advective nonlinearity to distort and amplify structures that are quasi-coherent and nearly stable in a linear form of the model; also, separate upper and lower structures are localized in space. Cases are studied separately where the upper trough tracks across different low-level features: an enhanced baroclinic zone (stronger horizontal temperature gradient) or a region of augmented temperature. Growth by superposition of lower and upper features is excluded by experimental design. The dynamics are evaluated with the vertical motion equation, the QG vorticity equation, the QG perturbation energy equation, and 'potential-vorticity thinking'. Results are compared against 'control' cases having no additional low-level features. Nonlinearity is examined relative to a corresponding linear calculation and is generally positive. The results are perhaps richer than the seminal article might imply, because growth is enhanced not only when properties of the lower feature reinforce growth but also when the lower feature opposes decay of the upper feature. For example, growth is enhanced where low-level warm advection introduces rising warm air to oppose the rising cold air ahead of the upper trough. Such growth is magnified when adjacent warm and cold anomalies have a strong baroclinic zone between them. The enhanced growth triggers an upstream tilt in the solution whose properties further accelerate the growth.

  14. Effect of pressure on tangential-injection film cooling in a combustor exhaust stream

    NASA Technical Reports Server (NTRS)

    Marek, C. J.

    1973-01-01

    A tangential-injection film cooled test section was placed in the exhaust stream of a high pressure combustor. Film cooling data were taken at pressure of 1, 10, and 20 atmospheres. The film cooling effectiveness was found to be independent of pressure. The data were correlated adequately by a turbulent-mixing film cooling correlation with a turbulent-mixing coefficient of 0.05 + or - 0.02.

  15. Large tangential electric fields in plasmas close to temperature screening

    NASA Astrophysics Data System (ADS)

    Velasco, J. L.; Calvo, I.; García-Regaña, J. M.; Parra, F. I.; Satake, S.; Alonso, J. A.; the LHD team

    2018-07-01

Low collisionality stellarator plasmas usually display a large negative radial electric field that has been expected to cause accumulation of impurities due to their high charge number. In this paper, two combined effects that can potentially modify this scenario are discussed. First, it is shown that, in low collisionality plasmas, the kinetic contribution of the electrons to the radial electric field can make it negative but small, bringing the plasma close to impurity temperature screening (i.e., to a situation in which the ion temperature gradient is the main drive of impurity transport and causes outward flux); in plasmas of very low collisionality, such as those of the Large Helical Device displaying impurity hole (Ida et al (The LHD Experimental Group) 2009 Phys. Plasmas 16 056111; Yoshinuma et al (The LHD Experimental Group) 2009 Nucl. Fusion 49 062002), screening may actually occur. Second, the component of the electric field that is tangent to the flux surface (in other words, the variation of the electrostatic potential on the flux surface), although smaller than the radial component, has recently been suggested to be an additional relevant drive for radial impurity transport. Here, it is explained that, especially when the radial electric field is small, the tangential magnetic drift has to be kept in order to correctly compute the tangential electric field, which can be larger than previously expected. This can have a strong impact on impurity transport, as we illustrate by means of simulations using the newly developed kinetic orbit-averaging solver for stellarators, although it is not enough to explain by itself the behavior of the fluxes in situations like the impurity hole.

  16. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2009-04-01

    The QUAGMIRE model has recently been made freely available for public use. QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. This presentation describes the model's main features. QUAGMIRE uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
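    The leapfrog-plus-Robert-filter scheme mentioned above is a standard device in geophysical fluid models: leapfrog is second-order but supports a spurious computational mode, which the Robert (Asselin) filter damps. A minimal sketch on a scalar oscillator test problem (this is an illustration of the scheme, not QUAGMIRE's actual code; the filter coefficient and test problem are illustrative assumptions):

```python
import math

def leapfrog_robert(f, u0, dt, nsteps, alpha=0.1):
    """Leapfrog time stepping with a Robert (Asselin) filter.

    Scheme: u_{n+1} = u_{n-1} + 2*dt*f(u_n), after which the centre
    point is filtered, u_n <- u_n + alpha*(u_{n-1} - 2*u_n + u_{n+1}),
    damping the spurious computational mode of leapfrog.
    """
    u_prev = u0
    u_curr = u0 + dt * f(u0)  # forward-Euler start-up step
    for _ in range(nsteps - 1):
        u_next = u_prev + 2.0 * dt * f(u_curr)
        # Robert filter applied to the centre point:
        u_filt = u_curr + alpha * (u_prev - 2.0 * u_curr + u_next)
        u_prev, u_curr = u_filt, u_next
    return u_curr

# Test problem: du/dt = i*omega*u, exact solution u0*exp(i*omega*t).
omega = 1.0
u = leapfrog_robert(lambda x: 1j * omega * x, 1.0 + 0j, dt=0.01, nsteps=100)
```

With `dt = 0.01` and 100 steps the numerical solution at t = 1 stays close to the exact rotation `exp(i*omega)`; the filter's slight amplitude damping is the price paid for suppressing the computational mode.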

  17. Tangential Flow Filtration of Colloidal Silver Nanoparticles: A "Green" Laboratory Experiment for Chemistry and Engineering Students

    ERIC Educational Resources Information Center

    Dorney, Kevin M.; Baker, Joshua D.; Edwards, Michelle L.; Kanel, Sushil R.; O'Malley, Matthew; Pavel Sizemore, Ioana E.

    2014-01-01

    Numerous nanoparticle (NP) fabrication methodologies employ "bottom-up" syntheses, which may result in heterogeneous mixtures of NPs or may require toxic capping agents to reduce NP polydispersity. Tangential flow filtration (TFF) is an alternative "green" technique for the purification, concentration, and size-selection of…

  18. Slipping and tangential discontinuity instabilities in quasi-one-dimensional planar and cylindrical flows

    NASA Astrophysics Data System (ADS)

    Kuzelev, M. V.

    2017-09-01

    An analytical linear theory of instability of an electron beam with a nonuniform directional velocity (slipping instability) against perturbations with wavelengths exceeding the transverse beam size is offered. An analogy with hydrodynamic instabilities of tangential discontinuity of an incompressible liquid flow is drawn. The instability growth rates are calculated for particular cases and in a general form in planar and cylindrical geometries. The stabilizing effect of the external magnetic field is analyzed.

  19. Effect of initial tangential velocity distribution on the mean evolution of a swirling turbulent free jet

    NASA Technical Reports Server (NTRS)

    Farokhi, S.; Taghavi, R.; Rice, E. J.

    1988-01-01

    An existing cold jet facility at NASA-Lewis was modified to produce swirling flows with controllable initial tangential velocity distribution. Distinctly different swirl velocity profiles were produced, and their effects on jet mixing characteristics were measured downstream of an 11.43 cm diameter convergent nozzle. It was experimentally shown that in the near field of a swirling turbulent jet, the mean velocity field strongly depends on the initial swirl profile. Two extreme tangential velocity distributions were produced. The two jets shared approximately the same initial mass flow rate of 5.9 kg/s, mass averaged axial Mach number and swirl number. Mean centerline velocity decay characteristics of the solid body rotation jet flow exhibited classical decay features of a swirling jet with S = 0.48 reported in the literature. It is concluded that the integrated swirl effect, reflected in the swirl number, is inadequate in describing the mean swirling jet behavior in the near field.

  20. Initial boundary-value problem for the spherically symmetric Einstein equations with fluids with tangential pressure.

    PubMed

    Brito, Irene; Mena, Filipe C

    2017-08-01

    We prove that, for a given spherically symmetric fluid distribution with tangential pressure on an initial space-like hypersurface with a time-like boundary, there exists a unique, local in time solution to the Einstein equations in a neighbourhood of the boundary. As an application, we consider a particular elastic fluid interior matched to a vacuum exterior.

  1. The estimation of tissue loss during tangential hydrosurgical debridement.

    PubMed

    Matsumura, Hajime; Nozaki, Motohiro; Watanabe, Katsueki; Sakurai, Hiroyuki; Kawakami, Shigehiko; Nakazawa, Hiroaki; Matsumura, Izumi; Katahira, Jiro; Inokuchi, Sadaki; Ichioka, Shigeru; Ikeda, Hiroto; Mole, Trevor; Smith, Jennifer; Martin, Robin; Aikawa, Naoki

    2012-11-01

    The preservation of healthy tissue during surgical debridement is desirable as this may improve clinical outcomes. This study has estimated for the first time the amount of tissue lost during debridement using the VERSAJET system of tangential hydrosurgery. A multicenter, prospective case series was carried out on 47 patients with mixed wound types: 21 (45%) burns, 13 (28%) chronic wounds, and 13 (28%) acute wounds. Overall, 44 (94%) of 47 patients achieved appropriate debridement after a single debridement procedure as verified by an independent photographic assessment. The percentage of necrotic tissue reduced from a median of 50% to 0% (P < 0.001). Median wound area and depth increased by only 0.3 cm (6.8%) and 0.5 mm (25%), respectively. Notably, 43 (91%) of 47 wounds did not progress into a deeper compartment, indicating a high degree of tissue preservation.

  2. Far-infrared tangential interferometer/polarimeter design and installation for NSTX-U

    DOE PAGES

    Scott, E. R.; Barchfeld, R.; Riemenschneider, P.; ...

    2016-08-09

    Here, the Far-infrared Tangential Interferometer/Polarimeter (FIReTIP) system has been refurbished and is being reinstalled on the National Spherical Torus Experiment—Upgrade (NSTX-U) to supply real-time line-integrated core electron density measurements for use in the NSTX-U plasma control system (PCS) to facilitate real-time density feedback control of the NSTX-U plasma. Inclusion of a visible light heterodyne interferometer in the FIReTIP system allows for real-time vibration compensation due to movement of an internally mounted retroreflector and the FIReTIP front-end optics. Real-time signal correction is achieved through use of a National Instruments CompactRIO field-programmable gate array.
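    The vibration compensation described above exploits the fact that the plasma-induced interferometer phase grows with wavelength while the vibration-induced phase shrinks with it, so two wavelengths decouple the two contributions. A sketch of that two-colour solve, assuming the standard phase model phi = r_e*lambda*N + 2*pi*d/lambda (the wavelengths, density, and displacement values below are illustrative, not the actual FIReTIP parameters):

```python
import math

R_E = 2.8179403262e-15  # classical electron radius [m]

def two_color_solve(phi1, phi2, lam1, lam2):
    """Recover line-integrated density N [m^-2] and mirror displacement
    d [m] from phases measured at two wavelengths [m].

    Model: phi = r_e*lam*N + 2*pi*d/lam. The plasma term scales with
    lam, the vibration term with 1/lam, giving a well-conditioned
    2x2 linear system when the wavelengths are far apart.
    """
    a11, a12 = R_E * lam1, 2.0 * math.pi / lam1
    a21, a22 = R_E * lam2, 2.0 * math.pi / lam2
    det = a11 * a22 - a12 * a21
    N = (phi1 * a22 - phi2 * a12) / det
    d = (a11 * phi2 - a21 * phi1) / det
    return N, d

# Synthetic example: an FIR line at 119 um and a visible line at 633 nm.
lam_fir, lam_vis = 119e-6, 0.633e-6
N_true, d_true = 5e19, 1e-6  # assumed density and vibration amplitude
phi_fir = R_E * lam_fir * N_true + 2.0 * math.pi * d_true / lam_fir
phi_vis = R_E * lam_vis * N_true + 2.0 * math.pi * d_true / lam_vis
N_rec, d_rec = two_color_solve(phi_fir, phi_vis, lam_fir, lam_vis)
```

Because the visible wavelength is so short, its phase is almost pure vibration; the solve above makes that intuition exact.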

  3. INVESTIGATION OF CONVENTIONAL MEMBRANE AND TANGENTIAL FLOW ULTRAFILTRATION ARTIFACTS AND THEIR APPLICATION TO THE CHARACTERIZATION OF FRESHWATER COLLOIDS

    EPA Science Inventory

    Artifacts associated with the fractionation of colloids in a freshwater sample were investigated for conventional membrane filtration (0.45 micron cutoff), and two tangential flow ultrafiltration cartridges (0.1 micron cutoff and 3000 MW cutoff). Membrane clogging during conventi...

  4. A computer program for performance prediction of tripropellant rocket engines with tangential slot injection

    NASA Technical Reports Server (NTRS)

    Dang, Anthony; Nickerson, Gary R.

    1987-01-01

    For the development of a Heavy Lift Launch Vehicle (HLLV) several engines with different operating cycles and using LOX/Hydrocarbon propellants are presently being examined. Some concepts utilize hydrogen for thrust chamber wall cooling followed by a gas generator turbine drive cycle with subsequent dumping of H2/O2 combustion products into the nozzle downstream of the throat. In the Space Transportation Booster Engine (STBE) selection process the specific impulse will be one of the optimization criteria; however, the current performance prediction programs do not have the capability to include a third propellant in this process, nor to account for the effect of dumping the gas-generator product tangentially inside the nozzle. The purpose is to describe a computer program for accurately predicting the performance of such an engine. The code consists of two modules; one for the inviscid performance, and the other for the viscous loss. For the first module, the two-dimensional kinetics program (TDK) was modified to account for tripropellant chemistry, and for the effect of tangential slot injection. For the viscous loss, the Mass Addition Boundary Layer program (MABL) was modified to include the effects of the boundary layer-shear layer interaction, and tripropellant chemistry. Calculations were made for a real engine and compared with available data.

  5. A three-dimensional model of Tangential YORP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golubov, O.; Scheeres, D. J.; Krugly, Yu. N., E-mail: golubov@astron.kharkov.ua

    2014-10-10

    Tangential YORP, or TYORP, has recently been demonstrated to be an important factor in the evolution of an asteroid's rotation state. It is complementary to normal YORP, or NYORP, which was considered previously. While NYORP is produced by non-symmetry in the large-scale geometry of an asteroid, TYORP is due to heat conductivity in stones on the surface of the asteroid. To date, TYORP has been studied only in a simplified one-dimensional model, substituting stones with high long walls. This article for the first time considers TYORP in a realistic three-dimensional model, also including shadowing and self-illumination effects via ray tracing. TYORP is simulated for spherical stones lying on regolith. The model includes only five free parameters, and the dependence of the TYORP on each of them is studied. The TYORP torque appears to be smaller than previous estimates from the one-dimensional model, but is still comparable to the NYORP torques. These results can be used to estimate TYORP of different asteroids and also as a basis for more sophisticated models of TYORP.

  6. SU-E-T-373: Evaluation and Reduction of Contralateral Skin /subcutaneous Dose for Tangential Breast Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butson, M; Carroll, S; Whitaker, M

    2015-06-15

    Purpose: Tangential breast irradiation is a standard treatment technique for breast cancer therapy. One aspect of dose delivery includes dose delivered to the skin caused by electron contamination. This effect is especially important for highly oblique beams used on the medial tangent, where the electron contamination deposits dose on the contralateral breast side. This work aims to investigate and predict this dose, and to define a method to reduce it, during tangential breast radiotherapy. Methods: Analysis and calculation of breast skin and subcutaneous dose are performed using a Varian Eclipse planning system, AAA algorithm for 6MV x-ray treatments. Measurements were made using EBT3 Gafchromic film to verify the accuracy of planning data. Various materials were tested to assess their ability to remove electron contamination on the contralateral breast. Results: Results showed that the Varian Eclipse AAA algorithm could accurately estimate contralateral breast dose in the build-up region at depths of 2mm or deeper. Surface dose was underestimated by the AAA algorithm. Doses up to 12% of applied dose were seen on the contralateral breast surface and up to 9% at 2mm depth. Due to the nature of this radiation, being mainly low energy electron contamination, a bolus material could be used to reduce this dose to less than 3%. This is accomplished by 10 mm of superflab bolus or by 1 mm of lead. Conclusion: Contralateral breast skin and subcutaneous dose is present for tangential breast treatment and has been measured to be up to 12% of applied dose from the medial tangent beam. This dose is deposited at shallow depths and is accurately calculated by the Eclipse AAA algorithm at depths of 2mm or greater. Bolus material placed over the contralateral breast can be used to effectively reduce this skin dose.

  7. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
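    The core relation behind a force-distribution power model of this kind is that sawing power equals the blade's peripheral speed times the total tangential force accumulated over the contact arc. A minimal numerical sketch (the force-distribution shape, contact angle, and parameter values are illustrative assumptions, not the paper's fitted model):

```python
import math

def sawing_power(ft_profile, theta_start, theta_end, n, v_s):
    """Sawing power [W] = peripheral speed v_s [m/s] times the
    tangential-force density ft(theta) [N/rad] integrated over the
    contact arc, using the trapezoidal rule with n subintervals."""
    h = (theta_end - theta_start) / n
    total_ft = 0.0
    for i in range(n + 1):
        theta = theta_start + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total_ft += weight * ft_profile(theta)
    total_ft *= h  # total tangential force over the arc [N]
    return v_s * total_ft

# Illustrative profile: force density peaking inside a 60-degree arc.
ft = lambda th: 120.0 * math.sin(th)  # N per radian (assumed shape)
P = sawing_power(ft, 0.0, math.pi / 3.0, 200, v_s=40.0)
```

For this assumed profile the integral is 120*(1 - cos 60°) = 60 N of total tangential force, i.e. 2.4 kW at 40 m/s, illustrating why the distribution shape, not just the peak force, sets the power demand.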

  8. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...

  9. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...

  10. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...

  11. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...

  12. 5 CFR 841.405 - Economic assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...

  13. Individualized Selection of Beam Angles and Treatment Isocenter in Tangential Breast Intensity Modulated Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penninkhof, Joan, E-mail: j.penninkhof@erasmusmc.nl; Spadola, Sara; Department of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, Bologna

    Purpose and Objective: Propose a novel method for individualized selection of beam angles and treatment isocenter in tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: For each patient, beam and isocenter selection starts with the fully automatic generation of a large database of IMRT plans (up to 847 in this study); each of these plans belongs to a unique combination of isocenter position, lateral beam angle, and medial beam angle. The imposed hard planning constraint on patient maximum dose may result in plans with unacceptable target dose delivery. Such plans are excluded from further analyses. Owing to differences in beam setup, database plans differ in mean doses to organs at risk (OARs). These mean doses are used to construct 2-dimensional graphs, showing relationships between: (1) contralateral breast dose and ipsilateral lung dose; and (2) contralateral breast dose and heart dose (analyzed only for left-sided). The graphs can be used for selection of the isocenter and beam angles with the optimal, patient-specific tradeoffs between the mean OAR doses. For 30 previously treated patients (15 left-sided and 15 right-sided tumors), graphs were generated considering only the clinically applied isocenter with 121 tangential beam angle pairs. For 20 of the 30 patients, 6 alternative isocenters were also investigated. Results: Computation time for automatic generation of 121 IMRT plans took on average 30 minutes. The generated graphs demonstrated large variations in tradeoffs between conflicting OAR objectives, depending on beam angles and patient anatomy. For patients with isocenter optimization, 847 IMRT plans were considered. Adding isocenter position optimization next to beam angle optimization had a small impact on the final plan quality. Conclusion: A method is proposed for individualized selection of beam angles in tangential breast IMRT. This may be especially important for patients with cardiac risk factors or

  14. The synoptic- and planetary-scale environments associated with significant 1000-hPa geostrophic wind events along the Beaufort Sea coast

    NASA Astrophysics Data System (ADS)

    Cooke, Melanie

    The substantial interannual variability and the observed warming trend of the Beaufort Sea region are important motivators for the study of regional climate and weather there. In an attempt to further our understanding of strong wind events, which can drive sea ice dynamics and storm surges, their characteristic environments at the synoptic and planetary scales are defined and analysed using global reanalysis data. A dependency on an enhanced or suppressed Aleutian low is found, producing either a strong southeasterly or northwesterly 1000-hPa geostrophic wind event. The characteristic mid-tropospheric patterns for these two distinct event types show similarities to the positive and negative Pacific/North American teleconnection patterns, but their correlations have yet to be assessed.
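    The 1000-hPa geostrophic wind analysed above follows directly from the balance between the Coriolis force and the horizontal pressure gradient: u_g = -(1/(rho*f)) dp/dy, v_g = +(1/(rho*f)) dp/dx. A minimal sketch (the density value and example gradient are illustrative assumptions):

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate [rad/s]
RHO = 1.25         # assumed near-surface air density [kg/m^3]

def geostrophic_wind(dpdx, dpdy, lat_deg):
    """Geostrophic wind components (u_g, v_g) [m/s] from horizontal
    pressure gradients [Pa/m] at latitude lat_deg:
        u_g = -(1/(rho*f)) * dp/dy,  v_g = +(1/(rho*f)) * dp/dx,
    with Coriolis parameter f = 2*Omega*sin(lat)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return -dpdy / (RHO * f), dpdx / (RHO * f)

# Example at 70 N (roughly the Beaufort coast): a poleward pressure
# gradient of 1 hPa per 100 km (1e-3 Pa/m) drives an easterly wind.
u, v = geostrophic_wind(0.0, 1e-3, 70.0)
```

At this latitude the example gradient yields roughly 6 m/s of easterly flow, showing how modest synoptic gradients produce the significant geostrophic wind events studied here.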

  15. Radial force distribution changes associated with tangential force production in cylindrical grasping, and the importance of anatomical registration.

    PubMed

    Pataky, Todd C; Slota, Gregory P; Latash, Mark L; Zatsiorsky, Vladimir M

    2012-01-10

    Radial force (F(r)) distributions describe grip force coordination about a cylindrical object. Recent studies have employed only explicit F(r) tasks, and have not normalized for anatomical variance when considering F(r) distributions. The goals of the present study were (i) to explore F(r) during tangential force production tasks, and (ii) to examine the extent to which anatomical registration (i.e. spatial normalization of anatomically analogous structures) could improve signal detectability in F(r) data. Twelve subjects grasped a vertically oriented cylindrical handle (diameter=6 cm) and matched target upward tangential forces of 10, 20, and 30 N. F(r) data were measured using a flexible pressure mat with an angular resolution of 4.8°, and were registered using piecewise-linear interpolation between five manually identified points-of-interest. Results indicate that F(r) was primarily limited to three contact regions: the distal thumb, the distal fingers, and the fingers' metacarpal heads, and that, while increases in tangential force caused significant increases in F(r) for these regions, they did not significantly affect the F(r) distribution across the hand. Registration was found to substantially reduce between-subject variability, as indicated by both accentuated F(r) trends, and amplification of the test statistic. These results imply that, while subjects focus F(r) primarily on three anatomical regions during cylindrical grasp, inter-subject anatomical differences introduce a variability that, if not corrected for via registration, may compromise one's ability to draw anatomically relevant conclusions from grasping force data. Copyright © 2011 Elsevier Ltd. All rights reserved.
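    The registration step described above maps each subject's angular coordinates onto a common template by piecewise-linear interpolation between corresponding anatomical points-of-interest. A minimal sketch of that mapping (the landmark angles below are hypothetical, not the study's identified landmarks):

```python
def register(theta, landmarks_subj, landmarks_tmpl):
    """Map an angle theta (degrees) from a subject's coordinate system
    onto a template by piecewise-linear interpolation between
    corresponding landmark angles (anatomical points-of-interest)."""
    segs_subj = zip(landmarks_subj, landmarks_subj[1:])
    segs_tmpl = zip(landmarks_tmpl, landmarks_tmpl[1:])
    for (a0, a1), (b0, b1) in zip(segs_subj, segs_tmpl):
        if a0 <= theta <= a1:
            t = (theta - a0) / (a1 - a0)  # position within segment
            return b0 + t * (b1 - b0)
    raise ValueError("theta outside landmark range")

# Hypothetical example: five landmarks on a subject vs. the template.
subj = [0.0, 80.0, 170.0, 260.0, 360.0]
tmpl = [0.0, 90.0, 180.0, 270.0, 360.0]
theta_reg = register(40.0, subj, tmpl)  # 40 deg maps to 45 deg
```

After registration, analogous anatomical structures land at the same template angle across subjects, which is what reduces the between-subject variability reported above.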

  16. Flight trajectories with maximum tangential thrust in a central Newtonian field

    NASA Astrophysics Data System (ADS)

    Azizov, A. G.; Korshunova, N. A.

    1983-07-01

    The paper examines the two-dimensional problem of determining the optimal trajectories of a point moving with a limited per-second mass consumption in a central Newtonian field. It is shown that one of the cases in which the variational equations in the Meier formulation can be integrated in quadratures is motion with maximum tangential thrust. Trajectories corresponding to this motion are determined. By way of application, attention is given to the problem of determining the thrust which assures maximum kinetic energy for the point at the moment t = t1, corresponding to the mass consumption M0 - M1, where M0 and M1 are, respectively, the initial and final mass.

  17. A tangentially viewing fast ion D-alpha diagnostic for NSTX.

    PubMed

    Bortolon, A; Heidbrink, W W; Podestà, M

    2010-10-01

    A second fast ion D-alpha (FIDA) installation is planned at NSTX to complement the present perpendicular viewing FIDA diagnostics. Following the present diagnostic scheme, the new diagnostic will consist of two instruments: a spectroscopic diagnostic that measures fast ion spectra and profiles at 16 radial points with 5-10 ms resolution and a system that uses a band pass filter and photomultiplier to measure changes in FIDA light with 50 kHz sampling rate. The new pair of FIDA instruments will view the heating beams tangentially. The viewing geometry minimizes spectral contamination by beam emission or edge sources of background emission. The improved velocity-space resolution will provide detailed information about neutral-beam current drive and about fast ion acceleration and transport by injected radio frequency waves and plasma instabilities.

  18. Improved design of a tangential entry cyclone separator for separation of particles from exhaust gas of diesel engine.

    PubMed

    Mukhopadhyay, N

    2011-01-01

    An effective design of a cyclone separator with tangential inlet is developed by applying an equation derived from the correlation of collection efficiency with the maximum pressure-drop components of the cyclone; the separator can efficiently remove particles of around 1 μm from the exhaust gas of a diesel engine.

  19. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Assumptions. 214.3 Section 214.3 Telecommunication OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL PROCEDURES FOR THE USE AND COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of this...

  20. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Assumptions. 214.3 Section 214.3 Telecommunication OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL PROCEDURES FOR THE USE AND COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of this...

  1. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Assumptions. 214.3 Section 214.3 Telecommunication OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL PROCEDURES FOR THE USE AND COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of this...

  2. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Assumptions. 214.3 Section 214.3 Telecommunication OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL PROCEDURES FOR THE USE AND COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of this...

  3. 47 CFR 214.3 - Assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Assumptions. 214.3 Section 214.3 Telecommunication OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL PROCEDURES FOR THE USE AND COORDINATION OF THE RADIO SPECTRUM DURING A WARTIME EMERGENCY § 214.3 Assumptions. When the provisions of this...

  4. Tangential Flow Filtration of Hemoglobin

    PubMed Central

    Sun, Guoyong; Harris, David R.

    2009-01-01

    Bovine and human hemoglobin (bHb and hHb, respectively) were purified from bovine and human red blood cells (bRBCs and hRBCs, respectively) via tangential flow filtration (TFF) in four successive stages. TFF is a fast and simple method to purify Hb from RBCs using filtration through hollow fiber (HF) membranes. Most of the Hb was retained in stage III (100 kDa HF membrane) and displayed methemoglobin levels less than 1%, yielding final concentrations of 318 and 300 mg/mL for bHb and hHb, respectively. Purified Hb exhibited much lower endotoxin levels than the respective RBCs. The purity of Hb was initially assessed via SDS-PAGE, which showed only faint impurity bands for the stage III retentate. The oxygen affinity (P50) and cooperativity coefficient (n) were regressed from the measured oxygen-RBC/Hb equilibrium curves of RBCs and purified Hb. These results suggest that TFF yielded oxygen affinities of bHb and hHb that are comparable to values in the literature. LC-MS was used to measure the molecular weight of the alpha (α) and beta (β) globin chains of purified Hb. No impurity peaks were present in the HPLC chromatograms of purified Hb. The mass of the molecular ions corresponding to the α and β globin chains agreed well with the calculated theoretical mass of the α- and β-globin chains. Taken together, our results demonstrate that HPLC-grade Hb can be generated via TFF. In general, this method can be more broadly applied to purify Hb from any source of RBCs. This work is significant, since it outlines a simple method for generating Hb for synthesis and/or formulation of Hb-based oxygen carriers (HBOCs). PMID:19224583
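    Regressing P50 and the cooperativity coefficient n from an oxygen equilibrium curve is conventionally done with the Hill equation, Y = pO2^n / (P50^n + pO2^n). A minimal grid-search fit on synthetic data (the data points, search ranges, and grid-search approach are illustrative; the paper's actual regression method and measurements are not reproduced here):

```python
def hill(p, p50, n):
    """Fractional O2 saturation from the Hill equation."""
    return p ** n / (p50 ** n + p ** n)

def fit_hill(pressures, saturations):
    """Least-squares grid search for (P50, n); a real analysis would
    use a nonlinear least-squares routine instead."""
    best = None
    for p50 in [x * 0.5 for x in range(10, 101)]:    # 5..50 mmHg
        for n in [x * 0.05 for x in range(20, 81)]:  # 1.0..4.0
            sse = sum((hill(p, p50, n) - y) ** 2
                      for p, y in zip(pressures, saturations))
            if best is None or sse < best[0]:
                best = (sse, p50, n)
    return best[1], best[2]

# Synthetic equilibrium curve with P50 = 26 mmHg, n = 2.8
# (typical textbook values for human hemoglobin).
ps = [5, 10, 15, 20, 26, 30, 40, 60, 80, 100]
ys = [hill(p, 26.0, 2.8) for p in ps]
p50_fit, n_fit = fit_hill(ps, ys)
```

Recovering the generating parameters from the synthetic curve mirrors how P50 and n are extracted from measured RBC/Hb equilibrium data for comparison against literature values.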

  5. Teaching the Pursuit of Assumptions

    ERIC Educational Resources Information Center

    Gardner, Peter; Johnson, Stephen

    2015-01-01

    Within the school of thought known as Critical Thinking, identifying or finding missing assumptions is viewed as one of the principal thinking skills. Within the new subject in schools and colleges, usually called Critical Thinking, the skill of finding missing assumptions is similarly prominent, as it is in that subject's public examinations. In…

  6. Further evidence for the EPNT assumption

    NASA Technical Reports Server (NTRS)

    Greenberger, Daniel M.; Bernstein, Herbert J.; Horne, Michael; Zeilinger, Anton

    1994-01-01

    We recently proved a theorem extending the Greenberger-Horne-Zeilinger (GHZ) Theorem from multi-particle systems to two-particle systems. This proof depended upon an auxiliary assumption, the EPNT assumption (Emptiness of Paths Not Taken). According to this assumption, if there exists an Einstein-Rosen-Podolsky (EPR) element of reality that determines that a path is empty, then there can be no entity associated with the wave that travels this path (pilot-waves, empty waves, etc.) and reports information to the amplitude, when the paths recombine. We produce some further evidence in support of this assumption, which is certainly true in quantum theory. The alternative is that such a pilot-wave theory would have to violate EPR locality.

  7. Experiments in Aircraft Roll-Yaw Control using Forebody Tangential Blowing

    NASA Technical Reports Server (NTRS)

    Pedreiro, Nelson

    1997-01-01

    Advantages of flight at high angles of attack include increased maneuverability and lift capabilities. These are beneficial not only for fighter aircraft, but also for future supersonic and hypersonic transport aircraft during take-off and landing. At high angles of attack the aerodynamics of the vehicle are dominated by separation, vortex shedding and possibly vortex breakdown. These phenomena severely compromise the effectiveness of conventional control surfaces. As a result, controlled flight at high angles of attack is not feasible for current aircraft configurations. Alternate means to augment the control of the vehicle at these flight regimes are therefore necessary. The present work investigates the augmentation of an aircraft flight control system by the injection of a thin sheet of air tangentially to the forebody of the vehicle. This method, known as Forebody Tangential Blowing (FTB), has been proposed as an effective means of increasing the controllability of aircraft at high angles of attack. The idea is based on the fact that a small amount of air is sufficient to change the separation lines on the forebody. As a consequence, the strength and position of the vortices are altered, causing a change in the aerodynamic loads. Although a very effective actuator, forebody tangential blowing is also highly non-linear, which makes its use for aircraft control very difficult. In this work, the feasibility of using FTB to control the roll-yaw motion of a wind tunnel model was demonstrated both through simulations and experimentally. The wind tunnel model used in the experiments consists of a wing-body configuration incorporating a delta wing with 70-degree sweep angle and a cone-cylinder fuselage. The model is equipped with forebody slots through which blowing is applied. There are no movable control surfaces, therefore blowing is the only form of actuation. Experiments were conducted at a nominal angle of attack of 45 degrees. A unique apparatus that constrains

  8. Contact problem on indentation of an elastic half-plane with an inhomogeneous coating by a flat punch in the presence of tangential stresses on a surface

    NASA Astrophysics Data System (ADS)

    Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.

    2018-05-01

    Indentation of an elastic half-space with functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses. Tangential stresses are represented in a form of Fourier series. The problem is reduced to the solution of two dual integral equations over even and odd functions describing distribution of unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximated analytical expressions for contact normal stresses are provided.

  9. Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1997-01-01

    Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…

  10. Long-lived planetary vortices and their evolution: Conservative intermediate geostrophic model.

    PubMed

    Sutyrin, Georgi G.

    1994-06-01

    Large, long-lived vortices, surviving during many turnaround times and far longer than the dispersive linear Rossby wave packets, are abundant in planetary atmospheres and oceans. Nonlinear effects which prevent dispersive decay of intense cyclones and anticyclones and enable their self-propelled propagation are revisited here using shallow water equations and their balanced approximations. The main physical mechanism allowing vortical structures to be long-lived in a planetary fluid is the quick fluid rotation inside their cores, which prevents growth in the amplitude of the asymmetric circulation arising due to the beta-effect. Intense vortices of both signs survive essentially longer than the linear Rossby wave packet if their azimuthal velocity is much larger than the Rossby wave speed. However, in the long-time evolution, cyclonic and anticyclonic vortices behave essentially differently, as illustrated by the conservative intermediate geostrophic model. The asymmetric circulation governing vortex propagation is described by the azimuthal mode m=1 for the initial value problem as well as for steadily propagating solutions. Cyclonic vortices move west-poleward, decaying gradually due to Rossby wave radiation, while anticyclonic ones adjust to non-radiating solitary vortices. Slow weakening of an intense cyclone, with decreasing size and a shrinking core, is described assuming zero azimuthal velocity outside the core while drifting poleward. The poleward tendency of the cyclone motion relative to the steering flow corresponds to characteristic trajectories of tropical cyclones in the Earth's atmosphere. The asymmetry in the dispersion-nonlinear properties of cyclones and anticyclones is thought to be one of the essential reasons for the observed predominance of anticyclones among long-lived vortices in the atmospheres of the giant planets and also among intrathermoclinic eddies in the ocean.

  11. Tangential symbols: using visual symbolization to teach pharmacological principles of drug addiction to international audiences.

    PubMed

    Giannini, A J

    1993-12-01

    Visual art was used to teach the biopsychiatric model of addiction to audiences in the Caribbean, Europe and the Middle East. Art slides were tangentially linked to slides of pharmacological data. Stylistically dense art was processed by the intuitive right brain, while spare notational pharmacological data was processed by the intellectual (rationalistic) left brain. Simultaneous presentation of these data enhanced attention and retention. This teaching paradigm was based on the nonliterate methods developed by Medieval architects and refined by the Italian Renaissance philosopher Marsilio Ficino.

  12. Computational analysis of forebody tangential slot blowing on the high alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Gee, Ken

    1995-01-01

    A numerical analysis of forebody tangential slot blowing as a means of generating side force and yawing moment is conducted using an aircraft geometry. The Reynolds-averaged, thin-layer, Navier-Stokes equations are solved using a partially flux-split, approximately-factored algorithm. An algebraic turbulence model is used to determine the turbulent eddy viscosity values. Solutions are obtained using both patched and overset grid systems. In the patched grid model, an actuator plane is used to introduce jet variables into the flow field. The overset grid model is used to model the physical slot geometry and facilitate modeling of the full aircraft configuration. A slot optimization study indicates that a short slot located close to the nose of the aircraft provides the most side force and yawing moment per unit blowing coefficient. Comparison of computed surface pressure with that obtained in full-scale wind tunnel tests produces good agreement, indicating that the numerical method and grid system used in the study are valid. Full aircraft computations resolve the changes in vortex burst point due to blowing. A time-accurate full-aircraft solution shows the effect of blowing on the changes in the frequency of the aerodynamic loads over the vertical tails. A study of the effects of freestream Mach number and various jet parameters indicates blowing remains effective through the transonic Mach range. An investigation of the force onset time lag associated with forebody blowing shows the lag to be minimal. The knowledge obtained in this study may be applied to the design of a forebody tangential slot blowing system for use on flight aircraft.

  13. Mean electromotive force generated by asymmetric fluid flow near the surface of earth's outer core

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Archana

    1992-10-01

    The phi component of the mean electromotive force (EMF) generated by asymmetric flow of fluid just beneath the core-mantle boundary (CMB) is obtained using a geomagnetic field model. This analysis is based on the supposition that the axisymmetric part of the fluid flow beneath the CMB is tangentially geostrophic and toroidal. For all the epochs studied, the computed phi component is stronger in the Southern Hemisphere than in the Northern Hemisphere. Assuming a linear relationship between the EMF and the azimuthally averaged magnetic field (AAMF), the only nonzero off-diagonal components of the pseudotensor relating the EMF to the AAMF are estimated as functions of colatitude, and the physical implications of the results are discussed.

  14. Vorticity and Vertical Motions Diagnosed from Satellite Deep-Layer Temperatures. Revised

    NASA Technical Reports Server (NTRS)

    Spencer, Roy W.; Lapenta, William M.; Robertson, Franklin R.

    1994-01-01

    Spatial fields of satellite-measured deep-layer temperatures are examined in the context of quasigeostrophic theory. It is found that midtropospheric geostrophic vorticity and quasigeostrophic vertical motions can be diagnosed from microwave temperature measurements of only two deep layers. The lower- (1000-400 hPa) and upper- (400-50 hPa) layer temperatures are estimated from limb-corrected TIROS-N Microwave Sounding Units (MSU) channel 2 and 3 data, spatial fields of which can be used to estimate the midtropospheric thermal wind and geostrophic vorticity fields. Together with Trenberth's simplification of the quasigeostrophic omega equation, these two quantities can then be used to estimate the geostrophic vorticity advection by the thermal wind, which is related to the quasigeostrophic vertical velocity in the midtroposphere. Critical to the technique is the observation that geostrophic vorticity fields calculated from the channel 3 temperature features are very similar to those calculated from traditional, 'bottom-up' integrated height fields from radiosonde data. This suggests a lack of cyclone-scale height features near the top of the channel 3 weighting function, making the channel 3 cyclone-scale 'thickness' features approximately the same as height features near the bottom of the weighting function. Thus, the MSU data provide observational validation of the LID (level of insignificant dynamics) assumption of Hirshberg and Fritsch.
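The core diagnostic relation, geostrophic vorticity from the Laplacian of a height field, can be sketched on a toy grid; the grid spacing, wave amplitude, and Coriolis parameter below are assumed for illustration, not taken from the MSU retrieval.

```python
import numpy as np

# Toy sketch (values assumed): geostrophic vorticity from a height field
# via zeta_g = (g / f) * laplacian(Z), as used in quasigeostrophic diagnosis.
g, f = 9.81, 1.0e-4              # gravity (m/s^2), midlatitude Coriolis (1/s)
dx = dy = 100e3                  # assumed 100 km grid spacing
ny, nx = 64, 64
y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")

Lx, Ly = nx * dx, ny * dy
Z = 50.0 * np.sin(2*np.pi*x/Lx) * np.sin(2*np.pi*y/Ly)   # height anomaly (m)

# Five-point Laplacian with periodic wrap (consistent with this periodic wave)
lap = (np.roll(Z, -1, 1) - 2*Z + np.roll(Z, 1, 1)) / dx**2 \
    + (np.roll(Z, -1, 0) - 2*Z + np.roll(Z, 1, 0)) / dy**2
zeta_g = (g / f) * lap

# Analytic check: for this single harmonic, laplacian(Z) = -(kx^2 + ky^2) * Z
kx, ky = 2*np.pi/Lx, 2*np.pi/Ly
zeta_exact = -(g / f) * (kx**2 + ky**2) * Z
```

The same finite-difference Laplacian applied to a channel-3 'thickness' field would yield the thermal-wind vorticity the abstract describes.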

  15. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet's and Morton's aforementioned assumptions by revisiting the CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by the assumption that φ = 1. Finally, we discuss the various time and length scales at which φ may be evaluated.
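The proposed variable φ can be made concrete with a toy calculation; the linear CR form and all parameter values below are assumptions for illustration, not the authors' similarity framework.

```python
import numpy as np

# Toy illustration (assumed functional form, not the authors' framework):
# generate (Ea, Ep) pairs obeying a generalized complementary relation
# Ep = Epo + phi_true * (Epo - Ea), then recover phi = -dEp/dEa numerically.
Epo = 4.0                        # assumed equilibrium evapotranspiration (mm/day)
phi_true = 1.0                   # Bouchet's classical symmetric assumption

Ea = np.linspace(0.5, Epo, 50)   # actual ET increasing toward wet conditions
Ep = Epo + phi_true * (Epo - Ea) # potential ET decreasing as the surface wets

phi_est = -np.gradient(Ep, Ea)   # finite-difference estimate of -dEp/dEa
```

In the wet limit Ea = Ep = Epo, and letting phi_true differ from 1 reproduces the generalized relation whose symmetry the authors question.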

  16. Tangential migration of corridor guidepost neurons contributes to anxiety circuits.

    PubMed

    Tinterri, Andrea; Deck, Marie; Keita, Maryama; Mailhes, Caroline; Rubin, Anna Noren; Kessaris, Nicoletta; Lokmane, Ludmilla; Bielle, Franck; Garel, Sonia

    2018-02-15

    In mammals, thalamic axons are guided internally toward their neocortical target by corridor (Co) neurons that act as axonal guideposts. The existence of Co-like neurons in non-mammalian species, in which thalamic axons do not grow internally, raised the possibility that Co cells might have an ancestral role. Here, we investigated the contribution of corridor (Co) cells to mature brain circuits using a combination of genetic fate-mapping and assays in mice. We unexpectedly found that Co neurons contribute to striatal-like projection neurons in the central extended amygdala. In particular, Co-like neurons participate in specific nuclei of the bed nucleus of the stria terminalis, which plays essential roles in anxiety circuits. Our study shows that Co neurons possess an evolutionary conserved role in anxiety circuits independently from an acquired guidepost function. It furthermore highlights that neurons can have multiple sequential functions during brain wiring and supports a general role of tangential migration in the building of subpallial circuits. © 2017 Wiley Periodicals, Inc.

  17. Very High Density of Chinese Hamster Ovary Cells in Perfusion by Alternating Tangential Flow or Tangential Flow Filtration in WAVE Bioreactor™—Part II: Applications for Antibody Production and Cryopreservation

    PubMed Central

    Clincke, Marie-Françoise; Mölleryd, Carin; Samani, Puneeth K; Lindskog, Eva; Fäldt, Eric; Walsh, Kieron; Chotteau, Véronique

    2013-01-01

    A high cell density perfusion process of monoclonal antibody (MAb) producing Chinese hamster ovary (CHO) cells was developed in a disposable WAVE Bioreactor™ using an external hollow fiber (HF) filter as the cell separation device. Tangential flow filtration (TFF) and alternating tangential flow (ATF) systems were compared, and process applications of high cell density perfusion were studied here: MAb production and cryopreservation. Operations by perfusion using microfiltration (MF) or ultrafiltration (UF) with ATF or TFF and by fed-batch were compared. Cell densities higher than 10⁸ cells/mL were obtained using UF TFF or UF ATF. The cells produced comparable amounts of MAb in perfusion by ATF or TFF, MF or UF. MAbs were partially retained by the MF membrane with both ATF and TFF, more severely with TFF. Consequently, MAbs were lost when cell broth was discarded from the bioreactor in the daily bleeds. The MAb cell-specific productivity was comparable at cell densities up to 1.3 × 10⁸ cells/mL in perfusion and was comparable or lower in fed-batch. After 12 days, six times more MAbs were harvested using perfusion by ATF or TFF with MF or UF, compared to fed-batch, and 28 times more in a 1-month perfusion at 10⁸ cells/mL density. Pumping at a recirculation rate up to 2.75 L/min did not damage the cells with the present TFF settings with the HF short-circuited. Cell cryopreservation at 0.5 × 10⁸ and 10⁸ cells/mL was performed using cells from a perfusion run at 10⁸ cells/mL density. Cell resuscitation was very successful, showing that this system is a reliable process for cell bank manufacturing. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 29:768–777, 2013 PMID:23436783

  18. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall...

  19. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall...

  20. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall...

  1. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall...

  2. 10 CFR 436.14 - Methodological assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Methodological assumptions. 436.14 Section 436.14 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.14 Methodological assumptions. (a) Each Federal Agency shall...

  3. Teaching Critical Thinking by Examining Assumptions

    ERIC Educational Resources Information Center

    Yanchar, Stephen C.; Slife, Brent D.

    2004-01-01

    We describe how instructors can integrate the critical thinking skill of examining theoretical assumptions (e.g., determinism and materialism) and implications into psychology courses. In this instructional approach, students formulate questions that help them identify assumptions and implications, use those questions to identify and examine the…

  4. Computational Investigation of Tangential Slot Blowing on a Generic Chined Forebody

    NASA Technical Reports Server (NTRS)

    Agosta-Greenman, Roxana M.; Gee, Ken; Cummings, Russell M.; Schiff, Lewis B.

    1995-01-01

    The effect of tangential slot blowing on the flowfield about a generic chined forebody at high angles of attack is investigated numerically using solutions of the thin-layer, Reynolds-averaged, Navier-Stokes equations. The effects of jet mass flow ratios, angle of attack, and blowing slot location in the axial and circumferential directions are studied. The computed results compare well with available wind-tunnel experimental data. Computational results show that for a given mass flow rate, the yawing moments generated by slot blowing increase as the body angle of attack increases. It is observed that greater changes in the yawing moments are produced by a slot located closest to the lip of the nose. Also, computational solutions show that inboard blowing across the top surface is more effective at generating yawing moments than blowing outboard from the bottom surface.

  5. Tangential acceleration feedback control of friction induced vibration

    NASA Astrophysics Data System (ADS)

    Nath, Jyayasi; Chatterjee, S.

    2016-09-01

    Tangential control action is studied on a phenomenological mass-on-belt model exhibiting friction-induced self-excited vibration attributed to the low-velocity drooping characteristic of friction, also known as the Stribeck effect. The friction phenomenon is modelled by the exponential model. Linear stability analysis is carried out near the equilibrium point, and the local stability boundary is delineated in the plane of the control parameters. The system is observed to undergo a Hopf bifurcation as the eigenvalues determined from the linear stability analysis are found to cross the imaginary axis transversally from the RHS s-plane to the LHS s-plane, or vice-versa, as one varies the control parameters, namely the non-dimensional belt velocity and the control gain. A nonlinear stability analysis by the method of averaging reveals the subcritical nature of the Hopf bifurcation. Thus, a global stability boundary is constructed so that any choice of control parameters from the globally stable region leads to a stable equilibrium. Numerical simulations in a MATLAB SIMULINK model and bifurcation diagrams obtained in AUTO validate these analytically obtained results. Pole crossover design is implemented to optimize the filter parameters with an independent choice of belt velocity and control gain. The efficacy of this optimization in the delicate low-velocity region is also demonstrated (based on numerical results).
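The self-excited vibration mechanism can be illustrated by simulating the uncontrolled mass-on-belt model with an exponential (Stribeck) friction law; every numerical value below, and the tanh regularization of the friction discontinuity, is an assumption for illustration, not a parameter from the paper (which also includes the tangential feedback loop).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the uncontrolled mass-on-belt model (parameters assumed)
m, k, c = 1.0, 1.0, 0.05              # mass, spring stiffness, structural damping
N = 10.0                              # normal load
mu_s, mu_k, v0 = 0.4, 0.25, 0.5       # static/kinetic friction, Stribeck velocity
v_belt = 0.3                          # non-dimensional belt velocity
eps = 1e-2                            # tanh regularization of sign(v_rel) (assumed)

def friction(v_rel):
    # Exponential (Stribeck) law: friction droops from mu_s toward mu_k
    mu = mu_k + (mu_s - mu_k) * np.exp(-np.abs(v_rel) / v0)
    return N * mu * np.tanh(v_rel / eps)

def rhs(t, y):
    x, v = y
    v_rel = v_belt - v
    return [v, (-k * x - c * v + friction(v_rel)) / m]

# Start near the slipping equilibrium with a small velocity perturbation; the
# drooping friction characteristic acts as negative effective damping, so the
# oscillation grows into a bounded stick-slip limit cycle.
x_eq = friction(v_belt) / k
t_eval = np.linspace(0.0, 60.0, 6001)
sol = solve_ivp(rhs, (0.0, 60.0), [x_eq, 0.01], t_eval=t_eval,
                method="LSODA", rtol=1e-8, atol=1e-10)
v_hist = sol.y[1]
early = v_hist[t_eval <= 2.0]         # small oscillation shortly after release
late = v_hist[t_eval >= 40.0]         # fully developed stick-slip cycle
```

Linearizing `friction` about `v_belt` reproduces the negative-damping term whose sign change the paper's control gain is designed to manage.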

  6. The joint use of the tangential electric field and surface Laplacian in EEG classification.

    PubMed

    Carvalhaes, C G; de Barros, J Acacio; Perreau-Guimaraes, M; Suppes, P

    2014-01-01

    We investigate the joint use of the tangential electric field (EF) and the surface Laplacian (SL) derivation as a method to improve the classification of EEG signals. We considered five classification tasks to test the validity of such approach. In all five tasks, the joint use of the components of the EF and the SL outperformed the scalar potential. The smallest effect occurred in the classification of a mental task, wherein the average classification rate was improved by 0.5 standard deviations. The largest effect was obtained in the classification of visual stimuli and corresponded to an improvement of 2.1 standard deviations.
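The two derivations can be sketched on a toy potential map; the regular electrode grid, spacing, and quadratic potential below are assumptions for illustration (real EEG montages are irregular and use spline-based estimators).

```python
import numpy as np

# Minimal sketch (assumed regular grid, not the paper's montage): derive the
# two tangential electric-field components and the surface Laplacian from a
# scalp potential map V, then stack all three as classification features.
h = 0.01                                   # assumed electrode spacing (m)
y, x = np.mgrid[0:16, 0:16] * h
V = x**2 + y**2                            # toy potential map (volts)

dVdy, dVdx = np.gradient(V, h)             # per-axis central differences
Ex, Ey = -dVdx, -dVdy                      # tangential electric field E = -grad V

lap = (np.roll(V, -1, 1) - 2*V + np.roll(V, 1, 1)) / h**2 \
    + (np.roll(V, -1, 0) - 2*V + np.roll(V, 1, 0)) / h**2
SL = -lap                                  # surface Laplacian derivation

features = np.stack([Ex, Ey, SL], axis=-1) # joint EF + SL feature map
```

For this quadratic map the interior Laplacian is exactly 4 and the field components are -2x and -2y, which makes the sketch easy to verify.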

  7. Assumptions to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  8. Impacts of the horizontal and vertical grids on the numerical solutions of the dynamical equations - Part 2: Quasi-geostrophic Rossby modes

    NASA Astrophysics Data System (ADS)

    Konor, Celal S.; Randall, David A.

    2018-05-01

    We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia-gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.
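The flavor of such a dispersion analysis can be sketched for the simplest case; the centered-difference relation below is a generic second-order discretization on one grid, not the paper's Z/C/D/CD/A/E/B-grid derivations, and all parameter values are assumed.

```python
import numpy as np

# Schematic illustration (not the paper's analysis): barotropic beta-plane
# Rossby dispersion, continuous vs. a second-order centered discretization,
# showing how a finite grid slows the shortest resolved Rossby waves.
beta = 1.6e-11        # assumed beta-plane parameter (1/m/s)
d = 200e3             # assumed grid spacing (m)
l = 0.0               # meridional wavenumber set to zero for clarity

k = np.linspace(1e-7, np.pi / d, 200)       # zonal wavenumbers up to the 2*d cutoff

omega_cont = -beta * k / (k**2 + l**2)      # continuous dispersion relation

k_adv = np.sin(k * d) / d                   # centered first derivative -> sin(kd)/d
k2_lap = (2 * np.sin(k * d / 2) / d)**2     # centered Laplacian -> (2 sin(kd/2)/d)^2
omega_disc = -beta * k_adv / (k2_lap + l**2)
```

The discrete frequency never exceeds the continuous one in magnitude, and the 2Δ wave is exactly stationary on this grid; comparing such curves across grids is the essence of the normal-mode comparison the abstract describes.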

  9. New quasi-geostrophic flow estimations for the Earth's core

    NASA Astrophysics Data System (ADS)

    Pais, M. Alexandra

    2014-05-01

    Quasi-geostrophic (QG) flows have been reported in numerical dynamo studies that simulate Boussinesq convection of an electrically conducting fluid inside a rapidly rotating spherical shell. In these cases, the required condition for columnar convection seems to be that inertial waves should propagate much faster in the medium than Alfvén waves. QG models are particularly appealing for studies where Earth's liquid core flows are assessed from information contained in geomagnetic data obtained at and above the Earth's surface. Here, they make the difference between perceiving only the core-surface expression of the geodynamo and assessing the whole interior core flow. The QG approximation has now been used in different studies to invert geomagnetic field models, providing a different kinematic interpretation of the observed geomagnetic field secular variation (SV). Under this new perspective, a large eccentric jet flowing westward under the Atlantic Hemisphere and a cyclonic column under the Pacific were pointed out as interesting features of the flow. A large eccentric jet with similar characteristics has been explained in recent numerical geodynamo simulations in terms of dynamical coupling between the solid core, the liquid core and the mantle. Nonetheless, it requires inner-core crystallization in the eastern hemisphere, contrary to what has been proposed in recent dynamical models for the inner core. Some doubts therefore remain concerning the dynamics that can explain the radially outward flow in the eastern core hemisphere actually seen in inverted core flow models. This and other puzzling features justify a new assessment of core flows, taking full advantage of the recent geomagnetic field model COV-OBS and of the experience accumulated over the years on flow inversion. Assuming the QG approximation already eliminates a large part of the non-uniqueness in the inversion. 
Some important non-uniqueness still remains, inherent to the physical model, given

  10. Purification of Hemoglobin by Tangential Flow Filtration with Diafiltration

    PubMed Central

    Elmer, Jacob; Harris, David R.; Sun, Guoyong; Palmer, Andre F.

    2009-01-01

    A recent study by Palmer et al. (2009) demonstrated that tangential flow filtration (TFF) can be used to produce HPLC-grade bovine and human hemoglobin (Hb). In this current study, we assessed the quality of bovine Hb (bHb) purified by introducing a 10 L batch-mode diafiltration step to the previously mentioned TFF Hb purification process. bHb was purified from bovine red blood cells (RBCs) by filtering clarified RBC lysate through 50 nm (stage I) & 500 kDa (stage II) hollow fiber (HF) membranes. The filtrate was then passed through a 100 kDa (stage III) HF membrane with or without an additional 10 L diafiltration step to potentially remove additional small molecular weight impurities. Protein assays, SDS-PAGE, and LC-MS of the purified bHb (stage III retentate) reveal that addition of a diafiltration step has no effect on bHb purity or yield; however, it does increase the methemoglobin level and oxygen affinity of purified bHb. Therefore, we conclude that no additional benefit is gained from diafiltration at stage III and a three-stage TFF process is sufficient to produce HPLC-grade bHb. PMID:19621471

  11. Philosophy of Technology Assumptions in Educational Technology Leadership

    ERIC Educational Resources Information Center

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  12. Preliminary design of a tangentially viewing imaging bolometer for NSTX-U

    DOE PAGES

    Peterson, B. J.; Sano, R.; Reinke, M. L.; ...

    2016-08-03

    The InfraRed imaging Video Bolometer measures plasma radiated power images using a thin metal foil. Two different designs with a tangential view of NSTX-U are made assuming a 640 x 480 (1280 x 1024) pixel, 30 (105) fps, 50 (20) mK IR camera imaging the 9 cm x 9 cm x 2 μm Pt foil. The foil is divided into 40 x 40 (64 x 64) IRVB channels. This gives a spatial resolution of 3.4 (2.2) cm on the machine mid-plane. The noise equivalent power density of the IRVB is given as 113 (46) μW/cm² for a time resolution of 33 (20) ms. Synthetic images derived from SOLPS data using the IRVB geometry show peak signal levels ranging from ~0.8 to ~80 (~0.36 to ~26) mW/cm².

  13. A Modified EPA Method 1623 that Uses Tangential Flow Hollow-Fiber Ultrafiltration and Heat Dissociation Steps to Detect Waterborne Cryptosporidum and Giardia spp.

    EPA Science Inventory

    This protocol describes the use of a tangential flow hollow-fiber ultrafiltration sample concentration system and a heat dissociation as alternative steps for the detection of waterborne Cryptosporidium and Giardia species using EPA Method 1623.

  14. Geostrophic tripolar vortices in a two-layer fluid: Linear stability and nonlinear evolution of equilibria

    NASA Astrophysics Data System (ADS)

    Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.

    2017-03-01

    We investigate equilibrium solutions for tripolar vortices in a two-layer quasi-geostrophic flow. Two of the vortices are like-signed and lie in one layer. An opposite-signed vortex lies in the other layer. The families of equilibria can be spanned by the distance (called separation) between the two like-signed vortices. Two equilibrium configurations are possible when the opposite-signed vortex lies between the two other vortices. In the first configuration (called ordinary roundabout), the opposite signed vortex is equidistant to the two other vortices. In the second configuration (eccentric roundabouts), the distances are unequal. We determine the equilibria numerically and describe their characteristics for various internal deformation radii. The two branches of equilibria can co-exist and intersect for small deformation radii. Then, the eccentric roundabouts are stable while unstable ordinary roundabouts can be found. Indeed, ordinary roundabouts exist at smaller separations than eccentric roundabouts do, thus inducing stronger vortex interactions. However, for larger deformation radii, eccentric roundabouts can also be unstable. Then, the two branches of equilibria do not cross. The branch of eccentric roundabouts only exists for large separations. Near the end of the branch of eccentric roundabouts (at the smallest separation), one of the like-signed vortices exhibits a sharp inner corner where instabilities can be triggered. Finally, we investigate the nonlinear evolution of a few selected cases of tripoles.

  15. Assumptions of Statistical Tests: What Lies Beneath.

    PubMed

    Jupiter, Daniel C

    We have discussed many statistical tests and tools in this series of commentaries, and while we have mentioned the underlying assumptions of the tests, we have not explored them in detail. We stop to look at some of the assumptions of the t-test and linear regression, justify and explain them, mention what can go wrong when the assumptions are not met, and suggest some solutions in this case. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  17. Zooplankton biomass, feeding and metabolism in a geostrophic frontal area (Almeria-Oran Front, western Mediterranean). Significance to pelagic food webs

    NASA Astrophysics Data System (ADS)

    Thibault, D.; Gaudy, R.; Le Fèvre, J.

    1994-08-01

    Mesozooplankton abundance and physiological rates in copepods were measured at selected sites in the Alboran Sea, in May 1991, on Cruise Almofront 1 (JGOFS-France). Higher total zooplankton standing stocks, higher copepod abundance, higher feeding activity by the latter and a higher proportion of phytoplankton-derived carbohydrates in their diet were found in the geostrophic jet of inflowing Atlantic water than in surrounding areas, which offered a range of oligotrophic conditions. Relationships with data obtained in other disciplinary fields on the same cruise show that biological enrichment was due to locally enhanced production rather than advection of exogenous living matter. In the most productive context, sustained production effected by phytoplankton cells in the >10 μm size class (diatoms) was being significantly transferred to higher trophic levels through herbivores with a relatively long generation time (copepods). The processes responsible for the fertilization, and their relationship to the jet and its frontal boundary, are discussed.

  18. Impacts of the horizontal and vertical grids on the numerical solutions of the dynamical equations – Part 2: Quasi-geostrophic Rossby modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konor, Celal S.; Randall, David A.

    We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia–gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.

  19. Impacts of the horizontal and vertical grids on the numerical solutions of the dynamical equations – Part 2: Quasi-geostrophic Rossby modes

    DOE PAGES

    Konor, Celal S.; Randall, David A.

    2018-05-08

    We use a normal-mode analysis to investigate the impacts of the horizontal and vertical discretizations on the numerical solutions of the quasi-geostrophic anelastic baroclinic and barotropic Rossby modes on a midlatitude β plane. The dispersion equations are derived for the linearized anelastic system, discretized on the Z, C, D, CD, (DC), A, E and B horizontal grids, and on the L and CP vertical grids. The effects of various horizontal grid spacings and vertical wavenumbers are discussed. A companion paper, Part 1, discusses the impacts of the discretization on the inertia–gravity modes on a midlatitude f plane. The results of our normal-mode analyses for the Rossby waves overall support the conclusions of the previous studies obtained with the shallow-water equations. We identify an area of disagreement with the E-grid solution.

  20. Co-Dependency: An Examination of Underlying Assumptions.

    ERIC Educational Resources Information Center

    Myer, Rick A.; And Others

    1991-01-01

    Discusses need for careful examination of codependency as diagnostic category. Critically examines assumptions that codependency is disease, addiction, or predetermined by the environment. Discusses implications of assumptions. Offers recommendations for mental health counselors focusing on need for systematic research, redirection of efforts to…

  1. Curvature and tangential deflection of discrete arcs: a theory based on the commutator of scatter matrix pairs and its application to vertex detection in planar shape data.

    PubMed

    Anderson, I M; Bezdek, J C

    1984-01-01

    This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary coordinate or chain coded formats: its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. Further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection, and we compare its performance to several well-established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
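    The commutator construction described in this abstract can be sketched numerically. The arc directions, point count and the 30-degree deflection below are illustrative choices, not values from the paper; the sketch only demonstrates that the commutator of two 2x2 scatter matrices has a single independent entry that vanishes for collinear arcs:

    ```python
    import math

    def scatter_matrix(points):
        """2x2 sample scatter matrix (sums of squares/products about the mean)."""
        n = len(points)
        mx = sum(p[0] for p in points) / n
        my = sum(p[1] for p in points) / n
        sxx = sum((p[0] - mx) ** 2 for p in points)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
        syy = sum((p[1] - my) ** 2 for p in points)
        return [[sxx, sxy], [sxy, syy]]

    def commutator(a, b):
        """C = AB - BA for 2x2 matrices; antisymmetric when A and B are
        symmetric, so C[0][1] is its single independent entry."""
        return [[sum(a[i][k] * b[k][j] - b[i][k] * a[k][j] for k in range(2))
                 for j in range(2)] for i in range(2)]

    def segment(angle, n=20):
        """Discrete straight arc of n points along direction `angle` (radians)."""
        return [(t * math.cos(angle) / (n - 1), t * math.sin(angle) / (n - 1))
                for t in range(n)]

    # Two discrete arcs meeting at a vertex with a 30-degree deflection:
    s1 = scatter_matrix(segment(0.0))
    s2 = scatter_matrix(segment(math.radians(30.0)))
    c = commutator(s1, s2)
    print(c[0][1])                                             # nonzero: arcs deflect
    print(commutator(s1, scatter_matrix(segment(0.0)))[0][1])  # collinear arcs -> 0.0
    ```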

  2. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

    The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so now there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not all that difficult to yield the material. Friction is another non-linearity, and the blade is made of a nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models, using contact elements, have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions, developed by Hertz, for non-conforming bodies made of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity. The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions to

  3. Simultaneous Optical Measurements of Axial and Tangential Steady-State Blade Deflections

    NASA Technical Reports Server (NTRS)

    Kurkov, Anatole P.; Dhadwal, Harbans S.

    1999-01-01

    Currently, the majority of fiber-optic blade instrumentation is being designed and manufactured by aircraft-engine companies for their own use. The most commonly employed probe for optical blade deflection measurements is the spot probe. One of its characteristics is that the incident spot on a blade is not fixed relative to the blade, but changes depending on the blade deformation associated with centrifugal and aerodynamic loading. While there are geometrically more complicated optical probe designs in use by different engine companies, this paper offers an alternate solution derived from a probe-mount design feature that allows one to change the probe axial position until the incident spot contacts either a leading or a trailing edge. By tracing the axial position of either blade edge one is essentially extending the deflection measurement to two dimensions, axial and tangential. The blade deflection measurements were obtained during a wind tunnel test of a fan prototype.

  4. Integrated axial and tangential serpentine cooling circuit in a turbine airfoil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ching-Pang; Jiang, Nan; Marra, John J

    2015-05-05

    A continuous serpentine cooling circuit forming a progression of radial passages (44, 45, 46, 47A, 48A) between pressure and suction side walls (52, 54) in a MID region of a turbine airfoil (24). The circuit progresses first axially, then tangentially, ending in a last radial passage (48A) adjacent to the suction side (54) and not adjacent to the pressure side (52). The passages of the axial progression (44, 45, 46) may be adjacent to both the pressure and suction side walls of the airfoil. The next to last radial passage (47A) may be adjacent to the pressure side wall and not adjacent to the suction side wall. The last two radial passages (47A, 48A) may be longer along the pressure and suction side walls respectively than they are in a width direction, providing increased direct cooling surface area on the interiors of these hot walls.

  5. Preliminary design of a tangentially viewing imaging bolometer for NSTX-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, B. J., E-mail: peterson@LHD.nifs.ac.jp; Mukai, K.; SOKENDAI

    2016-11-15

    The infrared imaging video bolometer (IRVB) measures plasma radiated power images using a thin metal foil. Two different designs with a tangential view of NSTX-U are made assuming a 640 × 480 (1280 × 1024) pixel, 30 (105) fps, 50 (20) mK, IR camera imaging the 9 cm × 9 cm × 2 μm Pt foil. The foil is divided into 40 × 40 (64 × 64) IRVB channels. This gives a spatial resolution of 3.4 (2.2) cm on the machine mid-plane. The noise equivalent power density of the IRVB is given as 113 (46) μW/cm² for a time resolution of 33 (20) ms. Synthetic images derived from Scrape Off Layer Plasma Simulation data using the IRVB geometry show peak signal levels ranging from ∼0.8 to ∼80 (∼0.36 to ∼26) mW/cm².

  6. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography

    PubMed Central

    Dutta, Amit K.; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A.; Zhang, Ada W.; Tustian, Andrew D.; Zydney, Andrew L.; Shinkazh, Oleg

    2015-01-01

    Recent studies using simple model systems have demonstrated that Continuous Countercurrent Tangential Chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an “after binder” to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (~0.67 g/L) and one with high titer (~6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. PMID:25747172

  7. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography.

    PubMed

    Dutta, Amit K; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A; Zhang, Ada W; Tustian, Andrew D; Zydney, Andrew L; Shinkazh, Oleg

    2015-11-10

    Recent studies using simple model systems have demonstrated that continuous countercurrent tangential chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an "after binder" to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (∼ 0.67 g/L) and one with high titer (∼ 6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. Copyright © 2015 Elsevier B.V. All rights reserved.
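    The abstract does not reproduce the form of the paper's kinetic binding model; a minimal pseudo-first-order sketch of the residence-time calculation it alludes to might look like this, with a purely hypothetical rate constant:

    ```python
    import math

    def residence_time(target_yield, k_bind):
        """Residence time needed for a first-order binding step to reach
        target_yield, assuming irreversible pseudo-first-order capture:
        bound fraction f(t) = 1 - exp(-k_bind * t)."""
        return -math.log(1.0 - target_yield) / k_bind

    # Hypothetical rate constant (1/min) -- not a value from the paper.
    k = 0.5
    for y in (0.90, 0.99):
        print(f"yield {y:.0%}: {residence_time(y, k):.1f} min")
    ```

    The point of such a model is the diminishing return: pushing yield from 90% to 99% roughly doubles the required residence time, which is why the multi-stage binding step has to be sized against productivity.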

  8. Developing animals flout prominent assumptions of ecological physiology.

    PubMed

    Burggren, Warren W

    2005-08-01

    Every field of biology has its assumptions, but when they grow to be dogma, they can become constraining. This essay presents data-based challenges to several prominent assumptions of developmental physiologists. The ubiquity of allometry is such an assumption, yet animal development is characterized by rate changes that are counter to allometric predictions. Physiological complexity is assumed to increase with development, but examples are provided showing that complexity can be greatest at intermediate developmental stages. It is assumed that organs have functional equivalency in embryos and adults, yet embryonic structures can have quite different functions than inferred from adults. Another assumption challenged is the duality of neural control (typically sympathetic and parasympathetic), since one of these two regulatory mechanisms typically appears considerably earlier in development than the other. A final assumption challenged is the notion that divergent phylogeny creates divergent physiologies in embryos just as in adults, when in fact early in development disparate vertebrate taxa show great quantitative as well as qualitative similarity. Collectively, the inappropriateness of these prominent assumptions based on adult studies suggests that investigations of embryos, larvae and fetuses be conducted with appreciation for their potentially unique physiologies.

  9. Relationship between linear velocity and tangential push force while turning to change the direction of the manual wheelchair.

    PubMed

    Hwang, Seonhong; Lin, Yen-Sheng; Hogaboom, Nathan S; Wang, Lin-Hwa; Koontz, Alicia M

    2017-08-28

    Wheelchair propulsion is a major cause of upper limb pain and injuries for manual wheelchair users with spinal cord injuries (SCIs). Few studies have investigated wheelchair turning biomechanics on natural ground surfaces. The purpose of this study was to investigate the relationship between tangential push force and linear velocity of the wheelchair during the turning portions of propulsion. Using an instrumented handrim, velocity and push force data were recorded for 25 subjects while they propelled their own wheelchairs on a concrete floor along a figure-eight-shaped course at maximum velocity. The braking force (1.03 N) applied to the inside wheel while turning was the largest of all push forces (p<0.05). Larger changes in squared velocity while turning were significantly correlated with higher propulsive and braking forces used at the pre-turning, turning, and post-turning phases (p<0.05). Subjects with less change of velocity while turning needed less braking force to maneuver themselves successfully and safely around the turns. Considering the magnitude and direction of the tangential force applied to the wheel, there appear to be higher risks of injury and instability for upper limb joints when braking the inside wheel to turn. The results provide insight into wheelchair setup and mobility skills training for wheelchair users.
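    The reported correlation between changes in squared velocity and push force follows from the work-energy theorem. A sketch with illustrative numbers (the mass, speeds and push distance below are assumptions, not values from the study, and this gives the mean force on the whole rider-chair system rather than the handrim force the study measured):

    ```python
    def mean_tangential_force(mass, v_before, v_after, distance):
        """Mean tangential force implied by a change in squared velocity over a
        push distance, from the work-energy theorem:
        F = m * (v_after^2 - v_before^2) / (2 * d).
        Negative values indicate braking."""
        return mass * (v_after**2 - v_before**2) / (2.0 * distance)

    # Illustrative: a 90 kg rider+chair slowing from 1.2 m/s to 0.9 m/s
    # over a 0.5 m arc while entering a turn (a braking action).
    print(mean_tangential_force(90.0, 1.2, 0.9, 0.5))
    ```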

  10. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

    The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  11. 7 CFR 1980.366 - Transfer and assumption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 14 2012-01-01 2012-01-01 false Transfer and assumption. 1980.366 Section 1980.366...) PROGRAM REGULATIONS (CONTINUED) GENERAL Rural Housing Loans § 1980.366 Transfer and assumption. (a) General. Lenders may, but are not required to, permit a transfer to an eligible applicant. A transfer and...

  12. Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and

    Science.gov Websites


  13. Scramjet fuel injector design parameters and considerations: Development of a two-dimensional tangential fuel injector with constant pressure at the flame

    NASA Technical Reports Server (NTRS)

    Agnone, A. M.

    1972-01-01

    The factors affecting a tangential fuel injector design for scramjet operation are reviewed and their effect on the efficiency of the supersonic combustion process is evaluated using both experimental data and theoretical predictions. A description of the physical problem of supersonic combustion and method of analysis is followed by a presentation and evaluation of some standard and exotic types of fuel injectors. Engineering fuel injector design criteria and hydrogen ignition schemes are presented along with a cursory review of available experimental data. A two-dimensional tangential fuel injector design is developed using analyses as a guide in evaluating the effects on the combustion process of various initial and boundary conditions including splitter plate thickness, injector wall temperature, pressure gradients, etc. The fuel injector wall geometry is shaped so as to maintain approximately constant pressure at the flame as required by a cycle analysis. A viscous characteristics program which accounts for lateral as well as axial pressure variations due to the mixing and combustion process is used in determining the wall geometry.

  14. Parallel momentum input by tangential neutral beam injections in stellarator and heliotron plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, S., E-mail: nishimura.shin@lhd.nifs.ac.jp; Nakamura, Y.; Nishioka, K.

    The configuration dependence of parallel momentum inputs to target plasma particle species by tangentially injected neutral beams is investigated in non-axisymmetric stellarator/heliotron model magnetic fields by assuming the existence of magnetic flux-surfaces. In parallel friction integrals of the full Rosenbluth-MacDonald-Judd collision operator in thermal particles' kinetic equations, numerically obtained eigenfunctions are used for excluding trapped fast ions that cannot contribute to the friction integrals. It is found that the momentum inputs to thermal ions strongly depend on magnetic field strength modulations on the flux-surfaces, while the input to electrons is insensitive to the modulation. In future plasma flow studies requiring flow calculations of all particle species in more general non-symmetric toroidal configurations, the eigenfunction method investigated here will be useful.

  15. Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions

    Science.gov Websites


  16. Teaching Practices: Reexamining Assumptions.

    ERIC Educational Resources Information Center

    Spodek, Bernard, Ed.

    This publication contains eight papers, selected from papers presented at the Bicentennial Conference on Early Childhood Education, that discuss different aspects of teaching practices. The first two chapters reexamine basic assumptions underlying the organization of curriculum experiences for young children. Chapter 3 discusses the need to…

  17. Microalgae fractionation using steam explosion, dynamic and tangential cross-flow membrane filtration.

    PubMed

    Lorente, E; Hapońska, M; Clavero, E; Torras, C; Salvadó, J

    2017-08-01

    In this study, the microalga Nannochloropsis gaditana was subjected to acid-catalysed steam explosion treatment and the resulting exploded material was subsequently fractionated to separate the different fractions (lipids, sugars and solids). Conventional and vibrational membrane setups were used with several polymeric commercial membranes. Two different routes were followed: (1) filtration + lipid solvent extraction and (2) lipid solvent extraction + filtration. Route 1 proved to be much better, since the membrane used for filtration permeated the sugar aqueous phase and retained the fraction containing lipids; the subsequent extraction required a much smaller amount of solvent and gave a better recovery yield. Filtration allowed complete lipid rejection. Dynamic filtration improved permeability compared to tangential cross-flow filtration. The best membrane performance was achieved using a 5000 Da membrane with the dynamic system, obtaining a permeability of 6 L/h/m²/bar. Copyright © 2017 Elsevier Ltd. All rights reserved.
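    A permeability reported in L/h/m²/bar converts directly to permeate flow once the module area and trans-membrane pressure are known; a small sketch, where the area and pressure are assumed for illustration and are not from the paper:

    ```python
    def permeate_flow(permeability_L_h_m2_bar, area_m2, tmp_bar):
        """Permeate flow (L/h) from membrane permeability, filtration area and
        trans-membrane pressure: Q = Lp * A * TMP."""
        return permeability_L_h_m2_bar * area_m2 * tmp_bar

    # Best reported permeability (6 L/h/m^2/bar) with an assumed 0.05 m^2
    # lab-scale module operated at 2 bar:
    print(permeate_flow(6.0, 0.05, 2.0))  # L/h
    ```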

  18. Sessile multidroplets and salt droplets under high tangential electric fields

    PubMed Central

    Xie, Guoxin; He, Feng; Liu, Xiang; Si, Lina; Guo, Dan

    2016-01-01

    Understanding the interaction behaviors between sessile droplets under imposed high voltages is very important in many practical situations, e.g., microfluidic devices and the degradation/aging problems of outdoor high-power applications. In the present work, the droplet coalescence, the discharge activity and the surface thermal distribution response between sessile multidroplets and chloride salt droplets under high tangential electric fields have been investigated with infrared thermography, high-speed photography and pulse current measurement. Obvious polarity effects on the discharge path direction and the temperature change in the droplets in the initial stage after discharge initiation were observed, due to the anodic dissolution of metal ions from the electrode. In the case of sessile aligned multidroplets, the discharge path direction could affect the location of initial droplet coalescence. The smaller unmerged droplet would be drained into the merged large droplet as a result of the pressure difference inside the droplets rather than of the asymmetric temperature change due to discharge. The discharge inception voltages and the temperature variations for two salt droplets correlated closely with the ionization degree of the salt, as well as with the interfacial electrochemical reactions near the electrodes. Mechanisms of these observed phenomena were discussed. PMID:27121926

  19. Tangential Flow Ultrafiltration Allows Purification and Concentration of Lauric Acid-/Albumin-Coated Particles for Improved Magnetic Treatment.

    PubMed

    Zaloga, Jan; Stapf, Marcus; Nowak, Johannes; Pöttler, Marina; Friedrich, Ralf P; Tietze, Rainer; Lyer, Stefan; Lee, Geoffrey; Odenbach, Stefan; Hilger, Ingrid; Alexiou, Christoph

    2015-08-14

    Superparamagnetic iron oxide nanoparticles (SPIONs) are frequently used for drug targeting, hyperthermia and other biomedical purposes. Recently, we have reported the synthesis of lauric acid-/albumin-coated iron oxide nanoparticles SEON(LA-BSA), which were synthesized using excess albumin. For optimization of magnetic treatment applications, SPION suspensions need to be purified of excess surfactant and concentrated. Conventional methods for the purification and concentration of such ferrofluids often involve high shear stress and low purification rates for macromolecules, like albumin. In this work, removal of albumin by low shear stress tangential ultrafiltration and its influence on SEON(LA-BSA) particles was studied. Hydrodynamic size, surface properties and, consequently, colloidal stability of the nanoparticles remained unchanged by filtration or concentration up to four-fold (v/v). Thereby, the saturation magnetization of the suspension can be increased from 446.5 A/m up to 1667.9 A/m. In vitro analysis revealed that cellular uptake of SEON(LA-BSA) changed only marginally. The specific absorption rate (SAR) was not greatly affected by concentration. In contrast, the maximum temperature Tmax in magnetic hyperthermia is greatly enhanced from 44.4 °C up to 64.9 °C by the concentration of the particles up to 16.9 mg/mL total iron. Taken together, tangential ultrafiltration is feasible for purifying and concentrating complex hybrid coated SPION suspensions without negatively influencing specific particle characteristics. This enhances their potential for magnetic treatment.
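    A quick consistency check on the saturation magnetizations reported above shows the roughly linear scaling with particle concentration that the abstract implies. The two magnetization values are taken from the abstract; the comparison itself is ours, not the authors':

    ```python
    # Saturation magnetization of the SEON(LA-BSA) suspension before and after
    # up-to-four-fold (v/v) concentration, as reported in the abstract (A/m):
    m_initial = 446.5
    m_concentrated = 1667.9

    # For a dilute suspension, magnetization should scale about linearly with
    # particle content, so the ratio should sit near the nominal 4x.
    ratio = m_concentrated / m_initial
    print(round(ratio, 2))  # close to, but slightly below, 4
    ```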

  20. An experimental and numerical study of endwall heat transfer in a turbine blade cascade including tangential heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Ratto, Luca; Satta, Francesca; Tanda, Giovanni

    2018-06-01

    This paper presents an experimental and numerical investigation of heat transfer in the endwall region of a large-scale turbine cascade. The steady-state liquid crystal technique has been used to obtain the map of the heat transfer coefficient for a constant heat flux boundary condition. In the presence of two- and three-dimensional flows with significant spatial variations of the heat transfer coefficient, tangential heat conduction can lead to errors in the determination of the heat transfer coefficient, since local heat fluxes at the wall-to-fluid interface tend to differ from point to point and surface temperatures to be smoothed out, making the uniform-heat-flux boundary condition difficult to achieve perfectly. For this reason, numerical simulations of flow and heat transfer in the cascade, including the effect of tangential heat conduction inside the endwall, have been performed. The major objective of the numerical simulations was to investigate the influence of wall heat conduction on the convective heat transfer coefficient determined during a nominally iso-flux heat transfer experiment, and to interpret possible differences between numerical and experimental heat transfer results. Results are presented and discussed in terms of local Nusselt number and a convenient wall heat flux function for two values of the Reynolds number (270,000 and 960,000).
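    Results reported as a local Nusselt number come from the standard non-dimensionalization Nu = hL/k of the measured heat transfer coefficient; a one-line sketch, with illustrative values that are not from the paper:

    ```python
    def nusselt(h, length, k_fluid):
        """Nusselt number from a convective heat transfer coefficient h
        (W/m^2/K), a characteristic length (m), e.g. the blade chord, and the
        fluid thermal conductivity k_fluid (W/m/K): Nu = h * L / k."""
        return h * length / k_fluid

    # Illustrative: h = 120 W/m^2/K over a 0.3 m chord in air (k ~ 0.026 W/m/K).
    print(nusselt(120.0, 0.3, 0.026))
    ```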

  1. Tangential blowing for control of strong normal shock - Boundary layer interactions on inlet ramps

    NASA Technical Reports Server (NTRS)

    Schwendemann, M. F.; Sanders, B. W.

    1982-01-01

    The use of tangential blowing from a row of holes in an aft-facing step is found to provide good control of the ramp boundary layer-normal shock interaction on a fixed-geometry inlet over a wide range of inlet mass flow ratios. Ramp Mach numbers of 1.36 and 1.96 are investigated. The blowing geometry is found to have a significant effect on system performance at the higher Mach number. The use of high-temperature air in the blowing system, however, has only a slight effect on performance. The required blowing rates are high for the most severe test conditions. In addition, the required blowing coefficient is found to be proportional to the normal shock pressure rise.
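    The normal shock pressure rise the blowing coefficient scales with can be made concrete with the ideal-gas normal-shock relation evaluated at the two ramp Mach numbers studied (γ = 1.4 for air is an assumption of this sketch, not stated in the abstract):

    ```python
    def normal_shock_pressure_ratio(mach, gamma=1.4):
        """Static pressure ratio p2/p1 across a normal shock in an ideal gas:
        p2/p1 = 1 + 2*gamma/(gamma + 1) * (M^2 - 1)."""
        return 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach**2 - 1.0)

    # Ramp Mach numbers from the study:
    for m in (1.36, 1.96):
        print(f"M = {m}: p2/p1 = {normal_shock_pressure_ratio(m):.2f}")
    ```

    The pressure rise at M = 1.96 is more than double that at M = 1.36, consistent with the observation that the most severe test conditions demand the highest blowing rates.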

  2. Occupancy estimation and the closure assumption

    USGS Publications Warehouse

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs aimed at limiting the likelihood of closure violations. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire, respectively, showing violations of closure across time periods of 3 weeks and 8 days, respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing
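    The detectability adjustment motivating these models can be illustrated with a toy simulation: under closure, the naive "detected at least once" estimator converges to ψ(1 − (1 − p)^K), which falls below the true occupancy ψ whenever detection is imperfect. All parameter values below are illustrative, not from the study:

    ```python
    import random

    random.seed(1)

    def naive_occupancy(n_sites=10000, psi=0.6, p_detect=0.4, n_surveys=3):
        """Fraction of sites with at least one detection across repeat surveys,
        assuming closure (occupancy state fixed between surveys)."""
        detected = 0
        for _ in range(n_sites):
            occupied = random.random() < psi
            if occupied and any(random.random() < p_detect
                                for _ in range(n_surveys)):
                detected += 1
        return detected / n_sites

    # The naive estimate sits near psi * (1 - (1 - p)^K) = 0.47,
    # well below the true psi = 0.6 -- the bias occupancy models correct for.
    est = naive_occupancy()
    print(est, 0.6 * (1 - (1 - 0.4) ** 3))
    ```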

  3. When Proofs Reflect More on Assumptions than Conclusions

    ERIC Educational Resources Information Center

    Dawkins, Paul Christian

    2014-01-01

    This paper demonstrates how questions of "provability" can help students engaged in reinvention of mathematical theory to understand the axiomatic game. While proof demonstrates how conclusions follow from assumptions, "provability" characterizes the dual relation that assumptions are "justified" when they afford…

  4. 13 CFR 120.937 - Assumption.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Assumption. 120.937 Section 120.937 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Development Company... prior written approval. ...

  5. Impact of actuarial assumptions on pension costs: A simulation analysis

    NASA Astrophysics Data System (ADS)

    Yusof, Shaira; Ibrahim, Rose Irnawaty

    2013-04-01

    This study investigates the sensitivity of pension costs to changes in the underlying assumptions of a hypothetical pension plan, in order to gain a perspective on the relative importance of the various actuarial assumptions via a simulation analysis. Simulation analyses are used to examine the impact of actuarial assumptions on pension costs. Two actuarial assumptions are considered in this study: mortality rates and interest rates. Pension costs are calculated using the Accrued Benefit Cost Method with the constant amount (CA) and constant percentage of salary (CS) modifications. The mortality assumptions, and the implied mortality experience of the plan, can potentially have a significant impact on pension costs. The interest rate assumption, by contrast, is inversely related to pension costs. Results of the study have important implications for analysts of pension costs.
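
    The inverse relation between the interest-rate assumption and pension costs can be seen in a minimal present-value sketch. This uses a level annuity-certain with made-up figures, which is far simpler than the Accrued Benefit Cost Method used in the study:

```python
def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of a level annual payment, discounted at `rate`."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical plan: 10,000 per year paid for 20 years.
cost_at_4pct = annuity_pv(10_000, 0.04, 20)
cost_at_8pct = annuity_pv(10_000, 0.08, 20)
# A higher assumed interest rate discounts future benefits more heavily,
# so the computed pension cost falls: cost_at_8pct < cost_at_4pct.
```

    Mortality assumptions would enter the same calculation through survival-weighted payments, which is why both assumptions can materially move the computed cost.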

  6. Tangential Bicortical Locked Fixation Improves Stability in Vancouver B1 Periprosthetic Femur Fractures: A Biomechanical Study.

    PubMed

    Lewis, Gregory S; Caroom, Cyrus T; Wee, Hwabok; Jurgensmeier, Darin; Rothermel, Shane D; Bramer, Michelle A; Reid, John Spence

    2015-10-01

    The biomechanical difficulty in fixation of a Vancouver B1 periprosthetic fracture is purchase of the proximal femoral segment in the presence of the hip stem. Several newer technologies provide the ability to place bicortical locking screws tangential to the hip stem with much longer lengths of screw purchase compared with unicortical screws. This biomechanical study compares the stability of 2 of these newer constructs to previous methods. Thirty composite synthetic femurs were prepared with cemented hip stems. The distal femur segment was osteotomized, and plates were fixed proximally with either (1) cerclage cables, (2) locked unicortical screws, (3) a composite of locked screws and cables, or tangentially directed bicortical locking screws using either (4) a stainless steel locking compression plate system with a Locking Attachment Plate (Synthes) or (5) a titanium alloy Non-Contact Bridging system (Zimmer). Specimens were tested to failure in either axial or torsional quasistatic loading modes (n = 3) after 20 moderate load preconditioning cycles. Stiffness, maximum force, and failure mechanism were determined. Bicortical constructs resisted higher (by an average of at least 27%) maximum forces than the other 3 constructs in torsional loading (P < 0.05). Cable constructs exhibited lower maximum force than all other constructs, in both axial and torsional loading. The bicortical titanium construct was stiffer than the bicortical stainless steel construct in axial loading. Proximal fixation stability is likely improved with the use of bicortical locking screws as compared with traditional unicortical screws and cable techniques. In this study with a limited sample size, we found the addition of cerclage cables to unicortical screws may not offer much improvement in biomechanical stability of unstable B1 fractures.

  7. Tangential Bicortical Locked Fixation Improves Stability in Vancouver B1 Periprosthetic Femur Fractures: A Biomechanical Study

    PubMed Central

    Lewis, Gregory S.; Caroom, Cyrus T.; Wee, Hwabok; Jurgensmeier, Darin; Rothermel, Shane D.; Bramer, Michelle A.; Reid, J. Spence

    2015-01-01

    Objectives The biomechanical difficulty in fixation of a Vancouver B1 periprosthetic fracture is purchase of the proximal femoral segment in the presence of the hip stem. Several newer technologies provide the ability to place bicortical locking screws tangential to the hip stem with much longer lengths of screw purchase compared to unicortical screws. This biomechanical study compares the stability of two of these newer constructs to previous methods. Methods Thirty composite synthetic femurs were prepared with cemented hip stems. The distal femur segment was osteotomized, and plates were fixed proximally with either: (1) cerclage cables; (2) locked unicortical screws; (3) a composite of locked screws and cables; or tangentially directed bicortical locking screws using either (4) a stainless steel LCP system with a Locking Attachment Plate (Synthes), or (5) a titanium alloy NCB system (Zimmer). Specimens were tested to failure in either axial or torsional quasi-static loading modes (n = 3) after 20 moderate load pre-conditioning cycles. Stiffness, maximum force, and failure mechanism were determined. Results Bicortical constructs resisted higher (by an average of at least 27%) maximum forces than the other three constructs in torsional loading (p<0.05). Cable constructs exhibited lower maximum force than all other constructs, in both axial and torsional loading. The bicortical titanium construct was stiffer than the bicortical stainless steel construct in axial loading. Conclusions Proximal fixation stability is likely improved with the use of bicortical locking screws as compared to traditional unicortical screws and cable techniques. In this study with a limited sample size, we found the addition of cerclage cables to unicortical screws may not offer much improvement in biomechanical stability of unstable B1 fractures. PMID:26053467

  8. Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?

    PubMed Central

    Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regard to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
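
    A crude illustration of the kind of assumption check that was rarely performed: computing sample skewness before applying a t-procedure. This is illustrative only; formal checks such as Shapiro-Wilk tests or diagnostic plots are more usual, and the data here are invented:

```python
import statistics

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness; values far from 0
    flag a possible violation of the normality assumption."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

roughly_normal = [4, 5, 5, 6, 6, 6, 7, 7, 8]   # symmetric toy sample
right_skewed = [1, 1, 1, 2, 2, 3, 5, 9, 20]    # heavy right tail
# A |skewness| well above ~1 would argue for a transformation or a
# non-parametric alternative rather than a plain t-test.
```

    The point of the record above is precisely that even checks this cheap are seldom run before the technique is applied.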

  9. The combustion of different air distribution of foursquare tangential circle boiler by numerical simulation

    NASA Astrophysics Data System (ADS)

    Guo, Yue; Du, Lei; Jiang, Long; Li, Qing; Zhao, Zhenning

    2017-01-01

    In this paper, the combustion and NOx emission characteristics of a 300 MW tangentially fired boiler are simulated. We obtain the flue gas velocity field in the furnace, the temperature field, and the concentration distributions of the combustion products, and compare the simulated velocity, temperature, oxygen concentration, and NOx emissions with test results under the waisting air distribution condition. The simulated values coincide well with the test values, verifying the rationality of the model. The flow field in the furnace, the combustion, and the NOx emission characteristics are then simulated under different conditions, comparing waisting distribution of the secondary air in the primary zone, uniform air distribution, and descending ('pagoda') air distribution. The results show that waisting air distribution is useful for reducing NOx emissions.

  10. Structure Optimization of a Grain Impact Piezoelectric Sensor and Its Application for Monitoring Separation Losses on Tangential-Axial Combine Harvesters

    PubMed Central

    Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang

    2015-01-01

    Grain separation loss is a key parameter for weighing the performance of combine harvesters, and also a dominant factor in automatically adjusting their major working parameters. Traditional separation-loss monitoring methods mainly rely on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics, and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on detailed comparative analysis of data for different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method was feasible, with a maximum relative measurement error of 4.63% when harvesting rice. PMID:25594592

  11. Structure optimization of a grain impact piezoelectric sensor and its application for monitoring separation losses on tangential-axial combine harvesters.

    PubMed

    Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang

    2015-01-14

    Grain separation loss is a key parameter for weighing the performance of combine harvesters, and also a dominant factor in automatically adjusting their major working parameters. Traditional separation-loss monitoring methods mainly rely on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics, and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on detailed comparative analysis of data for different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method was feasible, with a maximum relative measurement error of 4.63% when harvesting rice.

  12. Roy's specific life values and the philosophical assumption of humanism.

    PubMed

    Hanna, Debra R

    2013-01-01

    Roy's philosophical assumption of humanism, which is shaped by the veritivity assumption, is considered in terms of her specific life values and in contrast to the contemporary view of humanism. Like veritivity, Roy's philosophical assumption of humanism unites a theocentric focus with anthropological values. Roy's perspective enriches the mainly secular, anthropocentric assumption. In this manuscript, the basis for Roy's perspective of humanism will be discussed so that readers will be able to use the Roy adaptation model in an authentic manner.

  13. Tangential-flow ultrafiltration with integrated inhibition detection for recovery of surrogates and human pathogens from large-volume source water and finished drinking water.

    PubMed

    Gibson, Kristen E; Schwab, Kellogg J

    2011-01-01

    Tangential-flow ultrafiltration was optimized for the recovery of Escherichia coli, Enterococcus faecalis, Clostridium perfringens spores, bacteriophages MS2 and PRD1, murine norovirus, and poliovirus seeded into 100-liter surface water (SW) and drinking water (DW) samples. SW and DW collected from two drinking water treatment plants were then evaluated for human enteric viruses.

  14. Why is it Doing That? - Assumptions about the FMS

    NASA Technical Reports Server (NTRS)

    Feary, Michael; Immanuel, Barshi; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    In the glass cockpit, it's not uncommon to hear exclamations such as "why is it doing that?". Sometimes pilots ask "what were they thinking when they set it this way?" or "why doesn't it tell me what it's going to do next?". Pilots may hold a conceptual model of the automation that is the result of fleet lore, which may or may not be consistent with what the engineers had in mind. But what did the engineers have in mind? In this study, we present some of the underlying assumptions surrounding the glass cockpit. Engineers and designers make assumptions about the nature of the flight task; at the other end, instructor and line pilots make assumptions about how the automation works and how it was intended to be used. These underlying assumptions are seldom recognized or acknowledged. This study is an attempt to explicitly articulate such assumptions to better inform design and training developments. This work is part of a larger project to support training strategies for automation.

  15. Asymptotics and numerics of a family of two-dimensional generalized surface quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Ohkitani, Koji

    2012-09-01

    We study the generalized 2D surface quasi-geostrophic (SQG) equation, where the active scalar is given by a fractional power α of the Laplacian applied to the stream function. This includes the 2D SQG and Euler equations as special cases. Using Poincaré's successive approximation to higher α-derivatives of the active scalar, we derive a variational equation for describing perturbations in the generalized SQG equation. In particular, in the limit α → 0, an asymptotic equation is derived on a stretched time variable τ = αt, which unifies equations in the family near α = 0. The successive approximation is also discussed at the other extreme of the 2D Euler limit α = 2-0. Numerical experiments are presented for both limits. We consider whether the solution behaves in a more singular fashion, with more effective nonlinearity, when α is increased. Two competing effects are identified: the regularizing effect of a fractional inverse Laplacian (control by conservation) and cancellation by symmetry (nonlinearity depletion). Near α = 0 (complete depletion), the solution behaves in a more singular fashion as α increases. Near α = 2 (maximal control by conservation), the solution behaves in a more singular fashion as α decreases, suggesting that there may be some α in [0, 2] at which the solution behaves in the most singular manner. We also present some numerical results of the family for α = 0.5, 1, and 1.5. On the original time t, the H1 norm of θ generally grows more rapidly with increasing α. However, on the new time τ, this order is reversed. On the other hand, contour patterns for different α appear to be similar at fixed τ, even though the norms are markedly different in magnitude. Finally, point-vortex systems for the generalized SQG family are discussed to shed light on the above problems of time scale.
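
    Written out under one standard convention (the abstract does not fix signs or normalizations, so take this as an assumed form), the generalized SQG family transports an active scalar θ by the flow of a stream function ψ:

```latex
\partial_t \theta + u \cdot \nabla \theta = 0, \qquad
u = \nabla^{\perp} \psi, \qquad
\theta = -(-\Delta)^{\alpha/2} \psi ,
```

    so that α = 1 recovers the 2D SQG equation, while α = 2 gives the 2D Euler equations in vorticity form, since -(-Δ)ψ = Δψ = ω.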

  16. Sufficiency and Necessity Assumptions in Causal Structure Induction

    ERIC Educational Resources Information Center

    Mayrhofer, Ralf; Waldmann, Michael R.

    2016-01-01

    Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when…

  17. Cdk5 Phosphorylation of ErbB4 is Required for Tangential Migration of Cortical Interneurons

    PubMed Central

    Rakić, Sonja; Kanatani, Shigeaki; Hunt, David; Faux, Clare; Cariboni, Anna; Chiara, Francesca; Khan, Shabana; Wansbury, Olivia; Howard, Beatrice; Nakajima, Kazunori; Nikolić, Margareta; Parnavelas, John G.

    2015-01-01

    Interneuron dysfunction in humans is often associated with neurological and psychiatric disorders, such as epilepsy, schizophrenia, and autism. Some of these disorders are believed to emerge during brain formation, at the time of interneuron specification, migration, and synapse formation. Here, using a mouse model and a host of histological and molecular biological techniques, we report that the signaling molecule cyclin-dependent kinase 5 (Cdk5), and its activator p35, control the tangential migration of interneurons toward and within the cerebral cortex by modulating the critical neurodevelopmental signaling pathway, ErbB4/phosphatidylinositol 3-kinase, that has been repeatedly linked to schizophrenia. This finding identifies Cdk5 as a crucial signaling factor in cortical interneuron development in mammals. PMID:24142862

  18. Cognitive neuroenhancement: false assumptions in the ethical debate.

    PubMed

    Heinz, Andreas; Kipke, Roland; Heimann, Hannah; Wiesing, Urban

    2012-06-01

    The present work critically examines two assumptions frequently stated by supporters of cognitive neuroenhancement. The first, explicitly methodological, assumption is the supposition of effective and side-effect-free neuroenhancers. However, there is an evidence-based concern that the most promising drugs currently used for cognitive enhancement can be addictive. Furthermore, this work describes why the neuronal correlates of key cognitive concepts, such as learning and memory, are so deeply connected with mechanisms implicated in the development and maintenance of addictive behaviour that modification of these systems may inevitably run the risk of addiction to the enhancing drugs. Such a potential risk of addiction could only be falsified by in-depth empirical research. The second, implicit, assumption is that research on neuroenhancement does not pose a serious moral problem. However, the potential for addiction, along with arguments related to research ethics and the potential social impact of neuroenhancement, could invalidate this assumption. It is suggested that ethical evaluation needs to consider the empirical data as well as the question of whether and how such empirical knowledge can be obtained.

  19. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    PubMed

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative, but not positive, trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.

  20. Formalization and analysis of reasoning by assumption.

    PubMed

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.
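
    The flavor of automatically checking a dynamic property against a formalized reasoning trace can be sketched as follows. The trace encoding and the property are toy inventions of our own, not the authors' formal language:

```python
# A trace is a sequence of (action, assumption-label) steps.

def every_assumption_evaluated(trace):
    """Toy dynamic property: every assumption introduced in the trace
    is eventually evaluated later in the same trace."""
    pending = set()
    for action, label in trace:
        if action == "assume":
            pending.add(label)
        elif action == "evaluate":
            pending.discard(label)
    return not pending  # property holds iff no assumption is left open

good = [("assume", "A1"), ("derive", "F1"), ("evaluate", "A1")]
bad = good + [("assume", "A2"), ("derive", "F2")]   # A2 never evaluated
```

    Characteristic properties of this kind validate the reasoning pattern itself, while discriminating properties (e.g. whether assumptions are evaluated eagerly or in batches) separate classes of reasoners, as the record describes.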

  1. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  2. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  3. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  4. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  5. 12 CFR 307.2 - Certification of assumption of deposit liabilities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... satisfactory evidence of such deposit assumption, as required by section 8(q) of the FDI Act (12 U.S.C. 1818(q... evidence of such assumption for purposes of section 8(q). (e) Issuance of an order. The Executive Secretary... satisfactory evidence of such assumption, pursuant to section 8(q) of the FDI Act and this regulation...

  6. ON INTERMITTENT TURBULENCE HEATING OF THE SOLAR WIND: DIFFERENCES BETWEEN TANGENTIAL AND ROTATIONAL DISCONTINUITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xin; Tu Chuanyi; He Jiansen

    The intermittent structures in solar wind turbulence, studied by using measurements from the WIND spacecraft, are identified as being mostly rotational discontinuities (RDs) and rarely tangential discontinuities (TDs), based on the technique described by Smith. Only TD-associated current sheets (TCSs) are found to be accompanied by strong local heating of the solar wind plasma. Statistical results show that the TCSs have a distinct tendency to be associated with local enhancements of the proton temperature, density, and plasma beta, and a local decrease of magnetic field magnitude. Conversely, for RDs, our statistical results do not reveal convincing heating effects. These results confirm the notion that dissipation of solar wind turbulence can take place in intermittent or locally isolated small-scale regions which correspond to TCSs. The possibility of heating associated with RDs is discussed.

  7. CFD analysis of temperature imbalance in superheater/reheater region of tangentially coal-fired boiler

    NASA Astrophysics Data System (ADS)

    Zainudin, A. F.; Hasini, H.; Fadhil, S. S. A.

    2017-10-01

    This paper presents a CFD analysis of the flow, velocity and temperature distribution in a 700 MW tangentially coal-fired boiler operating in Malaysia. The main objective of the analysis is to gain insights on the occurrences in the boiler so as to understand the inherent steam temperature imbalance problem. The results show that the root cause of the problem comes from the residual swirl in the horizontal pass. The deflection of the residual swirl due to the sudden reduction and expansion of the flow cross-sectional area causes velocity deviation between the left and right side of the boiler. This consequently results in flue gas temperature imbalance which has often caused tube leaks in the superheater/reheater region. Therefore, eliminating the residual swirl or restraining it from being diverted might help to alleviate the problem.

  8. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.

    PubMed

    Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A

    2014-11-01

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  9. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D

    DOE PAGES

    Lasnier, Charles J.; Allen, Steve L.; Ellis, Ronald E.; ...

    2014-08-26

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  10. Sampling Assumptions in Inductive Generalization

    ERIC Educational Resources Information Center

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  11. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  12. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  13. SU-F-T-422: Detection of Optimal Tangential Partial Arc Span for VMAT Planning in IntactLeft-Breast Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, U; Sarkar, B; Munshi, A

    Purpose: This study was designed to investigate an appropriate arc span for partial-arc VMAT planning of the intact left breast. Methods: Four cases of carcinoma of the intact left breast were chosen randomly for this study. Medial tangential and left-lateral tangential arcs (G20°, G25°, G30°, G35°, G40°) were used, of equal length and bilaterally symmetric. For each patient, a base plan was generated for the 30° arc, and the remaining plans were generated keeping all plan parameters the same; only the arc span was changed. All plans were generated on the Monaco treatment planning system (V 5.00.02) for a 50 Gy dose in 25 fractions. PTV contours were clipped 3 mm from the skin. All plans were normalized such that 95% of the prescription dose covered 96% of the PTV volume. Results: Mean MU for 20°, 25°, 30°, 35°, and 40° were 509 ± 18.8, 529.1 ± 20.2, 544.4 ± 20.8, 579.1 ± 51.8, and 607.2 ± 40.2; mean hot spot (volume covered by 105% of the prescription dose) was 2.9 ± 1.2, 3.7 ± 3.0, 1.5 ± 1.7, 1.3 ± 0.6, and 0.4 ± 0.4; mean contralateral breast dose (cGy) was 180.4 ± 242.3, 71.5 ± 52.7, 76.2 ± 58.8, 85.9 ± 70.5, and 90.7 ± 70.1; mean heart dose (cGy) was 285.8 ± 87.2, 221.2 ± 62.8, 274.5 ± 95.5, 234.8 ± 73.8, and 263.2 ± 81.6; V20 for the ipsilateral lung was 15.4 ± 5.3, 14.3 ± 3.6, 15.3 ± 2.9, 14.2 ± 3.9, and 14.7 ± 3.2; and V5 for the ipsilateral lung was 33.9 ± 8.2, 31.0 ± 3.5, 42.6 ± 15.6, 36.4 ± 12.9, and 37.0 ± 7.5. Conclusion: The study concluded that the optimal arc span for tangential intact-breast treatment was 30°, because larger arc spans produced low-isodose spill in the ipsilateral lung, while smaller arc spans produced a heterogeneous dose distribution in the PTV.

  14. Simulation of Dose to Surrounding Normal Structures in Tangential Breast Radiotherapy Due to Setup Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied, and the maximum tolerable setup error yielding minimal complications to the surrounding critical structures and acceptable tumor control probability was determined. Twelve patients were selected for this study after breast conservation surgery, of whom 8 were right-sided and 4 were left-sided breast cases. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by an isocentric technique, and the doses to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for the PTV according to ICRU 50 guidelines: mean doses to the PTV, IL, CLL, heart, CLB, and liver; the percentage of lung volume that received a dose of 20 Gy or more (V20); the percentage of heart volume that received a dose of 30 Gy or more (V30); and the volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of the different isocenter shifts in all 3 directions showed that shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in the isocenter should be kept strictly below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.

  15. Design of tangential multi-energy SXR cameras for tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Yamazaki, H.; Delgado-Aparicio, L. F.; Pablant, N.; Hill, K.; Bitter, M.; Takase, Y.; Ono, M.; Stratton, B.

    2017-10-01

    A new synthetic diagnostic capability has been built to study the response of tangential multi-energy soft x-ray pin-hole cameras for arbitrary plasma densities (ne,D), temperatures (Te) and ion concentrations (nZ). For tokamaks and future facilities to operate safely in high-pressure, long-pulse discharges, it is imperative to address key issues associated with impurity sources, core transport and high-Z impurity accumulation. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (e.g. Te, nZ and ΔZeff). These systems are designed to sample the continuum and line emission from low- to high-Z impurities (e.g. C, O, Al, Si, Ar, Ca, Fe, Ni and Mo) in multiple energy ranges. These x-ray cameras will be installed in the MST RFP, as well as the NSTX-U and DIII-D tokamaks, measuring the radial structure of the photon emissivity with a radial resolution below 1 cm at a 500 Hz frame rate and a photon-energy resolution of 500 eV. The layout and expected response of the new systems will be shown for different plasma conditions and impurity concentrations. The effect of toroidal rotation driving poloidal asymmetries in the core radiation is also addressed for the case of NSTX-U.

  16. Philosophy of Technology Assumptions in Educational Technology Leadership: Questioning Technological Determinism

    ERIC Educational Resources Information Center

    Webster, Mark David

    2013-01-01

    Scholars have emphasized that decisions about technology can be influenced by philosophy of technology assumptions, and have argued for research that critically questions technological determinist assumptions. Empirical studies of technology management in fields other than K-12 education provided evidence that philosophy of technology assumptions,…

  17. Deep Borehole Field Test Requirements and Controlled Assumptions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing their close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. Acknowledgements: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  18. Where Are We Going? Planning Assumptions for Community Colleges.

    ERIC Educational Resources Information Center

    Maas, Rao, Taylor and Associates, Riverside, CA.

    Designed to provide community college planners with a series of reference assumptions to consider in the planning process, this document sets forth assumptions related to finance (i.e., operational funds, capital funds, alternate funding sources, and campus financial operations); California state priorities; occupational trends; population (i.e.,…

  19. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    ERIC Educational Resources Information Center

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  20. An Exploration of Dental Students' Assumptions About Community-Based Clinical Experiences.

    PubMed

    Major, Nicole; McQuistan, Michelle R

    2016-03-01

    The aim of this study was to ascertain which assumptions dental students recalled feeling prior to beginning community-based clinical experiences and whether those assumptions were fulfilled or challenged. All fourth-year students at the University of Iowa College of Dentistry & Dental Clinics participate in community-based clinical experiences. At the completion of their rotations, they write a guided reflection paper detailing the assumptions they had prior to beginning their rotations and assessing the accuracy of their assumptions. For this qualitative descriptive study, the 218 papers from three classes (2011-13) were analyzed for common themes. The results showed that the students had a variety of assumptions about their rotations. They were apprehensive about working with challenging patients, performing procedures for which they had minimal experience, and working too slowly. In contrast, they looked forward to improving their clinical and patient management skills and knowledge. Other assumptions involved the site (e.g., the equipment/facility would be outdated; protocols/procedures would be similar to the dental school's). Upon reflection, students reported experiences that both fulfilled and challenged their assumptions. Some continued to feel apprehensive about treating certain patient populations, while others found it easier than anticipated. Students were able to treat multiple patients per day, which led to increased speed and patient management skills. However, some reported challenges with time management. Similarly, students were surprised to discover some clinics were new/updated although some had limited instruments and materials. Based on this study's findings about students' recalled assumptions and reflective experiences, educators should consider assessing and addressing their students' assumptions prior to beginning community-based dental education experiences.

  1. Environmental surveillance of viruses by tangential flow filtration and metagenomic reconstruction.

    PubMed

    Furtak, Vyacheslav; Roivainen, Merja; Mirochnichenko, Olga; Zagorodnyaya, Tatiana; Laassri, Majid; Zaidi, Sohail Z; Rehman, Lubna; Alam, Muhammad M; Chizhikov, Vladimir; Chumakov, Konstantin

    2016-04-14

    An approach is proposed for environmental surveillance of poliovirus by concentrating sewage samples with tangential flow filtration (TFF) followed by deep sequencing of viral RNA. Subsequent to testing the method with samples from Finland, samples from Pakistan, a country endemic for poliovirus, were investigated. Genomic sequencing was either performed directly, for unbiased identification of viruses regardless of their ability to grow in cell cultures, or after virus enrichment by cell culture or immunoprecipitation. Bioinformatics enabled separation and determination of individual consensus sequences. Overall, deep sequencing of the entire viral population identified polioviruses, non-polio enteroviruses, and other viruses. In Pakistani sewage samples, adeno-associated virus, unable to replicate autonomously in cell cultures, was the most abundant human virus. The presence of recombinants of wild polioviruses of serotype 1 (WPV1) was also inferred, whereby currently circulating WPV1 of south-Asian (SOAS) lineage comprised two sub-lineages depending on their non-capsid region origin. Complete genome analyses additionally identified point mutants and intertypic recombinants between attenuated Sabin strains in the Pakistani samples, and in one Finnish sample. The approach could allow rapid environmental surveillance of viruses causing human infections. It creates a permanent digital repository of the entire virome potentially useful for retrospective screening of future discovered viruses.

  2. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Transfer and assumption-AMP loans. 772.10 Section 772..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1) The...

  3. 32 CFR 700.703 - To announce assumption of command.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false To announce assumption of command. 700.703... Chief and Other Commanders Titles and Duties of Commanders § 700.703 To announce assumption of command. (a) Upon assuming command, commanders shall so advise appropriate superiors, and the units of their...

  4. 32 CFR 700.703 - To announce assumption of command.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false To announce assumption of command. 700.703... Chief and Other Commanders Titles and Duties of Commanders § 700.703 To announce assumption of command. (a) Upon assuming command, commanders shall so advise appropriate superiors, and the units of their...

  5. 32 CFR 700.703 - To announce assumption of command.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false To announce assumption of command. 700.703... Chief and Other Commanders Titles and Duties of Commanders § 700.703 To announce assumption of command. (a) Upon assuming command, commanders shall so advise appropriate superiors, and the units of their...

  6. 32 CFR 700.703 - To announce assumption of command.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false To announce assumption of command. 700.703... Chief and Other Commanders Titles and Duties of Commanders § 700.703 To announce assumption of command. (a) Upon assuming command, commanders shall so advise appropriate superiors, and the units of their...

  7. 32 CFR 700.703 - To announce assumption of command.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false To announce assumption of command. 700.703... Chief and Other Commanders Titles and Duties of Commanders § 700.703 To announce assumption of command. (a) Upon assuming command, commanders shall so advise appropriate superiors, and the units of their...

  8. Experimental investigation of tangential blowing for control of the strong shock boundary layer interaction on inlet ramps

    NASA Technical Reports Server (NTRS)

    Schwendemann, M. F.

    1981-01-01

    A 0.165-scale isolated inlet model was tested in the NASA Lewis Research Center 8-ft by 6-ft Supersonic Wind Tunnel. Ramp boundary layer control was provided by tangential blowing from a row of holes in an aft-facing step set into the ramp surface. Testing was performed at Mach numbers from 1.36 to 1.96 using both cold and heated air in the blowing system. Stable inlet flow was achieved at all Mach numbers. Blowing hole geometry was found to be significant at Mach 1.96. Blowing air temperature was found to have only a small effect on system performance. High blowing levels were required at the most severe test conditions.

  9. Flurbiprofen Axetil Enhances Analgesic Effects of Sufentanil and Attenuates Postoperative Emergence Agitation and Systemic Proinflammation in Patients Undergoing Tangential Excision Surgery

    PubMed Central

    Geng, Wujun; Hong, Wandong; Wang, Junlu; Dai, Qinxue; Mo, Yunchang; Shi, Kejian; Sun, Jiehao; Qin, Jinling; Li, Mei; Tang, Hongli

    2015-01-01

    Objective. Our present study tested whether flurbiprofen axetil could reduce perioperative sufentanil consumption and provide postoperative analgesia with a decrease in emergence agitation and systemic proinflammatory cytokine release. Methods. Ninety patients undergoing tangential excision surgery were randomly assigned to three groups: (1) a preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by patient-controlled analgesia (PCA) pump, (2) a preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 100 mg flurbiprofen axetil by PCA pump, and (3) 10 mL placebo and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by PCA pump. Results. Preoperative administration of flurbiprofen axetil decreased postoperative tramadol consumption and the visual analog scale scores at 4, 6, 12, and 24 h after surgery, which were further decreased by postoperative administration of flurbiprofen axetil. Furthermore, flurbiprofen axetil attenuated the emergence agitation score and Ramsay score at 0, 5, and 10 min after extubation and reduced the TNF-α and interleukin- (IL-) 6 levels at 24 and 48 h after the operation. Conclusion. Flurbiprofen axetil enhances the analgesic effects of sufentanil and attenuates emergence agitation and systemic proinflammation in patients undergoing tangential excision surgery. PMID:26273138

  10. Flurbiprofen Axetil Enhances Analgesic Effects of Sufentanil and Attenuates Postoperative Emergence Agitation and Systemic Proinflammation in Patients Undergoing Tangential Excision Surgery.

    PubMed

    Geng, Wujun; Hong, Wandong; Wang, Junlu; Dai, Qinxue; Mo, Yunchang; Shi, Kejian; Sun, Jiehao; Qin, Jinling; Li, Mei; Tang, Hongli

    2015-01-01

    Our present study tested whether flurbiprofen axetil could reduce perioperative sufentanil consumption and provide postoperative analgesia with a decrease in emergence agitation and systemic proinflammatory cytokine release. Ninety patients undergoing tangential excision surgery were randomly assigned to three groups: (1) a preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by patient-controlled analgesia (PCA) pump, (2) a preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 100 mg flurbiprofen axetil by PCA pump, and (3) 10 mL placebo and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by PCA pump. Preoperative administration of flurbiprofen axetil decreased postoperative tramadol consumption and the visual analog scale scores at 4, 6, 12, and 24 h after surgery, which were further decreased by postoperative administration of flurbiprofen axetil. Furthermore, flurbiprofen axetil attenuated the emergence agitation score and Ramsay score at 0, 5, and 10 min after extubation and reduced the TNF-α and interleukin- (IL-) 6 levels at 24 and 48 h after the operation. Flurbiprofen axetil enhances the analgesic effects of sufentanil and attenuates emergence agitation and systemic proinflammation in patients undergoing tangential excision surgery.

  11. Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration

    PubMed Central

    Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.

    2014-01-01

    Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having a 1720 cm² surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10¹⁰ TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103
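
    The fold-concentration and recovery figures quoted above follow from simple volume-and-titer bookkeeping. A minimal sketch (the function names and example volumes are illustrative assumptions consistent with the ~560 mL-per-flask and >2000-fold figures, not code from the report):

```python
def fold_concentration(initial_volume_ml: float, final_volume_ml: float) -> float:
    """Volumetric fold-concentration achieved by the TFF steps."""
    return initial_volume_ml / final_volume_ml

def recovery_fraction(initial_volume_ml: float, initial_titer_tu_per_ml: float,
                      final_volume_ml: float, final_titer_tu_per_ml: float) -> float:
    """Fraction of transducing units (TU) surviving concentration."""
    total_in = initial_volume_ml * initial_titer_tu_per_ml
    total_out = final_volume_ml * final_titer_tu_per_ml
    return total_out / total_in

# Example: three multi-layer flasks (~560 mL each) concentrated to 0.8 mL.
print(fold_concentration(3 * 560, 0.8))  # 2100.0, i.e. 2100-fold
```

    With >97% recovery, a starting titer of ~10⁷ TU/mL at this fold-concentration would land above 2 × 10¹⁰ TU/mL, the same order as the >10¹⁰ TU/mL reported.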

  12. Assumptions at the philosophical and programmatic levels in evaluation.

    PubMed

    Mertens, Donna M

    2016-12-01

    Stakeholders and evaluators hold assumptions at the philosophical, methodological, and programmatic levels. The use of a transformative philosophical framework is presented as a way for evaluators to become more aware of the implications of various assumptions made by themselves and program stakeholders. The argument is examined and demonstrated that evaluators who are aware of the assumptions that underlie their evaluation choices are able to provide useful support for stakeholders in examining the assumptions they hold with regard to the nature of the problem being addressed, the program designed to solve the problem, and the approach to evaluation that is appropriate in that context. Such an informed approach has the potential to foster the development of more appropriate and culturally responsive programs implemented in ways that lead to the desired impacts, as well as evaluation approaches that support effective solutions to intransigent social problems. These arguments are illustrated through examples of evaluations from multiple sectors; additional challenges are also identified.

  13. Formation of Singularities at the Interface of Liquid Dielectrics in a Horizontal Electric Field in the Presence of Tangential Velocity Discontinuity

    NASA Astrophysics Data System (ADS)

    Zubarev, N. M.; Kochurin, E. A.

    2018-03-01

    Nonlinear dynamics of the interface between dielectric liquids under conditions where the Kelvin-Helmholtz instability is suppressed by a tangential electric field has been investigated. Two broad classes of exact analytical solutions to the equations of motion, describing the evolution of spatially localized and periodic interface perturbations, have been found. Both classes of solutions lead to the formation of strong singularities: interface discontinuities with formally infinite amplitudes. The sign of the discontinuity is determined by the sign of the liquid velocity jump at the interface.

  14. Geostrophic adjustment in a shallow-water numerical model as it relates to thermospheric dynamics

    NASA Technical Reports Server (NTRS)

    Larsen, M. F.; Mikkelsen, I. S.

    1986-01-01

    The theory of geostrophic adjustment and its application to the dynamics of the high-latitude thermosphere have been discussed in previous papers based on a linearized treatment of the fluid dynamical equations. However, a linearized treatment is only valid for small Rossby numbers, given by Ro = V/fL, where V is the wind speed, f is the local value of the Coriolis parameter, and L is a characteristic horizontal scale for the flow. For typical values in the auroral zone, the approximation is not reasonable for wind speeds greater than about 25 m/s. A shallow-water (one-layer) model was developed that includes the spherical geometry and full nonlinear dynamics in the momentum equations in order to isolate the effects of the nonlinearities on the adjustment process. A belt of accelerated winds between 60 deg and 70 deg latitude was used as the initial condition. The adjustment process was found to proceed as expected from the linear formulation, but an asymmetry between the responses for eastward and westward flows results from the nonlinear curvature (centrifugal) terms. In general, the amplitude of an eastward flowing wind will be less after adjustment than that of a westward wind. For instance, if the initial wind velocity is 300 m/s, the linearized theory predicts a final wind speed of 240 m/s regardless of the flow direction. However, the nonlinear curvature terms modify the response and produce a final wind speed of only 200 m/s for an initial eastward wind and a final wind speed of almost 300 m/s for an initial westward flow direction. Also, less gravity wave energy is produced by the adjustment of the westward flow than by the adjustment of the eastward flow. The implication is that the response of the thermosphere should be significantly different on the dawn and dusk sides of the auroral oval. Larger flow velocities would be expected on the dusk side, since the plasma will accelerate the flow in a westward direction in that sector.
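
    The small-Rossby-number criterion above is easy to evaluate numerically. A minimal sketch of Ro = V/(fL); the characteristic scale L = 1000 km and latitude 65° are assumed illustrative values, not taken from the paper:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def rossby_number(V: float, lat_deg: float, L: float) -> float:
    """Ro = V / (f L), with Coriolis parameter f = 2*Omega*sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return V / (f * L)

# Auroral-zone estimates at 65 deg latitude with L = 1000 km:
print(round(rossby_number(25.0, 65.0, 1.0e6), 2))   # ~0.19: linearization already marginal
print(round(rossby_number(300.0, 65.0, 1.0e6), 2))  # ~2.27: strongly nonlinear regime
```

    This reproduces the paper's point: even 25 m/s gives a Rossby number that is not small, and a 300 m/s jet is far outside the linear regime.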

  15. Ratchet flow of thin liquid films induced by a two-frequency tangential forcing

    NASA Astrophysics Data System (ADS)

    Sterman-Cohen, Elad; Bestehorn, Michael; Oron, Alexander

    2018-02-01

    A possibility of saturating Rayleigh-Taylor instability in a thin liquid film on the underside of a substrate in the gravity field by harmonic vibration of the substrate was recently investigated [E. Sterman-Cohen, M. Bestehorn, and A. Oron, Phys. Fluids 29, 052105 (2017); Erratum, Phys. Fluids 29, 109901 (2017)]. In the present work, we investigate the feasibility of creating a directional flow of the fluid in a film in the Rayleigh-Taylor configuration and controlling its flow rate by applying a two-frequency tangential forcing to the substrate. It is shown that in this situation, a ratchet flow develops, and the dependence of its flow rate on the vibration frequency, amplitude, its periodicity, and asymmetry level is investigated for water and silicone-oil films. A cause for the emergence of symmetry-breaking and an ensuing flow in a preferred direction is discussed. Some aspects of a ratchet flow in a liquid film placed on top of the substrate are discussed as well. A comparison with the case of a neglected fluid inertia is made, and the differences are explained.

  16. On the generation of tangential ground motion by underground explosions in jointed rocks

    NASA Astrophysics Data System (ADS)

    Vorobiev, Oleg; Ezzedine, Souheil; Antoun, Tarabay; Glenn, Lewis

    2015-03-01

    This paper describes computational studies of tangential ground motions generated by spherical explosions in a heavily jointed granite formation. Various factors affecting shear wave generation are considered, including joint spacing, orientation and frictional properties. Simulations are performed both in 2-D, for a single joint set, to elucidate the basic response mechanisms, and in 3-D, for multiple joint sets, to represent in situ conditions in a realistic geological setting. The joints are modelled explicitly using both contact elements and weakness planes in the material. Simulations are performed both deterministically and stochastically to quantify the effects of geological uncertainties on near-field ground motions. The mechanical properties of the rock and the joints, as well as the joint spacing and orientation, are taken from experimental test data and geophysical logs corresponding to the Climax Stock granitic outcrop, the geological setting of the Source Physics Experiment (SPE). Agreement between simulation results and near-field wave motion data from the SPE enables newfound understanding of the origin and extent of non-spherical motions associated with underground explosions in fractured geological media.

  17. Purification of infectious adenovirus in two hours by ultracentrifugation and tangential flow filtration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugai, Hideyo; Yamasaki, Takahito; Hirose, Megumi

    2005-06-17

    Adenoviruses are excellent vectors for gene transfer and are used extensively for high-level expression of the products of transgenes in living cells. The development of simple and rapid methods for the purification of stable infectious recombinant adenoviruses (rAds) remains a challenge. We report here a method for the purification of infectious adenovirus type 5 (Ad5) that involves ultracentrifugation on a cesium chloride gradient at 604,000g for 15 min at 4 °C and tangential flow filtration. The entire procedure requires less than two hours, and infectious Ad5 can be recovered at levels higher than 64% of the number of plaque-forming units (pfu) in the initial crude preparation of viruses. We have obtained titers of infectious purified Ad5 of 1.35 × 10¹⁰ pfu/ml and a ratio of particle titer to infectious titer of seven. The method described here allows the rapid purification of rAds for studies of gene function in vivo and in vitro, as well as the rapid purification of Ad5.

  18. 47 CFR 76.913 - Assumption of jurisdiction by the Commission.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Assumption of jurisdiction by the Commission. 76.913 Section 76.913 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Cable Rate Regulation § 76.913 Assumption of...

  19. A Scientific Analysis of Galaxy Tangential Speed of Revolution Curves III

    NASA Astrophysics Data System (ADS)

    Taff, Laurence

    2015-04-01

    I last reported on my preliminary analysis of 350+ spiral, lenticular, irregular, polar ring, ring, and dwarf elliptical galaxies' tangential speed of revolution curves [TSRCs; and not rotation (sic) curves]. I now know that the consensus opinion in the literature, for which I can find no geometrical, numerical, statistical, nor scientific testing in 2,500+ publications, that the TSRC vB(r) in the central bulges of these galaxies is a linear function of the radial distance r from the minor axis of symmetry, is false. For the majority (>98%), vB(r) is not well represented by vB(r) = ωB r (for which the unique material model is a homogeneous oblate spheroid). Discovered via a scientific analysis of the gravitational potential energy computed directly from the observational data, vB(r) is almost exactly given by vB²(r) = (ωB r)²(1 + η r²) with |η| < 10⁻² and frequently orders of magnitude less. The corresponding mass model is the simplest generalization: a two-component homoeoid. The set of possible periodic orbits based on circular trigonometric functions becomes a set of periodic orbits based on the Jacobian elliptic functions. Once again it is possible to prove that the mass-to-light ratio can neither be a constant nor follow the de Vaucouleurs R^(1/4) rule.
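
    The contrast between the rejected solid-body law and the reported generalization can be sketched directly (the parameter values below are illustrative assumptions, not fits to any galaxy):

```python
import math

def v_linear(r: float, omega: float) -> float:
    """Solid-body bulge model v(r) = omega*r (homogeneous oblate spheroid)."""
    return omega * r

def v_generalized(r: float, omega: float, eta: float) -> float:
    """Two-component homoeoid model: v^2(r) = (omega*r)^2 * (1 + eta*r^2)."""
    return omega * r * math.sqrt(1.0 + eta * r * r)

# With eta = 0 the generalized form reduces exactly to the linear law;
# for small |eta| (the abstract reports |eta| < 1e-2) the two differ mildly.
omega, eta = 50.0, 1.0e-2
for r in (0.5, 1.0, 2.0):
    print(r, v_linear(r, omega), round(v_generalized(r, omega, eta), 3))
```

    The point of the comparison is that the η r² correction, though small, is what the gravitational-potential-energy analysis detects; the corresponding mass model changes qualitatively (homoeoid rather than homogeneous spheroid).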

  20. 24 CFR 58.4 - Assumption authority.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., decision-making, and action that would otherwise apply to HUD under NEPA and other provisions of law that... environmental review, decision-making and action for programs authorized by the Native American Housing... separate decision regarding assumption of responsibilities for each of these Acts and communicate that...

  1. Effect of potential vorticity flux on the circulation in the South China Sea

    NASA Astrophysics Data System (ADS)

    Zhu, Yaohua; Sun, Junchuan; Wang, Yonggang; Wei, Zexun; Yang, Dezhou; Qu, Tangdong

    2017-08-01

    This study analyzes temperature and salinity products from the U.S. Navy Generalized Digital Environment Model. To avoid the fictitious assumption of a no-motion reference level, a P-vector inverse method is employed to derive the geostrophic velocity. A line integral of the geostrophic velocity shows evidence for the existence of a sandwiched circulation in the South China Sea (SCS), i.e., cyclonic circulation in the subsurface and deep layers and anticyclonic circulation in the intermediate layer. To reveal the factors responsible for the sandwiched circulation, we derive the potential vorticity equation based on a four-and-a-half-layer quasi-geostrophic model and apply a theoretical potential vorticity constraint to density layers. The result shows that the sandwiched circulation is largely induced by planetary potential vorticity flux through the lateral boundaries, mainly the Luzon Strait. This dynamical mechanism lies in the fact that the net potential vorticity inflow in the subsurface and deep layers leads to a positive layer-average vorticity in the SCS basin, yielding vortex stretching and a cyclonic basin-wide circulation. On the contrary, the net potential vorticity outflow in the intermediate layer induces a negative layer-average vorticity, generating an anticyclonic basin-wide circulation in the SCS. Furthermore, by illustrating the different consequences of using depth versus density layers, we clarify that density layers are essential for applying the theoretical potential vorticity constraint to the isolated deep SCS basin.
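
    The P-vector inversion itself is beyond a short example, but the geostrophic balance it ultimately diagnoses can be sketched from surface-height gradients. This is the standard textbook relation, not the paper's method, and the gradient value and latitude below are assumed for illustration:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def geostrophic_velocity(deta_dx: float, deta_dy: float, lat_deg: float):
    """Surface geostrophic velocity (u, v) from sea-surface-height
    gradients, via f*u = -g*d(eta)/dy and f*v = g*d(eta)/dx."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return (-G / f) * deta_dy, (G / f) * deta_dx

# A 10 cm SSH drop over 100 km northward, at 15 N (a typical SCS latitude):
u, v = geostrophic_velocity(0.0, -0.1 / 1.0e5, 15.0)
print(round(u, 2), v)  # eastward flow of about 0.26 m/s
```

    A line integral of such velocities around a closed contour, as in the abstract, measures the net circulation of each layer.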

  2. Exact solution for the layered convection of a viscous incompressible fluid at specified temperature gradients and tangential forces on the free boundary

    NASA Astrophysics Data System (ADS)

    Burmasheva, N. V.; Prosviryakov, E. Yu.

    2017-12-01

    A new exact analytical solution of the system of thermal convection equations in the Boussinesq approximation describing layered flows of an incompressible viscous fluid is obtained. A fluid flow in an infinite layer is considered. Convection in the fluid is induced by tangential stresses specified on the upper non-deformable boundary. At the fixed lower boundary, the no-slip condition is satisfied. Temperature corrections are specified on both boundaries of the fluid layer. The possibility of stratification of the physical fields is investigated.

  3. A statistical study of magnetopause structures: Tangential versus rotational discontinuities

    NASA Astrophysics Data System (ADS)

    Chou, Y.-C.; Hau, L.-N.

    2012-08-01

    A statistical study of the structure of Earth's magnetopause is carried out by analyzing two years of AMPTE/IRM plasma and magnetic field data. The analyses are based on the minimum variance analysis (MVA), the deHoffmann-Teller (HT) frame analysis, and the Walén relation. A total of 328 magnetopause crossings are identified, and error estimates associated with the MVA and HT frame analyses are performed for each case. In 142 of the 328 events, both the MVA and HT frame analyses yield high-quality results, which are classified as either tangential-discontinuity (TD) or rotational-discontinuity (RD) structures based solely on the Walén relation: events with SWA ≤ 0.4 (SWA ≥ 0.5) are classified as TD (RD), and the rest (0.4 < SWA < 0.5) are classified as "uncertain," where SWA denotes the Walén slope. With this criterion, 84% of the 142 events are TDs, 12% are RDs, and 4% are uncertain. A large portion of the TD events exhibit a finite normal magnetic field component Bn but insignificant flow relative to the Alfvén velocity in the HT frame. Two-dimensional Grad-Shafranov reconstructions of forty selected TD and RD events show that single or multiple X-lines accompanied by magnetic islands are a common feature of the magnetopause current layer. A survey plot of the HT velocity associated with TD structures projected onto the magnetopause shows that the flow is diverted at the subsolar point and accelerated toward the dawn and dusk flanks.
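
    The Walén-slope criterion described above reduces to a simple three-way threshold test; a minimal sketch (the function name and interface are illustrative, not from the paper):

```python
def classify_crossing(swa: float) -> str:
    """Classify a magnetopause crossing by its Walén slope (SWA).

    Thresholds follow the criterion in the abstract:
    SWA <= 0.4 -> TD, SWA >= 0.5 -> RD, otherwise uncertain.
    """
    if swa <= 0.4:
        return "TD"   # tangential discontinuity
    if swa >= 0.5:
        return "RD"   # rotational discontinuity
    return "uncertain"

# Example: three crossings with different Walén slopes
print([classify_crossing(s) for s in (0.1, 0.45, 0.9)])  # → ['TD', 'uncertain', 'RD']
```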

  4. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities for finding common ground for moral discourse among people from different traditions and concludes that they are futile. In this paper I argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is, by definition, no possibility of a content-full moral discourse among moral strangers. That is, there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow them to reach a moral agreement, and then concluding that content-full morality is impossible among moral strangers. I argue that treating traditions as solid, immutable structures that insulate people within their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. Turning to the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and point to the possibility of other ways of attaining moral knowledge besides the foundationalist one. I then examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics and show how these assumptions have shaped his critique of the alternatives.

  5. Exploring the Influence of Ethnicity, Age, and Trauma on Prisoners' World Assumptions

    ERIC Educational Resources Information Center

    Gibson, Sandy

    2011-01-01

    In this study, the author explores world assumptions of prisoners, how these assumptions vary by ethnicity and age, and whether trauma history affects world assumptions. A random sample of young and old prisoners, matched for prison location, was drawn from the New Jersey Department of Corrections prison population. Age and ethnicity had…

  6. Lesbian health and the assumption of heterosexuality: an organizational perspective.

    PubMed

    Daley, Andrea

    2003-01-01

    This study used a qualitative research design to explore hospital policies and practices and the assumption of female heterosexuality. The assumption of heterosexuality is a product of discursive practices that normalize heterosexuality and individualize lesbian sexual identities. Literature indicates that the assumption of female heterosexuality is implicated in both the invisibility and marked visibility of lesbians as service users. This research adds to existing literature by shifting the focus of study from individual to organizational practices and, in so doing, seeks to uncover hidden truths, explore the functional power of language, and allow for the discovery of what we know and--equally important--how we know.

  7. Critically Challenging Some Assumptions in HRD

    ERIC Educational Resources Information Center

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  8. Mexican-American Cultural Assumptions and Implications.

    ERIC Educational Resources Information Center

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  9. Design of tangential multi-energy soft x-ray camera for NSTX-U

    NASA Astrophysics Data System (ADS)

    Delgado-Aparicio, Luis F.; Maddox, J.; Pablant, N.; Hill, K.; Bitter, M.; Stratton, B.; Efthimion, Phillip

    2016-10-01

    For tokamaks and future facilities to operate safely in high-pressure long-pulse discharges, it is imperative to address key issues associated with impurity sources, core transport, and high-Z impurity accumulation. Multi-energy SXR imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, and ΔZeff). A new tangential multi-energy soft x-ray pinhole camera is being designed to sample the continuum and line emission from low-, medium-, and high-Z impurities. This new x-ray diagnostic will be installed on an equatorial midplane port of the NSTX-U tokamak and will measure the radial structure of the photon emissivity with a radial resolution below 1 cm at a 500 Hz frame rate and a photon-energy resolution of 500 eV. The layout and expected response of the new system will be shown for different plasma conditions and impurity concentrations. The effect of toroidal rotation driving poloidal asymmetries in the core radiation is also addressed. This effort is designed to contribute to the near- and long-term highest-priority research goals for NSTX-U, which will integrate non-inductive operation at reduced collisionality, long energy confinement times, and a transition to a divertor solution with metal walls.

  10. Flow-Based Assembly of Layer-by-Layer Capsules through Tangential Flow Filtration.

    PubMed

    Björnmalm, Mattias; Roozmand, Ali; Noi, Ka Fung; Guo, Junling; Cui, Jiwei; Richardson, Joseph J; Caruso, Frank

    2015-08-25

    Layer-by-layer (LbL) assembly on nano- and microparticles is of interest for a range of applications, including catalysis, optics, sensors, and drug delivery. One current limitation is the standard use of manual, centrifugation-based (pellet/resuspension) methods to perform the layering steps, which can make scalable, highly controllable, and automatable production difficult to achieve. Here, we develop a fully flow-based technique using tangential flow filtration (TFF) for LbL assembly on particles. We demonstrate that multilayered particles and capsules with different sizes (from micrometers to submicrometers in diameter) can be assembled on different templates (e.g., silica and calcium carbonate) using several polymers (e.g., poly(allylamine hydrochloride), poly(styrenesulfonate), and poly(diallyldimethylammonium chloride)). The full system only contains fluidic components routinely used (and automated) in industry, such as pumps, tanks, valves, and tubing in addition to the TFF filter modules. Using the TFF LbL system, we also demonstrate the centrifugation-free assembly, including core dissolution, of drug-loaded capsules. The well-controlled, integrated, and automatable nature of the TFF LbL system provides scientific, engineering, and practical processing benefits, making it valuable for research environments and potentially useful for translating LbL assembled particles into diverse applications.

  11. Being vs. Appearing Socially Uninterested: Challenging Assumptions about Social Motivation in Autism.

    PubMed

    Jaswal, Vikram K; Akhtar, Nameera

    2018-06-19

    Progress in psychological science can be limited by a number of factors, not least of which are the starting assumptions of scientists themselves. We believe that some influential accounts of autism rest on a questionable assumption that many of its behavioral characteristics indicate a lack of social interest-an assumption that is flatly contradicted by the testimony of many autistic people themselves. In this paper, we challenge this assumption by describing alternative explanations for four such behaviors: (a) low levels of eye contact, (b) infrequent pointing, (c) motor stereotypies, and (d) echolalia. The assumption that autistic people's unusual behaviors indicate diminished social motivation has had profound and often negative effects on the ways they are studied and treated. We argue that understanding and supporting autistic individuals will require interrogating this assumption, taking autistic testimony seriously, considering alternative explanations for unusual behaviors, and investigating unconventional-even idiosyncratic-ways that autistic individuals may express their social interest. These steps are crucial, we believe, for creating a more accurate, humane, and useful science of autism.

  12. Causal Mediation Analysis: Warning! Assumptions Ahead

    ERIC Educational Resources Information Center

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  13. SU-F-T-414: Mathematical Formulation of Gantry Starting Angle for Right Medial Tangential Arc in Left Intact Partial Breast Irradiation Using Volumetric Modulated Arc Therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, U; Sarkar, B; Kaur, H

    Purpose: To choose an appropriate gantry starting angle for partial left breast irradiation using volumetric modulated arc therapy (VMAT). Methods: A random patient with left breast carcinoma was selected for this study. The slice selected for the mathematical formulation had the maximum breast thickness and the maximum medial and lateral tangential distance. An appropriate isocenter was then chosen on that CT slice. The distances between the various points were measured with the measuring tool in Monaco 5.00.04. Using trigonometric relations, a final equation was derived relating the gantry start angle, the isocenter location, and the tissue thickness. Results: The final equation for the gantry start angle of the right medial tangential arc is: Starting angle = 270° + tan^(−1)[sin θ / (x_1/x_2 + cos θ)]. The equation was tested on 10 cases and found appropriate in all of them. Conclusion: The gantry starting angle for partial arc irradiation depends on the breast thickness, the distance between the medial and lateral tangents, and the isocenter location.
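
    The abstract's relation is a one-line trigonometric computation. A minimal sketch follows; note that the garbled term in the record is read here as the ratio x_1/x_2 of two isocenter-related distances, which is an assumption:

```python
import math

def gantry_start_angle(theta_deg: float, x1: float, x2: float) -> float:
    """Gantry start angle (degrees) for the right medial tangential arc.

    Implements start = 270° + atan( sin(theta) / (x1/x2 + cos(theta)) ).
    The interpretation of x1 and x2 as isocenter-related distances is an
    assumption based on the reconstructed formula.
    """
    theta = math.radians(theta_deg)
    # atan2 keeps the correct quadrant if the denominator goes negative
    return 270.0 + math.degrees(math.atan2(math.sin(theta),
                                           x1 / x2 + math.cos(theta)))

# Example: theta = 0 gives no angular offset from 270 degrees
print(gantry_start_angle(0.0, 1.0, 1.0))  # → 270.0
```

    For x1 = x2 the expression collapses to 270° + θ/2 by the half-angle identity tan(θ/2) = sin θ / (1 + cos θ), which is a quick sanity check on the formula.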

  14. Late Pleistocene sequence architecture on the geostrophic current-dominated southwest margin of the Ulleung Basin, East Sea

    NASA Astrophysics Data System (ADS)

    Choi, Dong-Lim; Shin, Dong-Hyeok; Kum, Byung-Cheol; Jang, Seok; Cho, Jin-Hyung; Jou, Hyeong-Tae; Jang, Nam-Do

    2018-06-01

    High-resolution multichannel seismic data were collected to identify depositional sequences on the southwestern shelf of the Ulleung Basin, where a unidirectional ocean current is dominant at water depths exceeding 130 m. Four aggradational stratigraphic sequences with a 100,000-year cycle were recognized since marine isotope stage (MIS) 10. These sequences consist only of lowstand systems tracts (LSTs) and falling-stage systems tracts (FSSTs). Prograding wedge-shaped deposits are present in the LSTs near the shelf break. Oblique progradational clinoforms of forced regressive deposits are present in the FSSTs on the outer continental shelf. Each FSST has non-uniform forced regressional stratal geometries, reflecting that the origins of sediments in each depositional sequence changed when sea level was falling. Slump deposits are characteristically developed in the upper layer of the FSSTs, and this was used as evidence to distinguish the sequence boundaries. The subsidence rates around the shelf break reached as much as 0.6 mm/year since MIS 10, which contributed to the well-preserved depositional sequence. During the Quaternary sea-level change, the water depth in the Korea Strait declined and the intensity of the Tsushima Current flowing near the bottom of the inner continental shelf increased. This resulted in greater erosion of sediments that were delivered to the outer continental shelf, which was the main cause of sediment deposition on the deep, low-angled outer shelf. Therefore, a depositional sequence formation model that consists of only FSSTs and LSTs, excluding highstand systems tracts (HSTs) and transgressive systems tracts (TSTs), best explains the depositional sequence beneath this shelf margin dominated by a geostrophic current.

  16. Assumptions to the annual energy outlook 1999 : with projections to 2020

    DOT National Transportation Integrated Search

    1998-12-16

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 1999 (AEO99), including general features of the model structure, assumptions concerning energy ...

  17. Assumptions to the annual energy outlook 2000 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-01-01

    This paper presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2000 (AEO2000), including general features of the model structure, assumptions concerning energ...

  18. Assumptions to the annual energy outlook 2001 : with projections to 2020

    DOT National Transportation Integrated Search

    2000-12-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2001 (AEO2001), including general features of the model structure, assumptions concerning ener...

  19. Assumptions for the annual energy outlook 2003 : with projections to 2025

    DOT National Transportation Integrated Search

    2003-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook 2003 (AEO2003), including general features of the model structure, assumptions concerning ener...

  20. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  1. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  2. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  3. 29 CFR 4044.53 - Mortality assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... assumptions. (a) General rule. Subject to paragraph (b) of this section (regarding certain death benefits...), and (g) of this section to value benefits under § 4044.52. (b) Certain death benefits. If an annuity for one person is in pay status on the valuation date, and if the payment of a death benefit after the...

  4. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    PubMed

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. They lead to using linear regression when it is inappropriate, and to employing alternative procedures with less statistical power when it is unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
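
    The misconception flagged here can be illustrated with a small simulation (a sketch, not from the paper): a strongly skewed predictor produces a skewed outcome variable, yet the OLS residuals remain approximately normal, and it is the errors, not the variables, that the normality assumption concerns.

```python
import random
import statistics

random.seed(0)

# Simulate y = 2 + 3x + e with a skewed predictor but normal errors.
n = 500
x = [random.expovariate(1.0) for _ in range(n)]   # skewed (exponential) predictor
e = [random.gauss(0.0, 1.0) for _ in range(n)]    # normal errors
y = [2.0 + 3.0 * xi + ei for xi, ei in zip(x, e)]

# Ordinary least squares by hand (slope and intercept)
mx, my = statistics.fmean(x), statistics.fmean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# Residuals should look normal even though x (and hence y) is skewed.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

def skewness(v):
    """Sample skewness: third standardized moment."""
    m = statistics.fmean(v)
    s = statistics.pstdev(v)
    return sum(((vi - m) / s) ** 3 for vi in v) / len(v)

print(round(skewness(x), 2), round(skewness(resid), 2))
```

    The predictor's skewness comes out near 2 (the exponential distribution's skewness), while the residuals' skewness stays near 0: checking normality of x or y instead of the residuals would wrongly reject the model.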

  5. 39 Questionable Assumptions in Modern Physics

    NASA Astrophysics Data System (ADS)

    Volk, Greg

    2009-03-01

    The growing body of anomalies in new energy, low energy nuclear reactions, astrophysics, atomic physics, and entanglement, combined with the failure of the Standard Model and string theory to predict many of the most basic fundamental phenomena, all point to a need for major new paradigms. Not Band-Aids, but revolutionary new ways of conceptualizing physics, in the spirit of Thomas Kuhn's The Structure of Scientific Revolutions. This paper identifies a number of long-held, but unproven assumptions currently being challenged by an increasing number of alternative scientists. Two common themes, both with venerable histories, keep recurring in the many alternative theories being proposed: (1) Mach's Principle, and (2) toroidal, vortex particles. Matter-based Mach's Principle differs from both space-based universal frames and observer-based Einsteinian relativity. Toroidal particles, in addition to explaining electron spin and the fundamental constants, satisfy the basic requirement of Gauss's misunderstood B Law, that motion itself circulates. Though a comprehensive theory is beyond the scope of this paper, it will suggest alternatives to the long list of assumptions in context.

  6. Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate

    NASA Astrophysics Data System (ADS)

    Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah

    2007-11-01

    This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.

  7. Measures of cultural competence: examining hidden assumptions.

    PubMed

    Kumaş-Tan, Zofia; Beagan, Brenda; Loppie, Charlotte; MacLeod, Anna; Frank, Blye

    2007-06-01

    The authors critically examined the quantitative measures of cultural competence most commonly used in medicine and in the health professions, to identify underlying assumptions about what constitutes competent practice across social and cultural diversity. A systematic review of approximately 20 years of literature listed in PubMed, the Cumulative Index of Nursing and Allied Health Literature, Social Services Abstracts, and the Educational Resources Information Center identified the most frequently used cultural competence measures, which were then thematically analyzed following a structured analytic guide. Fifty-four instruments were identified; the 10 most widely used were analyzed closely, identifying six prominent assumptions embedded in the measures. In general, these instruments equate culture with ethnicity and race and conceptualize culture as an attribute possessed by the ethnic or racialized Other. Cultural incompetence is presumed to arise from a lack of exposure to and knowledge of the Other, and also from individual biases, prejudices, and acts of discrimination. Many instruments assume that practitioners are white and Western and that greater confidence and comfort among practitioners signify increased cultural competence. Existing measures embed highly problematic assumptions about what constitutes cultural competence. They ignore the power relations of social inequality and assume that individual knowledge and self-confidence are sufficient for change. Developing measures that assess cultural humility and/or assess actual practice are needed if educators in the health professions and health professionals are to move forward in efforts to understand, teach, practice, and evaluate cultural competence.

  8. Work and Family: Testing the Assumptions.

    DTIC Science & Technology

    1982-08-01

    "It changes one's relationship to the company. One either must leave, or stay and be unhappy." This assumption defines career success as residing...

  9. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    PubMed Central

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. They lead to using linear regression when it is inappropriate, and to employing alternative procedures with less statistical power when it is unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971

  10. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    ERIC Educational Resources Information Center

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  11. Evolution of Requirements and Assumptions for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including teams' internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits on acceptable values, but a future vehicle may be constrained in other ways and may select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may differ for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explains the driving scenarios, constraints, or other issues behind them.

  12. A Test of the Dimensionality Assumptions of Rotter's Internal-External Scale

    ERIC Educational Resources Information Center

    Klockars, Alan J.; Varnum, Susan W.

    1975-01-01

    Examined two assumptions about the dimensionality of Rotter's Internal-External (I-E) scale: first, the bipolarity of the two statements within each item pair; second, the unidimensionality of the overall construct. Both assumptions regarding Rotter's I-E Scale were found untenable. (Author/BJG)

  13. Investigating the Assumptions of Uses and Gratifications Research

    ERIC Educational Resources Information Center

    Lometti, Guy E.; And Others

    1977-01-01

    Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)

  14. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    PubMed

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA are the framing assumptions, which define the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to these framing assumptions is often ignored. A novel method based on fuzzy cognitive maps (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case study. Framing assumptions for the case study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality, and mould/humidity were identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage.
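
    A fuzzy cognitive map iterates a weighted causal graph to a steady state. The sketch below is a generic synchronous FCM update with a sigmoid squashing function; the concepts, weights, and update rule are illustrative assumptions, not the paper's actual model:

```python
import math

def fcm_step(state, weights):
    """One synchronous FCM update: new_i = sigmoid(sum_j w[j][i] * state_j)."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    n = len(state)
    return [sigmoid(sum(weights[j][i] * state[j] for j in range(n)))
            for i in range(n)]

def fcm_run(state, weights, iters=50):
    """Iterate the map; for modest weights it settles to a fixed point."""
    for _ in range(iters):
        state = fcm_step(state, weights)
    return state

# Hypothetical concepts: [air-tightness, ventilation, indoor air quality, health].
# Signs encode assumed causal influences (e.g. a tighter envelope suppresses
# ventilation); the numeric weights are arbitrary for illustration.
W = [
    [0.0, -0.6,  0.0,  0.0],   # air-tightness -> less ventilation
    [0.0,  0.0,  0.7,  0.0],   # ventilation -> better indoor air quality
    [0.0,  0.0,  0.0,  0.8],   # indoor air quality -> better health
    [0.0,  0.0,  0.0,  0.0],   # health has no outgoing edges here
]
print([round(v, 2) for v in fcm_run([1.0, 0.5, 0.5, 0.5], W)])
```

    Reading off the steady state shows which concepts end up most displaced from neutral (0.5), which is the sense in which an FCM identifies the most influential variables in a framing.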

  15. Molecular-dynamics study on characteristics of energy and tangential momentum accommodation coefficients

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hiroki; Matsuda, Yu; Niimi, Tomohide

    2017-07-01

    Gas-surface interaction is studied by the molecular dynamics method to qualitatively investigate the characteristics of accommodation coefficients. A large number of trajectories of gas molecules colliding with and scattering from a surface are statistically analyzed to calculate the energy (thermal) accommodation coefficient (EAC) and the tangential momentum accommodation coefficient (TMAC). To reflect experimental measurements of the accommodation coefficients, the incident velocities are sampled stochastically to represent a bulk condition. The accommodation coefficients for noble gases agree qualitatively with experimental values. To investigate the characteristics of these accommodation coefficients in detail, the gas-surface interaction is studied parametrically by varying, one by one, the molecular mass of the gas, the gas-surface interaction strength, and the molecular size of the gas. EAC increases with every parameter, while TMAC increases with interaction strength but decreases with molecular mass and molecular size. Thus, contradictory results among experimentally measured TMAC values for noble gases could stem from differences in the surface conditions employed in the measurements, which shift the balance among the effective parameters of molecular mass, interaction strength, and molecular size through surface roughness and/or adsorbed molecules. The accommodation coefficients for a thermo-fluid field with both a temperature difference between gas and surface and a bulk flow are also investigated.
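The two coefficients named in this record have standard mean-value definitions: EAC compares incident and reflected energies against full accommodation to the wall, and TMAC compares incident and reflected tangential momenta. A minimal sketch, with synthetic samples standing in for MD trajectory data (the numbers are constructed, not simulation results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic incident/reflected samples standing in for MD trajectory data.
E_in = rng.normal(6.0, 0.5, 10_000)      # incident kinetic energies
E_out = 0.7 * E_in + 0.3 * 4.0           # partial accommodation toward wall energy E_wall = 4.0
p_t_in = rng.normal(1.0, 0.1, 10_000)    # incident tangential momenta
p_t_out = 0.4 * p_t_in                   # 60% of tangential momentum accommodated

def eac(E_in, E_out, E_wall):
    """Energy accommodation: 1 = full accommodation to the wall, 0 = specular."""
    return (E_in.mean() - E_out.mean()) / (E_in.mean() - E_wall)

def tmac(p_in, p_out):
    """Tangential momentum accommodation (fully diffuse reflection gives 1)."""
    return (p_in.mean() - p_out.mean()) / p_in.mean()

print(f"EAC  = {eac(E_in, E_out, 4.0):.2f}")   # ~0.30 by construction
print(f"TMAC = {tmac(p_t_in, p_t_out):.2f}")   # ~0.60 by construction
```

The parametric trends in the abstract correspond to these two ratios moving in the same or opposite directions as mass, interaction strength, and molecular size are varied.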

  16. The Emperor's sham - wrong assumption that sham needling is sham.

    PubMed

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  17. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    PubMed

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
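One widely used between-sample method whose core assumption this article highlights (that most genes are not differentially expressed) is median-of-ratios normalization. A minimal sketch of that idea, not the article's own code:

```python
import numpy as np

def median_of_ratios(counts):
    """DESeq-style size factors: genes x samples count matrix in, one factor
    per sample out. Valid only if most genes are NOT differentially expressed,
    which is exactly the assumption the article stresses must be checked."""
    counts = np.asarray(counts, dtype=float)
    log_geo_means = np.log(counts).mean(axis=1)   # per-gene log geometric mean
    keep = np.isfinite(log_geo_means)             # drop genes with any zero count
    ratios = np.log(counts[keep]) - log_geo_means[keep, None]
    return np.exp(np.median(ratios, axis=0))      # median ratio per sample

# Toy matrix: sample 2 is sequenced twice as deeply, so its factor should be ~2x.
counts = np.array([[10, 20],
                   [30, 60],
                   [ 5, 10],
                   [100, 200]])
factors = median_of_ratios(counts)
print(factors / factors[0])   # relative size factors, approximately [1.0, 2.0]
```

When the majority-unchanged assumption fails, the median ratio no longer tracks sequencing depth, which is the mechanism behind the downstream false positives the abstract describes.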

  18. Water resources of Assumption Parish, Louisiana

    USGS Publications Warehouse

    Prakken, Lawrence B.; Lovelace, John K.

    2013-01-01

    Information concerning the availability, use, and quality of water in Assumption Parish, Louisiana, is critical for proper water-supply management. The purpose of this fact sheet is to present information that can be used by water managers, parish residents, and others for management of this vital resource. Information on the availability, past and current use, use trends, and water quality from groundwater and surface-water sources in the parish is presented. Previously published reports and data stored in the U.S. Geological Survey’s National Water Information System (http://waterdata.usgs.gov/nwis) are the primary sources of the information presented here. In 2010, about 21.4 million gallons per day (Mgal/d) of water were withdrawn in Assumption Parish, including about 12.4 Mgal/d from surface-water sources and 9.03 Mgal/d from groundwater sources. Withdrawals for industrial use accounted for about 16.4 Mgal/d or 76 percent of the total water withdrawn. Other categories of use included public supply, rural domestic, livestock, general irrigation, and aquaculture. Water-use data collected at 5-year intervals from 1960 to 2010 indicated that water withdrawals peaked in 2000 at about 29.7 Mgal/d.
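The quoted 2010 figures are internally consistent and can be cross-checked in a few lines (a quick arithmetic sketch; the small gap against the quoted "about 76 percent" is rounding in the published inputs):

```python
# Cross-checking the 2010 withdrawal figures quoted in this fact sheet summary.
surface, ground = 12.4, 9.03   # Mgal/d, surface water and groundwater
industrial = 16.4              # Mgal/d, industrial use

total = surface + ground
share = industrial / total
print(f"total withdrawals ~= {total:.1f} Mgal/d")   # matches the quoted 21.4 Mgal/d
print(f"industrial share  ~= {share:.0%}")          # ~77%, vs. the quoted "about 76 percent"
```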

  19. 40 CFR 144.66 - State assumption of responsibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal...

  20. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  1. On the assumptions underlying milestoning.

    PubMed

    Vanden-Eijnden, Eric; Venturoli, Maddalena; Ciccotti, Giovanni; Elber, Ron

    2008-11-07

    Milestoning is a procedure to compute the time evolution of complicated processes such as barrier crossing events or long diffusive transitions between predefined states. Milestoning reduces the dynamics to transition events between intermediates (the milestones) and computes the local kinetic information to describe these transitions via short molecular dynamics (MD) runs between the milestones. The procedure relies on the ability to reinitialize MD trajectories on the milestones to get the right kinetic information about the transitions. It also rests on the assumptions that the transition events between successive milestones and the time lags between these transitions are statistically independent. In this paper, we analyze the validity of these assumptions. We show that sets of optimal milestones exist, i.e., sets such that successive transitions are indeed statistically independent. The proof of this claim relies on the results of transition path theory and uses the isocommittor surfaces of the reaction as milestones. For systems in the overdamped limit, we also obtain the probability distribution to reinitialize the MD trajectories on the milestones, and we discuss why this distribution is not available in closed form for systems with inertia. We explain why the time lags between transitions are not statistically independent even for optimal milestones, but we show that working with such milestones allows one to compute mean first passage times between milestones exactly. Finally, we discuss some practical implications of our results and we compare milestoning with Markov state models in view of our findings.
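The abstract's point that optimal milestones allow exact mean first passage times reduces, in the discrete picture, to a linear solve over the milestone network. A toy sketch with assumed transition probabilities and lag times (none of these numbers come from the paper):

```python
import numpy as np

# Toy milestone network: transition probabilities between milestones A, B, C
# and mean lag times spent on each milestone before the next transition.
P = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])   # C is the target (absorbing) milestone
tau = np.array([2.0, 3.0, 0.0])  # mean lag times (arbitrary units)

# MFPT to the target satisfies t = tau + Q @ t over the transient milestones,
# where Q restricts P to transitions among the transient states A and B.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, tau[:2])
print(f"MFPT from A to C = {t[0]:.2f}")   # solves t_A = 2 + 0.8 t_B, t_B = 3 + 0.5 t_A
```

This is only the bookkeeping step; the substance of milestoning, as the abstract explains, is whether the transition statistics feeding P and tau are genuinely independent, which holds when the milestones are isocommittor surfaces.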

  2. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    ERIC Educational Resources Information Center

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  3. Assessing Gaussian Assumption of PMU Measurement Error Using Field Data

    DOE PAGES

    Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...

    2017-10-13

    Gaussian PMU measurement error has been assumed in many power system applications, such as state estimation, oscillatory-mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
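The letter's combination of a distributional stability property with redundant measurements can be illustrated roughly as follows: differencing two redundant channels cancels the underlying signal, and if both error terms were Gaussian the difference would be Gaussian too. The differencing scheme, synthetic error models, and kurtosis check below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "redundant" channels observing the same quantity (synthetic data).
truth = np.sin(np.linspace(0.0, 10.0, 5000))
m1 = truth + rng.normal(0.0, 0.01, truth.size)    # Gaussian error channel
m2 = truth + rng.laplace(0.0, 0.01, truth.size)   # heavy-tailed error channel

# Differencing cancels the signal, leaving only the combined error; a Gaussian
# error model implies the difference is Gaussian (stability under addition),
# so its excess kurtosis should be near zero.
diff = m1 - m2
z = (diff - diff.mean()) / diff.std()
excess_kurtosis = np.mean(z**4) - 3.0
print(f"excess kurtosis = {excess_kurtosis:.2f}")  # well above 0 flags non-Gaussian error
```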

  4. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of

  5. Publish unexpected results that conflict with assumptions

    USDA-ARS?s Scientific Manuscript database

    Some widely held scientific assumptions have been discredited, whereas others are just inappropriate for many applications. Sometimes, a widely-held analysis procedure takes on a life of its own, forgetting the original purpose of the analysis. The peer-reviewed system makes it difficult to get a pa...

  6. Single pass tangential flow filtration to debottleneck downstream processing for therapeutic antibody production.

    PubMed

    Dizon-Maspat, Jemelle; Bourret, Justin; D'Agostini, Anna; Li, Feng

    2012-04-01

    As the therapeutic monoclonal antibody (mAb) market continues to grow, optimizing production processes is becoming more critical in improving efficiencies and reducing cost-of-goods in large-scale production. With the recent trends of increasing cell culture titers from upstream process improvements, downstream capacity has become the bottleneck in many existing manufacturing facilities. Single Pass Tangential Flow Filtration (SPTFF) is an emerging technology, which is potentially useful in debottlenecking downstream capacity, especially when the pool tank size is a limiting factor. It can be integrated as part of an existing purification process, after a column chromatography step or a filtration step, without introducing a new unit operation. In this study, SPTFF technology was systematically evaluated for reducing process intermediate volumes from 2× to 10× with multiple mAbs, and the impact of SPTFF on product quality and process yield was analyzed. Finally, the potential fit into the typical 3-column industry platform antibody purification process and its implementation in a commercial scale manufacturing facility were also evaluated. Our data indicate that using SPTFF to concentrate protein pools is a simple, flexible, and robust operation, which can be implemented at various scales to improve antibody purification process capacity. Copyright © 2011 Wiley Periodicals, Inc.

  7. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  8. Phenol and Benzoate Metabolism by Pseudomonas putida: Regulation of Tangential Pathways

    PubMed Central

    Feist, Carol F.; Hegeman, G. D.

    1969-01-01

    Catechol occurs as an intermediate in the metabolism of both benzoate and phenol by strains of Pseudomonas putida. During growth at the expense of benzoate, catechol is cleaved ortho (1,2-oxygenase) and metabolized via the β-ketoadipate pathway; during growth at the expense of phenol or cresols, the catechol or substituted catechols formed are metabolized by a separate pathway following meta (2,3-oxygenase) cleavage of the aromatic ring of catechol. It is possible to explain the mutually exclusive occurrence of the meta and ortho pathway enzymes in phenol- and benzoate-grown cells of P. putida on the basis of differences in the mode of regulation of these two pathways. By use of both nonmetabolizable inducers and blocked mutants, gratuitous synthesis of some of the meta pathway enzymes was obtained. All four enzymes of the meta pathway are induced by the primary substrate, cresol or phenol, or its analogue. Three enzymes of the ortho pathway that catalyze the conversion of catechol to β-ketoadipate enol-lactone are induced by cis,cis-muconate, produced from catechol by 1,2-oxygenase-mediated cleavage. Observations on the differences in specificity of induction and function of the two pathways suggest that they are not really either tangential or redundant. The meta pathway serves as a general mechanism for catabolism of various alkyl derivatives of catechol derived from substituted phenolic compounds. The ortho pathway is more specific and serves primarily in the catabolism of precursors of catechol and catechol itself. PMID:5354952

  9. Dual role for DOCK7 in tangential migration of interneuron precursors in the postnatal forebrain.

    PubMed

    Nakamuta, Shinichi; Yang, Yu-Ting; Wang, Chia-Lin; Gallo, Nicholas B; Yu, Jia-Ray; Tai, Yilin; Van Aelst, Linda

    2017-12-04

    Throughout life, stem cells in the ventricular-subventricular zone generate neuroblasts that migrate via the rostral migratory stream (RMS) to the olfactory bulb, where they differentiate into local interneurons. Although progress has been made toward identifying extracellular factors that guide the migration of these cells, little is known about the intracellular mechanisms that govern the dynamic reshaping of the neuroblasts' morphology required for their migration along the RMS. In this study, we identify DOCK7, a member of the DOCK180-family, as a molecule essential for tangential neuroblast migration in the postnatal mouse forebrain. DOCK7 regulates the migration of these cells by controlling both leading process (LP) extension and somal translocation via distinct pathways. It controls LP stability/growth via a Rac-dependent pathway, likely by modulating microtubule networks while also regulating F-actin remodeling at the cell rear to promote somal translocation via a previously unrecognized myosin phosphatase-RhoA-interacting protein-dependent pathway. The coordinated action of both pathways is required to ensure efficient neuroblast migration along the RMS. © 2017 Nakamuta et al.

  10. Dual role for DOCK7 in tangential migration of interneuron precursors in the postnatal forebrain

    PubMed Central

    Yang, Yu-Ting; Yu, Jia-Ray; Tai, Yilin

    2017-01-01

    Throughout life, stem cells in the ventricular–subventricular zone generate neuroblasts that migrate via the rostral migratory stream (RMS) to the olfactory bulb, where they differentiate into local interneurons. Although progress has been made toward identifying extracellular factors that guide the migration of these cells, little is known about the intracellular mechanisms that govern the dynamic reshaping of the neuroblasts’ morphology required for their migration along the RMS. In this study, we identify DOCK7, a member of the DOCK180-family, as a molecule essential for tangential neuroblast migration in the postnatal mouse forebrain. DOCK7 regulates the migration of these cells by controlling both leading process (LP) extension and somal translocation via distinct pathways. It controls LP stability/growth via a Rac-dependent pathway, likely by modulating microtubule networks while also regulating F-actin remodeling at the cell rear to promote somal translocation via a previously unrecognized myosin phosphatase–RhoA–interacting protein-dependent pathway. The coordinated action of both pathways is required to ensure efficient neuroblast migration along the RMS. PMID:29089377

  11. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  12. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  13. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  14. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  15. 29 CFR 4231.10 - Actuarial calculations and assumptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...

  16. Scenarios Based on Shared Socioeconomic Pathway Assumptions

    NASA Astrophysics Data System (ADS)

    Edmonds, J.

    2013-12-01

    A set of new scenarios is being developed by the international scientific community as part of a larger program that was articulated in Moss, et al. (2009), published in Nature. A long series of meetings including climate researchers drawn from the climate modeling, impacts, adaptation and vulnerability (IAV) and integrated assessment modeling (IAM) communities has led to the development of a set of five Shared Socioeconomic Pathways (SSPs), which define the state of human and natural societies at a macro scale over the course of the 21st century without regard to climate mitigation or change. SSPs were designed to explore a range of possible futures consistent with greater or lesser challenges to mitigation and challenges to adaptation. They include a narrative storyline and a set of quantified measures (e.g., demographic and economic profiles) that define the high-level state of society as it evolves over the 21st century under the assumption of no significant climate feedback. SSPs can be used to develop quantitative scenarios of human Earth systems using IAMs. IAMs produce information about greenhouse gas emissions, energy systems, the economy, agriculture and land use. Each set of SSPs will have a different human Earth system realization for each IAM. Five groups from the IAM community have begun to explore the implications of SSP assumptions for emissions, energy, economy, agriculture and land use. We report the quantitative results of initial experiments from those groups. A major goal of the Moss, et al. strategy was to enable the use of CMIP5 climate model ensemble products for IAV research. CMIP5 climate scenarios used four Representative Concentration Pathway (RCP) scenarios, defined in terms of radiative forcing in the year 2100: 2.6, 4.5, 6.0, and 8.5 W m-2. There is no reason to believe that the SSPs will generate year 2100 levels of radiative forcing that correspond to the four RCP levels, though it is important that at least one SSP produce a

  17. Extracurricular Business Planning Competitions: Challenging the Assumptions

    ERIC Educational Resources Information Center

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  18. Modeling Bottom Sediment Erosion Process by Swirling the Flow by Tangential Supply of Oil in the Tank

    NASA Astrophysics Data System (ADS)

    Nekrasov, V. O.

    2016-10-01

    The article presents a statistical analysis of the number and territorial distribution of oil tanks operating in the Tyumen region that are intended for the reception, storage, and distribution of commercial oil through trunk pipelines. It describes the working principle of a new device for eroding and preventing bottom sediment by tangential supply of the oil pumped into the reservoir. The most significant similarity criteria are identified for modeling the rotational flows that strongly influence the structure of the circulating oil flow in the tank during operation of the device. The linear velocity of a point on the surface is characterized as a function of radius for circular motion of the oil in the tank, and on the basis of this dependence a formula is given for the total kinetic energy of rotational motion of the combined volume of oil and asphalt-resin-paraffin deposits in the reservoir.
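The radial velocity dependence described here integrates to a closed-form rotational kinetic energy. A sketch with illustrative tank parameters (assumed for the example, not values from the article):

```python
import numpy as np

# Illustrative tank parameters (assumed; not values from the article).
R = 10.0       # tank radius, m
H = 12.0       # oil depth, m
rho = 870.0    # oil density, kg/m^3
omega = 0.05   # angular velocity of the swirling flow, rad/s

# Solid-body rotation: linear velocity of a point grows linearly with radius,
# v(r) = omega * r.  Integrating (1/2) rho v^2 over cylindrical shells gives
# E = (1/4) rho pi H omega^2 R^4 in closed form.
r = np.linspace(0.0, R, 100_001)
dE = 0.5 * rho * (omega * r) ** 2 * (2.0 * np.pi * r * H)     # energy per unit radius
E_numeric = np.sum((dE[1:] + dE[:-1]) / 2.0) * (r[1] - r[0])  # trapezoid rule
E_closed = 0.25 * rho * np.pi * H * omega**2 * R**4

print(f"numeric integral : {E_numeric:,.0f} J")
print(f"closed form      : {E_closed:,.0f} J")
```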

  19. Assumptive Worldviews and Problematic Reactions to Bereavement

    ERIC Educational Resources Information Center

    Currier, Joseph M.; Holland, Jason M.; Neimeyer, Robert A.

    2009-01-01

    Forty-two individuals who had lost an immediate family member in the prior 2 years and 42 nonbereaved matched controls completed the World Assumptions Scale (Janoff-Bulman, 1989) and the Symptom Checklist-10-Revised (Rosen et al., 2000). Results showed that bereaved individuals were significantly more distressed than nonbereaved matched controls,…

  20. 7 CFR 1779.88 - Transfers and assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... borrowers will include a one-time nonrefundable transfer fee to the Agency of no more than 1 percent... reasonable debt-paying ability considering their assets and income at the time of transfer, and (ii) The... 7 Agriculture 12 2013-01-01 2013-01-01 false Transfers and assumptions. 1779.88 Section 1779.88...

  1. The incompressibility assumption in computational simulations of nasal airflow.

    PubMed

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational work on nasal airflow to date has assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below approximately [Formula: see text]C. Therefore, density variations should be considered for simulations at such low temperatures.
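The density sensitivity behind the compressibility question can be estimated from the ideal-gas law. The temperatures below are illustrative (the record elides the paper's actual threshold); the point is only the order of magnitude of the density contrast between cold inspired air and body temperature:

```python
# Ideal-gas estimate of air density variation over temperatures relevant
# to nasal airflow (cold ambient air vs. body temperature).
R_specific = 287.05   # J/(kg K), specific gas constant of dry air
p = 101_325.0         # Pa, standard atmospheric pressure

def rho(T_celsius):
    """Dry-air density from the ideal-gas law, rho = p / (R T)."""
    return p / (R_specific * (T_celsius + 273.15))

cold, body = rho(-10.0), rho(37.0)
print(f"relative density change = {(cold - body) / body:.1%}")  # roughly 18%
```

A contrast of this size is why an incompressible solver, which holds density fixed, can misestimate temperature-sensitive quantities such as nasal resistance at low ambient temperatures.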

  2. Optimum Energy Extraction from Coherent Vortex Rings Passing Tangentially Over Flexible Plates

    NASA Astrophysics Data System (ADS)

    Pirnia, Alireza; Browning, Emily A.; Peterson, Sean D.; Erath, Byron D.

    2017-11-01

    Coherent vortical structures can incite self-sustained oscillations in flexible membranes. This concept has recently gained interest for energy extraction from ambient environments. In this study the special case of a vortex ring passing tangentially over a cantilevered flexible plate is investigated. This problem is governed by the Kirchhoff-Love plate equation, which can be expressed in terms of a non-dimensional mass parameter of the plate, non-dimensional pressure loading induced by the vortex ring, and a Strouhal (St) number which expresses the duration of pressure loading relative to the period of plate oscillation. For a plate with a fixed mass parameter immersed in a fluid environment, the St number specifies the beam dynamics and the energy exchange process. The aim of this study is to identify the St number corresponding to maximum energy exchange between plates and vortex rings. The energy exchange process between the vortex ring and the plate is investigated over a range of 0.3

  3. Formalization and Analysis of Reasoning by Assumption

    ERIC Educational Resources Information Center

    Bosse, Tibor; Jonker, Catholijn M.; Treur, Jan

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically…

  4. False assumptions.

    PubMed

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media.

  5. The AGCE related studies of baroclinic flows in spherical geometry

    NASA Technical Reports Server (NTRS)

    Hyun, J. M.

    1983-01-01

    Steady state, axisymmetric motions of a Boussinesq fluid contained in a rotating spherical annulus are considered. The motions are driven by a latitudinally varying temperature gradient at the shells. Linearized formulations for a narrow gap are derived, and the flow field is divided into the Ekman layers and the geostrophic interior. The Ekman layer flows are consistent with the known results for cylindrical geometries. Within the framework of rather restrictive assumptions, the interior flows are solved by a series of associated Legendre polynomials. The solutions show qualitative features valid at midlatitudes.

  6. 7 CFR 1957.2 - Transfer with assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Rural Housing Trust 1987-1, and who are eligible for an FmHA or its successor agency under Public Law 103-354 § 502 loan will be given the same priority by FmHA or its successor agency under Public Law.... FmHA or its successor agency under Public Law 103-354 regulations governing transfers and assumptions...

  7. 7 CFR 1980.476 - Transfer and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...

  8. Assumptions about Ecological Scale and Nature Knowing Best Hiding in Environmental Decisions

    Treesearch

    R. Bruce Hull; David P. Robertson; David Richert; Erin Seekamp; Gregory J. Buhyoff

    2002-01-01

    Assumptions about nature are embedded in people's preferences for environmental policy and management. The people we interviewed justified preservationist policies using four assumptions about nature knowing best: nature is balanced, evolution is progressive, technology is suspect, and the Creation is perfect. They justified interventionist policies using three...

  9. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  10. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  11. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Fiscally sound operation and assumption of... Organizations: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation...

  12. Total Organic Carbon Distribution and Bacterial Cycling Across A Geostrophic Front In Mediterranean Sea. Implications For The Western Basin Carbon Cycle

    NASA Astrophysics Data System (ADS)

    Sempere, R.; van Wambeke, F.; Bianchi, M.; Dafner, E.; Lefevre, D.; Bruyant, F.; Prieur, L.

    We investigated the dynamics of the total organic carbon (TOC) pool and the role it played in the carbon cycle during winter 1997-1998 in the Almeria-Oran jet-front (AOF) system resulting from the spreading of Atlantic surface water through the Gibraltar Strait into the Alboran Sea (Southwestern Mediterranean Sea). We determined TOC by using the high temperature combustion (HTC) technique and bacterial production (BP; via [3H]leucine incorporation) during two legs in the frontal area. We also estimated labile TOC (l-TOC) and bacterial growth efficiency (BGE) by performing TOC biodegradation experiments on board during the cruise, whereas water column semi-labile TOC (sl-TOC) and refractory TOC were determined from TOC profile examination. These results are discussed in relation to current velocities measured with an acoustic Doppler current profiler (ADCP). The lowest TOC stocks (6330-6853 mmol C m-2) over 0-100 m were measured on the northern side of the geostrophic jet, which is also the most dynamic area (horizontal speed of 80 cm s-1 in the first 100 m, directed eastward). Our results indicated variable turnover times of sl-TOC across the jet-front system, which might be explained by the different coupling of primary production and bacterial production observed in these areas. We also estimated TOC and sl-TOC transports within the jet core off the Alboran Sea, as well as potential CO2 production through bacterial respiration of sl-TOC assimilated by heterotrophic bacteria.

  13. The contributions of interpersonal trauma exposure and world assumptions to predicting dissociation in undergraduates.

    PubMed

    Lilly, Michelle M

    2011-01-01

    This study examines the relationship between world assumptions and trauma history in predicting symptoms of dissociation. It was proposed that cognitions related to the safety and benevolence of the world, as well as self-worth, would be related to the presence of dissociative symptoms, the latter of which were theorized to defend against threats to one's sense of safety, meaningfulness, and self-worth. Undergraduates from a midwestern university completed the Multiscale Dissociation Inventory, World Assumptions Scale, and Traumatic Life Events Questionnaire. Consistent with the hypotheses, world assumptions were related to the extent of trauma exposure and interpersonal trauma exposure in the sample but were not significantly related to non-interpersonal trauma exposure. World assumptions acted as a significant partial mediator of the relationship between trauma exposure and dissociation, and this relationship held when interpersonal trauma exposure specifically was considered. The factor structures of dissociation and world assumptions were also examined using principal component analysis, with the benevolence and self-worth factors of the World Assumptions Scale showing the strongest relationships with trauma exposure and dissociation. Clinical implications are discussed.

  14. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...

  15. 42 CFR 417.120 - Fiscally sound operation and assumption of financial risk.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Fiscally sound operation and assumption of...: Organization and Operation § 417.120 Fiscally sound operation and assumption of financial risk. (a) Fiscally sound operation—(1) General requirements. Each HMO must have a fiscally sound operation, as demonstrated...

  16. Concentration of infectious hematopoietic necrosis virus from water samples by tangential flow filtration and polyethylene glycol precipitation

    USGS Publications Warehouse

    Batts, W.N.; Winton, J.R.

    1989-01-01

    Infectious hematopoietic necrosis virus (IHNV) was concentrated from water samples by polyethylene glycol (PEG) precipitation, tangential flow filtration (TFF), and by a combination of TFF followed by PEG precipitation of the retentate. Used alone, PEG increased virus titers more than 200-fold, and the efficiency of recovery was as great as 100%. Used alone, TFF concentrated IHNV more than 20-fold, and average recovery was 70%. When the two techniques were combined, 10-L water samples were reduced to about 300 mL by TFF and the virus was precipitated with PEG into a 1 to 2 g pellet; total recovery was as great as 100%. The combined techniques were used to isolate IHNV from water samples taken from a river containing adult sockeye salmon (Oncorhynchus nerka) and from a hatchery pond containing adult spring chinook salmon (O. tshawytscha). The combination of these methods was effective in concentrating and detecting IHNV from water containing only three infectious particles per 10-L sample.
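    The fold-concentration arithmetic implied by the volumes above can be sketched as follows (the PEG-pellet resuspension volume is an illustrative assumption, not stated in the record):

```python
def concentration_factor(v_in_ml: float, v_out_ml: float) -> float:
    """Fold-concentration achieved by reducing a sample's volume."""
    return v_in_ml / v_out_ml

# TFF step: a 10-L water sample reduced to about 300 mL of retentate.
tff_fold = concentration_factor(10_000, 300)   # ~33-fold

# PEG step: the retentate is precipitated into a 1-2 g pellet; assume it is
# resuspended in ~10 mL for assay (resuspension volume is hypothetical).
peg_fold = concentration_factor(300, 10)       # 30-fold

overall = tff_fold * peg_fold
print(round(overall))  # ~1000-fold overall concentration
```

    A roughly thousand-fold concentration is consistent with the reported ability to detect as few as three infectious particles in a 10-L sample.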

  17. Female Offenders: Three Assumptions about Self-Esteem, Sex-Role Identity, and Feminism.

    ERIC Educational Resources Information Center

    Widom, Cathy Spatz

    1979-01-01

    Investigates the validity of three assumptions about self-esteem, sex-role identity, and feminism in female offenders in a study of women awaiting trial in Massachusetts. Results did not support assumptions regarding low self-esteem and increased masculinity in female offenders. Speculations were made about the role femininity plays in…

  18. 40 CFR 264.150 - State assumption of responsibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility. 264.150 Section 264.150 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE TREATMENT, STORAGE, AND DISPOSAL...

  19. Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2017-01-01

    There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.
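    The Bayesian causal inference framing mentioned at the end can be made concrete with a minimal numerical sketch, in the style of causal inference models of cue combination: the posterior probability of a common cause rises as two cues agree. All parameter values below are illustrative assumptions, not taken from the review.

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def p_common_cause(xv, xa, sigma_v=1.0, sigma_a=2.0, sigma_p=10.0, prior_c1=0.5):
    """Posterior probability that a visual cue xv and an auditory cue xa
    originate from one common source (C=1) rather than two (C=2)."""
    # C=1: one latent source location s; integrate it out numerically.
    like_c1, ds, s = 0.0, 0.05, -60.0
    while s <= 60.0:
        like_c1 += (norm_pdf(xv, s, sigma_v) * norm_pdf(xa, s, sigma_a)
                    * norm_pdf(s, 0.0, sigma_p)) * ds
        s += ds
    # C=2: two independent sources; each cue integrates out analytically.
    like_c2 = (norm_pdf(xv, 0.0, math.hypot(sigma_v, sigma_p))
               * norm_pdf(xa, 0.0, math.hypot(sigma_a, sigma_p)))
    return like_c1 * prior_c1 / (like_c1 * prior_c1 + like_c2 * (1.0 - prior_c1))

print(p_common_cause(1.0, 1.5))   # nearby cues: unity is likely
print(p_common_cause(-5.0, 8.0))  # distant cues: unity is unlikely
```

    Treating the "prior of common cause" as the `prior_c1` parameter shows how the unity assumption becomes a continuous variable in such models rather than a binary judgment.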

  20. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. Extracting skin properties from DRS spectra requires a model that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  1. Examining the Stationarity Assumption for Statistically Downscaled Climate Projections of Precipitation

    NASA Astrophysics Data System (ADS)

    Wootten, A.; Dixon, K. W.; Lanzante, J. R.; Mcpherson, R. A.

    2017-12-01

    Empirical statistical downscaling (ESD) approaches attempt to refine global climate model (GCM) information via statistical relationships between observations and GCM simulations. The aim of such downscaling efforts is to create added-value climate projections by adding finer spatial detail and reducing biases. The results of statistical downscaling exercises are often used in impact assessments under the assumption that past performance provides an indicator of future results. Given prior research describing the danger of this assumption with regard to temperature, this study expands the perfect model experimental design from previous case studies to test the stationarity assumption with respect to precipitation. Assuming stationarity implies that the performance of an ESD method is similar between the future projections and the historical training period. Case study results from four quantile-mapping-based ESD methods demonstrate violations of the stationarity assumption for both the central tendency and the extremes of precipitation. These violations vary geographically and seasonally. For the four ESD methods tested, the greatest challenges for downscaling of daily total precipitation projections occur in regions with limited precipitation and for extremes of precipitation along Southeast coastal regions. We conclude with a discussion of future expansion of the perfect model experimental design and the implications for improving ESD methods and providing guidance on the use of ESD techniques for impact assessments and decision support.
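    A minimal empirical quantile-mapping sketch illustrates the class of methods being tested (the data and mapping below are illustrative, not one of the four methods from the study). The stationarity assumption is precisely the assumption that a mapping trained on the historical period remains valid for future values; note how values beyond the training range are clamped to the highest trained quantile, one way such an assumption can break down.

```python
import bisect

def empirical_quantile_map(model_hist, obs_hist, model_future):
    """Map each future model value to the observed value at the same
    empirical quantile, as learned from the historical training period."""
    mh = sorted(model_hist)
    oh = sorted(obs_hist)
    n = len(mh)
    mapped = []
    for x in model_future:
        # Empirical quantile of x within the historical model distribution;
        # clamp to [0, 1] for values outside the training range.
        q = min(max(bisect.bisect_left(mh, x) / max(n - 1, 1), 0.0), 1.0)
        # Nearest-rank lookup of the same quantile in the observations.
        idx = min(round(q * (len(oh) - 1)), len(oh) - 1)
        mapped.append(oh[idx])
    return mapped

# Toy example: the model is uniformly 2 mm/day too wet in training.
obs = list(range(10))
mod = [x + 2 for x in obs]
print(empirical_quantile_map(mod, obs, [5, 7, 11]))  # wet bias removed
```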

  2. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  3. The crux of the method: assumptions in ordinary least squares and logistic regression.

    PubMed

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.

  4. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  5. 46 CFR 174.070 - General damage stability assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false General damage stability assumptions. 174.070 Section 174.070 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...

  6. Legal and ethical implications of health care provider insurance risk assumption.

    PubMed

    Cox, Thomas

    2010-01-01

    From bedside to boardroom, nurses deal with the consequences of health care provider insurance risk assumption. Professional caregiver insurance risk refers to insurance risks assumed through contracts with third parties, federal and state Medicare and Medicaid program mandates, and the diagnosis-related groups and Prospective Payment Systems. This article analyzes the financial, legal, and ethical implications of provider insurance risk assumption by focusing on the degree to which patient benefits are reduced.

  7. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Raudenbush, Stephen W.

    2011-01-01

    The purpose of this paper is to clarify the assumptions that must be met if this--multiple site, multiple mediator--strategy, hereafter referred to as "MSMM," is to identify the average causal effects (ATE) in the populations of interest. The authors' investigation of the assumptions of the multiple-mediator, multiple-site IV model demonstrates…

  8. Limiting assumptions in molecular modeling: electrostatics.

    PubMed

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.

  9. Culturally Responsive Suicide Prevention in Indigenous Communities: Unexamined Assumptions and New Possibilities

    PubMed Central

    Gone, Joseph P.

    2012-01-01

    Indigenous communities have significantly higher rates of suicide than non-Native communities in North America. Prevention and intervention efforts have failed to redress this disparity. One explanation is that these efforts are culturally incongruent for Native communities. Four prevalent assumptions that underpin professional suicide prevention may conflict with local indigenous understandings about suicide. Our experiences in indigenous communities led us to question assumptions that are routinely endorsed and promoted in suicide prevention programs and interventions. By raising questions about the universal relevance of these assumptions, we hope to stimulate exchange and inquiry into the character of this devastating public health challenge and to aid the development of culturally appropriate interventions in cross-cultural contexts. PMID:22420786

  10. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
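    Of the five designs, Difference-in-Differences is the simplest to sketch. A minimal, illustrative point estimator under the design's key identifying assumption (parallel trends) might look like this; the data are invented:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-Differences point estimate from four group means.

    Identifying assumption: absent treatment, the treated group's outcome
    would have moved in parallel with the control group's (parallel trends)."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Toy data: both groups drift upward by ~2; treatment adds ~3 on top.
ctrl_pre, ctrl_post = [10, 11, 12], [12, 13, 14]
treat_pre, treat_post = [20, 21, 22], [25, 26, 27]
print(diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post))  # 3.0
```

    Subtracting the control group's change removes the shared time trend, which is exactly why the validity of the estimate rests on the parallel-trends assumption rather than on randomization.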

  11. Principal Score Methods: Assumptions, Extensions, and Practical Considerations

    ERIC Educational Resources Information Center

    Feller, Avi; Mealli, Fabrizia; Miratrix, Luke

    2017-01-01

    Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the "principal score," the conditional probability of belonging…

  12. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Treesearch

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  13. Validity of the mockwitness paradigm: testing the assumptions.

    PubMed

    McQuiston, Dawn E; Malpass, Roy S

    2002-08-01

    Mockwitness identifications are used to provide a quantitative measure of lineup fairness. Some theoretical and practical assumptions of this paradigm have not been studied in terms of mockwitnesses' decision processes and procedural variation (e.g., instructions, lineup presentation method), and the current experiment was conducted to empirically evaluate these assumptions. Four hundred and eighty mockwitnesses were given physical information about a culprit, received 1 of 4 variations of lineup instructions, and were asked to identify the culprit from either a fair or unfair sequential lineup containing 1 of 2 targets. Lineup bias estimates varied as a result of lineup fairness and the target presented. Mockwitnesses generally reported that the target's physical description was their main source of identifying information. Our findings support the use of mockwitness identifications as a useful technique for sequential lineup evaluation, but only for mockwitnesses who selected only 1 lineup member. Recommendations for the use of this evaluation procedure are discussed.

  14. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    ERIC Educational Resources Information Center

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  15. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    ERIC Educational Resources Information Center

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  16. User assumptions about information retrieval systems: Ethical concerns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froehlich, T.J.

    Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial, corporate, or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, if possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals and systems have an obligation to provide some framework or context for the information that users are accessing.

  17. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
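    A sketch in the spirit of the model described above can be written in a few lines. The specific rate equations and coefficients below are an illustrative assumption (the paper's actual equations are not reproduced here): susceptibles are born into empty space, become infectious by contact, and infectious individuals either recover with no immunity or die, freeing space.

```python
def step(s, i, e, dt, beta=2.0, gamma=0.5, birth=0.3, death=0.2):
    """One explicit-Euler step of a hypothetical SIS-with-empty-space model.

    s, i, e are the fractions of susceptible individuals, infectious
    individuals, and empty space; the rate terms cancel in pairs, so
    s + i + e is conserved."""
    ds = birth * e - beta * s * i + gamma * i
    di = beta * s * i - gamma * i - death * i
    de = death * i - birth * e
    return s + ds * dt, i + di * dt, e + de * dt

s, i, e = 0.89, 0.01, 0.10
for _ in range(10000):          # integrate to t = 100
    s, i, e = step(s, i, e, 0.01)
print(round(s + i + e, 6))      # conservation: fractions still sum to 1
print(i > 0.1)                  # endemic here: infection persists (no immunity)
```

    Because the same homogeneous mixing assumption enters through the mass-action term beta*s*i, comparing such ODE trajectories against real incidence data is exactly the kind of validity check the abstract describes.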

  18. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 416.1090 Section 416.1090 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL... responsibility for performing the disability determination function from the State agency, whether the assumption...

  19. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    ERIC Educational Resources Information Center

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  20. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    ERIC Educational Resources Information Center

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  1. The Cost of CAI: A Matter of Assumptions.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…

  2. Testing Our Fundamental Assumptions

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests!

    Explaining Different Arrival Times

    [Figure: artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones]

    Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics:

    Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source.

    Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect.

    Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent.

    Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect.

    If we now turn this problem around, then by measuring the arrival-time delay between photons of different energies from various astrophysical sources (the further away, the better) we can place constraints on these fundamental assumptions.

  3. 20 CFR 404.1690 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 404.1690 Section 404.1690 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD... responsibility for performing the disability determination function from the State agency, whether the assumption...

  4. Supporting calculations and assumptions for use in WESF safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hey, B.E.

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  5. A new scenario framework for climate change research: The concept of Shared Climate Policy Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriegler, Elmar; Edmonds, James A.; Hallegatte, Stephane

    2014-04-01

    The paper presents the concept of shared climate policy assumptions as an important element of the new scenario framework. Shared climate policy assumptions capture key climate policy dimensions such as the type and scale of mitigation and adaptation measures. They are not specified in the socio-economic reference pathways, and therefore introduce an important third dimension to the scenario matrix architecture. Climate policy assumptions will have to be made in any climate policy scenario, and can have a significant impact on the scenario description. We conclude that a meaningful set of shared climate policy assumptions is useful for grouping individual climate policy analyses and facilitating their comparison. Shared climate policy assumptions should be designed to be policy relevant, and as a set to be broad enough to allow a comprehensive exploration of the climate change scenario space.

  6. Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice.

    PubMed

    Morrow, Nathan; Nkwake, Apollo M

    2016-12-01

    This final chapter in the volume pulls together common themes from the diverse set of articles by a group of eight authors in this issue, and presents some reflections on the next steps for improving the ways in which evaluators work with assumptions. Collectively, the authors provide a broad overview of existing and emerging approaches to the articulation and use of assumptions in evaluation theory and practice. The authors reiterate the rationale and key terminology as a common basis for working with assumptions in program design and evaluation. They highlight some useful concepts and categorizations to promote more rigorous treatment of assumptions in evaluation. A three-tier framework for fostering agency for assumption-aware evaluation practice is proposed: agency for themselves (evaluators); agency for others (stakeholders); and agency for standards and principles. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    ERIC Educational Resources Information Center

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…

  8. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    PubMed

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
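    The multi-source smoothing the authors describe is a Bayesian hierarchical model; as a much simpler stand-in (an assumed normal-normal setup, not the authors' method), one can ask how plausible the current trial's active-comparator estimate is under a prior built from historical trials before trusting the constancy assumption:

```python
# Hedged sketch (not the paper's model): a minimal normal-normal check of the
# constancy assumption. Historical trials give a prior for the active
# comparator (AC) effect; we score the current-trial estimate against it.
# All effect sizes and standard errors below are assumed for illustration.
import math

hist_effects = [0.52, 0.48, 0.55]   # assumed historical AC effect estimates
hist_se = 0.05                       # assumed common standard error

mu0 = sum(hist_effects) / len(hist_effects)   # prior mean for the AC effect
tau2 = hist_se**2 / len(hist_effects)         # variance of that prior mean

current_effect, current_se = 0.30, 0.06       # assumed current-trial estimate

# Prior-predictive z-score of the current estimate under constancy
z = (current_effect - mu0) / math.sqrt(tau2 + current_se**2)
constancy_ok = abs(z) < 1.96
print(f"z = {z:.2f}, constancy plausible: {constancy_ok}")
```

    A large |z| flags a constancy violation, the situation in which the paper adapts the non-inferiority margin rather than reusing the historical effect unchanged.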

  9. Detecting and Accounting for Violations of the Constancy Assumption in Non-Inferiority Clinical Trials

    PubMed Central

    Koopmeiners, Joseph S.; Hobbs, Brian P.

    2016-01-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator (AC) with the objective of showing either superiority or non-inferiority to the AC. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the AC as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the AC in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV. PMID:27587591

  10. Identifying Acquisition Framing Assumptions Through Structured Deliberation

    DTIC Science & Technology

    2014-01-01

    and tracking them, the Office of the Secretary of Defense (OSD) and the Services may be able to better manage major risks to and expectations of programs. An FA is any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations. FAs may change over the... expectations. This criterion means that FAs, when they fail or are incorrect, will have significant cost, schedule, and/or performance effects on the

  11. Assumption-versus data-based approaches to summarizing species' ranges.

    PubMed

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  12. Effects of internal gain assumptions in building energy calculations

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Perkins, R.

    1981-01-01

    The utilization of direct solar gains in buildings can be affected by operating profiles, such as schedules for internal gains, thermostat controls, and ventilation rates. Building energy analysis methods use various assumptions about these profiles. The effects of typical internal gain assumptions in energy calculations are described. Heating and cooling loads from simulations using the DOE 2.1 computer code are compared for various internal gain inputs: typical hourly profiles, constant average profiles, and zero gain profiles. Prototype single-family-detached and multifamily-attached residential units are studied with various levels of insulation and infiltration. Small detached commercial buildings and attached zones in large commercial buildings are studied with various levels of internal gains. The results indicate that calculations of annual heating and cooling loads are sensitive to internal gains, but in most cases are relatively insensitive to hourly variations in internal gains.

  13. Testing Modeling Assumptions in the West Africa Ebola Outbreak

    NASA Astrophysics Data System (ADS)

    Burghardt, Keith; Verzijl, Christopher; Huang, Junming; Ingram, Matthew; Song, Binyang; Hasne, Marie-Pierre

    2016-10-01

    The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.

  14. Development of state and transition model assumptions used in National Forest Plan revision

    Treesearch

    Eric B. Henderson

    2008-01-01

    State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...

  15. An Azulene-Based Discovery Experiment: Challenging Students to Watch for the "False Assumption"

    ERIC Educational Resources Information Center

    Garner, Charles M.

    2005-01-01

    A discovery-based experiment was developed around a "false assumption": students mistakenly assume they know the structure of a reaction product and are forced to reconcile observations that are inconsistent with this assumption. The experiment involves the chemistry of azulenes, an interesting class of intensely colored aromatic…

  16. A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Colp, S. Mitchell

    2018-01-01

    Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…

  17. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    PubMed

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. 
Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  18. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    PubMed

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Evaluation of assumptions in soil moisture triple collocation analysis

    USDA-ARS?s Scientific Manuscript database

    Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
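    Although the abstract is truncated, the core TCA identity it rests on is standard: for three collocated products x, y, z measuring the same truth with mutually independent, zero-mean errors, E[(x − y)(x − z)] equals the error variance of x (and cyclically for y and z). A simulated sketch under exactly those assumptions (all parameters assumed for illustration):

```python
# Hedged sketch: classical triple collocation error-variance estimation on
# simulated data. Products are assumed bias-free with unit gains, and their
# errors mutually independent, as TCA requires.
import random

random.seed(1)
n = 100_000
truth = [random.gauss(0.25, 0.08) for _ in range(n)]
sx, sy, sz = 0.03, 0.05, 0.02                 # true error standard deviations
x = [t + random.gauss(0, sx) for t in truth]
y = [t + random.gauss(0, sy) for t in truth]
z = [t + random.gauss(0, sz) for t in truth]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# With independent errors, E[(x-y)(x-z)] recovers x's error variance:
# the truth cancels in both differences and the cross terms average to zero.
ex2 = cov([xi - yi for xi, yi in zip(x, y)],
          [xi - zi for xi, zi in zip(x, z)])
print(f"estimated error std of x: {ex2**0.5:.4f} (true {sx})")
```

    Violating the mutual-independence or orthogonality assumptions mentioned in the abstract biases exactly these cross-covariance terms, which is why they are the focus of the evaluation.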

  20. An Investigation of the Equipercentile Assumption and the One-Group Pre/Post Design.

    ERIC Educational Resources Information Center

    Powell, George D.; Raffeld, Paul C.

    The equipercentile assumption states that students in traditional classrooms who receive no other instructional assistance, will maintain their relative rank order over time. To test this assumption, fall to fall test results on the SRA Achievement Tests were obtained for grades 2-3, and 6-7. Total reading and total mathematics growth scale values…

  1. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model, but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively).
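    The Poisson-gamma choice that won on AIC can be illustrated with a quick simulation (assumed parameters, not the mayfly data): mixing the Poisson mean over a gamma distribution yields negative-binomial counts whose variance:mean ratio and zero fraction sit far above the plain-Poisson values.

```python
# Hedged sketch: why treating the count mean as a gamma random variable
# (a Poisson-gamma / negative-binomial assumption) absorbs overdispersion
# and extra zeroes. Parameters are illustrative, not fitted to the mayfly data.
import math
import random

random.seed(7)
n = 50_000

def poisson(lam):
    # Knuth's multiplication method; adequate for the modest rates drawn here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

shape, scale = 0.25, 8.0                     # gamma mixing, mean = 2.0
mixed = [poisson(random.gammavariate(shape, scale)) for _ in range(n)]
plain = [poisson(2.0) for _ in range(n)]     # same mean, no mixing

def var_mean_ratio(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return v / m

print("Poisson-gamma var:mean =", round(var_mean_ratio(mixed), 1))
print("plain Poisson var:mean =", round(var_mean_ratio(plain), 1))
print("zero fraction, mixed   =", round(mixed.count(0) / n, 2))
```

    The mixed counts show a variance:mean ratio near 9 and a zero fraction above half, while the plain Poisson stays near ratio 1, mirroring the overdispersion and excess zeroes the abstract reports.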

  2. Assumptions Underlying the Use of Different Types of Simulations.

    ERIC Educational Resources Information Center

    Cunningham, J. Barton

    1984-01-01

    Clarifies appropriateness of certain simulation approaches by distinguishing between different types of simulations--experimental, predictive, evaluative, and educational--on the basis of purpose, assumptions, procedures, and criteria for evaluating. The kinds of questions each type best responds to are discussed. (65 references) (MBR)

  3. Challenging Teachers' Pedagogic Practice and Assumptions about Social Media

    ERIC Educational Resources Information Center

    Cartner, Helen C.; Hallas, Julia L.

    2017-01-01

    This article describes an innovative approach to professional development designed to challenge teachers' pedagogic practice and assumptions about educational technologies such as social media. Developing effective technology-related professional development for teachers can be a challenge for institutions and facilitators who provide this…

  4. Investigation of the ellipsoidal-statistical Bhatnagar-Gross-Krook kinetic model applied to gas-phase transport of heat and tangential momentum between parallel walls

    NASA Astrophysics Data System (ADS)

    Gallis, M. A.; Torczynski, J. R.

    2011-03-01

    The ellipsoidal-statistical Bhatnagar-Gross-Krook (ES-BGK) kinetic model is investigated for steady gas-phase transport of heat, tangential momentum, and mass between parallel walls (i.e., Fourier, Couette, and Fickian flows). This investigation extends the original study of Cercignani and Tironi, who first applied the ES-BGK model to heat transport (i.e., Fourier flow) shortly after this model was proposed by Holway. The ES-BGK model is implemented in a molecular-gas-dynamics code so that results from this model can be compared directly to results from the full Boltzmann collision term, as computed by the same code with the direct simulation Monte Carlo (DSMC) algorithm of Bird. A gas of monatomic molecules is considered. These molecules collide in a pairwise fashion according to either the Maxwell or the hard-sphere interaction and reflect from the walls according to the Cercignani-Lampis-Lord model with unity accommodation coefficients. Simulations are performed at pressures from near-free-molecular to near-continuum. Unlike the BGK model, the ES-BGK model produces heat-flux and shear-stress values that both agree closely with the DSMC values at all pressures. However, for both interactions, the ES-BGK model produces molecular-velocity-distribution functions that are qualitatively similar to those determined for the Maxwell interaction from Chapman-Enskog theory for small wall temperature differences and moment-hierarchy theory for large wall temperature differences. Moreover, the ES-BGK model does not produce accurate values of the mass self-diffusion coefficient for either interaction. Nevertheless, given its reasonable accuracy for heat and tangential-momentum transport, its sound theoretical foundation (it obeys the H-theorem), and its available extension to polyatomic molecules, the ES-BGK model may be a useful method for simulating certain classes of single-species noncontinuum gas flows, as Cercignani suggested.

  5. Common-sense chemistry: The use of assumptions and heuristics in problem solving

    NASA Astrophysics Data System (ADS)

    Maeyer, Jenine Rachel

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many

  6. New Assumptions to Guide SETI Research

    NASA Technical Reports Server (NTRS)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  7. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    PubMed

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Through the organizational looking glass: you can't plan tomorrow's organizations with today's assumptions.

    PubMed

    Handy, C

    1980-01-01

    It's hard to imagine what our industrial society would be like if, for instance, there were no factories. How would things get produced, how would business survive? But are we, in fact, an industrial society? Are factories going to be the prime production place for a society that is conserving energy and doesn't need to travel to work because the silicon chip makes it more efficient to work at home? Who knows what the impact of energy conservation and women in the work force will be on future organizations? One thing we can be sure of, this author writes, is that whatever tomorrow brings, today's assumptions probably cannot account for it. We are, he asserts, entering a period of discontinuous change where the assumptions we have been working with as a society and in organizations are no longer necessarily true. He discusses three assumptions he sees fading--what causes efficiency, what work is, and what value organizational hierarchy has--and then gives some clues as to what our new assumptions might be. Regardless of what our assumptions actually are, however, our organizations and society will require leaders willing to take enormous risks and try unproved ways to cope with them.

  9. Tilted geostrophic convection in icy world oceans caused by the horizontal component of the planetary rotation vector

    NASA Astrophysics Data System (ADS)

    Goodman, J. C.

    2012-12-01

    [Figure caption, abstract truncated] ...in Europa's ocean (seafloor heat source = 4 GW; ocean depth = 100 km; rotation period = 3.55 days; latitude = 30° N). Left: elevation section through the plume. Right: 3-D isosurface of constant temperature (1 microkelvin above ambient). Note the alignment of geostrophic eddies along the angular rotation axis.

  10. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  11. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.
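    The smoothness and cluster assumptions that the boosting framework regularizes toward can be seen in miniature with plain label propagation (a hypothetical toy example, not the authors' algorithm): unlabeled points iteratively inherit the labels of nearby points, so each cluster adopts the label of its single labeled member.

```python
# Hedged sketch: label propagation on a tiny 1-D data set illustrating the
# smoothness/cluster assumptions of semi-supervised learning. The data,
# radius, and iteration count are all assumed for illustration.
points = [0.0, 0.1, 0.2, 0.3, 1.0, 1.1, 1.2, 1.3]   # two clusters
labels = {0: -1.0, 7: +1.0}          # one labeled example per cluster

scores = [labels.get(i, 0.0) for i in range(len(points))]
for _ in range(200):                 # propagate over a fixed-radius graph
    new = []
    for i, x in enumerate(points):
        if i in labels:              # clamp the labeled points
            new.append(labels[i])
            continue
        nb = [scores[j] for j, y in enumerate(points)
              if j != i and abs(x - y) <= 0.15]
        new.append(sum(nb) / len(nb) if nb else scores[i])
    scores = new

pred = [1 if s > 0 else -1 for s in scores]
print(pred)
```

    Because the radius only links points within the same cluster, the left cluster converges to -1 and the right to +1: labels vary smoothly within clusters and only change across the gap between them.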

  12. Does Artificial Neural Network Support Connectivism's Assumptions?

    ERIC Educational Resources Information Center

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age, and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Networks (ANN) support their assumptions of knowledge connectivity. Yet very little has been done to investigate this bold claim. Does the advancement…

  13. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling

  14. From Pore to Core: Do Engineered Nanoparticles Violate Upscaling Assumptions? A Microtomographic Investigation

    NASA Astrophysics Data System (ADS)

    Molnar, I. L.; O'Carroll, D. M.; Gerhard, J.; Willson, C. S.

    2014-12-01

    The recent success in using Synchrotron X-ray Computed Microtomography (SXCMT) for the quantification of nanoparticle concentrations within real, three-dimensional pore networks [1] has opened up new opportunities for collecting experimental data of pore-scale flow and transport processes. One opportunity is coupling SXCMT with nanoparticle/soil transport experiments to provide unique insights into how pore-scale processes influence transport at larger scales. Understanding these processes is a key step in accurately upscaling micron-scale phenomena to the continuum-scale. Upscaling phenomena from the micron-scale to the continuum-scale typically involves the assumption that the pore space is well mixed. Under this 'well mixed assumption' it is implicitly assumed that the distribution of nanoparticles within the pore does not affect their retention by soil grains. This assumption enables the use of volume-averaged parameters in calculating transport and retention rates. However, in some scenarios, the well mixed assumption will likely be violated by processes such as deposition and diffusion. These processes can alter the distribution of the nanoparticles in the pore space and impact retention behaviour, leading to discrepancies between theoretical predictions and experimental observations. This work investigates the well mixed assumption by employing SXCMT to experimentally examine pore-scale mixing of silver nanoparticles during transport through sand-packed columns. Silver nanoparticles were flushed through three different sands to examine the impact of grain distribution and nanoparticle retention rates on mixing: uniform silica (low retention), well-graded silica sand (low retention) and uniform iron oxide coated silica sand (high retention). The SXCMT data identified diffusion-limited retention as responsible for violations of the well mixed assumption. A mathematical description of the diffusion-limited retention process was created and compared to the
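A quick way to see why diffusion-limited retention can violate the well mixed assumption is to compare the time a nanoparticle needs to diffuse across a pore with its advective residence time. All parameter values below (particle size, pore dimensions, velocity) are assumed for illustration and are not taken from the experiments.

```python
import numpy as np

kB, T = 1.380649e-23, 298.0      # Boltzmann constant (J/K), temperature (K)
mu = 8.9e-4                      # viscosity of water (Pa s)
d_p = 50e-9                      # assumed silver nanoparticle diameter (m)

# Stokes-Einstein diffusion coefficient for the nanoparticle
D = kB * T / (3 * np.pi * mu * d_p)

r_pore, L_pore = 100e-6, 200e-6  # assumed pore radius and length (m)
v = 1e-5                         # assumed pore-water velocity (m/s)

t_diff = r_pore**2 / D           # time to diffuse across the pore
t_adv = L_pore / v               # advective residence time in the pore
# t_diff >> t_adv: diffusion cannot homogenize the pore during transit, so
# concentration gradients created by deposition at grain surfaces persist,
# and the well mixed assumption fails.
```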

  15. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    NASA Astrophysics Data System (ADS)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  16. A statistical analysis of the dependency of closure assumptions in cumulus parameterization on the horizontal resolution

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    1994-01-01

    Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.
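The canonical correlation analysis used above to compare simulated and observed behavior can be illustrated with a minimal implementation; the QR-plus-SVD formulation below is a standard numerical recipe, and the data are synthetic stand-ins, not cumulus ensemble output.

```python
import numpy as np

def cca_first(X, Y):
    """Largest canonical correlation between column-centered blocks X and Y
    (QR/SVD formulation: singular values of qx.T @ qy are the cosines of
    the principal angles between the column spaces)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rng = np.random.default_rng(3)
z = rng.normal(size=(200, 1))                    # shared latent signal
X = z + 0.3 * rng.normal(size=(200, 3))          # block 1: signal + noise
Y = z + 0.3 * rng.normal(size=(200, 3))          # block 2: signal + noise
r_shared = cca_first(X, Y)                       # high: blocks share z
r_indep = cca_first(rng.normal(size=(200, 3)),
                    rng.normal(size=(200, 3)))   # lower: no shared structure
```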

  17. Evaluation of Pharmacokinetic Assumptions Using a 443 ...

    EPA Pesticide Factsheets

    With the increasing availability of high-throughput and in vitro data for untested chemicals, there is a need for pharmacokinetic (PK) models for in vitro to in vivo extrapolation (IVIVE). Though some PBPK models have been created for individual compounds using in vivo data, we are now able to rapidly parameterize generic PBPK models using in vitro data to allow IVIVE for chemicals tested for bioactivity via high-throughput screening. However, these new models are expected to have limited accuracy due to their simplicity and generalization of assumptions. We evaluated the assumptions and performance of a generic PBPK model (R package “httk”) parameterized by a library of in vitro PK data for 443 chemicals. We evaluate and calibrate Schmitt’s method by comparing the predicted volume of distribution (Vd) and tissue partition coefficients to in vivo measurements. The partition coefficients are initially over-predicted, likely due to overestimation of partitioning into phospholipids in tissues and the lack of lipid partitioning in the in vitro measurements of the fraction unbound in plasma. Correcting for phospholipids and plasma binding improved the predictive ability (R² of 0.52 for partition coefficients and 0.32 for Vd). We lacked enough data to evaluate the accuracy of changing the model structure to include tissue blood volumes and/or separate compartments for richly/poorly perfused tissues, therefore we evaluated the impact of these changes on model
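The volume of distribution evaluated above is, under the standard steady-state assumption, a partition-coefficient-weighted sum of tissue volumes plus the plasma volume. The sketch below uses hypothetical tissue volumes and Kp values, not httk's actual parameterization.

```python
# Hypothetical tissue volumes (L/kg body weight) and tissue:plasma partition
# coefficients (Kp); the values are illustrative, not httk defaults.
tissues = {
    "liver":   (0.026, 5.0),
    "muscle":  (0.40,  1.2),
    "adipose": (0.21,  8.0),
    "rest":    (0.30,  1.0),
}
v_plasma = 0.043  # plasma volume, L/kg

# Vd = plasma volume + sum of Kp-weighted tissue volumes
vd = v_plasma + sum(v * kp for v, kp in tissues.values())  # L/kg
```

Over-predicted Kp values propagate directly into an over-predicted Vd, which is why calibrating the partition coefficients improves both metrics together.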

  18. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true No assumption of validity. 60-3.9 Section 60-3.9 Public Contracts and Property Management Other Provisions Relating to Public... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  19. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.

  20. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions of disease process, and the choice of time advance.
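The EBM/ABM distinction and the continuous-vs-discrete time advance can be made concrete with a toy SIR comparison: a deterministic forward-Euler system of ODEs versus a stochastic per-agent update. This is a generic sketch under assumed parameter values, not the models analyzed in the study.

```python
import numpy as np

def sir_ebm(beta, gamma, s0, i0, dt, steps):
    """Deterministic EBM: SIR fractions advanced by forward Euler."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

def sir_abm(beta, gamma, n, i0, dt, steps, rng):
    """Stochastic ABM: each susceptible agent is infected with per-step
    probability 1 - exp(-beta * I_frac * dt); infecteds recover similarly."""
    state = np.zeros(n, dtype=int)          # 0 = S, 1 = I, 2 = R
    state[:i0] = 1
    for _ in range(steps):
        i_frac = (state == 1).mean()
        p_inf = 1 - np.exp(-beta * i_frac * dt)
        p_rec = 1 - np.exp(-gamma * dt)
        s_mask = state == 0                 # masks fixed at start of step
        i_mask = state == 1
        state[s_mask & (rng.random(n) < p_inf)] = 1
        state[i_mask & (rng.random(n) < p_rec)] = 2
    return (state == 0).mean(), (state == 1).mean(), (state == 2).mean()

rng = np.random.default_rng(0)
s_e, i_e, r_e = sir_ebm(0.3, 0.1, 0.99, 0.01, 0.1, 2000)
s_a, i_a, r_a = sir_abm(0.3, 0.1, 2000, 20, 0.1, 2000, rng)
```

Running both with the same rates shows the deterministic final size alongside run-to-run variation in the agent-based version, the kind of outcome divergence the study quantifies.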

  1. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    NASA Technical Reports Server (NTRS)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variables states. Only assumptions based on direct experimental observation are needed.

  2. Use of tangential visual symbols to increase the long-term learning process: applications of linkage in teaching pharmacological principles of addiction.

    PubMed

    Giannini, A J; Giannini, J N; Condon, M

    2000-07-01

    Medieval and Renaissance teaching techniques using linkage between course content and tangentially related visual symbols were applied to the teaching of the pharmacological principles of addiction. Forty medical students randomly divided into two blinded groups viewed a lecture. One lecture was supplemented by symbolic slides, and the second was not. Students who viewed symbolic slides had significantly higher scores in a written 15-question multiple-choice test 30 days after the lecture. These results were consistent with learning and semiotic models. These models hypothesize a linkage between conceptual content and perception of visual symbols that thereby increases conceptual retention. Recent neurochemical research supports the existence of a linkage between two chemically distinct memory systems. Simultaneous stimulation of both chemical systems by teaching formats similar to those employed in the study can augment neurochemical signaling in the neocortex.

  3. The Clinical Encounter as Local Moral World: Shifts of Assumptions and Transformation in Relational Context

    PubMed Central

    Alegría, Margarita

    2013-01-01

    In this study we consider the process of the clinical encounter, and present exemplars of how assumptions of both clinicians and their patients can shift or transform in the course of a diagnostic interview. We examine the process as it is recalled, and further elaborated, in post-diagnostic interviews as part of a collaborative inquiry during reflections with clinicians and patients in the northeastern United States. Rather than treating assumptions by patients and providers as a fixed attribute of an individual, we treat them as occurring between people within a particular social context, the diagnostic interview. We explore the diagnostic interview as a landscape in which assumptions occur (and can shift), navigate the features of this landscape, and suggest that our examination can best be achieved by the systematic comparison of views of the multiple actors in an experience-near manner. We describe what might be gained by this shift in assumptions and how it can make visible what is at stake for clinician and patient in their local moral worlds – for patients, acknowledgement of social suffering, for clinicians how assumptions are a barrier to engagement with minority patients. It is crucial for clinicians to develop this capacity for reflection when navigating the interactions with patients from different cultures, to recognize and transform assumptions, to notice ‘surprises’, and to elicit what really matters to patients in their care. PMID:19201074

  4. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  5. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
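The Monte Carlo approach to power estimation can be sketched as follows: simulate event times whose hazard ratio changes at a known time, apply a simple test for non-proportionality, and count rejections. The piecewise-exponential comparison of log rate ratios below is a simpler stand-in for the model-based and martingale-residual methods studied, and all rates and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, h_early, h_late, tau):
    """Event times under a piecewise-constant hazard (no censoring)."""
    e = rng.exponential(size=n)                  # unit-rate cumulative hazards
    return np.where(e < h_early * tau,
                    e / h_early,
                    tau + (e - h_early * tau) / h_late)

def ph_violation_test(t0, t1, tau):
    """z-test comparing the group log rate ratio before vs after time tau."""
    def window_stats(t):
        d_early, d_late = (t < tau).sum(), (t >= tau).sum()
        pt_early = np.minimum(t, tau).sum()      # person-time before tau
        pt_late = np.maximum(t - tau, 0.0).sum() # person-time after tau
        return d_early, pt_early, d_late, pt_late
    d0e, p0e, d0l, p0l = window_stats(t0)
    d1e, p1e, d1l, p1l = window_stats(t1)
    lrr_early = np.log((d1e / p1e) / (d0e / p0e))
    lrr_late = np.log((d1l / p1l) / (d0l / p0l))
    se = np.sqrt(1/d0e + 1/d1e + 1/d0l + 1/d1l)
    return abs(lrr_early - lrr_late) / se > 1.96  # 5% two-sided

n, tau, nsim = 200, 0.7, 200
rejections = 0
for _ in range(nsim):
    t0 = simulate_group(n, 1.0, 1.0, tau)        # reference arm
    t1 = simulate_group(n, 0.5, 2.0, tau)        # hazard ratio flips 0.5 -> 2.0
    rejections += ph_violation_test(t0, t1, tau)
power = rejections / nsim                        # estimated power
```

With a milder violation or fewer events, the same loop reproduces the low-to-moderate power the abstract warns about.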

  6. Origins and Traditions in Comparative Education: Challenging Some Assumptions

    ERIC Educational Resources Information Center

    Manzon, Maria

    2018-01-01

    This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.

  7. "Touch Me, Like Me": Testing an Encounter Group Assumption

    ERIC Educational Resources Information Center

    Boderman, Alvin; And Others

    1972-01-01

    An experiment to test an encounter group assumption that touching increases interpersonal attraction was conducted. College women were randomly assigned to a touch or no-touch condition. A comparison of total evaluation scores verified the hypothesis: subjects who touched the accomplice perceived her as a more attractive person than those who did…

  8. Assumptions Underlying Curriculum Decisions in Australia: An American Perspective.

    ERIC Educational Resources Information Center

    Willis, George

    An analysis of the cultural and historical context in which curriculum decisions are made in Australia and a comparison with educational assumptions in the United States is the purpose of this paper. Methodology is based on personal teaching experience and observation in Australia. Seven factors are identified upon which curricular decisions in…

  9. High-pressure size exclusion chromatography analysis of dissolved organic matter isolated by tangential-flow ultrafiltration

    USGS Publications Warehouse

    Everett, C.R.; Chin, Y.-P.; Aiken, G.R.

    1999-01-01

    A 1,000-Dalton tangential-flow ultrafiltration (TFUF) membrane was used to isolate dissolved organic matter (DOM) from several freshwater environments. The TFUF unit used in this study was able to completely retain a polystyrene sulfonate 1,800-Dalton standard. Unaltered and TFUF-fractionated DOM molecular weights were assayed by high-pressure size exclusion chromatography (HPSEC). The weight-averaged molecular weights of the retentates were larger than those of the raw water samples, whereas the filtrates were all significantly smaller and approximately the same size or smaller than the manufacturer-specified pore size of the membrane. Moreover, at 280 nm the molar absorptivity of the DOM retained by the ultrafilter is significantly larger than the material in the filtrate. This observation suggests that most of the chromophoric components are associated with the higher molecular weight fraction of the DOM pool. Multivalent metals in the aqueous matrix also affected the molecular weights of the DOM molecules. Typically, proton-exchanged DOM retentates were smaller than untreated samples. This TFUF system appears to be an effective means of isolating aquatic DOM by size, but the ultimate size of the retentates may be affected by the presence of metals and by configurational properties unique to the DOM phase.
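The weight-averaged molecular weights reported from HPSEC follow the standard definitions; the sketch below computes number- and weight-averaged MW for a hypothetical distribution (the masses and abundances are made up for illustration, not taken from the paper).

```python
import numpy as np

# Hypothetical molecular-weight distribution: abundances n_i at masses M_i.
M = np.array([500.0, 1000.0, 1800.0, 3000.0])  # Daltons
n = np.array([0.4, 0.3, 0.2, 0.1])             # relative number abundances

Mn = (n * M).sum() / n.sum()                   # number-averaged MW
Mw = (n * M**2).sum() / (n * M).sum()          # weight-averaged MW (HPSEC-style)
pdi = Mw / Mn                                  # polydispersity index, >= 1
```

Because Mw weights heavier species more strongly, retaining the high-MW fraction (as the ultrafilter does) raises Mw relative to the raw sample.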

  10. Cavitation control on a 2D hydrofoil through a continuous tangential injection of liquid: Experimental study

    NASA Astrophysics Data System (ADS)

    Timoshevskiy, M. V.; Zapryagaev, I. I.; Pervunin, K. S.; Markovich, D. M.

    2016-10-01

    In the paper, the possibility of active control of a cavitating flow over a 2D hydrofoil that replicates a scaled-down model of high-pressure hydroturbine guide vane (GV) was tested. The flow manipulation was implemented by a continuous tangential liquid injection at different flow rates through a spanwise slot in the foil surface. In experiments, the hydrofoil was placed in the test channel at the attack angle of 9°. Different cavitation conditions were reached by varying the cavitation number and injection velocity. In order to study time dynamics and spatial patterns of partial cavities, high-speed imaging was employed. A PIV method was used to measure the mean and fluctuating velocity fields over the hydrofoil. Hydroacoustic measurements were carried out by means of a pressure transducer to identify spectral characteristics of the cavitating flow. It was found that the present control technique is able to modify the partial cavity pattern (or even totally suppress cavitation) in case of stable sheet cavitation and change the amplitude of pressure pulsations at unsteady regimes. The injection technique makes it also possible to significantly influence the spatial distributions of the mean velocity and its turbulent fluctuations over the GV section for non-cavitating flow and sheet cavitation.

  11. 3-D Waveform Modeling of the 11 September 2001 World Trade Center Collapse Events in New York City

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Rhie, J.; Kim, W.

    2010-12-01

    The seismic signals from the collapse of the twin towers of the World Trade Center (WTC), NYC were well recorded by seismographic stations in the northeastern United States. A building collapse can be represented by a vertical single force, which does not generate tangential-component seismic signals during the source process. The waveforms recorded by the Basking Ridge, NJ (BRNJ) station, located due west of the WTC site, show that the amplitude on the tangential component is negligible, indicating that the vertical single-force assumption is valid and that the velocity structure is more or less homogeneous along the propagation path. However, 3-component seismograms recorded at Palisades, NY (PAL), which is located 33.8 km due north of the WTC site along the Hudson River (azimuth = 15.2°), show anomalous features: the amplitude on the tangential component is larger than on the vertical or radial components. This observation may be attributable to complex energy conversion between Rayleigh and Love waves due to the strong low-velocity anomaly associated with unconsolidated sediments under the Hudson River. To test the effects of the low-velocity anomaly on the enhanced tangential-component amplitude, we developed a 3D velocity model that accounts for local geology, such as the unconsolidated sediment layer, the Palisades sill, Triassic sandstone, and crystalline basement, and simulated waveforms at PAL. The preliminary synthetic results show that 3D velocity structure can significantly enhance the amplitude on the tangential component, although not as much as observed. While a more precise 3D model is required to better explain the observations, our results confirm that the low-velocity layer under the Hudson River can enhance the tangential-component amplitude at PAL. This result suggests that a good understanding of the amplitude enhancements for specific event-site pairs may be important for evaluating the seismic hazard of metropolitan New York City.

  12. Conceptual design of the tangentially viewing combined interferometer-polarimeter for ITER density measurements.

    PubMed

    Van Zeeland, M A; Boivin, R L; Brower, D L; Carlstrom, T N; Chavez, J A; Ding, W X; Feder, R; Johnson, D; Lin, L; O'Neill, R C; Watts, C

    2013-04-01

    One of the systems planned for the measurement of electron density in ITER is a multi-channel tangentially viewing combined interferometer-polarimeter (TIP). This work discusses the current status of the design, including a preliminary optical table layout, calibration options, error sources, and performance projections based on a CO2/CO laser system. In the current design, two-color interferometry is carried out at 10.59 μm and 5.42 μm and a separate polarimetry measurement of the plasma induced Faraday effect, utilizing the rotating wave technique, is made at 10.59 μm. The inclusion of polarimetry provides an independent measure of the electron density and can also be used to correct the conventional two-color interferometer for fringe skips at all densities, up to and beyond the Greenwald limit. The system features five chords with independent first mirrors to reduce risks associated with deposition, erosion, etc., and a common first wall hole to minimize penetration sizes. Simulations of performance for a projected ITER baseline discharge show the diagnostic will function as well as, or better than, comparable existing systems for feedback density control. Calculations also show that finite temperature effects will be significant in ITER even for moderate temperature plasmas and can lead to a significant underestimate of electron density. A secondary role TIP will fulfill is that of a density fluctuation diagnostic; using a toroidal Alfvén eigenmode as an example, simulations show TIP will be extremely robust in this capacity and potentially able to resolve coherent mode fluctuations with perturbed densities as low as δn/n ≈ 10⁻⁵.
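The two-color principle rests on the plasma phase shift scaling with wavelength while the vibration-induced phase scales inversely with it, so measurements at two wavelengths separate the two contributions. The sketch below inverts the resulting pair of equations for assumed, illustrative values of line-integrated density and vibration; it is a textbook-level model, not the TIP design calculation.

```python
import numpy as np

r_e = 2.818e-15                   # classical electron radius (m)
lam1, lam2 = 10.59e-6, 5.42e-6    # CO2 and CO laser wavelengths (m)

nel_true = 1.0e20                 # assumed line-integrated density (m^-2)
dL_true = 3.0e-7                  # assumed vibration path-length change (m)

# "Measured" phases: plasma term scales with lambda, vibration with 1/lambda.
phi1 = r_e * lam1 * nel_true + 2 * np.pi * dL_true / lam1
phi2 = r_e * lam2 * nel_true + 2 * np.pi * dL_true / lam2

# Multiply each equation by its wavelength and subtract to cancel vibration,
# then back-substitute for the vibration term.
nel = (lam1 * phi1 - lam2 * phi2) / (r_e * (lam1**2 - lam2**2))
dL = (phi1 - r_e * lam1 * nel) * lam1 / (2 * np.pi)
```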

  13. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  14. The Solar Neighborhood. XLII. Parallax Results from the CTIOPI 0.9 m Program—Identifying New Nearby Subdwarfs Using Tangential Velocities and Locations on the H–R Diagram

    NASA Astrophysics Data System (ADS)

    Jao, Wei-Chun; Henry, Todd J.; Winters, Jennifer G.; Subasavage, John P.; Riedel, Adric R.; Silverstein, Michele L.; Ianna, Philip A.

    2017-11-01

    Parallaxes, proper motions, and optical photometry are presented for 51 systems consisting of 37 cool subdwarf and 14 additional high proper motion systems. Thirty-seven systems have parallaxes reported for the first time, 15 of which have proper motions of at least 1″ yr⁻¹. The sample includes 22 newly identified cool subdwarfs within 100 pc, of which three are within 25 pc, and an additional five subdwarfs from 100 to 160 pc. Two systems, LSR 1610-0040 AB and LHS 440 AB, are close binaries exhibiting clear astrometric perturbations that will ultimately provide important masses for cool subdwarfs. We use the accurate parallaxes and proper motions provided here, combined with additional data from our program and others, to determine that effectively all nearby stars with tangential velocities greater than 200 km s⁻¹ are subdwarfs. We compare a sample of 167 confirmed cool subdwarfs to nearby main sequence dwarfs and Pleiades members on an observational Hertzsprung–Russell diagram using M_V versus (V − K_s) to map trends of age and metallicity. We find that subdwarfs are clearly separated for spectral types K5–M5, indicating that the low metallicities of subdwarfs set them apart in the H–R diagram for (V − K_s) = 3–6. We then apply the tangential velocity cutoff and the subdwarf region of the H–R diagram to stars with parallaxes from Gaia Data Release 1 and the MEarth Project to identify a total of 29 new nearby subdwarf candidates that fall clearly below the main sequence.
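The 200 km s⁻¹ tangential-velocity cut combines proper motion and parallax through the standard relation v_t = 4.74 μ/π (μ in arcsec yr⁻¹, π in arcsec). The star in the example below is hypothetical.

```python
def tangential_velocity(mu_arcsec_per_yr, parallax_arcsec):
    """Tangential velocity in km/s: v_t = 4.74 * mu / parallax."""
    return 4.74 * mu_arcsec_per_yr / parallax_arcsec

# Hypothetical star: proper motion 1.5"/yr at parallax 0.040" (25 pc).
v_t = tangential_velocity(1.5, 0.040)   # ~178 km/s, just below the 200 km/s subdwarf cut
```

The 4.74 factor converts AU yr⁻¹ to km s⁻¹, which is why the relation needs only the two astrometric quantities.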

  15. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    ERIC Educational Resources Information Center

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
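    Levene-type tests of the kind compared in such simulation studies are easy to state explicitly. A minimal pure-Python sketch of the statistic, using the median-centred (Brown–Forsythe) variant that is often preferred when normality fails; the example data are invented:

    ```python
    import statistics

    def levene_statistic(groups, center="median"):
        """Levene's W for homogeneity of variance across k groups.
        center='median' gives the robust Brown-Forsythe variant."""
        k = len(groups)
        n = sum(len(g) for g in groups)
        centers = [statistics.median(g) if center == "median" else statistics.fmean(g)
                   for g in groups]
        # Absolute deviations from each group's center
        z = [[abs(x - c) for x in g] for g, c in zip(groups, centers)]
        zbar_i = [statistics.fmean(zi) for zi in z]
        zbar = sum(map(sum, z)) / n
        between = sum(len(zi) * (m - zbar) ** 2 for zi, m in zip(z, zbar_i)) / (k - 1)
        within = sum((x - m) ** 2 for zi, m in zip(z, zbar_i) for x in zi) / (n - k)
        return between / within

    # Two groups with visibly unequal spread: W is large (compare to F(k-1, n-k))
    w = levene_statistic([[1.0, 1.1, 0.9, 1.05], [3.0, 7.0, -1.0, 9.0]])
    ```

    Under the null of equal variances, W is approximately F-distributed with (k − 1, N − k) degrees of freedom.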

  16. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Brady; Brinkman, Gregory; Townsend, Aaron

    2016-01-01

    Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour-ahead commitment step is included before the dispatch step, and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and a 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0
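    The start and ramp metrics reduce to counting transitions in a unit's commitment and dispatch series. A sketch under assumed metric definitions (the paper's exact definitions may differ); the eight-hour series below is hypothetical:

    ```python
    def count_starts(committed):
        """Number of 0->1 transitions in an hourly on/off commitment series."""
        return sum(1 for a, b in zip(committed, committed[1:]) if a == 0 and b == 1)

    def count_ramps(output_mw, tol=1e-9):
        """Number of hours in which dispatched output changes."""
        return sum(1 for a, b in zip(output_mw, output_mw[1:]) if abs(b - a) > tol)

    # Hypothetical gas CT unit over eight hours
    on = [0, 1, 1, 0, 0, 1, 1, 1]
    mw = [0, 40, 40, 0, 0, 30, 50, 50]
    print(count_starts(on), count_ramps(mw))  # 2 starts, 4 ramps
    ```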

  17. South Atlantic Ocean circulation: Simulation experiments with a quasi-geostrophic model and assimilation of TOPEX/POSEIDON and ERS 1 altimeter data

    NASA Astrophysics Data System (ADS)

    Florenchie, P.; Verron, J.

    1998-10-01

    Simulation experiments of South Atlantic Ocean circulations are conducted with a 1/6°, four-layered, quasi-geostrophic model. By means of a simple nudging data assimilation procedure along satellite tracks, TOPEX/POSEIDON and ERS 1 altimeter measurements are introduced into the model to control the simulation of the basin-scale circulation for the period from October 1992 to September 1994. The model circulation appears to be strongly influenced by the introduction of altimeter data, offering a consistent picture of South Atlantic Ocean circulations. Comparisons with observations show that the assimilating model successfully simulates the kinematic behavior of a large number of surface circulation components. The assimilation procedure enables us to produce schematic diagrams of South Atlantic circulation in which patterns ranging from basin-scale currents to mesoscale eddies are portrayed in a realistic way with respect to their complexity. The major features of the South Atlantic circulation are described and analyzed, with special emphasis on the Brazil-Malvinas Confluence region, the Subtropical Gyre with the formation of frontal structures, and the Agulhas Retroflection. The Agulhas eddy-shedding process has been studied extensively. Fourteen eddies appear to be shed during the 2-year experiment. Because of their strong surface topographic signature, Agulhas eddies have been tracked continuously during the assimilation experiment as they cross the South Atlantic basin westward. Other effects of the assimilation procedure are shown, such as the intensification of the Subtropical Gyre, the appearance of a strong seasonal cycle in the Brazil Current transport, and the increase of the mean Brazil Current transport. This last result, combined with the westward orientation of the Agulhas eddies' trajectories, leads to a southward transport of mean eddy kinetic energy across 30°S.
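    Nudging assimilation of the kind used here relaxes the model state toward observations along satellite tracks. A schematic scalar sketch of one Newtonian-relaxation step (the timescale and values are illustrative, not the model's actual parameters):

    ```python
    def nudge(state, obs, dt, tau):
        """One Newtonian-relaxation (nudging) step: pull the model state
        toward the observed value with relaxation timescale tau."""
        return state + dt * (obs - state) / tau

    # Hypothetical model value relaxed toward an altimeter-derived value of 1.0
    x, obs = 0.0, 1.0
    for _ in range(100):
        x = nudge(x, obs, dt=0.1, tau=1.0)
    print(round(x, 4))  # converges toward 1.0
    ```

    In the full model the same relaxation term is added to the prognostic equations only at grid points near the satellite tracks, leaving the dynamics to spread the information elsewhere.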

  18. Sentinel Chicken Seroconversions Track Tangential Transmission of West Nile Virus to Humans in the Greater Los Angeles Area of California

    PubMed Central

    Kwan, Jennifer L.; Kluh, Susanne; Madon, Minoo B.; Nguyen, Danh V.; Barker, Christopher M.; Reisen, William K.

    2010-01-01

    In Los Angeles, California, West Nile virus (WNV) has followed a pattern of emergence, amplification, subsidence, and resurgence. A time series cross-correlation analysis of human case counts and sentinel chicken seroconversions revealed temporal concordance indicating that chicken seroconversions tracked tangential transmission of WNV from the basic passeriform-Culex amplification cycle to humans rather than antecedent enzootic amplification. Sentinel seroconversions provided the location and time of transmission as opposed to human cases, which frequently were reported late and were assumed to be acquired 2–14 days before disease onset at their residence. Cox models revealed that warming degree-days were associated with the increased risk of seroconversion, whereas elevated herd immunity in peridomestic birds dampened seroconversion risk. Spatially, surveillance data collected within a 5 km radius of flock locations 15–28 days before the bleed date were most predictive of a seroconversion. In urban Los Angeles, sentinel chicken seroconversions could be used as an outcome measure in decision support for emergency intervention. PMID:21036853
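    The warming degree-day covariate in the Cox models accumulates temperature excess above a base. A minimal sketch (the 22 °C base is illustrative, not the threshold fitted in the study):

    ```python
    def degree_days(daily_mean_temps_c, base=22.0):
        """Accumulated warming degree-days: sum of daily mean temperature
        excess above a base temperature (base value is illustrative)."""
        return sum(max(0.0, t - base) for t in daily_mean_temps_c)

    print(degree_days([20.0, 23.0, 25.0]))  # 4.0
    ```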

  19. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Treesearch

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  20. The Microtremor H/V Spectral Ratio: The Physical Basis of the Diffuse Field Assumption

    NASA Astrophysics Data System (ADS)

    Sanchez-Sesma, F. J.

    2016-12-01

    The microtremor H/V spectral ratio (MHVSR) is widely used to obtain the dominant frequency at a site. Despite the success of the MHVSR, some controversy arose regarding its physical basis. One approach is the Diffuse Field Assumption (DFA), under which the diffuse features of the noise are assumed to come from multiple scattering within the medium. According to theory, the average of the autocorrelation is proportional to the directional energy density (DED) and to the imaginary part of the Green's function for coincident source and receiver. Then the square of the MHVSR is a ratio of DEDs which, in a horizontally layered system, is 2 ImG11/ImG33, where ImG11 and ImG33 are the imaginary parts of the Green's functions for the horizontal and vertical components. This has physical implications that emerge from the DED-force duality implicit in the DFA. Consider a surface force at a half-space. The radiated energy is carried away by various wave types, and the proportions of each are precisely the fractions of the energy densities of a diffuse elastic wave field at the free surface. Thus, some properties of applied forces are also characteristics of DEDs. For example, consider a Poisson solid. For a normal point load, 67 per cent of the energy is carried away by Rayleigh waves. For the tangential case, it is less well known that 77 per cent of the energy goes as shear waves. In a full space, 92 per cent of the energy is emitted as shear waves. The horizontal DED at the half-space surface implies significant emission of down-going shear waves, which explains the curious stair-like resonance spectrum of ImG11. Both ImG11 and ImG33 grow linearly with frequency, and this growth represents wave emission. For a layered medium, besides wave emission, the ensuing variations correspond to reflected waves. For high frequencies, ImG33 depends on the properties of the top layer. Reflected body waves are very small, and Rayleigh waves behave in the top layer as in a kind of mini half-space. From the MHVSR one can invert the velocity model.
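    Under the DFA relation above, the H/V ratio follows directly from the two imaginary Green's-function components. A one-line sketch with hypothetical values:

    ```python
    import math

    def hv_ratio(im_g11, im_g33):
        """H/V spectral ratio under the diffuse field assumption for a
        horizontally layered medium: HVSR^2 = 2 Im G11 / Im G33."""
        return math.sqrt(2.0 * im_g11 / im_g33)

    # Hypothetical imaginary Green's-function values at one frequency
    print(hv_ratio(2.0, 1.0))  # 2.0
    ```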

  1. Linking assumptions in amblyopia

    PubMed Central

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  2. Are Prescription Opioids Driving the Opioid Crisis? Assumptions vs Facts.

    PubMed

    Rose, Mark Edmund

    2018-04-01

    Sharp increases in opioid prescriptions, and associated increases in overdose deaths in the 2000s, evoked widespread calls to change perceptions of opioid analgesics. Medical literature discussions of opioid analgesics began emphasizing patient and public health hazards. Repetitive exposure to this information may influence physician assumptions. While highly consequential to patients with pain whose function and quality of life may benefit from opioid analgesics, current assumptions about prescription opioid analgesics, including their role in the ongoing opioid overdose epidemic, have not been scrutinized. Information was obtained by searching PubMed, governmental agency websites, and conference proceedings. Opioid analgesic prescribing and associated overdose deaths both peaked around 2011 and are in long-term decline; the sharp overdose increase recorded in 2014 was driven by illicit fentanyl and heroin. Nonmethadone prescription opioid analgesic deaths, in the absence of co-ingested benzodiazepines, alcohol, or other central nervous system/respiratory depressants, are infrequent. Within five years of initial prescription opioid misuse, 3.6% initiate heroin use. The United States consumes 80% of the world opioid supply, but opioid access is nonexistent for 80% and severely restricted for 4.1% of the global population. Many current assumptions about opioid analgesics are ill-founded. Illicit fentanyl and heroin, not opioid prescribing, now fuel the current opioid overdose epidemic. National discussion has often neglected the potentially devastating effects of uncontrolled chronic pain. Opioid analgesic prescribing and related overdoses are in decline, at great cost to patients with pain who have benefited or may benefit from, but cannot access, opioid analgesic therapy.

  3. Missing data in FFQs: making assumptions about item non-response.

    PubMed

    Lamb, Karen E; Olstad, Dana Lee; Nguyen, Cattram; Milte, Catherine; McNaughton, Sarah A

    2017-04-01

    FFQs are a popular method of capturing dietary information in epidemiological studies and may be used to derive dietary exposures such as nutrient intake or overall dietary patterns and diet quality. As FFQs can involve large numbers of questions, participants may fail to respond to all questions, leaving researchers to decide how to deal with missing data when deriving intake measures. The aim of the present commentary is to discuss the current practice for dealing with item non-response in FFQs and to propose a research agenda for reporting and handling missing data in FFQs. Single imputation techniques, such as zero imputation (assuming no consumption of the item) or mean imputation, are commonly used to deal with item non-response in FFQs. However, single imputation methods make strong assumptions about the missing data mechanism and do not reflect the uncertainty created by the missing data. This can lead to incorrect inference about associations between diet and health outcomes. Although the use of multiple imputation methods in epidemiology has increased, these have seldom been used in the field of nutritional epidemiology to address missing data in FFQs. We discuss methods for dealing with item non-response in FFQs, highlighting the assumptions made under each approach. Researchers analysing FFQs should ensure that missing data are handled appropriately and clearly report how missing data were treated in analyses. Simulation studies are required to enable systematic evaluation of the utility of various methods for handling item non-response in FFQs under different assumptions about the missing data mechanism.
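    The contrast between the two single-imputation strategies discussed above can be made concrete. A minimal sketch with hypothetical servings-per-week responses (multiple imputation, the preferable alternative, would instead draw several plausible values per missing response and pool results):

    ```python
    def zero_impute(responses):
        """Assume non-response means no consumption of the item."""
        return [0.0 if r is None else float(r) for r in responses]

    def mean_impute(responses):
        """Replace non-response with the mean of the observed responses."""
        observed = [r for r in responses if r is not None]
        m = sum(observed) / len(observed)
        return [m if r is None else float(r) for r in responses]

    # Hypothetical servings/week for one FFQ item; two respondents skipped it
    item = [2, 4, None, 6, None]
    print(sum(zero_impute(item)) / len(item))  # 2.4
    print(sum(mean_impute(item)) / len(item))  # 4.0
    ```

    The same missing responses yield different estimated mean intakes under the two assumptions, illustrating how the choice of missing-data mechanism can shift diet-outcome associations.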

  4. Unpacking Assumptions in Research Synthesis: A Critical Construct Synthesis Approach

    ERIC Educational Resources Information Center

    Wolgemuth, Jennifer R.; Hicks, Tyler; Agosto, Vonzell

    2017-01-01

    Research syntheses in education, particularly meta-analyses and best-evidence syntheses, identify evidence-based practices by combining findings across studies whose constructs are similar enough to warrant comparison. Yet constructs come preloaded with social, historical, political, and cultural assumptions that anticipate how research problems…

  5. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    PubMed

    Hsu, Anne; Griffiths, Thomas L

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
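    The strong/weak distinction corresponds to the Bayesian "size principle": under strong sampling each observation is drawn from the hypothesized language, so larger hypotheses are penalized. A minimal sketch with a toy two-hypothesis language (the hypothesis sets and data are invented for illustration):

    ```python
    def strong_likelihood(data, hypothesis):
        """Strong sampling: each item drawn uniformly from the hypothesis,
        so larger hypotheses are penalized (the 'size principle')."""
        if not set(data) <= set(hypothesis):
            return 0.0
        return (1.0 / len(hypothesis)) ** len(data)

    def weak_likelihood(data, hypothesis):
        """Weak sampling: only consistency matters; no size penalty."""
        return 1.0 if set(data) <= set(hypothesis) else 0.0

    small = {"a", "b"}            # a restrictive grammar
    large = {"a", "b", "c", "d"}  # a permissive grammar
    data = ["a", "b", "a", "b"]   # 'c' and 'd' never observed

    # Under strong sampling, the absent constructions count as evidence:
    print(strong_likelihood(data, small) / strong_likelihood(data, large))  # 16.0
    # Under weak sampling, the two grammars remain indistinguishable:
    print(weak_likelihood(data, small) == weak_likelihood(data, large))     # True
    ```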

  6. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    PubMed Central

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  7. Teaching for Tomorrow: An Exploratory Study of Prekindergarten Teachers' Underlying Assumptions about How Children Learn

    ERIC Educational Resources Information Center

    Flynn, Erin E.; Schachter, Rachel E.

    2017-01-01

    This study investigated eight prekindergarten teachers' underlying assumptions about how children learn, and how these assumptions were used to inform and enact instruction. By contextualizing teachers' knowledge and understanding as it is used in practice we were able to provide unique insight into the work of teaching. Participants focused on…

  8. Investigating Teachers' and Students' Beliefs and Assumptions about CALL Programme at Caledonian College of Engineering

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi

    2012-01-01

    This study is set to investigate students' and teachers' perceptions and assumptions about the newly implemented CALL Programme at the School of Foundation Studies, Caledonian College of Engineering, Oman. Two versions of a questionnaire were administered to 24 teachers and 90 students to collect their beliefs and assumptions about the CALL programme. The…

  9. Qualifications and Assignments of Alternatively Certified Teachers: Testing Core Assumptions

    ERIC Educational Resources Information Center

    Cohen-Vogel, Lora; Smith, Thomas M.

    2007-01-01

    By analyzing data from the Schools and Staffing Survey, the authors empirically test four of the core assumptions embedded in current arguments for expanding alternative teacher certification (AC): AC attracts experienced candidates from fields outside of education; AC attracts top-quality, well-trained teachers; AC disproportionately trains…

  10. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    ERIC Educational Resources Information Center

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  11. Using Classroom Data to Teach Students about Data Cleaning and Testing Assumptions

    PubMed Central

    Cummiskey, Kevin; Kuiper, Shonda; Sturdivant, Rodney

    2012-01-01

    This paper discusses the influence that decisions about data cleaning and violations of statistical assumptions can have on drawing valid conclusions to research studies. The datasets provided in this paper were collected as part of a National Science Foundation grant to design online games and associated labs for use in undergraduate and graduate statistics courses that can effectively illustrate issues not always addressed in traditional instruction. Students play the role of a researcher by selecting from a wide variety of independent variables to explain why some students complete games faster than others. Typical project data sets are “messy,” with many outliers (usually from some students taking much longer than others) and distributions that do not appear normal. Classroom testing of the games over several semesters has produced evidence of their efficacy in statistics education. The projects tend to be engaging for students and they make the impact of data cleaning and violations of model assumptions more relevant. We discuss the use of one of the games and associated guided lab in introducing students to issues prevalent in real data and the challenges involved in data cleaning and dangers when model assumptions are violated. PMID:23055992
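    The "messy" completion-time data described above can be screened with a simple interquartile-range rule, one common first pass at data cleaning. A sketch with hypothetical completion times, not data from the project:

    ```python
    import statistics

    def iqr_outliers(xs, k=1.5):
        """Flag values lying more than k interquartile ranges outside
        the quartiles (Tukey's fences)."""
        q1, _, q3 = statistics.quantiles(xs, n=4)
        iqr = q3 - q1
        return [x for x in xs if x < q1 - k * iqr or x > q3 + k * iqr]

    # Hypothetical game completion times (seconds); one student took far longer
    times = [30, 32, 35, 31, 33, 34, 120]
    print(iqr_outliers(times))  # [120]
    ```

    Whether such flagged values are removed, transformed, or modeled explicitly is exactly the kind of decision the labs ask students to justify.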

  12. Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions

    DTIC Science & Technology

    2013-01-01

    assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the...T(·)-time computations, where T(·) is some "nice" (slightly) super-polynomial function (e.g., T(n) = n^(log log log n)). We refer to such proof...put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need

  13. World assumptions, posttraumatic stress and quality of life after a natural disaster: a longitudinal study.

    PubMed

    Nygaard, Egil; Heir, Trond

    2012-06-28

    Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption "the world is just" were related to adverse outcome in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions "life is meaningful" and "feeling that I am a valuable human" were associated with higher levels of quality of life but not with posttraumatic stress. Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories.

  14. Project M: An Assessment of Mission Assumptions

    NASA Technical Reports Server (NTRS)

    Edwards, Alycia

    2010-01-01

    Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1000 days. The robot will be in a lander, fueled by liquid oxygen and liquid methane, and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

  15. Male and Female Assumptions About Colleagues' Views of Their Competence.

    ERIC Educational Resources Information Center

    Heilman, Madeline E.; Kram, Kathy E.

    1983-01-01

    Compared the assumptions of 100 male and female employees about colleagues' views of their performance on a joint task. Results indicated women anticipated more blame for a joint failure, less credit for a joint success, and a work image of lesser effectiveness, regardless of the co-worker's sex. (JAC)

  16. The fall of the Northern Unicorn: tangential motions in the Galactic anticentre with SDSS and Gaia

    NASA Astrophysics Data System (ADS)

    de Boer, T. J. L.; Belokurov, V.; Koposov, S. E.

    2018-01-01

    We present the first detailed study of the behaviour of the stellar proper motion across the entire Galactic anticentre area visible in the Sloan Digital Sky Survey (SDSS) data. We use recalibrated SDSS astrometry in combination with positions from Gaia DR1 to provide tangential motion measurements with a systematic uncertainty <5 km s-1 for the Main Sequence stars at the distance of the Monoceros Ring. We demonstrate that Monoceros members rotate around the Galaxy with azimuthal speeds of ∼230 km s-1, only slightly lower than that of the Sun. Additionally, both vertical and azimuthal components of their motion are shown to vary considerably but gradually as a function of Galactic longitude and latitude. The stellar overdensity in the anticentre region can be split into two components, the narrow, stream-like ACS and the smooth Ring. According to our analysis, these two structures show very similar but clearly distinct kinematic trends, which can be summarized as follows: the amplitude of the velocity variation in v_ϕ and v_z in the ACS is higher compared to the Ring, whose velocity gradients appear to be flatter. Currently, no model available can explain the entirety of the data in this area of the sky. However, the new accurate kinematic map introduced here should provide strong constraints on the genesis of the Monoceros Ring and the associated substructure.

  17. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential

  18. Geostrophic Vortex Dynamics

    DTIC Science & Technology

    1988-10-01

    Generalized Kirchhoff Vortices; The 2-Level Rankine Vortex: Critical Points & Stability; Tripolar Coherent Euler Vortices. ...spontaneously in spectral simulations. One such example is provided by the tripolar vortex structure, which will be examined in detail in Chapter 6. It...of the tripolar coherent vortex structures that have recently been observed in very high resolution numerical simulations of two-dimensional

  19. Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.

    PubMed

    Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A

    2010-07-01

    We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.

  20. A critical assessment of the ecological assumptions underpinning compensatory mitigation of salmon-derived nutrients

    USGS Publications Warehouse

    Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.

    2015-01-01

    We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.

  1. ψ-ontology result without the Cartesian product assumption

    NASA Astrophysics Data System (ADS)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |ϕ⟩ with |⟨ϕ|ψ⟩| ≤ 1/√2 must be ontologically distinct.
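    The overlap threshold in this result can be illustrated numerically. The sketch below (plain NumPy; the two qubit states are hypothetical examples chosen for illustration, not taken from the paper) computes |⟨ϕ|ψ⟩| and compares it to 1/√2:

```python
import numpy as np

# Hypothetical qubit states chosen for illustration only.
psi = np.array([1.0, 0.0])                 # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2.0)  # |+>

overlap = abs(np.vdot(phi, psi))           # |<phi|psi>|
threshold = 1.0 / np.sqrt(2.0)

# This pair sits exactly at the boundary of the theorem's condition,
# so the ontological-distinctness conclusion applies to it.
print(overlap <= threshold + 1e-12)        # → True
```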

  2. The vulnerabilities of teenage mothers: challenging prevailing assumptions.

    PubMed

    SmithBattle, L

    2000-09-01

    The belief that early childbearing leads to poverty permeates our collective understanding. However, recent findings reveal that for many teens, mothering makes sense of the limited life options that precede their pregnancies. The author challenges several assumptions about teenage mothers and offers an alternative to the modern view of the unencumbered self that drives current responses to teen childbearing. This alternative perspective entails a situated view of the self and a broader notion of parenting and citizenship that supports teen mothers and affirms our mutual interdependence.

  3. Using Contemporary Art to Challenge Cultural Values, Beliefs, and Assumptions

    ERIC Educational Resources Information Center

    Knight, Wanda B.

    2006-01-01

    Art educators, like many other educators born or socialized within the main-stream culture of a society, seldom have an opportunity to identify, question, and challenge their cultural values, beliefs, assumptions, and perspectives because school culture typically reinforces those they learn at home and in their communities (Bush & Simmons, 1990).…

  4. Two-dimensional Cascade Investigation of the Maximum Exit Tangential Velocity Component and Other Flow Conditions at the Exit of Several Turbine Blade Designs at Supercritical Pressure Ratios

    NASA Technical Reports Server (NTRS)

    Hauser, Cavour H; Plohr, Henry W

    1951-01-01

    The nature of the flow at the exit of a row of turbine blades for the range of conditions represented by four different blade configurations was evaluated by the conservation-of-momentum principle using static-pressure surveys and by analysis of Schlieren photographs of the flow. It was found that for blades of the type investigated, the maximum exit tangential-velocity component is a function of the blade geometry only and can be accurately predicted by the method of characteristics. A maximum value of exit velocity coefficient is obtained at a pressure ratio immediately below that required for maximum blade loading followed by a sharp drop after maximum blade loading occurs.

  5. Evaluation of surface deformability of lipid nanocapsules by drop tensiometer technique, and its experimental assessment by dialysis and tangential flow filtration.

    PubMed

    Hirsjärvi, Samuli; Bastiat, Guillaume; Saulnier, Patrick; Benoît, Jean-Pierre

    2012-09-15

    Deformability of nanoparticles might affect their behaviour at biological interfaces. Lipid nanocapsules (LNCs) are semi-solid particles resembling a hybrid of polymer nanoparticles and liposomes. Deformability of LNCs of different sizes was modelled by the drop tensiometer technique. Two purification methods, dialysis and tangential flow filtration (TFF), were applied to study the experimental behaviour and deformability of LNCs in order to evaluate whether these properties contributed to membrane passage. Rheological parameters obtained from the drop tensiometer analysis suggested decreasing surface deformability of LNCs with increasing diameter. Dialysis results showed that up to 10% of LNCs can be lost during the process (e.g. by membrane accumulation), but no clear evidence of membrane passage was observed. Instead, LNCs with the initial size and size distribution could be found in the TFF filtrate although the molecular weight cut-off (MWCO) of the membrane used was smaller than the LNC diameter.

  6. Applying Human Factors Principles to Mitigate Usability Issues Related to Embedded Assumptions in Health Information Technology Design

    PubMed Central

    Lowry, Svetlana Z; Patterson, Emily S

    2014-01-01

    Background There is growing recognition that design flaws in health information technology (HIT) lead to increased cognitive work, impact workflows, and produce other undesirable user experiences that contribute to usability issues and, in some cases, patient harm. These usability issues may in turn contribute to HIT utilization disparities and patient safety concerns, particularly among “non-typical” HIT users and their health care providers. Health care disparities are associated with poor health outcomes, premature death, and increased health care costs. HIT has the potential to reduce these disparate outcomes. In the computer science field, it has long been recognized that embedded cultural assumptions can reduce the usability, usefulness, and safety of HIT systems for populations whose characteristics differ from “stereotypical” users. Among these non-typical users, inappropriate embedded design assumptions may contribute to health care disparities. It is unclear how to address potentially inappropriate embedded HIT design assumptions once detected. Objective The objective of this paper is to explain HIT universal design principles derived from the human factors engineering literature that can help to overcome potential usability and/or patient safety issues that are associated with unrecognized, embedded assumptions about cultural groups when designing HIT systems. Methods Existing best practices, guidance, and standards in software usability and accessibility were subjected to a 5-step expert review process to identify and summarize those best practices, guidance, and standards that could help identify and/or address embedded design assumptions in HIT that could negatively impact patient safety, particularly for non-majority HIT user populations. An iterative consensus-based process was then used to derive evidence-based design principles from the data to address potentially inappropriate embedded cultural assumptions. Results Design principles that

  7. Applying Human Factors Principles to Mitigate Usability Issues Related to Embedded Assumptions in Health Information Technology Design.

    PubMed

    Gibbons, Michael C; Lowry, Svetlana Z; Patterson, Emily S

    2014-12-18

    There is growing recognition that design flaws in health information technology (HIT) lead to increased cognitive work, impact workflows, and produce other undesirable user experiences that contribute to usability issues and, in some cases, patient harm. These usability issues may in turn contribute to HIT utilization disparities and patient safety concerns, particularly among "non-typical" HIT users and their health care providers. Health care disparities are associated with poor health outcomes, premature death, and increased health care costs. HIT has the potential to reduce these disparate outcomes. In the computer science field, it has long been recognized that embedded cultural assumptions can reduce the usability, usefulness, and safety of HIT systems for populations whose characteristics differ from "stereotypical" users. Among these non-typical users, inappropriate embedded design assumptions may contribute to health care disparities. It is unclear how to address potentially inappropriate embedded HIT design assumptions once detected. The objective of this paper is to explain HIT universal design principles derived from the human factors engineering literature that can help to overcome potential usability and/or patient safety issues that are associated with unrecognized, embedded assumptions about cultural groups when designing HIT systems. Existing best practices, guidance, and standards in software usability and accessibility were subjected to a 5-step expert review process to identify and summarize those best practices, guidance, and standards that could help identify and/or address embedded design assumptions in HIT that could negatively impact patient safety, particularly for non-majority HIT user populations. An iterative consensus-based process was then used to derive evidence-based design principles from the data to address potentially inappropriate embedded cultural assumptions. Design principles that may help identify and address embedded HIT

  8. Robust optimization methods for cardiac sparing in tangential breast IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca; Lee, Jenny; Chan, Timothy C. Y.

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  9. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    ERIC Educational Resources Information Center

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  10. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    NASA Astrophysics Data System (ADS)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  11. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    ERIC Educational Resources Information Center

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  12. Has the "Equal Environments" assumption been tested in twin studies?

    PubMed

    Eaves, Lindon; Foley, Debra; Silberg, Judy

    2003-12-01

    A recurring criticism of the twin method for quantifying genetic and environmental components of human differences is the necessity of the so-called "equal environments assumption" (EEA) (i.e., that monozygotic and dizygotic twins experience equally correlated environments). It has been proposed to test the EEA by stratifying twin correlations by indices of the amount of shared environment. However, relevant environments may also be influenced by genetic differences. We present a model for the role of genetic factors in niche selection by twins that may account for variation in indices of the shared twin environment (e.g., contact between members of twin pairs). Simulations reveal that stratification of twin correlations by amount of contact can yield spurious evidence of large shared environmental effects in some strata and even give false indications of genotype x environment interaction. The stratification approach to testing the equal environments assumption may be misleading and the results of such tests may actually be consistent with a simpler theory of the role of genetic factors in niche selection.

  13. Bell violation using entangled photons without the fair-sampling assumption.

    PubMed

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  14. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    NASA Astrophysics Data System (ADS)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  15. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last

  16. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  17. The Ubiquitous Laplacian Assumption: Reply to Lee and Wagenmakers (2005)

    ERIC Educational Resources Information Center

    Trafimow, David

    2005-01-01

    In their comment on D. Trafimow, M. D. Lee and E. Wagenmakers argued that the requisite probabilities to use in Bayes's theorem can always be found. In the present reply, the author asserts that M. D. Lee and E. Wagenmakers use a problematic assumption and that finding the requisite probabilities is not straightforward. After describing the…

  18. VITRECTOMY FOR INTERMEDIATE AGE-RELATED MACULAR DEGENERATION ASSOCIATED WITH TANGENTIAL VITREOMACULAR TRACTION: A CLINICOPATHOLOGIC CORRELATION.

    PubMed

    Ziada, Jean; Hagenau, Felix; Compera, Denise; Wolf, Armin; Scheler, Renate; Schaumberger, Markus M; Priglinger, Siegfried G; Schumann, Ricarda G

    2018-03-01

    To describe the morphologic characteristics of the vitreomacular interface in intermediate age-related macular degeneration associated with tangential traction due to premacular membrane formation and to correlate with optical coherence tomography (OCT) findings and clinical data. Premacular membrane specimens were removed sequentially with the internal limiting membrane from 27 eyes of 26 patients with intermediate age-related macular degeneration during standard vitrectomy. Specimens were processed for immunocytochemical staining of epiretinal cells and extracellular matrix components. Ultrastructural analysis was performed using transmission electron microscopy. Spectral domain optical coherence tomography images and patient charts were evaluated in retrospect. Immunocytochemistry revealed hyalocytes and myofibroblasts as predominant cell types. Ultrastructural analysis demonstrated evidence of vitreoschisis in all eyes. Myofibroblasts with contractile properties were observed to span between folds of the internal limiting membrane and vitreous cortex collagen. Retinal pigment epithelial cells or inflammatory cells were not detected. Mean visual acuity (Snellen) showed significant improvement from 20/72 ± 20/36 to 20/41 ± 20/32 (P < 0.001) after a mean follow-up period of 19 months (median, 17 months). During this period, none of the eyes required anti-vascular endothelial growth factor therapy. Fibrocellular premacular proliferation in intermediate age-related macular degeneration predominantly consists of vitreous collagen, hyalocytes, and myofibroblasts with contractile properties. Vitreoschisis and vitreous-derived cells appear to play an important role in traction formation in this subgroup of eyes. In patients with intermediate age-related macular degeneration and contractile premacular membrane, release of traction by vitrectomy with internal limiting membrane peeling results in significant functional and anatomical improvement.

  19. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  20. Public-private partnerships to improve primary healthcare surgeries: clarifying assumptions about the role of private provider activities.

    PubMed

    Mudyarabikwa, Oliver; Tobi, Patrick; Regmi, Krishna

    2017-07-01

    Aim To examine assumptions about public-private partnership (PPP) activities and their role in improving public procurement of primary healthcare surgeries. PPPs were developed to improve the quality of care and patient satisfaction. However, evidence of their effectiveness in delivering health benefits is limited. A qualitative study design was employed. A total of 25 interviews with public sector staff (n=23) and private sector managers (n=2) were conducted to understand their interpretations of assumptions in the activities of private investors and service contractors participating in Local Improvement Finance Trust (LIFT) partnerships. Realist evaluation principles were applied in the data analysis to interpret the findings. Six thematic areas of assumed health benefits were identified: (i) quality improvement; (ii) improved risk management; (iii) reduced procurement costs; (iv) increased efficiency; (v) community involvement; and (vi) sustainable investment. Primary Care Trusts that chose to procure their surgeries through LIFT were expected to support its implementation by providing an environment conducive for the private participants to achieve these benefits. Private participant activities were found to be based on a range of explicit and tacit assumptions perceived helpful in achieving government objectives for LIFT. The success of PPPs depended upon private participants' (i) capacity to assess how PPP assumptions added value to their activities, (ii) effectiveness in interpreting assumptions in their expected activities, and (iii) preparedness to align their business principles to government objectives for PPPs. They risked missing some of the expected benefits because of some factors constraining realization of the assumptions. The ways in which private participants preferred to carry out their activities also influenced the extent to which expected benefits were achieved. Giving more discretion to public than private participants over critical

  1. Effects of various assumptions on the calculated liquid fraction in isentropic saturated equilibrium expansions

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1980-01-01

    The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.
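    The sensitivity of such an expansion to the assumed exponent can be sketched with the ideal-gas isentropic relation T2/T1 = (p2/p1)^((γ−1)/γ); the effective two-phase coefficient below is a hypothetical value for illustration, not one taken from the paper:

```python
# Ideal-gas isentropic relation T2/T1 = (p2/p1)**((g - 1)/g).
gamma_ideal = 1.4   # nitrogen, ideal-gas value
gamma_eff = 1.3     # hypothetical constant two-phase expansion coefficient

p_ratio = 0.5       # pressure halved during the expansion
t_ratio_ideal = p_ratio ** ((gamma_ideal - 1.0) / gamma_ideal)
t_ratio_eff = p_ratio ** ((gamma_eff - 1.0) / gamma_eff)

# A smaller exponent gives a smaller temperature drop for the same
# pressure ratio, so the two-phase estimate stays warmer.
print(t_ratio_eff > t_ratio_ideal)  # → True
```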

  2. Testing assumptions for unbiased estimation of survival of radiomarked harlequin ducks

    USGS Publications Warehouse

    Esler, Daniel N.; Mulcahy, Daniel M.; Jarvis, Robert L.

    2000-01-01

    Unbiased estimates of survival based on individuals outfitted with radiotransmitters require meeting the assumptions that radios do not affect survival, and animals for which the radio signal is lost have the same survival probability as those for which fate is known. In most survival studies, researchers have made these assumptions without testing their validity. We tested these assumptions by comparing interannual recapture rates (and, by inference, survival) between radioed and unradioed adult female harlequin ducks (Histrionicus histrionicus), and for radioed females, between right-censored birds (i.e., those for which the radio signal was lost during the telemetry monitoring period) and birds with known fates. We found that recapture rates of birds equipped with implanted radiotransmitters (21.6 ± 3.0%; x̄ ± SE) were similar to unradioed birds (21.7 ± 8.6%), suggesting that radios did not affect survival. Recapture rates also were similar between right-censored (20.6 ± 5.1%) and known-fate individuals (22.1 ± 3.8%), suggesting that missing birds were not subject to differential mortality. We also determined that capture and handling resulted in short-term loss of body mass for both radioed and unradioed females and that this effect was more pronounced for radioed birds (the difference between groups was 15.4 ± 7.1 g). However, no difference existed in body mass after recapture 1 year later. Our study suggests that implanted radios are an unbiased method for estimating survival of harlequin ducks and likely other species under similar circumstances.
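    The comparison of recapture rates can be made concrete with a rough large-sample z-test on the reported estimates (the test form and the 1.96 cutoff are a standard sketch for illustration, not necessarily the analysis used in the paper):

```python
import math

# Recapture rates (proportion, SE) reported in the abstract.
radioed = (0.216, 0.030)
unradioed = (0.217, 0.086)

def z_for_difference(a, b):
    """Approximate z-score for the difference of two independent
    estimates, combining their standard errors in quadrature."""
    return (a[0] - b[0]) / math.sqrt(a[1] ** 2 + b[1] ** 2)

z = z_for_difference(radioed, unradioed)
# |z| is far below 1.96: no detectable survival effect of the radios.
print(abs(z) < 1.96)  # → True
```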

  3. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates of decline may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  4. Fringe-jump corrected far infrared tangential interferometer/polarimeter for a real-time density feedback control system of NSTX plasmas

    NASA Astrophysics Data System (ADS)

    Juhn, J.-W.; Lee, K. C.; Hwang, Y. S.; Domier, C. W.; Luhmann, N. C.; Leblanc, B. P.; Mueller, D.; Gates, D. A.; Kaita, R.

    2010-10-01

    The far infrared tangential interferometer/polarimeter (FIReTIP) of the National Spherical Torus Experiment (NSTX) has been set up to provide reliable electron density signals for a real-time density feedback control system. This work consists of two main parts: suppression of the fringe jumps that have been prohibiting the plasma density from use in the direct feedback to actuators and the conceptual design of a density feedback control system including the FIReTIP, control hardware, and software that takes advantage of the NSTX plasma control system (PCS). By investigating numerous shot data after July 2009 when the new electronics were installed, fringe jumps in the FIReTIP are well characterized, and consequently the suppressing algorithms are working properly as shown in comparisons with the Thomson scattering diagnostic. This approach is also applicable to signals taken at a 5 kHz sampling rate, which is a fundamental constraint imposed by the digitizers providing inputs to the PCS. The fringe jump correction algorithm, as well as safety and feedback modules, will be included as submodules either in the gas injection system category or a new category of density in the PCS.

  5. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which is mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, we comprehensively investigate the sensitivity of FSI simulations in patient-specific IAs using a multi-stage approach with varying levels of complexity. We start with simulations incorporating several common simplifications (rigid wall, Newtonian fluid, and constant pressure at the outlets) and then remove these simplifications stepwise until we arrive at the most comprehensive FSI simulation. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of IA FSI simulations to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  6. Hydraulic pressures generated in magnetic ionic liquids by paramagnetic fluid/air interfaces inside of uniform tangential magnetic fields.

    PubMed

    Scovazzo, Paul; Portugal, Carla A M; Rosatella, Andreia A; Afonso, Carlos A M; Crespo, João G

    2014-08-15

    Magnetic ionic liquids (MILs), novel magnetic molecules that form "pure magnetic liquids," will follow the Ferrohydrodynamic Bernoulli Relationship. Based on recent literature, the modeling of this fluid system is an open issue and potentially controversial. We imposed uniform magnetic fields parallel to MIL/air interfaces where the capillary forces were negligible, the Quincke problem. The size and location of the bulk fluid, as well as the size and location of the fluid/air interface inside the magnetic field, were varied. The MIL properties varied included density, magnetic susceptibility, chemical structure, and magnetic element. Uniform tangential magnetic fields pulled the MILs up counter to gravity. The forces per area were not a function of the volume, the surface area inside the magnetic field, or the volume displacement. However, the presence of fluid/air interfaces was necessary for the phenomenon. The Ferrohydrodynamic Bernoulli Relationship predicted the phenomenon, with the forces directly related to the fluid's volumetric magnetic susceptibility and the square of the magnetic field strength. [emim][FeCl4] generated the greatest hydraulic head (64 mm, or 910 Pa, at 1.627 T). This work could aid in experimental design, when free surfaces are involved, and in the development of MIL applications. Copyright © 2014 Elsevier Inc. All rights reserved.
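The pressure-susceptibility-field relation described above can be sketched in a few lines. The susceptibility and density values below are assumptions chosen only to land near the magnitudes reported for [emim][FeCl4], not measured values from the study:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T m / A)

def magnetic_pressure(chi_v: float, b: float) -> float:
    """Ferrohydrodynamic Bernoulli pressure rise (Pa) for a linearly
    magnetizable fluid in a tangential field of magnitude b (tesla):
    delta_p = chi_v * b**2 / (2 * mu0)."""
    return chi_v * b ** 2 / (2 * MU0)

def hydraulic_head(delta_p: float, density: float, g: float = 9.81) -> float:
    """Height (m) of a fluid column supporting a pressure delta_p."""
    return delta_p / (density * g)

# Assumed chi_v and density, illustrating the order of magnitude:
dp = magnetic_pressure(8.6e-4, 1.627)  # on the order of 9e2 Pa
head = hydraulic_head(dp, 1450.0)      # on the order of 6e-2 m
```

Note the quadratic dependence on field strength, which is the signature the study reports.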

  7. Fluid-Structure Interactions as Flow Propagates Tangentially Over a Flexible Plate with Application to Voiced Speech Production

    NASA Astrophysics Data System (ADS)

    Westervelt, Andrea; Erath, Byron

    2013-11-01

    Voiced speech is produced by fluid-structure interactions that drive vocal fold motion. Viscous flow features influence the pressure in the gap between the vocal folds (i.e. glottis), thereby altering vocal fold dynamics and the sound that is produced. During the closing phases of the phonatory cycle, vortices form as a result of flow separation as air passes through the divergent glottis. It is hypothesized that the reduced pressure within a vortex core will alter the pressure distribution along the vocal fold surface, thereby aiding in vocal fold closure. The objective of this study is to determine the impact of intraglottal vortices on the fluid-structure interactions of voiced speech by investigating how the dynamics of a flexible plate are influenced by a vortex ring passing tangentially over it. A flexible plate, which models the medial vocal fold surface, is placed in a water-filled tank and positioned parallel to the exit of a vortex generator. The physical parameters of plate stiffness and vortex circulation are scaled with physiological values. As vortices propagate over the plate, particle image velocimetry measurements are captured to analyze the energy exchange between the fluid and flexible plate. The investigations are performed over a range of vortex formation numbers, and lateral displacements of the plate from the centerline of the vortex trajectory. Observations show plate oscillations with displacements directly correlated with the vortex core location.

  8. Mexican-Trained Educators in the United States: Our Assumptions--Their Beliefs.

    ERIC Educational Resources Information Center

    Maroney, Oanh H.; Smith, Howard L.

    A study examined the beliefs and attitudes of Mexican-trained educators regarding instruction for minority and language minority students in light of assumptions that students experience better outcomes with culturally and linguistically compatible teachers. Fifteen educators who received their teacher education in Mexico, and whose native…

  9. 33 CFR Appendix B to Part 157 - Subdivision and Stability Assumptions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... liquids from damaged compartments. (b) The permeabilities are assumed as follows: Intended space use... space located aft is involved in the damage assumption. The machinery space is calculated as a single... between adjacent transverse bulkheads except the machinery space. (b) The extent and the character of the...

  10. 33 CFR Appendix B to Part 157 - Subdivision and Stability Assumptions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... liquids from damaged compartments. (b) The permeabilities are assumed as follows: Intended space use... space located aft is involved in the damage assumption. The machinery space is calculated as a single... between adjacent transverse bulkheads except the machinery space. (b) The extent and the character of the...

  11. 33 CFR Appendix B to Part 157 - Subdivision and Stability Assumptions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... liquids from damaged compartments. (b) The permeabilities are assumed as follows: Intended space use... space located aft is involved in the damage assumption. The machinery space is calculated as a single... between adjacent transverse bulkheads except the machinery space. (b) The extent and the character of the...

  12. 33 CFR Appendix B to Part 157 - Subdivision and Stability Assumptions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... liquids from damaged compartments. (b) The permeabilities are assumed as follows: Intended space use... space located aft is involved in the damage assumption. The machinery space is calculated as a single... between adjacent transverse bulkheads except the machinery space. (b) The extent and the character of the...

  13. 33 CFR Appendix B to Part 157 - Subdivision and Stability Assumptions

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... liquids from damaged compartments. (b) The permeabilities are assumed as follows: Intended space use... space located aft is involved in the damage assumption. The machinery space is calculated as a single... between adjacent transverse bulkheads except the machinery space. (b) The extent and the character of the...

  14. 42 CFR 137.303 - Are Federal or other funds available for training associated with Tribal assumption of...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... associated with Tribal assumption of environmental responsibilities? 137.303 Section 137.303 Public Health... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.303 Are Federal or other funds available for training associated with Tribal assumption of environmental responsibilities? Yes...

  15. 42 CFR 137.303 - Are Federal or other funds available for training associated with Tribal assumption of...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... associated with Tribal assumption of environmental responsibilities? 137.303 Section 137.303 Public Health... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.303 Are Federal or other funds available for training associated with Tribal assumption of environmental responsibilities? Yes...

  16. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
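For orientation, the PBR algorithm itself is a one-line formula (the standard Wade-style form commonly cited in this literature); the population values below are hypothetical:

```python
def potential_biological_removal(n_min: float, r_max: float, f_r: float) -> float:
    """PBR = N_min * (R_max / 2) * F_r

    n_min : conservative minimum population estimate
    r_max : maximum intrinsic rate of population growth
    f_r   : recovery factor between 0 and 1
    """
    return n_min * 0.5 * r_max * f_r

# Hypothetical seabird population: 10,000 birds, R_max = 0.12, F_r = 0.5
pbr = potential_biological_removal(10_000, 0.12, 0.5)  # ~300 birds per year
```

The formula's simplicity is exactly the point of the critique above: density dependence and population trajectory never appear explicitly, so they enter only as implicit assumptions.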

  17. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    PubMed

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
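The inverse probability weighting idea underlying the MSM estimation can be illustrated with a toy sketch (hypothetical data, not the study's actual estimator, which also handles censoring and time-dependent confounding):

```python
def ipw_mean(outcomes, treated, propensity):
    """Toy IPW estimate of the mean outcome under treatment: each
    treated subject is weighted by 1 / P(treated | covariates), so the
    treated group re-creates the covariate mix of the full population."""
    num = sum(y * t / p for y, t, p in zip(outcomes, treated, propensity))
    den = sum(t / p for t, p in zip(treated, propensity))
    return num / den

# Four subjects; the third had only a 25% chance of treatment, so it
# counts four times as heavily as an even-odds subject:
estimate = ipw_mean([1, 0, 1, 1], [1, 0, 1, 0], [0.5, 0.5, 0.25, 0.5])
```

The sensitivity discussed above enters through the propensity values: if the model producing them is misspecified, the weights, and hence the estimate, are biased, which is what super learning is meant to guard against.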

  18. Quantum information versus black hole physics: deep firewalls from narrow assumptions

    NASA Astrophysics Data System (ADS)

    Braunstein, Samuel L.; Pirandola, Stefano

    2018-07-01

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes `all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue `Foundations of quantum mechanics and their impact on contemporary society'.

  19. Quantum information versus black hole physics: deep firewalls from narrow assumptions.

    PubMed

    Braunstein, Samuel L; Pirandola, Stefano

    2018-07-13

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes 'all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  20. Assumptions in the European Union biofuels policy: frictions with experiences in Germany, Brazil and Mozambique.

    PubMed

    Franco, Jennifer; Levidow, Les; Fig, David; Goldfarb, Lucia; Hönicke, Mireille; Mendonça, Maria Luisa

    2010-01-01

    The biofuel project is an agro-industrial development and politically contested policy process where governments increasingly become global actors. European Union (EU) biofuels policy rests upon arguments about societal benefits of three main kinds - namely, environmental protection (especially greenhouse gas savings), energy security and rural development, especially in the global South. Each argument involves optimistic assumptions about what the putative benefits mean and how they can be fulfilled. After examining those assumptions, we compare them with experiences in three countries - Germany, Brazil and Mozambique - which have various links to each other and to the EU through biofuels. In those case studies, there are fundamental contradictions between EU policy assumptions and practices in the real world, involving frictional encounters among biofuel promoters as well as with people adversely affected. Such contradictions may intensify with the future rise of biofuels and so warrant systematic attention.

  1. The NASA/MSFC global reference atmospheric model: MOD 3 (with spherical harmonic wind model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Fletcher, G. R.; Gramling, F. E.; Pace, W. B.

    1980-01-01

    Improvements to the global reference atmospheric model are described. The basic model includes monthly mean values of pressure, density, temperature, and geostrophic winds, as well as quasi-biennial and small and large scale random perturbations. A spherical harmonic wind model for the 25 to 90 km height range is included. Below 25 km and above 90 km, the GRAM program uses the geostrophic wind equations and pressure data to compute the mean wind. In the altitudes where the geostrophic wind relations are used, an interpolation scheme is employed for estimating winds at low latitudes, where the geostrophic wind relations begin to break down. Several sample wind profiles are given, as computed by the spherical harmonic model. User and programmer manuals are presented.
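The geostrophic wind relations used below 25 km and above 90 km follow from the balance between the pressure-gradient and Coriolis forces. A minimal sketch (not the GRAM code itself) also shows why the relations fail at low latitudes, where the Coriolis parameter vanishes:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def geostrophic_wind(dpdx: float, dpdy: float, rho: float, lat_deg: float):
    """Geostrophic wind (u_g, v_g) in m/s from horizontal pressure
    gradients (Pa/m), density (kg/m^3), and latitude (degrees):

        u_g = -(1 / (rho * f)) * dp/dy
        v_g =  (1 / (rho * f)) * dp/dx,   with f = 2 * Omega * sin(lat)

    As lat -> 0, f -> 0 and the computed winds diverge, which is why an
    interpolation scheme is needed near the equator.
    """
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))
    return (-dpdy / (rho * f), dpdx / (rho * f))
```

For a mid-latitude pressure gradient of a millibar per hundred kilometers this yields winds of a few meters per second, the expected order of magnitude.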

  2. The extended evolutionary synthesis: its structure, assumptions and predictions

    PubMed Central

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  3. Drug policy in sport: hidden assumptions and inherent contradictions.

    PubMed

    Smith, Aaron C T; Stewart, Bob

    2008-03-01

    This paper considers the assumptions underpinning the current drugs-in-sport policy arrangements. We examine the assumptions and contradictions inherent in the policy approach, paying particular attention to the evidence that supports different policy arrangements. We find that the current anti-doping policy of the World Anti-Doping Agency (WADA) contains inconsistencies and ambiguities. WADA's policy position is predicated upon four fundamental principles: first, the need for sport to set a good example; secondly, the necessity of ensuring a level playing field; thirdly, the responsibility to protect the health of athletes; and fourthly, the importance of preserving the integrity of sport. A review of the evidence, however, suggests that sport is a problematic institution when it comes to setting a good example for the rest of society. Neither is it clear that sport has an inherent or essential integrity that can only be sustained through regulation. Furthermore, it is doubtful that WADA's anti-doping policy is effective in maintaining a level playing field, or is the best means of protecting the health of athletes. The WADA anti-doping policy is based too heavily on principles of minimising drug use, and gives insufficient weight to the minimisation of drug-related harms. As a result, drug-related harms are being poorly managed in sport. We argue that anti-doping policy in sport would benefit from placing greater emphasis on a harm minimisation model.

  4. The Robustness of the Studentized Range Statistic to Violations of the Normality and Homogeneity of Variance Assumptions.

    ERIC Educational Resources Information Center

    Ramseyer, Gary C.; Tcheng, Tse-Kia

    The present study was directed at determining the extent to which the Type I Error rate is affected by violations in the basic assumptions of the q statistic. Monte Carlo methods were employed, and a variety of departures from the assumptions were examined. (Author)

  5. 76 FR 22925 - Assumption Buster Workshop: Abnormal Behavior Detection Finds Malicious Actors

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-25

    ... Technology Research and Development (NITRD) Program, National Science Foundation. ACTION: Call for... NATIONAL SCIENCE FOUNDATION Assumption Buster Workshop: Abnormal Behavior Detection Finds...: The NCO, on behalf of the Special Cyber Operations Research and Engineering (SCORE) Committee, an...

  6. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    ERIC Educational Resources Information Center

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  7. What's Love Got to Do with It? Rethinking Common Sense Assumptions

    ERIC Educational Resources Information Center

    Trachman, Matthew; Bluestone, Cheryl

    2005-01-01

    One of the most basic tasks in introductory social science classes is to get students to reexamine their common sense assumptions concerning human behavior. This article introduces a shared assignment developed for a learning community that paired an introductory sociology and psychology class. The assignment challenges students to rethink the…

  8. Estimation of the energy loss at the blades in rowing: common assumptions revisited.

    PubMed

    Hofmijster, Mathijs; De Koning, Jos; Van Soest, A J

    2010-08-01

    In rowing, power is inevitably lost as kinetic energy is imparted to the water during push-off with the blades. Power loss is estimated from reconstructed blade kinetics and kinematics. Traditionally, it is assumed that the oar is completely rigid and that force acts strictly perpendicular to the blade. The aim of the present study was to evaluate how reconstructed blade kinematics, kinetics, and average power loss are affected by these assumptions. A calibration experiment with instrumented oars and oarlocks was performed to establish relations between measured signals and oar deformation and blade force. Next, an on-water experiment was performed with a single female world-class rower rowing at constant racing pace in an instrumented scull. Blade kinematics, kinetics, and power loss under different assumptions (rigid versus deformable oars; absence or presence of a blade force component parallel to the oar) were reconstructed. Estimated power losses at the blades are 18% higher when parallel blade force is incorporated. Incorporating oar deformation affects reconstructed blade kinematics and instantaneous power loss, but has no effect on estimation of power losses at the blades. Assumptions on oar deformation and blade force direction have implications for the reconstructed blade kinetics and kinematics. Neglecting parallel blade forces leads to a substantial underestimation of power losses at the blades.

  9. Elasticity reconstruction: Beyond the assumption of local homogeneity

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Daire, Jean-Luc; Van Beers, Bernard E.; Vilgrain, Valerie

    2010-07-01

    Elasticity imaging is a novel domain that is currently gaining significant interest in the medical field. Most inversion techniques are based on the homogeneity assumption, i.e. the local spatial derivatives of the complex shear modulus are ignored. This analysis presents an analytic approach to overcome this limitation: first-order spatial derivatives of the real part of the complex shear modulus are taken into account. Resulting distributions in a gauged breast lesion phantom agree very well with theoretical expectations. An in vivo example of a cholangiocarcinoma demonstrates that the new approach provides maps of the viscoelastic properties that agree much better with expectations from anatomy.
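The local-homogeneity assumption that this work moves beyond can be sketched concretely: if the modulus derivatives are ignored, the time-harmonic wave equation reduces to an algebraic (Helmholtz) inversion, mu = -rho * omega^2 * u / laplacian(u). The sketch below is illustrative only, with synthetic values, and is not the paper's extended method:

```python
import math

def estimate_shear_modulus(u, dx, rho, omega):
    """Algebraic Helmholtz inversion under the local-homogeneity
    assumption: mu = -rho * omega**2 * u / laplacian(u), with the
    Laplacian evaluated at an interior sample by a central finite
    difference. Valid only where mu varies slowly in space."""
    i = len(u) // 2
    lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    return -rho * omega ** 2 * u[i] / lap

# Synthetic plane shear wave in a homogeneous medium (mu = 3000 Pa,
# rho = 1000 kg/m^3, 50 Hz), sampled on a 1 mm grid:
rho, omega, mu_true = 1000.0, 2 * math.pi * 50, 3000.0
k = omega / math.sqrt(mu_true / rho)
u = [math.sin(k * j * 0.001) for j in range(101)]
mu_est = estimate_shear_modulus(u, 0.001, rho, omega)  # close to 3000 Pa
```

At interfaces between tissues the neglected derivative terms are no longer small, which is precisely where the first-order correction of the paper matters.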

  10. Tangential flow ultrafiltration for detection of white spot syndrome virus (WSSV) in shrimp pond water.

    PubMed

    Alavandi, S V; Ananda Bharathi, R; Satheesh Kumar, S; Dineshkumar, N; Saravanakumar, C; Joseph Sahaya Rajan, J

    2015-06-15

    Water represents the most important component in the white spot syndrome virus (WSSV) transmission pathway in aquaculture, yet very little information is available. Detection of viruses in water is a challenge, since their counts will often be too low to be detected by available methods such as polymerase chain reaction (PCR). In order to overcome this difficulty, viruses in water have to be concentrated from large volumes of water prior to detection. In this study, a total of 19 water samples from an aquaculture ecosystem, comprising 3 creeks, 10 shrimp culture ponds, 3 shrimp broodstock tanks, 2 larval rearing tanks of shrimp hatcheries, and a hatchery effluent treatment tank, were subjected to concentration of viruses by ultrafiltration (UF) using tangential flow filtration (TFF). Twenty to 100 L of water from these sources was concentrated to a final volume of 100 mL (200-1000 fold). The efficiency of recovery of WSSV by TFF ranged from 7.5 to 89.61%. WSSV could be successfully detected by PCR in the viral concentrates obtained from water samples of three shrimp culture ponds, and one each of the shrimp broodstock tank, larval rearing tank, and shrimp hatchery effluent treatment tank, with WSSV copy numbers ranging from 6 to 157 mL(-1) by quantitative real-time PCR. The ultrafiltration virus concentration technique enables efficient detection of shrimp viral pathogens in water from aquaculture facilities. It could be used as an important tool to understand the efficacy of biosecurity protocols adopted in an aquaculture facility and to carry out epidemiological investigations of aquatic viral pathogens. Copyright © 2015 Elsevier B.V. All rights reserved.
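The reported 200-1000 fold figures are simple volume ratios; a trivial sketch of the arithmetic:

```python
def fold_concentration(initial_volume_l: float, final_volume_ml: float) -> float:
    """Volumetric concentration factor achieved by tangential flow
    filtration: initial volume (L) over final retentate volume (mL)."""
    return initial_volume_l * 1000.0 / final_volume_ml

# 20-100 L of pond water concentrated to a 100 mL retentate:
low = fold_concentration(20, 100)    # 200-fold
high = fold_concentration(100, 100)  # 1000-fold
```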

  11. Patterns and Assumptions: The Keys to Understanding Organizational Cultures.

    DTIC Science & Technology

    1982-06-01

    assumption of the Navaho Indians: Experience shows that if one asks Navaho Indians about witchcraft, more than 70 per cent will give almost identical...verbal responses. The replies will vary only in this fashion: "Who told you to talk to me about witchcraft?" "Who said that I knew anything about... witchcraft?" "Why do you come to ask about this--who told--you I knew about it?" Here one has a behavioral pattern of the explicit culture, for the structure

  12. Of mental models, assumptions and heuristics: The case of acids and acid strength

    NASA Astrophysics Data System (ADS)

    McClary, Lakeisha Michelle

    This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another.

  13. Diagnosing development. II - A study of rapid cyclone development using analyzed data fields

    NASA Technical Reports Server (NTRS)

    Smith, Phillip; Lupo, Anthony; Zwack, Peter

    1991-01-01

    A diagnosis is presented of the explosive development phase of a cyclone that occurred over the southeastern U.S. during the 24 hour period 1200 GMT January 20 to 1200 GMT January 21, 1979. The Zwack-Okossi development equation is extended to incorporate geostrophic and ageostrophic forcing of the basic development parameter, the geostrophic vorticity tendency. This equation yields reasonable comparability with observed geostrophic vorticity changes and shows positive vorticity advection, latent heat release and thermal advection to be the primary development mechanisms.

  14. How biological background assumptions influence scientific risk evaluation of stacked genetically modified plants: an analysis of research hypotheses and argumentations.

    PubMed

    Rocca, Elena; Andersen, Fredrik

    2017-08-14

    Scientific risk evaluations are constructed by specific evidence, value judgements and biological background assumptions. The latter are the framework-setting suppositions we apply in order to understand some new phenomenon. That background assumptions co-determine choice of methodology, data interpretation, and choice of relevant evidence is an uncontroversial claim in modern basic science. Furthermore, it is commonly accepted that, unless explicated, disagreements in background assumptions can lead to misunderstanding as well as miscommunication. Here, we extend the discussion on background assumptions from basic science to the debate over genetically modified (GM) plants risk assessment. In this realm, while the different political, social and economic values are often mentioned, the identity and role of background assumptions at play are rarely examined. We use an example from the debate over risk assessment of stacked genetically modified plants (GM stacks), obtained by applying conventional breeding techniques to GM plants. There are two main regulatory practices of GM stacks: (i) regulate as conventional hybrids and (ii) regulate as new GM plants. We analyzed eight papers representative of these positions and found that, in all cases, additional premises are needed to reach the stated conclusions. We suggest that these premises play the role of biological background assumptions and argue that the most effective way toward a unified framework for risk analysis and regulation of GM stacks is by explicating and examining the biological background assumptions of each position. Once explicated, it is possible to either evaluate which background assumptions best reflect contemporary biological knowledge, or to apply Douglas' 'inductive risk' argument.

  15. Botallo's error, or the quandaries of the universality assumption.

    PubMed

    Bartolomeo, Paolo; Seidel Malkinson, Tal; de Vito, Stefania

    2017-01-01

    One of the founding principles of human cognitive neuroscience is the so-called universality assumption, the postulate that neurocognitive mechanisms do not show major differences among individuals. Without negating the importance of the universality assumption for the development of cognitive neuroscience, or the importance of single-case studies, here we aim to stress the potential dangers of interpreting the pattern of performance of single patients as conclusive evidence concerning the architecture of the intact neurocognitive system. We take the example of Leonardo Botallo, an Italian surgeon of the Renaissance period, who claimed to have discovered a new anatomical structure of the adult human heart. Unfortunately, Botallo's discovery was erroneous, because what he saw in the few samples he examined was in fact the anomalous persistence of a fetal structure. Botallo's error is a reminder of the necessity to always strive for replication, despite the major hindrance of a publication system heavily biased towards novelty. In the present paper, we briefly discuss variations and anomalies in human brain anatomy and introduce the issue of variability in cognitive neuroscience. We then review some examples of the impact on cognition of individual variations in (1) brain structure, (2) brain functional organization and (3) brain damage. We finally discuss the importance and limits of single-case studies in the neuroimaging era, outline potential ways to deal with individual variability, and draw some general conclusions.

  16. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazards (PH) model, it is important to verify that the explanatory variables analysed satisfy the PH assumption of the model. This paper presents results of a simulation study that compares five test statistics for checking the PH assumption of Cox's model. The test statistics were evaluated under proportional hazards and under the following types of departure from the PH assumption: increasing relative hazards, decreasing relative hazards, crossing hazards, diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detecting non-proportionality across the varieties of non-proportional hazards studied. When applied to illustrative data from the literature, these test statistics performed similarly.
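    The kinds of departure listed above can be made concrete with a minimal, self-contained sketch (hypothetical Weibull parameter values, not data from the study): two Weibull hazards with equal shape parameters are proportional, while unequal shapes produce a time-varying hazard ratio of the kind these test statistics are designed to detect.

```python
# Sketch of what the proportional hazards (PH) assumption means, using
# hypothetical Weibull hazards (illustrative parameter values, not taken
# from the paper). Under PH the ratio h1(t)/h0(t) is constant in time;
# the departures studied in the paper (increasing, decreasing, crossing,
# diverging, non-monotonic relative hazards) all make it time-varying.

def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Proportional case: equal shapes, different scales -> constant ratio
# (equal to (scale0/scale1)**shape at every t).
r_early = weibull_hazard(0.5, 1.5, 1.0) / weibull_hazard(0.5, 1.5, 2.0)
r_late = weibull_hazard(3.0, 1.5, 1.0) / weibull_hazard(3.0, 1.5, 2.0)
assert abs(r_early - r_late) < 1e-12  # hazard ratio does not depend on t

# Non-proportional case: unequal shapes -> the ratio changes with time,
# which is exactly what tests of the PH assumption look for.
q_early = weibull_hazard(0.5, 0.8, 1.0) / weibull_hazard(0.5, 1.5, 1.0)
q_late = weibull_hazard(3.0, 0.8, 1.0) / weibull_hazard(3.0, 1.5, 1.0)
assert abs(q_early - q_late) > 0.1  # ratio varies markedly over time
```

    The Weibull family is convenient here because a single shape parameter controls whether the hazard rises or falls with time, so proportionality reduces to a simple equality of shapes.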

  17. Pupil Mobility, Choice and the Secondary School Market: Assumptions and Realities

    ERIC Educational Resources Information Center

    Dobson, Janet

    2008-01-01

    This paper examines assumptions which underpin the promotion of a quasi-market in the English secondary school system in light of research findings on pupil mobility--that is, children joining and leaving schools at non-standard times. It draws principally on research funded by the Nuffield Foundation and reports findings of a study of pupil…

  18. The Drive to Diversify the Teaching Profession: Narrow Assumptions, Hidden Complexities

    ERIC Educational Resources Information Center

    Santoro, Ninetta

    2015-01-01

    In response to increasing cultural diversity within student populations in Australia as well as Britain, Europe and North America, there have been ongoing calls to diversify the teaching profession. Such a strategy is based on assumptions that teachers who are of ethnic and racial minority are well placed to act as role models for minority…

  19. Propagation of large-amplitude waves on dielectric liquid sheets in a tangential electric field: exact solutions in three-dimensional geometry.

    PubMed

    Zubarev, Nikolay M; Zubareva, Olga V

    2010-10-01

    Nonlinear waves on sheets of dielectric liquid in the presence of an external tangential electric field are studied theoretically. It is shown that waves of arbitrary shape in three-dimensional geometry can propagate along (or against) the electric field direction without distortion, i.e., the equations of motion admit a wide class of exact traveling wave solutions. This unusual situation occurs for nonconducting ideal liquids with high dielectric constants in a sufficiently strong field. Governing equations for the evolution of plane symmetric waves on fluid sheets are derived using conformal variables. A dispersion relation for the evolution of small perturbations of the traveling wave solutions is obtained. It follows from this relation that, regardless of the wave shape, the amplitudes of small-scale perturbations do not increase with time and, hence, the traveling waves are stable. We also study the interaction of counterpropagating symmetric waves with small but finite amplitudes. The corresponding solution of the equations of motion describes the nonlinear superposition of the oppositely directed waves. The results obtained are applicable to the description of long waves on fluid sheets in a horizontal magnetic field.
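    The undistorted propagation claimed above can be restated schematically (an illustrative form, not the paper's governing equations): a traveling wave of arbitrary profile is one whose shape is simply translated at a fixed speed along the field direction,

    \[
    \eta(x, y, t) = F(x \mp v t,\; y),
    \]

    where \(x\) is the coordinate along the applied field, \(v\) is the propagation speed set by the field strength, the upper and lower signs correspond to waves running along or against the field, and \(F\) is an arbitrary (smooth) profile. The paper's result is that such solutions are exact, not merely a small-amplitude approximation.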

  20. An Evaluation of 700 mb Aircraft Reconnaissance Data for Selected Northwest Pacific Tropical Cyclones.

    DTIC Science & Technology

    1983-09-01

    ...research flights into both Atlantic and northwest Pacific tropical cyclones. Information provided by these studies expanded and, in some cases, altered... This assumption implies that the curl of the tangential frictional drag is equal to zero. This further implies that the partial derivative of the sur... 20) at 30 NM, prior to the period of most rapid deepening, is reflected at 60 NM, and possibly at 90 NM. In the case of super typhoon Tip (Fig