Sample records for practices roughly equivalent

  1. Surface changes of enamel after brushing with charcoal toothpaste

    NASA Astrophysics Data System (ADS)

    Pertiwi, U. I.; Eriwati, Y. K.; Irawan, B.

    2017-08-01

    The aim of this study was to determine the change in surface roughness of tooth enamel after brushing with charcoal toothpaste. Thirty specimens were brushed with distilled water (the first group), Strong® Formula toothpaste (the second group), or Charcoal® Formula toothpaste (the third group) for 4 minutes and 40 seconds (equivalent to one month) and for 14 minutes (equivalent to three months), using a soft-bristled toothbrush under a 150-g load. Roughness was measured using a surface roughness tester, and the results were analyzed with repeated-measures ANOVA and one-way ANOVA. The surface roughness of tooth enamel differed significantly (p < 0.05) after brushing for the equivalents of one month and three months. Toothpaste containing charcoal can increase the surface roughness of tooth enamel.
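    As an illustration only (the group labels and roughness values below are hypothetical, not taken from this record), the one-way ANOVA step could be sketched in Python as:

      import numpy as np
      from scipy import stats

      # Hypothetical roughness increases (Ra, in micrometres) for the three groups.
      water    = np.array([0.02, 0.03, 0.01, 0.02])
      strong   = np.array([0.05, 0.04, 0.06, 0.05])
      charcoal = np.array([0.09, 0.11, 0.10, 0.12])

      # One-way ANOVA across the three independent groups.
      f_stat, p_value = stats.f_oneway(water, strong, charcoal)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a group effect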

  2. Surface Features Parameterization and Equivalent Roughness Height Estimation of a Real Subglacial Conduit in the Arctic

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liu, X.; Mankoff, K. D.; Gulley, J. D.

    2016-12-01

    The surfaces of subglacial conduits are very complex, coupling multi-scale roughness, large sinuosity, and cross-sectional variations. These features significantly affect the friction law and the drainage efficiency inside the conduit by altering the velocity and pressure distributions, and thus considerably influence the dynamic development of the conduit. Parameterizing these surface features is a first step towards understanding their hydraulic influences. A MATLAB package is developed to extract the roughness field, the conduit centerline, and the associated area and curvature data from the conduit surface acquired from 3D scanning. Using those data, the characteristic vertical and horizontal roughness scales are then estimated from structure functions. The centerline sinuosities, defined through three concepts, i.e., the traditional definition for a fluvial river, an entropy-based sinuosity, and a curvature-based sinuosity, are also calculated and compared. The cross-sectional area and equivalent circular diameter along the centerline are also calculated. Among those features, the roughness is especially important because of its pivotal role in determining the wall friction, and thus an estimation of the equivalent roughness height is of great importance. To achieve this, the original conduit is first simplified into a straight smooth pipe with the same volume and centerline length, and the roughness field obtained above is then reconstructed onto the simplified pipe. An OpenFOAM-based large-eddy simulation (LES) is then performed on the reconstructed pipe. Considering that the Reynolds number is of the order of 10⁶ and the relative roughness is larger than 5% for 60% of the conduit, we test the validity of the resistance law for completely rough pipes. The friction factor is calculated from the pressure drop and mean velocity in the simulation; combining these, the equivalent roughness height can be calculated. However, whether this assumption is applicable to the current case of high relative roughness remains an open question. Two other roughness heights, i.e., the vertical roughness scale based on the structure functions and the viscous sublayer thickness determined from the wall boundary layer, are also calculated and compared with the equivalent roughness height.
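    The vertical and horizontal roughness scales mentioned above come from structure functions of the scanned height field. A minimal one-dimensional sketch (function and variable names are illustrative assumptions, not the authors' MATLAB package):

      import numpy as np

      def second_order_structure_function(h, dx, max_lag=None):
          # S2(r) = < (h(x + r) - h(x))^2 > for a height profile h sampled at spacing dx.
          max_lag = max_lag or len(h) // 2
          lags = np.arange(1, max_lag)
          s2 = np.array([np.mean((h[lag:] - h[:-lag]) ** 2) for lag in lags])
          return lags * dx, s2

      # A vertical roughness scale can be taken as the square root of the plateau of
      # S2 at large separations; a horizontal scale is roughly the separation at
      # which S2 first approaches that plateau.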

  3. Relation between skin micro-topography, roughness, and skin age.

    PubMed

    Trojahn, C; Dobos, G; Schario, M; Ludriksone, L; Blume-Peytavi, U; Kottner, J

    2015-02-01

    The topography of the skin surface consists of lines, wrinkles, and scales. Primary and secondary lines form a network-like structure that may be identified as polygons. Skin surface roughness measurements are widely applied in dermatological research and practice, but the relation between roughness parameters and their anatomical equivalents is unclear. This study aimed to investigate whether the number of closed polygons (NCP) per measurement field can be used as a reliable parameter to measure skin surface topography. For this purpose, we analysed the relation between skin surface roughness parameters and NCP in different age groups. Images of the volar forearm skin of 38 subjects (14 children, 12 younger adults, and 12 older adults) were obtained with the VisioScan VC98. The NCP was counted by three independent researchers and selected roughness parameters were measured. Interrater reliability of counting the number of closed polygons and correlations between NCP, roughness parameters, and age were calculated. The mean NCP/mm² in children was 3.1 (SD 1.1), in younger adults 1.0 (SD 0.7), and in older adults 1.0 (SD 0.9). The interrater reliability was 0.9. A negative correlation of NCP/mm² with age was observed, whereas measured roughness parameters were positively associated with age. NCP/mm² was weakly related to skin roughness. The NCP/mm² is a reproducible parameter for characterizing the skin surface topography. It is proposed as an additional parameter in dermatological research and practice because it represents distinct aspects of the cutaneous profile not covered by established roughness parameters. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Rough Finite State Automata and Rough Languages

    NASA Astrophysics Data System (ADS)

    Arulprakasam, R.; Perumal, R.; Radhakrishnan, M.; Dare, V. R.

    2018-04-01

    Sumita Basu [1, 2] recently introduced the concepts of rough finite state (semi)automata, rough grammars, and rough languages. Motivated by the work of [1, 2], in this paper we investigate some closure properties of rough regular languages and establish the equivalence between the class of rough languages generated by rough grammars and the class of rough regular languages accepted by rough finite automata.

  5. Faithfulness of Recurrence Plots: A Mathematical Proof

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Komuro, Motomasa; Horai, Shunsuke; Aihara, Kazuyuki

    It is known in practice that a recurrence plot, a two-dimensional visualization of time series data, can contain almost all information related to the underlying dynamics except for its spatial scale, because we can recover a rough shape of the original time series from the recurrence plot even if the original time series is multivariate. We here provide a mathematical proof that the metric defined by a recurrence plot [Hirata et al., 2008] is equivalent to the Euclidean metric under mild conditions.
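    For readers unfamiliar with the construction, a recurrence plot is a thresholded pairwise-distance matrix of the (possibly multivariate) time series. A minimal sketch (the data and threshold are illustrative assumptions):

      import numpy as np

      def recurrence_plot(x, eps):
          # x has shape (n_samples, n_dims); returns R[i, j] = 1 if ||x_i - x_j|| <= eps.
          d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          return (d <= eps).astype(int)

      # Example with a univariate series reshaped to (n, 1); eps is a small
      # fraction of the data range, chosen here purely for illustration.
      t = np.linspace(0, 8 * np.pi, 300)
      series = np.sin(t).reshape(-1, 1)
      R = recurrence_plot(series, eps=0.2)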

  6. Hydraulic resistance of submerged flexible vegetation

    NASA Astrophysics Data System (ADS)

    Stephan, Ursula; Gutknecht, Dieter

    2002-12-01

    The main research objective was to analyse the influence of roughness caused by aquatic vegetation (av), in particular submerged macrophytes, on the overall flow field. These plants are highly flexible and behave differently depending on the flow situation. They also react strongly to the flow field, so the roughness becomes variable and dynamic. Conventional flow formulas, such as the Manning or the Strickler formula, are one-dimensional and based on integral flow parameters. They are not suitable for quantifying the roughness of av, because the flow is complex and multi-dimensional due to the variable behaviour of the plants. Therefore, the present investigation concentrates on the definition of a characteristic hydraulic roughness parameter to quantify the resistance of av. Within this investigation, laboratory experiments were carried out with three different types of av, chosen with respect to varying plant structures as well as stem lengths. Velocity measurements above these plants were conducted to determine the relationship between the hydraulic roughness and the deflected plant height. The deflected plant height is used as the geometric roughness parameter, whereas the equivalent sand roughness based on the universal logarithmic law modified by Nikuradse is used as the hydraulic roughness parameter. The influence of relative submergence on the hydraulic roughness was also analysed. The analysis of the velocity measurements illustrates that the equivalent sand roughness and the zero-plane displacement of the logarithmic law are correlated with the deflected plant height and are approximately equal to this height.
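    The hydraulic roughness parameterization referred to here follows the rough-wall logarithmic law; in its common Nikuradse-type textbook form with zero-plane displacement d and equivalent sand roughness k_s (quoted as the standard expression, not verbatim from the paper):

      \frac{u(z)}{u_*} = \frac{1}{\kappa}\,\ln\!\left(\frac{z - d}{k_s}\right) + 8.5, \qquad \kappa \approx 0.4 .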

  7. Bushfires, 2003. A rural GP's perspective.

    PubMed

    Robinson, Mark

    2003-12-01

    Extensive bushfires in January and February of 2003 had a major impact on many communities in northeast Victoria, East Gippsland, southern New South Wales and Canberra. These fires eventually engulfed an area roughly equivalent to the entire area of Germany. This article describes the impact of the fires and the role of the general practitioner in the emergency response, and presents recommendations for the role of general practice in future disaster planning. General practitioners have critical roles in the provision of round-the-clock general medical services to their communities in times of bushfire or natural disaster. They also act as gatekeepers to mental health services, psychiatric referral, and counselling alongside other community-based programs. Divisions of general practice have a pivotal role to play in disaster plans, particularly in coordinating the maintenance of ongoing medical services, facilitating communication between GPs and essential services, and integrating general practice into post-disaster recovery.

  8. Effects of fracture surface roughness and shear displacement on geometrical and hydraulic properties of three-dimensional crossed rock fracture models

    NASA Astrophysics Data System (ADS)

    Huang, Na; Liu, Richeng; Jiang, Yujing; Li, Bo; Yu, Liyuan

    2018-03-01

    While shear-flow behavior through fractured media has so far been studied mainly at the single-fracture scale, this work presents a numerical analysis of the shear effect on the hydraulic response of a 3D crossed fracture model. The analysis was based on a series of crossed fracture models in which the effects of fracture surface roughness and shear displacement were considered. The rough fracture surfaces were generated using the modified successive random additions (SRA) algorithm. The shear displacement was applied to one fracture, while the other fracture shifted along with the upper and lower surfaces of the sheared fracture. The simulation results reveal the development and variation of preferential flow paths through the model during shear, accompanied by changes in the flow rate ratios between the two flow planes at the outlet boundary. The average contact area accounts for approximately 5-27% of the fracture planes during shear, but the actual calculated flow area is about 38-55% of the fracture planes, much smaller than the noncontact area. The equivalent permeability either increases or decreases as the shear displacement increases from 0 to 4 mm, depending on the aperture distribution of the intersection between the two fractures. When the shear displacement increases further, up to 20 mm, the equivalent permeability first increases sharply and then continues to increase with a lower gradient. The equivalent permeability of the rough fractured model is about 26-80% of that calculated from the parallel plate model, and the equivalent permeability in the direction perpendicular to the shear direction is approximately 1.31-3.67 times larger than that in the direction parallel to it. These results provide a fundamental understanding of fluid flow through crossed fracture models under shear.
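    The parallel plate model used as the reference for the equivalent permeability is the standard cubic law for a smooth fracture of uniform aperture e (a textbook relation given here as background, not quoted from the record):

      Q = -\frac{w\,e^{3}}{12\,\mu}\,\frac{\mathrm{d}p}{\mathrm{d}x}, \qquad k_{\mathrm{eq}} = \frac{e^{2}}{12},

    where w is the fracture width, μ the dynamic viscosity, and dp/dx the pressure gradient.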

  9. Progress in radar snow research. [Brookings, South Dakota

    NASA Technical Reports Server (NTRS)

    Stiles, W. H.; Ulaby, F. T.; Fung, A. K.; Aslam, A.

    1981-01-01

    Multifrequency measurements of the radar backscatter from snow-covered terrain were made at several sites in Brookings, South Dakota, during March 1979. The data are used to examine the response of the scattering coefficient to the following parameters: (1) snow surface roughness, (2) snow liquid water content, and (3) snow water equivalent. The results indicate that the scattering coefficient is insensitive to snow surface roughness if the snow is dry. For wet snow, however, surface roughness can have a strong influence on the magnitude of the scattering coefficient. These observations confirm the results predicted by a theoretical model that describes the snow as a volume of Rayleigh scatterers bounded by a Gaussian random surface. In addition, empirical models were developed to relate the scattering coefficient to snow liquid water content, and the dependence of the scattering coefficient on water equivalent was evaluated for both wet and dry snow conditions.

  10. Selection of representative embankments based on rough set - fuzzy clustering method

    NASA Astrophysics Data System (ADS)

    Bin, Ou; Lin, Zhi-xiang; Fu, Shu-yan; Gao, Sheng-song

    2018-02-01

    A prerequisite for the comprehensive evaluation of embankment safety is the selection of representative unit embankments; on the basis of dividing the levee into units, the influencing factors and the classification of the unit embankments are drafted. Based on rough set-fuzzy clustering, the influencing factors of the unit embankments are measured with quantitative and qualitative indexes. A fuzzy similarity matrix of the unit embankments is constructed, and the fuzzy equivalence matrix is then calculated from the fuzzy similarity matrix by the square method. By setting a threshold on the fuzzy equivalence matrix, the unit embankments are clustered, and the representative unit embankments are selected from the resulting classes of embankments.
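    The "square method" referred to here is the usual max-min transitive closure of a fuzzy similarity matrix obtained by repeated squaring. A minimal sketch (the composition rule and stopping test are the standard ones; the clustering threshold is an illustrative parameter):

      import numpy as np

      def maxmin_compose(a, b):
          # (a o b)[i, j] = max_k min(a[i, k], b[k, j])
          return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

      def fuzzy_equivalence(similarity, tol=1e-9):
          # Square the fuzzy similarity matrix until R o R == R (transitive closure).
          r = similarity.copy()
          while True:
              r2 = maxmin_compose(r, r)
              if np.allclose(r2, r, atol=tol):
                  return r2
              r = r2

      def lambda_cut(equivalence, lam):
          # Thresholding the equivalence matrix yields crisp clusters of unit embankments.
          return (equivalence >= lam).astype(int)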

  11. Feather roughness reduces flow separation during low Reynolds number glides of swifts.

    PubMed

    van Bokhorst, Evelien; de Kat, Roeland; Elsinga, Gerrit E; Lentink, David

    2015-10-01

    Swifts are aerodynamically sophisticated birds with a small arm and large hand wing that provides them with exquisite control over their glide performance. However, their hand wings have a seemingly unsophisticated surface roughness that is poised to disturb flow. This roughness of about 2% chord length is formed by the valleys and ridges of overlapping primary feathers with thick protruding rachides, which make the wing stiffer. An earlier flow study of laminar-turbulent boundary layer transition over prepared swift wings suggested that swifts can attain laminar flow at a low angle of attack. In contrast, aerodynamic design theory suggests that airfoils must be extremely smooth to attain such laminar flow. In hummingbirds, which have similarly rough wings, flow measurements on a 3D printed model suggest that the flow separates at the leading edge and becomes turbulent well above the rachis bumps in a detached shear layer. The aerodynamic function of wing roughness in small birds is, therefore, not fully understood. Here, we performed particle image velocimetry and force measurements to compare smooth versus rough 3D-printed models of the swift hand wing. The high-resolution boundary layer measurements show that the flow over rough wings is indeed laminar at a low angle of attack and a low Reynolds number, but becomes turbulent at higher values. In contrast, the boundary layer over the smooth wing forms open laminar separation bubbles that extend beyond the trailing edge. The boundary layer dynamics of the smooth surface varies non-linearly as a function of angle of attack and Reynolds number, whereas the rough surface boasts more consistent turbulent boundary layer dynamics. Comparison of the corresponding drag values, lift values and glide ratios suggests, however, that glide performance is equivalent. The increased structural performance, boundary layer robustness and equivalent aerodynamic performance of rough wings might have provided small (proto) birds with an evolutionary window to high glide performance. © 2015. Published by The Company of Biologists Ltd.

  12. Field theoretic approach to roughness corrections

    NASA Astrophysics Data System (ADS)

    Wu, Hua Yao; Schaden, Martin

    2012-02-01

    We develop a systematic field theoretic description of roughness corrections to the Casimir free energy of a massless scalar field in the presence of parallel plates with mean separation a. Roughness is modeled by specifying a generating functional for correlation functions of the height profile, the two-point correlation function being characterized by its variance, σ², and correlation length, ℓ. We obtain the partition function of a massless scalar quantum field interacting with the height profile of the surface via a δ-function potential. The partition function is given by a holographic reduction of this model to three coupled scalar fields on a two-dimensional plane. The original three-dimensional space with a flat parallel plate at a distance a from the rough plate is encoded in the nonlocal propagators of the surface fields on its boundary. Feynman rules for this equivalent 2+1-dimensional model are derived and its counterterms constructed. The two-loop contribution to the free energy of this model gives the leading roughness correction. The effective separation, aeff, to a rough plate is measured to a plane that is displaced a distance ρ ∝ σ²/ℓ from the mean of its profile. This definition of the separation eliminates corrections to the free energy of order 1/a⁴ and results in unitary scattering matrices. We obtain an effective low-energy model in the limit ℓ ≪ a. It determines the scattering matrix and equivalent planar scattering surface of a very rough plate in terms of the single length scale ρ. The Casimir force on a rough plate is found to always weaken with decreasing correlation length ℓ. The two-loop approximation to the free energy interpolates between the free energy of the effective low-energy model and that of the proximity force approximation; the force on a very rough plate with σ ≳ 0.5ℓ is weaker than on a planar Dirichlet surface at any separation.

  13. The art and science of cross-border management.

    PubMed

    1998-07-01

    Mergers, acquisitions, and even plain expansion mean that more practices operate in multiple states. But there are different employment laws in each state, and benefits such as health insurance can vary from market to market. The first step to effective management of staffs across state lines is to work through the maze of legal differences. Then you have to make sure that the benefits you give to all your staff members are roughly equivalent. That may mean phasing out some benefits at one location, or providing completely new ones in another. Once you get through the nitty-gritty of legal and benefit analysis, you have to work on remote management issues. It can be difficult to merge the cultures at two practices, and even more difficult to get the new staff to trust you. These issues can often take longer to resolve than the legal ones.

  14. Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions

    NASA Astrophysics Data System (ADS)

    Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob

    2012-01-01

    For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.

  15. An intermittency model for predicting roughness induced transition

    NASA Astrophysics Data System (ADS)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  16. Considerations of Methods of Improving Helicopter Efficiency

    NASA Technical Reports Server (NTRS)

    Dingeldein, Richard C.

    1961-01-01

    Recent NASA helicopter research indicates that significant improvements in hovering efficiency, up to 7 percent, are available from the use of a special airfoil section formed by combining an NACA 632A015 thickness distribution with an NACA 230 mean line. This airfoil should be considered for flying-crane-type helicopters. Application of standard leading-edge roughness causes a large drop in efficiency; however, the cambered rotor is shown to retain its superiority over a rotor having a symmetrical airfoil when both rotors have leading-edge roughness. A simple analysis of available rotor static-thrust data indicates a much smaller effect of compressibility on the rotor profile-drag power than predicted from calculations. Preliminary results of an experimental study of helicopter parasite drag indicate the practicability of achieving an equivalent flat-plate parasite-drag area of less than 4 square feet for a rotor-head-pylon-fuselage configuration (landing gear retracted) in the 2,000-pound minimum-flying-weight class. The large drag penalty of conventional skid-type landing gear (3.6 square feet) can be reduced by two-thirds by careful design. Clean, fair, and smooth fuselages that tend to have narrow, deep cross sections are shown to have advantages from the standpoint of drag and download. A ferry range of the order of 1,500 miles is indicated to be practicable for the small helicopter considered.

  17. Atomic force microscopy of orb-spider-web-silks to measure surface nanostructuring and evaluate silk fibers per strand

    NASA Astrophysics Data System (ADS)

    Kane, D. M.; Naidoo, N.; Staib, G. R.

    2010-10-01

    An atomic force microscopy (AFM) study is used to measure the surface topology and roughness of radial and capture spider silks on the micro- and nanoscale. This is done for silks of the orb weaver spider Argiope keyserlingi. Capture silk has a surface roughness that is five times less than that of radial silk. As an optical surface, the capture silk has an equivalent flatness of λ/100 (5-6 nm deep surface features), comparable to a very highly polished optical surface. AFM does show the number of silk fibers that make up a silk thread, but geometric distortion occurs during sample preparation. This prevented AFM from accurately measuring the silk topology on the microscale in this study.

  18. On twelve types of covering-based rough sets.

    PubMed

    Safari, Samira; Hooshmandasl, Mohammad Reza

    2016-01-01

    Covering approximation spaces are a generalization of equivalence-based rough set theories. In this paper, we consider twelve types of covering-based approximation operators obtained by combining four types of covering lower approximation operators with three types of covering upper approximation operators. We then study the properties of these new pairs and show that they share most of the common properties of existing covering approximation pairs. Finally, the relation between these new pairs is studied.
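    As background, the equivalence-based lower and upper approximations that covering-based operators generalize can be sketched as follows (a minimal illustration of the classical definitions, not of the twelve covering-based pairs studied in the paper):

      def rough_approximations(classes, target):
          # classes: iterable of equivalence classes (sets) partitioning the universe.
          # Lower approximation: union of classes fully contained in the target set.
          # Upper approximation: union of classes that intersect the target set.
          target = set(target)
          lower, upper = set(), set()
          for c in classes:
              c = set(c)
              if c <= target:
                  lower |= c
              if c & target:
                  upper |= c
          return lower, upper

      # Example: universe {1..6} partitioned into {1,2}, {3,4}, {5,6}; target {1,2,3}.
      lo, up = rough_approximations([{1, 2}, {3, 4}, {5, 6}], {1, 2, 3})
      # lo == {1, 2}; up == {1, 2, 3, 4}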

  19. Ray Tracing and Modal Methods for Modeling Radio Propagation in Tunnels With Rough Walls

    PubMed Central

    Zhou, Chenming

    2017-01-01

    At the ultrahigh frequencies common to portable radios, tunnels such as mine entries are often modeled by hollow dielectric waveguides. The roughness condition of the tunnel walls has an influence on radio propagation and therefore should be taken into account when an accurate power prediction is needed. This paper investigates how wall roughness affects radio propagation in tunnels, and presents a unified ray tracing and modal method for modeling radio propagation in tunnels with rough walls. First, general analytical formulas for modeling the influence of wall roughness are derived, based on the modal method and the ray tracing method, respectively. Second, the equivalence of the ray tracing and modal methods in the presence of wall roughness is mathematically proved, by showing that the ray tracing-based analytical formula converges to the modal-based formula through the Poisson summation formula. The derivation and findings are verified by simulation results based on ray tracing and modal methods. PMID:28935995

  20. Simplified Approach to Predicting Rough Surface Transition

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  1. Effect of Blade-surface Finish on Performance of a Single-stage Axial-flow Compressor

    NASA Technical Reports Server (NTRS)

    Moses, Jason J.; Serovy, George K.

    1951-01-01

    A set of modified NACA 5509-34 rotor and stator blades was investigated with rough-machined, hand-filed, and highly polished surface finishes over a range of weight flows at six equivalent tip speeds from 672 to 1092 feet per second to determine the effect of blade-surface finish on the performance of a single-stage axial-flow compressor. Surface-finish effects decreased with increasing compressor speed and with decreasing flow at a given speed. In general, finishing blade surfaces below the roughness that may be considered aerodynamically smooth on the basis of an admissible-roughness formula will have no effect on compressor performance.
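    The admissible-roughness formula alluded to is commonly quoted (e.g. in Schlichting's boundary-layer treatment) in the form below; this is the standard textbook criterion, offered as context rather than taken from the report:

      k_{\mathrm{adm}} \lesssim \frac{100\,\nu}{U},

    where ν is the kinematic viscosity and U the local flow velocity; roughness heights below k_adm leave the surface aerodynamically smooth.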

  2. Surface roughness effects on contact line motion with small capillary number

    NASA Astrophysics Data System (ADS)

    Yang, Feng-Chao; Chen, Xiao-Peng; Yue, Pengtao

    2018-01-01

    In this work, we investigate how surface roughness influences contact line dynamics by simulating forced wetting in a capillary tube. The tube wall is decorated with microgrooves and is intrinsically hydrophilic. A phase-field method is used to capture the fluid interface and the moving contact line. According to the numerical results, a criterion is proposed to judge whether the grooves are entirely wetted or not at vanishing capillary numbers. When the contact line moves over a train of grooves, the apparent contact angle exhibits a periodic nature, no matter whether the Cassie-Baxter or the Wenzel state is achieved. The oscillation amplitude of apparent contact angle is analyzed and found to be inversely proportional to the interface area. The contact line motion can be characterized as stick-jump-slip in the Cassie-Baxter state and stick-slip in the Wenzel state. By comparing to the contact line dynamics on smooth surfaces, equivalent microscopic contact angles and slip lengths are obtained. The equivalent slip length in the Cassie-Baxter state agrees well with the theoretical model in the literature. The equivalent contact angles are, however, much greater than the predictions of the Cassie-Baxter model and the Wenzel model for equilibrium stable states. Our results reveal that the pinning of the contact line at surface defects effectively enhances the hydrophobicity of rough surfaces, even when the surface material is intrinsically hydrophilic and the flow is under the Wenzel state.
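    For reference, the two equilibrium models against which the equivalent contact angles are compared are the standard Wenzel and Cassie-Baxter relations (textbook forms, with θ_Y the intrinsic Young angle, r the roughness ratio, and f the wetted solid fraction):

      \cos\theta_{W} = r\,\cos\theta_{Y}, \qquad \cos\theta_{CB} = f\,(1 + \cos\theta_{Y}) - 1 .

    The observation that the equivalent angles exceed both predictions is what motivates the pinning interpretation above.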

  3. Influence of adhesive rough surface contact on microswitches

    NASA Astrophysics Data System (ADS)

    Wu, Ling; Rochus, V.; Noels, L.; Golinval, J. C.

    2009-12-01

    Stiction is a major failure mode in microelectromechanical systems (MEMS). Undesirable stiction, which results from contact between surfaces, severely threatens the reliability of MEMS, as it breaks the actuation function of MEMS switches, for example. Although it may be possible to avoid stiction by increasing restoring forces using high spring constants, the actuation voltage then has to be increased significantly, which reduces the efficiency. In our research, an electrostatic-structural analysis is performed to estimate the proper design range of the equivalent spring constant, which is the main factor of the restoring force in MEMS switches. The upper limit of the equivalent spring constant is evaluated based on the initial gap width, the dielectric thickness, and the expected actuation voltage. The lower limit is assessed from the value of the adhesive forces between the two contacting rough surfaces. The MEMS devices studied here are assumed to work in a dry environment. In these operating conditions only the van der Waals forces have to be considered for adhesion. A statistical model is used to simulate the rough surface, and Maugis's model is combined with Kim's expansion to calculate the adhesive forces. In the resulting model, the critical value of the spring stiffness depends on the material and surface properties, such as the elastic modulus, surface energy, and surface roughness. The aim of this research is to propose simple rules for design purposes.
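    One common way to express the upper limit on the equivalent spring constant is through the parallel-plate pull-in relation; the form below is the standard textbook result (with an effective gap g_eff that can absorb the dielectric thickness) and is stated as an assumption about the kind of bound meant, not quoted from the paper:

      V_{PI} = \sqrt{\frac{8\,k\,g_{\mathrm{eff}}^{3}}{27\,\varepsilon_{0}\,A}} \quad\Longrightarrow\quad k \le \frac{27\,\varepsilon_{0}\,A\,V_{\mathrm{act}}^{2}}{8\,g_{\mathrm{eff}}^{3}},

    where A is the electrode area and V_act the available actuation voltage.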

  4. Surface correlations of hydrodynamic drag for transitionally rough engineering surfaces

    NASA Astrophysics Data System (ADS)

    Thakkar, Manan; Busse, Angela; Sandham, Neil

    2017-02-01

    Rough surfaces are usually characterised by a single equivalent sand-grain roughness height scale that typically needs to be determined from laboratory experiments. Recently, this method has been complemented by a direct numerical simulation approach, whereby representative surfaces can be scanned and the roughness effects computed over a range of Reynolds numbers. This development raises the prospect, over the coming years, of having enough data for different types of rough surfaces to be able to relate surface characteristics to roughness effects, such as the roughness function that quantifies the downward displacement of the logarithmic law of the wall. In the present contribution, we use simulation data for 17 irregular surfaces at the same friction Reynolds number, at which they are in the transitionally rough regime. All surfaces are scaled to the same physical roughness height. Mean streamwise velocity profiles show a wide range of roughness function values, while the velocity defect profiles show a good collapse. Profile peaks of the turbulent kinetic energy also vary depending on the surface. We then consider which surface properties are important and how new properties can be incorporated into an empirical model, the accuracy of which can then be tested. Optimised models with several roughness parameters are systematically developed for the roughness function and the profile peak turbulent kinetic energy. In determining the roughness function, besides the known parameters of solidity (or frontal area ratio) and skewness, it is shown that the streamwise correlation length and the root-mean-square roughness height are also significant. The peak turbulent kinetic energy is determined by the skewness and root-mean-square roughness height, along with the mean forward-facing surface angle and spanwise effective slope. The results suggest the feasibility of relating rough-wall flow properties (throughout the range from hydrodynamically smooth to fully rough) to surface parameters.
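    The roughness function mentioned here is conventionally defined as the downward shift ΔU+ of the inner-normalized mean-velocity log law (standard notation, included for context):

      U^{+} = \frac{1}{\kappa}\,\ln y^{+} + B - \Delta U^{+},

    so that ΔU+ = 0 recovers the smooth-wall profile and larger ΔU+ corresponds to a larger drag penalty.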

  5. Electroform replication of smooth mirrors from sapphire masters

    NASA Technical Reports Server (NTRS)

    Altkorn, R.; Chang, J.; Haidle, R.; Takacs, P. Z.; Ulmer, M. P.

    1992-01-01

    A sapphire master was used to produce mirrors that exhibit mid-to-high-frequency roughness as low as 3 Å. The fabrication procedure and potential applications in X-ray astronomy are discussed. It is shown that foils replicated from flat, smooth mandrels should offer at least equivalent high-frequency roughness and significantly lower mid-frequency ripple than those coated with lacquer. A ceramic-surface mandrel could also be expected to last far longer without the need for repolishing than electroless nickel-coated mandrels.

  6. Sustaining dry surfaces under water

    PubMed Central

    Jones, Paul R.; Hao, Xiuqing; Cruz-Chu, Eduardo R.; Rykaczewski, Konrad; Nandy, Krishanu; Schutzius, Thomas M.; Varanasi, Kripa K.; Megaridis, Constantine M.; Walther, Jens H.; Koumoutsakos, Petros; Espinosa, Horacio D.; Patankar, Neelesh A.

    2015-01-01

    Rough surfaces immersed under water remain practically dry if the liquid-solid contact is on roughness peaks, while the roughness valleys are filled with gas. Mechanisms that prevent water from invading the valleys are well studied. However, to remain practically dry under water, additional mechanisms need consideration. This is because trapped gas (e.g. air) in the roughness valleys can dissolve into the water pool, leading to invasion. Additionally, water vapor can also occupy the roughness valleys of immersed surfaces. If water vapor condenses, that too leads to invasion. These effects have not been investigated, and are critically important to maintain surfaces dry under water. In this work, we identify the critical roughness scale, below which it is possible to sustain the vapor phase of water and/or trapped gases in roughness valleys – thus keeping the immersed surface dry. Theoretical predictions are consistent with molecular dynamics simulations and experiments. PMID:26282732

  7. It Pays to Be Organized: Organizing Arithmetic Practice around Equivalent Values Facilitates Understanding of Math Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Chesney, Dana L.; Matthews, Percival G.; Fyfe, Emily R.; Petersen, Lori A.; Dunwiddie, April E.; Wheeler, Mary C.

    2012-01-01

    This experiment tested the hypothesis that organizing arithmetic fact practice by equivalent values facilitates children's understanding of math equivalence. Children (M age = 8 years 6 months, N = 104) were randomly assigned to 1 of 3 practice conditions: (a) equivalent values, in which problems were grouped by equivalent sums (e.g., 3 + 4 = 7, 2…

  8. Validation of the ROMI-RIP rough mill simulator

    Treesearch

    Edward R. Thomas; Urs Buehlmann

    2002-01-01

    The USDA Forest Service's ROMI-RIP rough mill rip-first simulation program is a popular tool for analyzing rough mill conditions, determining more efficient rough mill practices, and finding optimal lumber board cut-up patterns. However, until now, the results generated by ROMI-RIP have not been rigorously compared to those of an actual rough mill. Validating the...

  9. Some considerations in the evaluation of Seasat-A scatterometer /SASS/ measurements

    NASA Technical Reports Server (NTRS)

    Halberstam, I.

    1980-01-01

    A study is presented of the geophysical algorithms relating the Seasat-A scatterometer (SASS) backscatter measurements with a wind parameter. Although these measurements are closely related to surface features, an identification with surface layer parameters such as friction velocity or the roughness length is difficult. It is shown how surface truth in the form of wind speeds and coincident stability can be used to derive friction velocity or the equivalent neutral wind at an arbitrary height; it is also shown that the derived friction velocity values are sensitive to contested formulations relating friction velocity to the roughness length, while the derived values of the equivalent neutral wind are not. Examples of geophysical verification are demonstrated using values obtained from the Gulf of Alaska Seasat Experiment; these results show very little sensitivity to the type of wind parameter employed, suggesting that this insensitivity is mainly due to a large scatter in the SASS and surface truth data.
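    The friction velocity and the equivalent neutral wind at a height z are related through the standard surface-layer profile (the usual Monin-Obukhov form, supplied for context; ψ_m is the stability correction and is dropped for the neutral-equivalent wind):

      U(z) = \frac{u_{*}}{\kappa}\left[\ln\frac{z}{z_{0}} - \psi_{m}\!\left(\frac{z}{L}\right)\right], \qquad U_{N}(z) = \frac{u_{*}}{\kappa}\,\ln\frac{z}{z_{0}} .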

  10. Effect of surface roughness on liquid property measurements using mechanically oscillating sensors

    NASA Technical Reports Server (NTRS)

    Jain, Mahaveer K.; Grimes, Craig A.

    2002-01-01

    The resonant frequency and quality factor Q of a liquid-immersed magnetoelastic sensor are shown to shift linearly with the product of liquid viscosity and density. Measurements using different grade oils, organic chemicals, and glycerol-water mixtures show that the surface roughness of the sensor in combination with the molecular size of the liquid plays an important role in determining measurement sensitivity, which can be controlled by adjusting the roughness of the sensor surface. A theoretical model describing the sensor resonant frequency and quality factor Q as a function of liquid properties is developed using a novel equivalent circuit approach. Experimental results are in agreement with theory when the liquid molecule size is larger than the average surface roughness. However, when the molecular size of the liquid is small relative to the surface roughness features, molecules are trapped, and the trapped molecules act both as a mass load and a viscous load; the result is higher viscous damping of the sensor than expected. © 2002 Elsevier Science B.V. All rights reserved.

  11. PACIFIC NORTHWEST SALMON: FORECASTING THEIR STATUS IN 2100

    EPA Science Inventory

    Throughout the Pacific Northwest (northern California, Oregon, Idaho, Washington, and the Columbia Basin portion of British Columbia), many wild salmon stocks (a group of interbreeding individuals that is roughly equivalent to a "population") have declined and some have disappear...

  12. Hydraulic Roughness and Flow Resistance in a Subglacial Conduit

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liu, X.; Mankoff, K. D.

    2017-12-01

    The hydraulic roughness significantly affects the flow resistance in real subglacial conduits, but has been poorly understood. To address this knowledge gap, this paper first proposes a procedure to define and quantify the geometry roughness, and then relates such a geometry roughness to the hydraulic roughness based on a series of computational fluid dynamics (CFD) simulations. The results indicate that by using the second-order structure function, the roughness field can be well quantified by the powers of the scaling law and the vertical and horizontal length scales of the structure functions. The vertical length scale can be further chosen as the standard deviation of the roughness field, σr. The friction factors calculated from either the total drag force or the linearly decreasing pressure agree very well with those calculated from traditional rough pipe theories when the equivalent hydraulic roughness height is corrected to ks ≈ (1.1-1.5)σr. This result means that the fully rough pipe resistance formula λ = [2 log(D0/(2ks)) + 1.74]^-2 and the Moody diagram are still valid for friction factor estimation in subglacial conduits when σr/D0 < 18% and ks/D0 < 22%. The results further show that when a proper hydraulic roughness is determined, the total flow resistance corresponding to the given hydraulic roughness height can be accurately modelled by using a rough wall function. This suggests that the flow resistance for longer, realistic subglacial conduits with large sinuosity and cross-sectional variations may be correctly predicted by CFD simulations. The results also show that the friction factors from CFD modeling are much larger than those determined from traditional rough pipe theories when σr/D0 > 20%.
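    A minimal sketch of the resistance-law evaluation described above (the numerical values are illustrative, and ks is taken near the middle of the reported 1.1-1.5 σr range):

      import numpy as np

      def fully_rough_friction_factor(d0, ks):
          # Fully rough pipe resistance law: lambda = [2*log10(D0/(2*ks)) + 1.74]^-2
          return (2.0 * np.log10(d0 / (2.0 * ks)) + 1.74) ** -2

      d0 = 1.0             # m, equivalent circular conduit diameter (illustrative)
      sigma_r = 0.05       # m, standard deviation of the roughness field (illustrative)
      ks = 1.3 * sigma_r   # equivalent roughness height, ks ~ (1.1-1.5) * sigma_r
      lam = fully_rough_friction_factor(d0, ks)
      print(f"friction factor = {lam:.4f}")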

  13. Comparing student learning with multiple research-based conceptual surveys: CSEM and BEMA.

    NASA Astrophysics Data System (ADS)

    Pollock, S. J.

    2008-10-01

    We present results demonstrating similar distributions of student scores, and statistically indistinguishable gains, on two popular research-based assessment tools: the Brief Electricity and Magnetism Assessment (BEMA) and the Conceptual Survey of Electricity and Magnetism (CSEM). To deepen our understanding of student learning in our course environment and of these assessment tools as measures of student learning, we identify systematic trends and differences in results from these two instruments. We investigate correlations of both pre- and post-instruction conceptual scores with other measures, including traditional exam scores and course grades, student background (earlier grades), gender, a pretest of scientific reasoning, and tests of attitudes and beliefs about science and learning science. Overall, for practical purposes, we find the BEMA and CSEM to be roughly equivalently useful instruments for measuring student learning in our course.
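    If the gains referred to are the normalized gains commonly reported with such concept inventories (an assumption; the abstract does not define them), they are computed per student or per class as:

      g = \frac{\text{post} - \text{pre}}{100\% - \text{pre}} ,

    i.e., the fraction of the possible improvement actually achieved between the pre- and post-tests.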

  14. FUTURE OF PACIFIC NORTHWEST SALMON: SCIENCE AND POLICY IN ACTION

    EPA Science Inventory

    Throughout the Pacific Northwest (northern California, Oregon, Idaho, Washington, and the Columbia Basin portion of British Columbia), many wild salmon stocks (a group of interbreeding individuals that is roughly equivalent to a "population") have declined and some have disappear...

  15. RESTORING WILD SALMON TO THE PACIFIC NORTHWEST: CHASING AN ILLUSION

    EPA Science Inventory

    Throughout the Pacific Northwest (northern California, Oregon, Idaho, Washington, and the Columbia Basin portion of British Columbia), many wild salmon "stocks" (a group of interbreeding individuals that is roughly equivalent to a "population") have declined and some have been ...

  16. RESTORING WILD SALMON TO THE PACIFIC NORTHWEST: CHASING AN ILLUSION?

    EPA Science Inventory

    Throughout the Pacific Northwest (northern California, Oregon, Idaho, Washington, and the Columbia Basin portion of British Columbia), many wild salmon "stocks" (a group of interbreeding individuals that is roughly equivalent to a "population") have declined and some have been e...

  17. Assessing learning outcomes and cost effectiveness of an online sleep curriculum for medical students.

    PubMed

    Bandla, Hari; Franco, Rose A; Simpson, Deborah; Brennan, Kimberly; McKanry, Jennifer; Bragg, Dawn

    2012-08-15

    Sleep disorders are highly prevalent across all age groups but often remain undiagnosed and untreated, resulting in significant health consequences. To overcome an inadequacy of available curricula and learner and instructor time constraints, this study sought to determine if an online sleep medicine curriculum would achieve equivalent learner outcomes when compared with traditional, classroom-based, face-to-face instruction at equivalent costs. Medical students rotating on a required clinical clerkship received instruction in 4 core clinical sleep-medicine competency domains in 1 of 2 delivery formats: a single 2.5-hour face-to-face workshop or 4 asynchronous e-learning modules. Immediate learning outcomes were assessed in a subsequent clerkship using a multiple-choice examination and standardized patient station, with long-term outcomes assessed through analysis of students' patient write-ups for inclusion of sleep complaints and diagnoses before and after the intervention. Instructional costs by delivery format were tracked. Descriptive and inferential statistical analyses compared learning outcomes and costs by instructional delivery method (face-to-face versus e-learning). Face-to-face learners, compared with online learners, were more satisfied with instruction. Learning outcomes (i.e., multiple-choice examination, standardized patient encounter, patient write-up), as measured by short-term and long-term assessments, were roughly equivalent. Design, delivery, and learner-assessment costs by format were equivalent at the end of 1 year, due to higher ongoing teaching costs associated with face-to-face learning offsetting online development and delivery costs. Because short-term and long-term learner performance outcomes were roughly equivalent, based on delivery method, the cost effectiveness of online learning is an economically and educationally viable instruction platform for clinical clerkships.

  18. Studies of Real Roughness Effects for Improved Modeling and Control of Practical Wall-Bounded Turbulent Flows

    DTIC Science & Technology

    2008-04-22

    The present effort investigates the effects of practical roughness replicated from a turbine blade damaged by deposition of... Most practical wall-bounded turbulent flows of interest, like flows over turbine blades, through heat exchangers, and over aircraft and ship... significantly roughened over time due to harsh operating conditions. Examples of such conditions include cumulative damage to turbine blades (Bons, 2002).

  19. Factors influencing optical 3D scanning of vinyl polysiloxane impression materials.

    PubMed

    DeLong, R; Pintado, M R; Ko, C C; Hodges, J S; Douglas, W H

    2001-06-01

    Future growth in dental practice lies in digital imaging enhancing many chairside procedures and functions. This revolution requires the fast, accurate, and 3D digitizing of clinical records. One such clinical record is the chairside impression. This study investigated how surface angle and surface roughness affect the digitizing of vinyl polysiloxane impression materials. Seventeen vinyl polysiloxane impression materials were digitized with a white light optical digitizing system. Each sample was digitized at 3 different angles: 0 degrees, 22.5 degrees, and 45 degrees, and 2 digitizer camera f-stops. The digitized images were rendered on a computer monitor using custom software developed under NIH/NIDCR grant DE12225. All the 3D images were rotated to the 0 degrees position, cropped using Corel Photo-Paint 8 (Corel Corp, Ottawa, Ontario, Canada), then saved in the TIFF file format. The impression material area that was successfully digitized was calculated as a percentage of the total sample area, using Optimas 5.22 image processing software (Media Cybernetics, LP, Silver Spring, MD). The dependent variable was a Performance Value calculated for each material by averaging the percentage of area that digitized over the 3 angles. New samples with smooth and rough surfaces were made using the 7 impression materials with the largest Performance Values. These samples were tested as before, but with the additional angle of 60 degrees. Silky-Rock die stone (Whip Mix Corp, Louisville, KY) was used as a control. The Performance Values for the 17 impression materials ranged from 0% to 100%. The Performance Values for the 7 best materials were equivalent to the control at f/11 out to a surface angle of 45 degrees; however, only Examix impression material (GC America Inc, Alsip, IL) was equivalent to the control at f/11/16. At the 60 degrees surface angle with f/11/16, the Performance Values were 0% for all the impression materials, whereas that for the control was 90%. The difference in the Performance Values for the smooth and rough surface textures was 7%, which was not significant. The digitizing performance of vinyl polysiloxane impression materials is highly material and surface angle-dependent and is significantly lower than the die stone control when angles to 60 degrees are included. It is affected to a lesser extent by surface texture. Copyright 2001 by The American College of Prosthodontists.

  20. Smooth- and rough-wall boundary layer structure from high spatial range particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Squire, D. T.; Morrill-Winter, C.; Hutchins, N.; Marusic, I.; Schultz, M. P.; Klewicki, J. C.

    2016-10-01

    Two particle image velocimetry arrangements are used to make true spatial comparisons between smooth- and rough-wall boundary layers at high Reynolds numbers across a very wide range of streamwise scales. Together, the arrangements resolve scales ranging from motions on the order of the Kolmogorov microscale to those longer than twice the boundary layer thickness. The rough-wall experiments were obtained above a continuous sandpaper sheet, identical to that used by Squire et al. [J. Fluid Mech. 795, 210 (2016), 10.1017/jfm.2016.196], and cover a range of friction and equivalent sand-grain roughness Reynolds numbers (12 000 ≲ δ+ ≲ 18 000, 62 ≲ ks+ ≲ 104). The smooth-wall experiments comprise new and previously published data spanning 6500 ≲ δ+ ≲ 17 000. Flow statistics from all experiments show similar Reynolds number trends and behaviors to recent, well-resolved hot-wire anemometry measurements above the same rough surface. Comparisons, at matched δ+, between smooth- and rough-wall two-point correlation maps and two-point magnitude-squared coherence maps demonstrate that spatially the outer region of the boundary layer is the same between the two flows. This is apparently true even at wall-normal locations where the total (inner-normalized) energy differs between the smooth and rough wall. Generally, the present results provide strong support for Townsend's [The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, 1956), Vol. 1] wall-similarity hypothesis in high Reynolds number fully rough boundary layer flows.

  1. Effect of whitening dentifrices on the surface roughness of a nanohybrid composite resin

    PubMed Central

    da Rosa, Gabriela Migliorin; da Silva, Luciana Mendonça; de Menezes, Márcio; do Vale, Hugo Felipe; Regalado, Diego Ferreira; Pontes, Danielson Guedes

    2016-01-01

    Objectives: The present study verified the influence of whitening dentifrices on the surface roughness of a nanohybrid composite resin. Materials and Methods: Thirty-two specimens were prepared with Filtek™ Z350 XT (3M/ESPE) and randomly divided into four groups (n = 8) that were subjected to a brushing simulation equivalent to a period of 1 month. The groups assessed were a control group with distilled water (G1), Colgate Total 12 Professional Clean (G2), Sensodyne Extra Whitener Extra Fresh (G3), and Colgate Luminous White (G4). A sequence of 90 cycles was performed for all the samples. The initial roughness of each group was analyzed with a surface roughness tester (TR 200, TIME Group Inc., CA, USA). After the brushing period, the final roughness was measured, and the results were statistically analyzed using nonparametric Kruskal–Wallis and Dunn tests for intergroup roughness comparison over time. For intragroup and "Δ Final − Initial" comparisons, the Wilcoxon test and one-way ANOVA, respectively, were performed (α = 0.05). Results: The mean roughness values before and after brushing showed no statistically significant difference among the different dentifrices. None of the dentifrices analyzed significantly increased the surface roughness of the nanohybrid composite resin in a 1-month toothbrushing simulation. Conclusions: These results suggest that no hazardous effect on the roughness of nanohybrid composite resin can be expected when whitening dentifrices are used for a short period. Similar studies should be conducted to analyze other esthetic composite materials. PMID:27095891

  2. ROMI-RIP: Rough Mill RIP-first simulator user's guide

    Treesearch

    R. Edward Thomas

    1995-01-01

    The ROugh Mill RIP-first simulator (ROMI-RIP) is a computer software package for IBM compatible personal computers that simulates current industrial practices for gang-ripping lumber. This guide shows the user how to set and examine the results of simulations regarding current or proposed mill practices. ROMI-RIP accepts cutting bills with up to 300 different part...

  3. Granularity refined by knowledge: contingency tables and rough sets as tools of discovery

    NASA Astrophysics Data System (ADS)

    Zytkow, Jan M.

    2000-04-01

    Contingency tables represent data in a granular way and are a well-established tool for inductive generalization of knowledge from data. We show that the basic concepts of rough sets, such as concept approximation, indiscernibility, and reduct, can be expressed in the language of contingency tables. We further demonstrate the relevance to rough set theory of the additional probabilistic information available in contingency tables, in particular of statistical tests of significance and of predictive strength applied to contingency tables. Tests of both types can help the evaluation mechanisms used in inductive generalization based on rough sets. The granularity of attributes can be improved in feedback with knowledge discovered in data. We demonstrate how 49er's facilities for (1) contingency table refinement, (2) column and row grouping based on correspondence analysis, and (3) the search for equivalence relations between attributes improve both the granularization of attributes and the quality of knowledge. Finally, we demonstrate the limitations of knowledge viewed as concept approximation, which is the focus of rough sets. Transcending that focus and reorienting towards predictive knowledge and towards the related distinction between possible and impossible (or statistically improbable) situations will be very useful in expanding the rough sets approach to more expressive forms of knowledge.

  4. Ultimate scaling of TiN/ZrO2/TiN capacitors: Leakage currents and limitations due to electrode roughness

    NASA Astrophysics Data System (ADS)

    Jegert, Gunther; Kersch, Alfred; Weinreich, Wenke; Lugli, Paolo

    2011-01-01

    In this paper, we investigate the influence of electrode roughness on the leakage current in TiN/high-κ ZrO2/TiN (TZT) thin-film capacitors which are used in dynamic random access memory cells. Based on a microscopic transport model, which is expanded to incorporate electrode roughness, we assess the ultimate scaling potential of TZT capacitors in terms of equivalent oxide thickness, film smoothness, thickness fluctuations, defect density and distribution, and conduction band offset (CBO). The model is based on three-dimensional, fully self-consistent, kinetic Monte Carlo transport simulations. Tunneling transport in the bandgap of the dielectric is treated, which includes defect-assisted transport mechanisms. Electrode roughness is described in the framework of fractal geometry. While the short-range roughness of the electrodes is found not to influence significantly the leakage current, thickness fluctuations of the dielectric have a major impact. For thinner dielectric films they cause a transformation of the dominant transport mechanism from Poole-Frenkel conduction to trap-assisted tunneling. Consequently, the sensitivity of the leakage current on electrode roughness drastically increases on downscaling. Based on the simulations, optimization of the CBO is suggested as the most viable strategy to extend the scalability of TZT capacitors over the next chip generations.

  5. Lipopolysaccharide variation in Coxiella burnetii: intrastrain heterogeneity in structure and antigenicity.

    PubMed Central

    Hackstadt, T; Peacock, M G; Hitchcock, P J; Cole, R L

    1985-01-01

    We isolated lipopolysaccharides (LPSs) from phase variants of Coxiella burnetii Nine Mile and compared the isolated LPS and C. burnetii cells by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and immunoblotting. The LPSs were found to be the predominant component which varied structurally and antigenically between virulent phase I and avirulent phase II. A comparison of techniques historically used to extract the phase I antigenic component revealed that the aqueous phase of phenol-water, trichloroacetic acid, and dimethyl sulfoxide extractions of phase I C. burnetii cells all contained phase I LPS, although the efficiency and specificity of extraction varied. Our studies provide additional evidence that phase variation in C. burnetii is analogous to the smooth-to-rough LPS variation of gram-negative enteric bacteria, with phase I LPS being equivalent to smooth LPS and phase II being equivalent to rough LPS. In addition, we identified a variant with a third LPS chemotype which appears to have a structural complexity intermediate to phase I and II LPSs. All three C. burnetii LPSs contain a 2-keto-3-deoxyoctulosonic acid-like substance and heptose, and gel Limulus amoebocyte lysates in subnanogram amounts. The C. burnetii LPSs were nontoxic to chicken embryos at doses of over 80 micrograms per embryo, in contrast to Salmonella typhimurium smooth- and rough-type LPSs, which were toxic in nanogram amounts. PMID:3988339

  6. Interfacial phonon scattering and transmission loss in >1 μm thick silicon-on-insulator thin films

    NASA Astrophysics Data System (ADS)

    Jiang, Puqing; Lindsay, Lucas; Huang, Xi; Koh, Yee Kan

    2018-05-01

    Scattering of phonons at boundaries of a crystal (grains, surfaces, or solid/solid interfaces) is characterized by the phonon wavelength, the angle of incidence, and the interface roughness, as historically evaluated using a specularity parameter p formulated by Ziman [Electrons and Phonons (Clarendon Press, Oxford, 1960)]. This parameter was initially defined to determine the probability of a phonon specularly reflecting or diffusely scattering from the rough surface of a material. The validity of Ziman's theory as extended to solid/solid interfaces has not been previously validated. To better understand the interfacial scattering of phonons and to test the validity of Ziman's theory, we precisely measured the in-plane thermal conductivity of a series of Si films in silicon-on-insulator (SOI) wafers by time-domain thermoreflectance (TDTR) for a Si film thickness range of 1-10 μm and a temperature range of 100-300 K. The Si/SiO2 interface roughness was determined to be 0.11 ± 0.04 nm using transmission electron microscopy (TEM). Furthermore, we compared our in-plane thermal conductivity measurements to theoretical calculations that combine first-principles phonon transport with Ziman's theory. Calculations using Ziman's specularity parameter significantly overestimate values from the TDTR measurements. We attribute this discrepancy to phonon transmission through the solid/solid interface into the substrate, which is not accounted for by Ziman's theory for surfaces. The phonons that are specularly transmitted into an amorphous layer will be sufficiently randomized by the time they come back to the crystalline Si layer, the effect of which is practically equivalent to a diffuse reflection at the interface. We derive a simple expression for the specularity parameter at solid/amorphous interfaces and achieve good agreement between calculations and measurement values.
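
    To give a feel for the numbers involved, the snippet below evaluates a Ziman-type specularity parameter for the measured rms roughness of about 0.11 nm at a few phonon wavelengths. This is only an illustrative calculation, not the first-principles transport model of the paper; the 16π³ form is the one commonly quoted from Ziman's book (some authors use 16π² cos²θ instead), and the wavelengths are placeholders.

```python
import math

def specularity(wavelength_nm, eta_nm=0.11):
    """Ziman-type specularity parameter p = exp(-16 * pi**3 * eta**2 / lambda**2).

    eta is the rms interface roughness. Note that some authors replace
    16*pi**3 with 16*pi**2*cos(theta)**2; both forms appear in the literature.
    """
    return math.exp(-16.0 * math.pi**3 * eta_nm**2 / wavelength_nm**2)

# Representative phonon wavelengths (nm); values chosen purely for illustration.
for lam in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"lambda = {lam:5.1f} nm  ->  p = {specularity(lam):.3f}")
```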

  7. Snow water equivalent measured with cosmic-ray neutrons: reviving a little known but highly successful field method

    NASA Astrophysics Data System (ADS)

    Desilets, D.

    2012-12-01

    Secondary cosmic-ray neutrons are attenuated strongly by water in either solid or liquid form, suggesting a method for measuring snow water equivalent that has several advantages over alternative technologies. The cosmic-ray attenuation method is passive, portable, highly adaptable, and operates over an exceptionally large range of snow pack thicknesses. But despite promising initial observations made in the 1970s, the technique today remains practically unknown to snow hydrologists. Side-by-side measurements performed over the past several years with a snow pillow and a submerged cosmic-ray probe demonstrate that the cosmic-ray attenuation method merits consideration for a wide range of applications, especially those where alternative methods are made problematic by dense vegetation, rough terrain, deep snowpack or a lack of vehicular access. During the snow-free season, the instrumentation can be used to monitor soil moisture, thus providing another widely sought field measurement.

  8. A simple model of space radiation damage in GaAs solar cells

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Stith, J. J.; Stock, L. V.

    1983-01-01

    A simple model is derived for the radiation damage of shallow junction gallium arsenide (GaAs) solar cells. Reasonable agreement is found between the model and specific experimental studies of radiation effects with electron and proton beams. In particular, the extreme sensitivity of the cell to protons stopping near the cell junction is predicted by the model. The equivalent fluence concept is of questionable validity for monoenergetic proton beams. Angular factors are quite important in establishing the cell sensitivity to incident particle types and energies. A fluence of isotropic incidence 1 MeV electrons (assuming infinite backing) is equivalent to four times the fluence of normal incidence 1 MeV electrons. Spectral factors common to the space radiations are considered, and cover glass thickness required to minimize the initial damage for a typical cell configuration is calculated. Rough equivalence between the geosynchronous environment and an equivalent 1 MeV electron fluence (normal incidence) is established.

  9. WWC Review of the Report "Benefits of Practicing 4 = 2 + 2: Nontraditional Problem Formats Facilitate Children's Understanding of Mathematical Equivalence." What Works Clearinghouse Single Study Review

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2014

    2014-01-01

    The 2011 study, "Benefits of Practicing 4 = 2 + 2: Nontraditional Problem Formats Facilitate Children's Understanding of Mathematical Equivalence," examined the effects of addition practice using nontraditional problem formats on students' understanding of mathematical equivalence. In nontraditional problem formats, operations appear on…

  10. Definitions and Omissions of Heroism

    ERIC Educational Resources Information Center

    Martens, Jeffrey W.

    2005-01-01

    This article presents comments on "The Heroism of Women and Men" by Selwyn W. Becker and Alice H. Eagly. Their article specifically addressed the "cultural association of heroism with men and masculinity . . . in natural settings." Becker and Eagly evidenced roughly equivalent rates of heroism by women and men in a variety of settings. However,…

  11. Use of Statistical Heuristics in Everyday Inductive Reasoning.

    ERIC Educational Resources Information Center

    Nisbett, Richard E.; And Others

    1983-01-01

    In everyday reasoning, people use statistical heuristics (judgmental tools that are rough intuitive equivalents of statistical principles). Use of statistical heuristics is more likely when (1) sampling is clear, (2) the role of chance is clear, (3) statistical reasoning is normative for the event, or (4) the subject has had training in…

  12. Finding Aristotle's Golden Mean: Social Justice and Academic Excellence

    ERIC Educational Resources Information Center

    Rivera, John

    2005-01-01

    Over 2,000 years ago, Aristotle wrote a treatise on ethics in which he proposed that there were both intellectual and moral virtues to be developed in the human being. Virtue ("aristeia") was roughly equivalent to the English word "excellence" and the unifying virtue that was both a moral and an intellectual virtue was…

  13. The Australian Press and Education: A Survey of National and Global Perspectives.

    ERIC Educational Resources Information Center

    Woolman, David C.

    The news media are often the main source of public information about education. This paper analyzes press coverage of selected issues in contemporary Australian education. From December 28, 1998, to February 17, 1999, daily educational reporting was surveyed in "The Australian" (a paper roughly equivalent to "USA Today") and in…

  14. Low Temperature Reactive Sputtering of Thin Aluminum Nitride Films on Metallic Nanocomposites

    PubMed Central

    Ramadan, Khaled Sayed Elbadawi; Evoy, Stephane

    2015-01-01

    Piezoelectric aluminum nitride thin films were deposited on aluminum-molybdenum (AlMo) metallic nanocomposites using reactive DC sputtering at room temperature. The effect of sputtering parameters on film properties was assessed. A comparative study between AlN grown on AlMo and pure aluminum showed an equivalent (002) crystallographic texture. The piezoelectric coefficients were measured to be 0.5 ± 0.1 C m-2 and 0.9 ± 0.1 C m-2, for AlN deposited on Al/0.32Mo and pure Al, respectively. Films grown onto Al/0.32Mo, however, featured improved surface roughness. Roughness values were measured to be 1.3 nm and 5.4 nm for AlN films grown on AlMo and on Al, respectively. In turn, the dielectric constant was measured to be 8.9 ± 0.7 for AlN deposited on the Al/0.32Mo seed layer, and 8.7 ± 0.7 for AlN deposited on aluminum; thus, equivalent within experimental error. Compatibility of this room temperature process with the lift-off patterning of the deposited AlN is also reported. PMID:26193701

  15. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering; for example, seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wave-field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model, physically consistent with the dynamic ruptures on rough faults.

  16. ROMI-3: Rough-Mill Simulator Version 3.0: User's Guide

    Treesearch

    Joel M. Weiss; R. Edward Thomas; R. Edward Thomas

    2005-01-01

    ROMI-3 Rough-Mill Simulator is a software package that simulates current industrial practices for rip-first and chop-first lumber processing. This guide shows the user how to set up and examine the results of simulations of current or proposed mill practices. ROMI-3 accepts cutting bills with as many as 600 combined solid and/or panel part sizes. Plots of processed...

  17. Gender Matters: Male and Female ECEC Practitioners' Perceptions and Practices Regarding Children's Rough-and-Tumble Play (R&T)

    ERIC Educational Resources Information Center

    Storli, Rune; Hansen Sandseter, Ellen Beate

    2017-01-01

    The aim of this study was to explore Norwegian early childhood education and care (ECEC) practitioners' perceptions and practices regarding children's indoor and outdoor rough-and-tumble play (R&T) from a gender perspective. A questionnaire and semi-structured interviews were used together in a mixed method design to provide quantitative data…

  18. Rayleigh's hypothesis and the geometrical optics limit.

    PubMed

    Elfouhaily, Tanos; Hahn, Thomas

    2006-09-22

    The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed by solely up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency or the geometrical optics limit, at least to first order. This finding has never been explicitly derived in the literature. Our result supports the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.

  19. Unpolarized infrared emissivity with shadow from anisotropic rough sea surfaces with non-Gaussian statistics.

    PubMed

    Bourlier, Christophe

    2005-07-10

    The emissivity of two-dimensional anisotropic rough sea surfaces with non-Gaussian statistics is investigated. The emissivity derivation is of importance for retrieval of the sea-surface temperature or equivalent temperature of a rough sea surface by infrared thermal imaging. The well-known Cox-Munk slope probability-density function, considered non-Gaussian, is used for the emissivity derivation, in which the skewness and the kurtosis (related to the third- and fourth-order statistics, respectively) are included. The shadowing effect, which is significant for grazing angles, is also taken into account. The geometric optics approximation is assumed to be valid, which means that the rough surface is modeled as a collection of facets reflecting locally the light in the specular direction. In addition, multiple reflections are ignored. Numerical results of the emissivity are presented for Gaussian and non-Gaussian statistics, for moderate wind speeds, for near-infrared wavelengths, for emission angles ranging from 0 degrees (nadir) to 90 degrees (horizon), and according to the wind direction. In addition, the emissivity is compared with both measurements and a Monte Carlo ray-tracing method.
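
    For context, the Gaussian baseline on which the skewness and kurtosis corrections are built is fully specified by wind-dependent Cox-Munk slope variances. The sketch below evaluates the commonly cited clean-surface regressions and the corresponding anisotropic Gaussian slope PDF; the coefficients (with wind speed referred to roughly 12.5 m height) are the values usually quoted from the 1954 study rather than from this paper, and the non-Gaussian terms are omitted.

```python
import math

def cox_munk_variances(wind_speed):
    """Clean-surface Cox-Munk (1954) slope variances vs. wind speed (m/s, ~12.5 m height).

    Commonly cited regressions:
        upwind     sigma_u^2 = 3.16e-3 * U
        crosswind  sigma_c^2 = 0.003 + 1.92e-3 * U
        (total mean square slope = 0.003 + 5.12e-3 * U)
    """
    var_up = 3.16e-3 * wind_speed
    var_cross = 0.003 + 1.92e-3 * wind_speed
    return var_up, var_cross

def gaussian_slope_pdf(zx, zy, wind_speed):
    """Anisotropic Gaussian slope PDF; skewness/kurtosis corrections are omitted here."""
    su2, sc2 = cox_munk_variances(wind_speed)
    norm = 1.0 / (2.0 * math.pi * math.sqrt(su2 * sc2))
    return norm * math.exp(-0.5 * (zx**2 / su2 + zy**2 / sc2))

U = 7.0  # m/s, a moderate wind speed chosen for illustration
print("slope variances (up, cross):", cox_munk_variances(U))
print("PDF at zero slope:", gaussian_slope_pdf(0.0, 0.0, U))
```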

  20. Contact stiffness of regularly patterned multi-asperity interfaces

    NASA Astrophysics Data System (ADS)

    Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian

    2018-02-01

    Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.

  1. Organ doses from radionuclides on the ground. Part I. Simple time dependences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, P.; Paretzke, H.G.; Rosenbaum, H.

    1988-06-01

    Organ dose equivalents of mathematical, anthropomorphical phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.

  2. A note on ``critical roughness height'' and ``transitional roughness''

    NASA Astrophysics Data System (ADS)

    Bradshaw, P.

    2000-06-01

    An unrigorous but plausible analysis suggests that the concept of a critical roughness height, below which roughness does not affect a turbulent wall flow, is erroneous. The Oseen approximation implies that the effect of roughness on the additive constant in the logarithmic law of the wall should vary as the square of the roughness Reynolds number (specifically the roughness height in "wall units"). This is an important point in determining whether surfaces used in experiments at high unit Reynolds number can be regarded as hydraulically smooth. Attention is also called to the qualitative difference between Nikuradse's measurements of friction factor in pipe flow with uniform-size sand-grain roughness in the "transitional" range of Reynolds number and the data correlation in the Moody chart of 1944; the latter was derived from tests on miscellaneous real-life rough surfaces in the 1930s. Nearly all textbooks on elementary fluid dynamics present, but practically none discuss, this difference. Nikuradse's monodisperse roughness is a very rare case with untypical behavior in the transitional range.

  3. Solving the Ancient Maritime Problem: Piracy Operations in the Gulf of Aden

    DTIC Science & Technology

    2010-04-22

    of 2009, pirates captured the Greek-flagged tanker Maran Centaurus while it was carrying 275,000 metric tons of crude oil. That is equivalent to about 2 million barrels of oil worth roughly $150 million, stated Ben Cahill, head of the Petroleum Risk Manager service at PFC Energy. Maran Centaurus was

  4. Wildlife management in southwestern Pinon-juniper woodlands

    Treesearch

    Jeffery C. Whitney

    2008-01-01

    Pinon-juniper woodlands in the southwestern United States (Arizona and New Mexico) represent approximately 54,000 square miles, equivalent to roughly 20% of the land base for the two states. Within this broad habitat type, there is a high degree of variability of vegetation in terms of species composition, their relative abundance, percent canopy cover, and typically...

  5. Tacit Social Knowledge Acquisition as a Function of General Intelligence and the Ability To Learn and Utilize Uncertain Social Feedback and Contingencies

    DTIC Science & Technology

    1992-12-01

    factory; Berry and Irvine (1986) document bricolage, which is roughly equivalent to handyman skills, within primitive cultures. Both these studies are...decoding of nonverbal cues. Intelligence, 263-287. Berry, J. W., & Irvine, S. H. (1986). Bricolage: Savages do it daily. In Sternberg, R., & Wagner

  6. Resolving Quasi-Synonym Relationships in Automatic Thesaurus Construction Using Fuzzy Rough Sets and an Inverse Term Frequency Similarity Function

    ERIC Educational Resources Information Center

    Davault, Julius M., III.

    2009-01-01

    One of the problems associated with automatic thesaurus construction is with determining the semantic relationship between word pairs. Quasi-synonyms provide a type of equivalence relationship: words are similar only for purposes of information retrieval. Determining such relationships in a thesaurus is hard to achieve automatically. The term…

  7. Lunar rocks as a source of oxygen

    NASA Technical Reports Server (NTRS)

    Poole, H. G.

    1963-01-01

    A thermodynamic study of the thermal stability of conventional terrestrial minerals in a hypothetical lunar atmosphere has opened some interesting speculation. Much of the Earth's crust is composed of oxides of silicon, aluminum, magnesium, and related compounds. These crust components may be as much a product of the Earth's atmosphere as vegetation and animal life. Though inanimate and long considered imperishable, these materials are stable under conditions of an atmosphere equivalent to 34 ft of water at sea level and persist under adverse conditions of moisture and temperature to altitudes of roughly 29,000 ft above sea level. The oxygen content averages 21%, and the oxygen partial pressure would be roughly 1/5 atm.
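
    The two figures quoted above are mutually consistent, as the short check below shows (standard sea-level water density and gravity assumed): a 1 atm column of water stands about 34 ft high, and with oxygen at roughly 21% by volume its partial pressure is about 1/5 atm.

```python
# Height of a water column balancing 1 atm, and the oxygen partial pressure.
P_ATM = 101325.0      # Pa
RHO_WATER = 1000.0    # kg/m^3
G = 9.81              # m/s^2

height_m = P_ATM / (RHO_WATER * G)
height_ft = height_m / 0.3048
print(f"1 atm ~ {height_ft:.1f} ft of water")   # ~33.9 ft, i.e. roughly 34 ft

o2_fraction = 0.21
print(f"O2 partial pressure ~ {o2_fraction:.2f} atm (about 1/5 atm)")
```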

  8. Building America Best Practices Series Volume 11. Builders Challenge Guide to 40% Whole-House Energy Savings in the Marine Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2010-09-01

    This best practices guide is the eleventh in a series of guides for builders produced by the U.S. Department of Energy’s Building America Program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the marine climate (portions of Washington, Oregon, and California) can achieve homes that have whole house energy savings of 40% over the Building America benchmark (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code) with no added overall costs for consumers. These best practices are based on the results of research and demonstration projects conducted by Building America’s research teams. The guide includes information for managers, designers, marketers, site supervisors, and subcontractors, as well as case studies of builders who are successfully building homes that cut energy use by 40% in the marine climate. This document is available on the web at www.buildingamerica.gov. This report was originally cleared 06-29-2010. This version is Rev 1 cleared in Nov 2010. The only change is that the reference to the Energy Star Windows criteria shown on pg 8.25 was updated to match the criteria - Version 5.0, 04/07/2009, effective 01/04/2010.

  9. Psychoacoustic and cognitive aspects of auditory roughness: definitions, models, and applications

    NASA Astrophysics Data System (ADS)

    Vassilakis, Pantelis N.; Kendall, Roger A.

    2010-02-01

    The term "auditory roughness" was first introduced in the 19th century to describe the buzzing, rattling auditory sensation accompanying narrow harmonic intervals (i.e. two tones with frequency difference in the range of ~15-150Hz, presented simultaneously). A broader definition and an overview of the psychoacoustic correlates of the auditory roughness sensation, also referred to as sensory dissonance, is followed by an examination of efforts to quantify it over the past one hundred and fifty years and leads to the introduction of a new roughness calculation model and an application that automates spectral and roughness analysis of sound signals. Implementation of spectral and roughness analysis is briefly discussed in the context of two pilot perceptual experiments, designed to assess the relationship among cultural background, music performance practice, and aesthetic attitudes towards the auditory roughness sensation.

  10. Standards for midwife practitioners of external cephalic version: A Delphi study.

    PubMed

    Walker, Shawn; Perilakalathil, Prasanth; Moore, Jenny; Gibbs, Claire L; Reavell, Karen; Crozier, Kenda

    2015-05-01

    Expansion of advanced and specialist midwifery practitioner roles across professional boundaries requires an evidence-based framework to evaluate achievement and maintenance of competency. In order to develop the role of Breech Specialist Midwife to include the autonomous performance of external cephalic version (ECV) within one hospital, guidance was required on standards of training and skill development, particularly in the use of ultrasound. A three-round Delphi survey was used to determine consensus among an expert panel, including highly experienced obstetric and midwife practitioners, as well as sonographers. The first round used mostly open-ended questions to gather data, from which statements were formed and returned to the panel for evaluation in subsequent rounds. Standards for achieving and maintaining competence to perform ECV, and in the use of basic third trimester ultrasound as part of this practice, should be the same for midwives and doctors. The maintenance of proficiency requires regular practice. Midwives can appropriately expand their sphere of practice to include ECV and basic third trimester ultrasound, according to internal guidelines, following the completion of a competency-based training programme roughly equivalent to those used to guide obstetric training. Ideally, ECV services should be offered in organised clinics where individual practitioners in either profession are able to perform approximately 30 or more ECVs per year in order to maintain an appropriate level of skill.

  11. Building America Best Practices Series Volume 12: Builders Challenge Guide to 40% Whole-House Energy Savings in the Cold and Very Cold Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-02-01

    This best practices guide is the twelfth in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the cold and very cold climates can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  12. Multi-hybrid method for investigation of EM scattering from inhomogeneous object above a dielectric rough surface

    NASA Astrophysics Data System (ADS)

    Li, Jie; Guo, LiXin; He, Qiong; Wei, Bing

    2012-10-01

    An iterative strategy combining the Kirchhoff approximation (KA) with the hybrid finite element-boundary integral (FE-BI) method is presented in this paper to study the interactions between an inhomogeneous object and the underlying rough surface. KA is applied to study scattering from the underlying rough surface, whereas FE-BI deals with scattering from the target above it. Both methods use updated excitation sources. Huygens' equivalence principle and an iterative strategy are employed to account for the multi-scattering effects. This hybrid FE-BI-KA scheme is an improved and generalized version of the previous hybrid Kirchhoff approximation-method of moments (KA-MoM). The newly presented hybrid method has the following advantages: (1) the feasibility of modeling multi-scale scattering problems (large-scale underlying surface and small-scale target); (2) low memory requirement, as in hybrid KA-MoM; (3) the ability to deal with scattering from inhomogeneous (including coated or layered) scatterers above rough surfaces. Numerical results are given to evaluate the accuracy of the multi-hybrid technique; the computing time and memory requirements consumed in specific numerical simulations of FE-BI-KA are compared with those of MoM. The convergence performance is analyzed by studying the variation of the iteration number caused by related parameters. Then bistatic scattering from inhomogeneous objects of different configurations above a dielectric Gaussian rough surface is calculated, and the influences of dielectric composition and surface roughness on the scattering pattern are discussed.

  13. Direct Numerical Simulation of Flow Over Passive Geometric Disturbances

    NASA Astrophysics Data System (ADS)

    Vizard, Alexander

    It is well understood that delaying flow separation on a bluff body allows significant drag reduction, which is attractive in many applications. With this in mind, many separation control mechanisms, both active and passive, have been developed and tested to optimize the effects of this phenomenon. Although this idea is generally accepted, the physical occurrences in the near-wall region during transition that lead to separation delay are not well understood. The current study evaluates the impact of both spherical dimples and sandgrain-style roughness on downstream flow by performing direct numerical simulations over such geometries on a zero pressure gradient flat plate. It is shown that although dimples and random roughness of similar characteristic length scales exhibit similar boundary layer characteristics, dimples are more successful in developing high momentum in the vicinity of the wall. Additionally, it is shown that increasing the relative size of the rough elements does not increase the near-wall momentum, and is undesirable in controlling separation. Finally, it is shown that the impact of roughness elements on the flow is more immediate, and that, for the case of one row of dimples and an equivalent area of roughness, the roughness patch is more successful in transitioning the near-wall region to a non-laminar state. It can be concluded from variation in the span of the flowfield for a single row of dimples that the size and orientation of the disturbance region are significant to the results.

  14. Wind tunnel model surface gauge for measuring roughness

    NASA Technical Reports Server (NTRS)

    Vorburger, T. V.; Gilsinn, D. E.; Teague, E. C.; Giauque, C. H. W.; Scire, F. E.; Cao, L. X.

    1987-01-01

    Research on the optical inspection of surface roughness has proceeded along two lines: first, work toward a quantitative understanding of light scattering from metal surfaces and of the appropriate models to describe the surfaces themselves; and second, the development of a practical instrument for the measurement of rms roughness of high-performance wind tunnel models with smooth finishes. The research is summarized, with emphasis on the second line of work.

  15. Reynolds number and roughness effects on turbulent stresses in sandpaper roughness boundary layers

    NASA Astrophysics Data System (ADS)

    Morrill-Winter, C.; Squire, D. T.; Klewicki, J. C.; Hutchins, N.; Schultz, M. P.; Marusic, I.

    2017-05-01

    Multicomponent turbulence measurements in rough-wall boundary layers are presented and compared to smooth-wall data over a large friction Reynolds number range (δ+). The rough-wall experiments used the same continuous sandpaper sheet as in the study of Squire et al. [J. Fluid Mech. 795, 210 (2016), 10.1017/jfm.2016.196]. To the authors' knowledge, the present measurements are unique in that they cover nearly an order of magnitude in Reynolds number (δ+ ≃ 2800-17,400), while spanning the transitionally to fully rough regimes (equivalent sand-grain-roughness range, ks+ ≃ 37-98), and in doing so also maintain very good spatial resolution. Distinct from previous studies, the inner-normalized wall-normal velocity variances, w2¯, exhibit clear dependencies on both ks+ and δ+ well into the wake region of the boundary layer, and only for fully rough flows does the outer portion of the profile agree with that in a comparable δ+ smooth-wall flow. Consistent with the mean dynamical constraints, the inner-normalized Reynolds shear stress profiles in the rough-wall flows are qualitatively similar to their smooth-wall counterparts. Quantitatively, however, at matched Reynolds numbers the peaks in the rough-wall Reynolds shear stress profiles are uniformly located at greater inner-normalized wall-normal positions. The Reynolds stress correlation coefficient, Ruw, is also greater in rough-wall flows at a matched Reynolds number. As in smooth-wall flows, Ruw decreases with Reynolds number, but at different rates depending on the roughness condition. Despite the clear variations in the Ruw profiles with roughness, inertial layer u, w cospectra evidence invariance with ks+ when normalized with the distance from the wall. Comparison of the normalized contributions to the Reynolds stress from the second quadrant (Q2) and fourth quadrant (Q4) exhibit noticeable differences between the smooth- and rough-wall flows. The overall time fraction spent in each quadrant is, however, shown to be nearly fixed for all of the flow conditions investigated. The data indicate that at fixed δ+ both Q2 and Q4 events exhibit a sensitivity to ks+. The present results are discussed relative to the combined influences of roughness and Reynolds number on the scaling behaviors of boundary layers.
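
    The quadrant decomposition and correlation coefficient referred to above are simple to compute from a pair of fluctuating velocity signals. The sketch below is an illustrative Python routine operating on synthetic data, not the hot-wire records of the study; Q2 (ejections) and Q4 (sweeps) follow the usual sign convention.

```python
import numpy as np

def quadrant_analysis(u, w):
    """Quadrant contributions to the Reynolds shear stress and the correlation coefficient.

    u, w : fluctuating streamwise / wall-normal velocities (means are removed here).
    Q2 = ejections (u' < 0, w' > 0), Q4 = sweeps (u' > 0, w' < 0).
    """
    u = np.asarray(u) - np.mean(u)
    w = np.asarray(w) - np.mean(w)
    uw = u * w
    total = np.mean(uw)
    quads = {
        "Q1": (u > 0) & (w > 0),
        "Q2": (u < 0) & (w > 0),
        "Q3": (u < 0) & (w < 0),
        "Q4": (u > 0) & (w < 0),
    }
    contrib = {q: np.sum(uw[m]) / uw.size for q, m in quads.items()}   # shares of mean u'w'
    time_fraction = {q: np.mean(m) for q, m in quads.items()}          # residence times
    r_uw = total / (np.std(u) * np.std(w))  # negative in a canonical boundary layer
    return contrib, time_fraction, r_uw

# Synthetic, negatively correlated u'-w' signal purely for demonstration.
rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)
w = -0.4 * u + rng.standard_normal(100_000)
contrib, frac, r_uw = quadrant_analysis(u, w)
print(contrib, frac, f"R_uw = {r_uw:.2f}")
```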

  16. First Person Singular: Building the Road as We Travel

    ERIC Educational Resources Information Center

    Long, Michael H.

    2015-01-01

    After completing a law degree at the University of Birmingham when I was 20 and not really knowing what I wanted to do, except that it was not law, I became an English as a foreign language (EFL) teacher accidentally through signing up as a volunteer with the British United Nations Association (BUNA), roughly equivalent to the US Peace Corps.…

  17. Thermal-Work Strain and Energy Expenditure during Marine Rifle Squad Operations in Afghanistan (August 2013)

    DTIC Science & Technology

    2015-08-10

    value of 1 is equal to 0.155 K·m2/W [2] and roughly equivalent (Itot,clo = 1.17 clo) to wearing an ensemble including men’s underwear briefs, khaki...sniper fire, securing weapon caches, and an IED explosion. In fact, mean mission physiological data were similar to mean data for non-mission days

  18. Microwave remote sensing of snowpacks

    NASA Technical Reports Server (NTRS)

    Stiles, W. H.; Ulaby, F. T.

    1980-01-01

    The interaction mechanisms responsible for the microwave backscattering and emission behavior of snow were investigated, and models were developed relating the backscattering coefficient (sigma) and apparent temperature (T) to the physical parameters of the snowpack. The microwave responses to snow wetness, snow water equivalent, snow surface roughness, and to diurnal variations were investigated. Snow wetness was shown to have an increasing effect with increasing frequency and angle of incidence for both active and passive cases. Increasing snow wetness was observed to decrease the magnitude of sigma and increase T. Snow water equivalent was also observed to exhibit a significant influence on sigma and T. Snow surface configuration (roughness) was observed to be significant only for wet snow surface conditions. Diurnal variations were as large as 15 dB for sigma at 35 GHz and 120 K for T at 37 GHz. Simple models for sigma and T of a snowpack scene were developed in terms of the most significant ground-truth parameters. The coefficients for these models were then evaluated; the fits to the sigma and T measurements were generally good. Finally, areas of needed additional observations were outlined and experiments were specified to further the understanding of the microwave-snowpack interaction mechanisms.

  19. Universal emulsion stabilization from the arrested adsorption of rough particles at liquid-liquid interfaces

    PubMed Central

    Zanini, Michele; Marschelke, Claudia; Anachkov, Svetoslav E.; Marini, Emanuele; Synytska, Alla; Isa, Lucio

    2017-01-01

    Surface heterogeneities, including roughness, significantly affect the adsorption, motion and interactions of particles at fluid interfaces. However, a systematic experimental study, linking surface roughness to particle wettability at a microscopic level, is currently missing. Here we synthesize a library of all-silica microparticles with uniform surface chemistry, but tuneable surface roughness and study their spontaneous adsorption at oil–water interfaces. We demonstrate that surface roughness strongly pins the particles' contact lines and arrests their adsorption in long-lived metastable positions, and we directly measure the roughness-induced interface deformations around isolated particles. Pinning imparts tremendous contact angle hysteresis, which can practically invert the particle wettability for sufficient roughness, irrespective of their chemical nature. As a unique consequence, the same rough particles stabilize both water-in-oil and oil-in-water emulsions depending on the phase they are initially dispersed in. These results both shed light on fundamental phenomena concerning particle adsorption at fluid interfaces and indicate future design rules for particle-based emulsifiers. PMID:28589932

  20. Universal emulsion stabilization from the arrested adsorption of rough particles at liquid-liquid interfaces

    NASA Astrophysics Data System (ADS)

    Zanini, Michele; Marschelke, Claudia; Anachkov, Svetoslav E.; Marini, Emanuele; Synytska, Alla; Isa, Lucio

    2017-06-01

    Surface heterogeneities, including roughness, significantly affect the adsorption, motion and interactions of particles at fluid interfaces. However, a systematic experimental study, linking surface roughness to particle wettability at a microscopic level, is currently missing. Here we synthesize a library of all-silica microparticles with uniform surface chemistry, but tuneable surface roughness and study their spontaneous adsorption at oil-water interfaces. We demonstrate that surface roughness strongly pins the particles' contact lines and arrests their adsorption in long-lived metastable positions, and we directly measure the roughness-induced interface deformations around isolated particles. Pinning imparts tremendous contact angle hysteresis, which can practically invert the particle wettability for sufficient roughness, irrespective of their chemical nature. As a unique consequence, the same rough particles stabilize both water-in-oil and oil-in-water emulsions depending on the phase they are initially dispersed in. These results both shed light on fundamental phenomena concerning particle adsorption at fluid interfaces and indicate future design rules for particle-based emulsifiers.

  1. Hydrology team

    NASA Technical Reports Server (NTRS)

    Ragan, R.

    1982-01-01

    General problems faced by hydrologists when using historical records, real time data, statistical analysis, and system simulation in providing quantitative information on the temporal and spatial distribution of water are related to the limitations of these data. Major problem areas requiring multispectral imaging-based research to improve hydrology models involve: evapotranspiration rates and soil moisture dynamics for large areas; the three dimensional characteristics of bodies of water; flooding in wetlands; snow water equivalents; runoff and sediment yield from ungaged watersheds; storm rainfall; fluorescence and polarization of water and its contained substances; discriminating between sediment and chlorophyll in water; role of barrier island dynamics in coastal zone processes; the relationship between remotely measured surface roughness and hydraulic roughness of land surfaces and stream networks; and modeling the runoff process.

  2. Interfacial phonon scattering and transmission loss in > 1 µm thick silicon-on-insulator thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Puqing; Lindsay, Lucas R.; Huang, Xi

    Scattering of phonons at boundaries of a crystal (grains, surfaces, or solid/solid interfaces) is characterized by the phonon wavelength, the angle of incidence, and the interface roughness, as historically evaluated using a specularity parameter p formulated by Ziman [Electrons and Phonons (Clarendon Press, Oxford, 1960)]. This parameter was initially defined to determine the probability of a phonon specularly reflecting or diffusely scattering from the rough surface of a material. The validity of Ziman's theory as extended to solid/solid interfaces has not been previously validated. Here, to better understand the interfacial scattering of phonons and to test the validity of Ziman's theory, we precisely measured the in-plane thermal conductivity of a series of Si films in silicon-on-insulator (SOI) wafers by time-domain thermoreflectance (TDTR) for a Si film thickness range of 1–10 μm and a temperature range of 100–300 K. The Si/SiO2 interface roughness was determined to be 0.11 ± 0.04 nm using transmission electron microscopy (TEM). Furthermore, we compared our in-plane thermal conductivity measurements to theoretical calculations that combine first-principles phonon transport with Ziman's theory. Calculations using Ziman's specularity parameter significantly overestimate values from the TDTR measurements. We attribute this discrepancy to phonon transmission through the solid/solid interface into the substrate, which is not accounted for by Ziman's theory for surfaces. The phonons that are specularly transmitted into an amorphous layer will be sufficiently randomized by the time they come back to the crystalline Si layer, the effect of which is practically equivalent to a diffuse reflection at the interface. Finally, we derive a simple expression for the specularity parameter at solid/amorphous interfaces and achieve good agreement between calculations and measurement values.

  3. Interfacial phonon scattering and transmission loss in > 1 µm thick silicon-on-insulator thin films

    DOE PAGES

    Jiang, Puqing; Lindsay, Lucas R.; Huang, Xi; ...

    2018-05-17

    Scattering of phonons at boundaries of a crystal (grains, surfaces, or solid/solid interfaces) is characterized by the phonon wavelength, the angle of incidence, and the interface roughness, as historically evaluated using a specularity parameter p formulated by Ziman [Electrons and Phonons (Clarendon Press, Oxford, 1960)]. This parameter was initially defined to determine the probability of a phonon specularly reflecting or diffusely scattering from the rough surface of a material. The validity of Ziman's theory as extended to solid/solid interfaces has not been previously validated. Here, to better understand the interfacial scattering of phonons and to test the validity of Ziman's theory, we precisely measured the in-plane thermal conductivity of a series of Si films in silicon-on-insulator (SOI) wafers by time-domain thermoreflectance (TDTR) for a Si film thickness range of 1–10 μm and a temperature range of 100–300 K. The Si/SiO2 interface roughness was determined to be 0.11 ± 0.04 nm using transmission electron microscopy (TEM). Furthermore, we compared our in-plane thermal conductivity measurements to theoretical calculations that combine first-principles phonon transport with Ziman's theory. Calculations using Ziman's specularity parameter significantly overestimate values from the TDTR measurements. We attribute this discrepancy to phonon transmission through the solid/solid interface into the substrate, which is not accounted for by Ziman's theory for surfaces. The phonons that are specularly transmitted into an amorphous layer will be sufficiently randomized by the time they come back to the crystalline Si layer, the effect of which is practically equivalent to a diffuse reflection at the interface. Finally, we derive a simple expression for the specularity parameter at solid/amorphous interfaces and achieve good agreement between calculations and measurement values.

  4. Climate specific thermomechanical fatigue of flat plate photovoltaic module solder joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah

    FEM simulations of PbSn solder fatigue damage are used to evaluate seven cities that represent a variety of climatic zones. It is shown that the rate of solder fatigue damage is not ranked with the cities' climate designations. For an accurate ranking, the mean maximum daily temperature, daily temperature change and a characteristic of clouding events are all required. A physics-based empirical equation is presented that accurately calculates solder fatigue damage according to these three factors. An FEM comparison of solder damage accumulated through service and thermal cycling demonstrates the number of cycles required for an equivalent exposure. For an equivalent 25-year exposure, the number of thermal cycles (-40 degrees C to 85 degrees C) required ranged from roughly 100 to 630 for the cities examined. It is demonstrated that increasing the maximum cycle temperature may significantly reduce the number of thermal cycles required for an equivalent exposure.
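
    The paper obtains its cycle equivalence from FEM fatigue-damage simulations. Purely as a point of reference, the sketch below evaluates a conventional Norris-Landzberg acceleration factor for SnPb solder between a hypothetical field profile and a -40 to 85 degC chamber cycle; the exponents and activation-energy term are the commonly used SnPb values, and the field-profile numbers are placeholders rather than values taken from the paper.

```python
import math

def norris_landzberg_af(dT_field, dT_test, f_field, f_test, Tmax_field_C, Tmax_test_C,
                        n=1.9, m=1.0 / 3.0, Ea_over_k=1414.0):
    """Norris-Landzberg acceleration factor for SnPb solder (classic coefficients).

    dT      : cycle temperature swings (K)
    f       : cycle frequencies (cycles per day)
    Tmax_*  : maximum cycle temperatures (deg C)
    """
    arrhenius = math.exp(Ea_over_k * (1.0 / (Tmax_field_C + 273.15)
                                      - 1.0 / (Tmax_test_C + 273.15)))
    return (dT_test / dT_field) ** n * (f_field / f_test) ** m * arrhenius

# Placeholder field profile: 30 K daily swing peaking at 60 degC, one cycle per day;
# chamber cycle: -40 to 85 degC (125 K swing), two cycles per day.
af = norris_landzberg_af(dT_field=30.0, dT_test=125.0, f_field=1.0, f_test=2.0,
                         Tmax_field_C=60.0, Tmax_test_C=85.0)
field_cycles_25yr = 25 * 365
print(f"AF ~ {af:.1f}; equivalent chamber cycles ~ {field_cycles_25yr / af:.0f}")
```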

  5. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force report.

    PubMed

    Coons, Stephen Joel; Gwaltney, Chad J; Hays, Ron D; Lundy, J Jason; Sloan, Jeff A; Revicki, Dennis A; Lenderking, William R; Cella, David; Basch, Ethan

    2009-06-01

    Patient-reported outcomes (PROs) are the consequences of disease and/or its treatment as reported by the patient. The importance of PRO measures in clinical trials for new drugs, biological agents, and devices was underscored by the release of the US Food and Drug Administration's draft guidance for industry titled "Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims." The intent of the guidance was to describe how the FDA will evaluate the appropriateness and adequacy of PRO measures used as effectiveness end points in clinical trials. In response to the expressed need of ISPOR members for further clarification of several aspects of the draft guidance, ISPOR's Health Science Policy Council created three task forces, one of which was charged with addressing the implications of the draft guidance for the collection of PRO data using electronic data capture modes of administration (ePRO). The objective of this report is to present recommendations from ISPOR's ePRO Good Research Practices Task Force regarding the evidence necessary to support the comparability, or measurement equivalence, of ePROs to the paper-based PRO measures from which they were adapted. The task force was composed of the leadership team of ISPOR's ePRO Working Group and members of another group (i.e., ePRO Consensus Development Working Group) that had already begun to develop recommendations regarding ePRO good research practices. The resulting task force membership reflected a broad array of backgrounds, perspectives, and expertise that enriched the development of this report. The prior work became the starting point for the Task Force report. A subset of the task force members became the writing team that prepared subsequent iterations of the report that were distributed to the full task force for review and feedback. In addition, review beyond the task force was sought and obtained. Along with a presentation and discussion period at an ISPOR meeting, a draft version of the full report was distributed to roughly 220 members of a reviewer group. The reviewer group comprised individuals who had responded to an emailed invitation to the full membership of ISPOR. This Task Force report reflects the extensive internal and external input received during the 16-month good research practices development process. RESULTS/RECOMMENDATIONS: An ePRO questionnaire that has been adapted from a paper-based questionnaire ought to produce data that are equivalent or superior (e.g., higher reliability) to the data produced from the original paper version. Measurement equivalence is a function of the comparability of the psychometric properties of the data obtained via the original and adapted administration mode. This comparability is driven by the amount of modification to the content and format of the original paper PRO questionnaire required during the migration process. The magnitude of a particular modification is defined with reference to its potential effect on the content, meaning, or interpretation of the measure's items and/or scales. Based on the magnitude of the modification, evidence for measurement equivalence can be generated through combinations of the following: cognitive debriefing/testing, usability testing, equivalence testing, or, if substantial modifications have been made, full psychometric testing. 
As long as only minor modifications were made to the measure during the migration process, a substantial body of existing evidence suggests that the psychometric properties of the original measure will still hold for the ePRO version. Hence, an evaluation limited to cognitive debriefing and usability testing only may be sufficient. However, where more substantive changes have occurred in the migration process, it is necessary to confirm that the adaptation to the ePRO format did not introduce significant response bias and that the two modes of administration produce essentially equivalent results. Recommendations regarding the study designs and statistical approaches for assessing measurement equivalence are provided. The electronic administration of PRO measures offers many advantages over paper administration. We provide a general framework for decisions regarding the level of evidence needed to support modifications that are made to PRO measures when they are migrated from paper to ePRO devices. The key issues include: 1) the determination of the extent of modification required to administer the PRO on the ePRO device and 2) the selection and implementation of an effective strategy for testing the measurement equivalence of the two modes of administration. We hope that these good research practice recommendations provide a path forward for researchers interested in migrating PRO measures to electronic data collection platforms.
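
One common way to examine paper-to-ePRO equivalence in a crossover design is to estimate an intraclass correlation for absolute agreement alongside the mean score difference. The sketch below is an illustrative Python calculation on synthetic scores; it is not part of the task force recommendations, and the equivalence margins a study would pre-specify are not shown.

```python
import numpy as np

def icc_agreement(scores):
    """ICC(2,1), absolute agreement, for an (n subjects x k modes) score matrix."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between modes
    ss_err = np.sum((x - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Synthetic paper vs. ePRO scores for 8 respondents (0-100 scale), for illustration only.
paper = np.array([72, 55, 90, 64, 81, 47, 68, 75], dtype=float)
epro  = np.array([70, 58, 88, 66, 80, 45, 70, 74], dtype=float)
scores = np.column_stack([paper, epro])

diff = epro - paper
ci = 1.96 * diff.std(ddof=1) / np.sqrt(diff.size)   # normal-approximation 95% CI
print(f"ICC(2,1) = {icc_agreement(scores):.3f}")
print(f"mean difference = {diff.mean():.2f} (95% CI +/- {ci:.2f})")
```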

  6. Describing soil surface microrelief by crossover length and fractal dimension

    NASA Astrophysics Data System (ADS)

    Vidal Vázquez, E.; Miranda, J. G. V.; Paz González, A.

    2007-05-01

    Accurate description of soil surface topography is essential because different tillage tools produce different soil surface roughness conditions, which in turn affects many processes across the soil surface boundary. Advantages of fractal analysis in soil microrelief assessment have been recognised but the use of fractal indices in practice remains challenging. There is also little information on how soil surface roughness decays under natural rainfall conditions. The objectives of this work were to investigate the decay of initial surface roughness induced by natural rainfall under different soil tillage systems and to compare the performances of a classical statistical index and fractal microrelief indices. Field experiments were performed on an Oxisol at Campinas, São Paulo State (Brazil). Six tillage treatments, namely, disc harrow, disc plow, chisel plow, disc harrow + disc level, disc plow + disc level and chisel plow + disc level were tested. Measurements were made four times, firstly just after tillage and subsequently with increasing amounts of natural rainfall. Duplicated measurements were taken per treatment and date, yielding a total of 48 experimental surfaces. The sampling scheme was a square grid with 25×25 mm point spacing and the plot size was 1350×1350 mm, so that each data set consisted of 3025 individual elevation points. Statistical and fractal indices were calculated both for oriented and random roughness conditions, i.e. after height readings have been corrected for slope and for slope and tillage tool marks. The main drawback of the standard statistical index random roughness, RR, lies in its non-spatial nature. The fractal approach requires two indices, fractal dimension, D, which describes how roughness changes with scale, and crossover length, l, specifying the variance of surface microrelief at a reference scale. Fractal parameters D and l were estimated by two independent self-affine models, semivariogram (SMV) and local root mean square (RMS). Both algorithms, SMV and RMS, gave equivalent results for the D and l indices, irrespective of trend removal procedure, even if some bias was present, which is in accordance with previous work. Treatments with two tillage operations had the greatest D values, irrespective of evolution stage under rainfall and trend removal procedure. Primary tillage had the greatest initial values of RR and l. Differences in D values between treatments with primary tillage and those with two successive tillage operations were significant for oriented but not for random conditions. The statistical index RR and the fractal indices l and D decreased with increasing cumulative rainfall following different patterns. The decay of l and D from their initial values was very sharp after the first 24.4 mm of cumulative rainfall. For five out of six tillage treatments a significant relationship between D and l was found for the random microrelief conditions, allowing a covariance analysis. It was concluded that using RR or l together with D best allows joint description of vertical and horizontal soil roughness variations.
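
    For readers unfamiliar with the SMV approach, the following is a minimal one-dimensional sketch: it computes an empirical semivariogram, fits gamma(h) = l^(2-2H) * h^(2H) in log-log space, and reports D and the crossover length l. It assumes the Huang-and-Bradford-style crossover-length convention and uses a synthetic transect with arbitrary vertical units; the published analysis is two-dimensional (where D = 3 - H) and includes trend removal, both of which are omitted here.

```python
import numpy as np

def semivariogram(z, dx, max_lag):
    """Empirical semivariogram of a 1-D elevation transect with point spacing dx."""
    lags, gamma = [], []
    for k in range(1, max_lag + 1):
        d = z[k:] - z[:-k]
        lags.append(k * dx)
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(lags), np.array(gamma)

def fractal_indices(z, dx, max_lag=20):
    """Fit gamma(h) = l**(2 - 2H) * h**(2H) in log-log space.

    Returns (D, l): fractal dimension of the transect (D = 2 - H for a profile)
    and crossover length l, the scale at which gamma(l) = l**2.
    """
    h, g = semivariogram(z, dx, max_lag)
    slope, intercept = np.polyfit(np.log10(h), np.log10(g), 1)
    H = slope / 2.0
    l = 10.0 ** (intercept / (2.0 - 2.0 * H))
    return 2.0 - H, l

# Synthetic fractional-Brownian-like transect (H ~ 0.5 random walk), for illustration only.
rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(3025))   # arbitrary vertical units
D, l = fractal_indices(z, dx=25.0)         # 25 mm grid spacing, as in the field plots
print(f"D ~ {D:.2f}, crossover length l ~ {l:.3g} mm")
```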

  7. Summary of Research Academic Departments, 1987-1988

    DTIC Science & Technology

    1988-12-01

    ...quantify the computer system's ability to enhance learning of the course... [engi]neering students and their faculty with roughly equivalent computers; one group... Sponsor: Naval Academy Instructional Development Advisory Committee. To understand mathematics, a student must under-... also to explain the central concepts... Mathematics Department. The project will attempt to move toward these goals by preparing extra resources for in-class and extra instruction... Students

  8. Transition Induced by a Streamwise Array of Roughness Elements on a Supersonic Flat Plate

    NASA Technical Reports Server (NTRS)

    Chou, Amanda; Kegerise, Michael A.

    2017-01-01

    Roughness is unavoidable on practical high-speed vehicles, so it is critical to determine its impact on boundary layer transition. The flow field downstream of a streamwise array of cylindrical roughness elements is probed with hot-wire anemometry in this experiment. Mean flow distortion is examined in several measurement planes in the wake of the cylindrical roughness using the streak strength profiles and contour plots of the mass flux and total temperature. The roughness element heights and spacings were varied and their instability modes were examined. Cylindrical roughness elements approximately 140 micron tall produce an odd instability mode that grows weakly with downstream distance in the measurement range of this experiment. Cylindrical roughness elements approximately 280 micron tall produce an even instability mode that grows, becomes nonlinear, and then breaks down. Transition onset remains constant relative to the most downstream roughness in the streamwise array when the 280 micron roughness elements are spaced 2 diameters apart, but occurs at an earlier upstream location relative to the most downstream roughness when the elements are spaced 4 diameters apart. The wake behind roughness elements spaced 2 diameters apart does not appear to recover before the next downstream roughness element, so the location of transition shifts with the location of the most downstream roughness element in the array. When the roughness elements are spaced 4 diameters apart, the flow behind the first roughness element has enough space to recover before feeding into the second roughness element and thus moves transition forward.

  9. A Practical Model of Quartz Crystal Microbalance in Actual Applications.

    PubMed

    Huang, Xianhe; Bai, Qingsong; Hu, Jianguo; Hou, Dong

    2017-08-03

    A practical model of the quartz crystal microbalance (QCM) is presented, which considers both the Gaussian distribution characteristic of the mass sensitivity and the influence of the electrodes on the mass sensitivity. The equivalent mass sensitivity of 5 MHz and 10 MHz AT-cut QCMs with different electrode sizes was calculated according to this practical model. The equivalent mass sensitivity of this practical model differs from Sauerbrey's mass sensitivity, and the error between them increases sharply as the electrode radius decreases. A series of experiments in which rigid gold films were plated onto QCMs was carried out, and the experimental results showed that this practical model is more valid than the classical Sauerbrey equation. The practical model based on the equivalent mass sensitivity is convenient and accurate in actual measurements.
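
    For context, the flat, uniform-sensitivity baseline that the practical model refines is the classical Sauerbrey relation, quoted here in its standard textbook form (not reproduced from the paper):

```latex
\Delta f \;=\; -\,\frac{2 f_0^{2}}{A\sqrt{\rho_q \mu_q}}\,\Delta m
```

    where f0 is the fundamental resonance frequency, A the active area, ρq and μq the density and shear modulus of AT-cut quartz, and Δm the attached rigid mass. The practical model replaces the implied uniform sensitivity over A with a radially varying, roughly Gaussian sensitivity that depends on the electrode size.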

  10. Parameterisation of clastic sediments including benthic structures

    NASA Astrophysics Data System (ADS)

    Bobertz, B.; Harff, J.; Bohling, B.

    2009-02-01

    The sediment transport processes in the south-western Baltic Sea are predicted by means of a numerical model in the project DYNAS. Two sediment parameters notably influence the modelling results: the critical shear stress velocity and the bottom roughness. This paper presents how these factors are parameterised and extrapolated across the investigation area. The critical shear stress velocity is parameterised based on grain size data, combining approximations after Hjulström [Hjulström, F., 1935: Studies in the morphological activity of rivers as illustrated by the river Fyris. Geological Institution of University of Uppsala: Bulletin (25): 221-528.], Shields [Shields, A., 1936: Anwendung der Ähnlichkeits-Mechanik und der Turbulenzforschung auf die Geschiebebewegung. Mitteilungen der Preussischen Versuchsanstalt für Wasserbau und Schiffahrt (26): 26 pp.] and Bohling [Bohling, B., 2003: Untersuchungen zur Mobilität natürlicher und anthropogener Sedimente in der Mecklenburger Bucht. unpublished doctoral thesis, Mathematisch-Naturwissenschaftliche Fakultät, Ernst-Moritz-Arndt-Universität Greifswald/Germany, 156 pp.]. The roughness length, in the absence of macro zoo-benthos and their structures, is likewise parameterised based on grain size, employing Soulsby [Soulsby, R.L., 1997: Dynamics of Marine Sands: a Manual for Practical Applications. London, Thomas Telford Publications. 249 pp.], Nielsen [Nielsen, P., 1983: Analytical determination of nearshore wave height variation due to refraction shoaling and friction. Coastal Engineering 7, 233-251.] and Yalin [Yalin, M.S., 1977: Mechanics of Sediment Transport. Pergamon Press, New York. 298 pp.]. No equivalently simple parameterisations exist for biologically caused bed roughness. Here, findings of Friedrichs [Friedrichs, M., 2004: Flow-induced effects of macro zoo-benthic structures on the near-bed sediment transport. Dissertation, Universität Rostock, 80 S.] and estimations by the DYNAS biologists group were combined in order to derive roughness lengths from abundance measurements of four previously selected key species, which represent the originators of the dominant benthic structures at the sea floor in the south-western Baltic Sea. The critical shear stress velocity and the bed roughness are known at only a few sample sites. They were extrapolated into the larger investigation area using a proxy-target concept. The mean near-bottom milieu (bathymetry, median grain size, salinity, oxygen), derived from numerical modelling results, serves as the proxy. Since the milieu parameters are measured at the sampling sites for which the target parameters have been determined, a combined hierarchical and supervised classification was employed to transfer the local knowledge into the unknown investigation area.
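
    As a generic illustration of grain-size-based parameterisation (this is not the combined DYNAS scheme; the constants, the z0 = d50/12 grain-roughness relation, and the Soulsby–Whitehouse-type critical Shields curve are assumptions typical of the cited literature), the two target quantities can be sketched as:

```python
# Illustrative only: grain-size based estimates of roughness length and
# critical shear velocity, assuming quartz-density sand in sea water.
import numpy as np

g, nu, rho, rho_s = 9.81, 1.36e-6, 1027.0, 2650.0   # SI units, sea water

def grain_roughness_length(d50):
    """Grain-related roughness length z0 [m] from median grain size d50 [m]."""
    return d50 / 12.0

def critical_shear_velocity(d50):
    """Critical shear (friction) velocity u* [m/s] for grain size d50 [m]."""
    s = rho_s / rho
    d_star = (g * (s - 1.0) / nu**2) ** (1.0 / 3.0) * d50
    theta_cr = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - np.exp(-0.020 * d_star))
    return np.sqrt(theta_cr * g * (s - 1.0) * d50)

# Example: fine sand, d50 = 0.2 mm
print(grain_roughness_length(200e-6), critical_shear_velocity(200e-6))
```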

  11. Composite Pillars with a Tunable Interface for Adhesion to Rough Substrates

    PubMed Central

    2016-01-01

    The benefits of synthetic fibrillar dry adhesives for temporary and reversible attachment to hard objects with smooth surfaces have been successfully demonstrated in previous studies. However, surface roughness induces a dramatic reduction in pull-off stresses and necessarily requires revised design concepts. Toward this aim, we introduce cylindrical two-phase single pillars, which are composed of a mechanically stiff stalk and a soft tip layer. Adhesion to smooth and rough substrates is shown to exceed that of conventional pillar structures. The adhesion characteristics can be tuned by varying the thickness of the soft tip layer, the ratio of the Young’s moduli and the curvature of the interface between the two phases. For rough substrates, adhesion values similar to those obtained on smooth substrates were achieved. Our concept of composite pillars overcomes current practical limitations caused by surface roughness and opens up fields of application where roughness is omnipresent. PMID:27997118

  12. Arithmetic Practice Can Be Modified to Promote Understanding of Mathematical Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Fyfe, Emily R.; Dunwiddie, April E.

    2015-01-01

    This experiment tested if a modified version of arithmetic practice facilitates understanding of math equivalence. Children within 2nd-grade classrooms (N = 166) were randomly assigned to practice single-digit addition facts using 1 of 2 workbooks. In the control workbook, problems were presented in the traditional "operations = answer"…

  13. Building America Best Practices Series Volume 15: 40% Whole-House Energy Savings in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-09-01

    This best practices guide is the 15th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  14. Building America Best Practices Series Volume 16: 40% Whole-House Energy Savings in the Mixed-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-09-01

    This best practices guide is the 16th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  15. Development of flexible rotor balancing criteria

    NASA Technical Reports Server (NTRS)

    Walter, W. W.; Rieger, N. F.

    1979-01-01

    Several studies in which analytical procedures were used to obtain balancing criteria for flexible rotors are described. General response data for a uniform rotor in damped flexible supports were first obtained for plain cylindrical bearings, tilting pad bearings, axial groove bearings, and partial arc bearings. These data formed the basis for the flexible rotor balance criteria presented. A procedure by which a practical rotor in bearings could be reduced to an equivalent uniform rotor was developed and tested. It was found that the equivalent rotor response always exceeded the practical rotor response by more than sixty percent for the cases tested. The equivalent rotor procedure was then tested against six practical rotor configurations for which data were available. It was found that the equivalent rotor method offered a procedure by which balance criteria could be selected for practical flexible rotors, using the charts given for the uniform rotor.

  16. Secure Integration of Radio Frequency Identification (RFID) Technology into a Supply Chain

    DTIC Science & Technology

    2005-09-01

    serves as the rough equivalent of a license plate on an automobile . Figure 1 (below) illustrates the typical construction of an RFID tag. An antenna...writable passive tags (RW) Reprogrammable Class 3 Semi-active tags Reprogrammable Class 4 Active tags Reprogrammable Class 5 Readers... Reprogrammable Table 1. EPC Tag Classes[3]. Table 2 summarizes the advantages, disadvantages and applications of each type of tag. Tag Type Advantages

  17. Smoothed Two-Dimensional Edges for Laminar Flow

    NASA Technical Reports Server (NTRS)

    Holmes, B. J.; Liu, C. H.; Martin, G. L.; Domack, C. S.; Obara, C. J.; Hassan, A.; Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    New concept allows passive method for installing flaps, slats, ice-protection equipment, and other leading-edge devices on natural-laminar-flow (NLF) wings without causing loss of laminar flow. Two-dimensional roughness elements in laminar boundary layers are strategically shaped to increase the critical (allowable) height of roughness. Facilitates installation of leading-edge devices by practical manufacturing methods.

  18. Effects of Roughing on Finish Rolling Simulations in Microalloyed Strip Steels

    NASA Astrophysics Data System (ADS)

    Chalimba, S. A. J.; Mostert, R. J.; Stumpf, W. E.; Siyasiya, C. W.; Banks, K. M.

    2017-11-01

    The effects of a roughing pass in hot rolling simulations were assessed in VN and Nb-Ti steels. Continuous cooling phase transformation temperatures, flow curves, softening mechanisms (dynamic transformation, DT, and dynamic recrystallization, DRX), and deformed microstructure morphologies were analyzed. The application of one or more roughing passes eliminates the effects of prior microstructural history and ensures that all stock material experiences equivalent hot working conditions and an equivalent state of the microalloying elements. It has been shown that roughing in hot simulation has the following positive influences: (1) it provides more reliable flow stress data; (2) it gives greater consistency and accuracy in the analysis of softening mechanisms, yielding three distinct regimes (a DT regime at temperatures below 800 °C, a DT/DRX inter-mode regime between 800 and 950 °C, and a DRX regime above 950 °C for the VN steel); (3) it promotes softening mechanisms, as evidenced by low critical strains (the critical strain for DT, ε_c,DT, was within the range 0.08-0.12, whereas for the finishing-only pass ε_c,DT was in the range 0.11-0.14 at a strain rate of 0.1 s-1); (4) for roughing and finishing (RF) schedules, DT was verified to occur at temperatures 117 and 133 °C above the Ae3 for the VN steel and the Nb-Ti steel, respectively, whereas the finishing-only (F-only) schedules showed that DT occurred only at temperatures below the Ae3; (5) RF schedules promoted uniform microstructural morphologies compared to the inhomogeneous microstructures obtained in F-only schedules.

  19. Rock discontinuity surface roughness variation with scale

    NASA Astrophysics Data System (ADS)

    Bitenc, Maja; Kieffer, D. Scott; Khoshelham, Kourosh

    2017-04-01

    Rock discontinuity surface roughness refers to local departures of the discontinuity surface from planarity and is an important factor influencing shear resistance. In practice, the Joint Roughness Coefficient (JRC) roughness parameter is commonly relied upon and input to a shear strength criterion such as that developed by Barton and Choubey [1977]. The estimation of roughness by JRC is hindered firstly by the subjective nature of visually comparing the joint profile to the ten standard profiles. Secondly, when correlating the standard JRC values and other objective measures of roughness, the roughness idealization is limited to a 2D profile of 10 cm length. With the advance of measuring technologies that provide accurate and high resolution 3D data of surface topography on different scales, new 3D roughness parameters have been developed. A desirable parameter is one that describes rock surface geometry as well as the direction and scale dependency of roughness. In this research a 3D roughness parameter developed by Grasselli [2001] and adapted by Tatone and Grasselli [2009] is adopted. It characterizes surface topography as the cumulative distribution of local apparent inclination of asperities with respect to the shear strength (analysis) direction. Thus, the 3D roughness parameter describes the roughness amplitude and anisotropy (direction dependency), but does not capture the scale properties. In different studies the roughness scale-dependency has been attributed to data resolution or to the size of the joint surface (see a summary of research in [Tatone and Grasselli, 2012]). Clearly, lower resolution results in lower roughness. Investigations of the surface size effect, on the other hand, have produced conflicting results. While some studies have shown a decrease in roughness with increasing discontinuity size (negative scale effect), others have shown the existence of positive scale effects, or both positive and negative scale effects. We hypothesize that roughness can increase or decrease with joint size, depending on the large scale roughness (or waviness), which enters the roughness calculation once the discontinuity size increases. Therefore, our objective is to characterize roughness at various spatial scales, rather than at changing surface size. Firstly, the rock surface is interpolated onto a grid on which a Discrete Wavelet Transform (DWT) is applied. The resulting surface components have different frequencies, or in other words, they have a certain physical scale depending on the decomposition level and input grid resolution. Secondly, the Grasselli parameter is computed for the original and each decomposed surface. Finally, the relative roughness change is analyzed with respect to increasing roughness wavelength for four different rock samples. The scale variation depends on the sample itself and thus indicates its potential mechanical behavior. References: - Barton, N. and V. Choubey (1977). "The shear strength of rock joints in theory and practice." Rock Mechanics and Rock Engineering 10(1): 1-54. - Grasselli, G. (2001). Shear strength of rock joints based on quantified surface description. École Polytechnique Fédérale de Lausanne. Lausanne, EPFL. - Tatone, B. S. A. and G. Grasselli (2009). "A method to evaluate the three-dimensional roughness of fracture surfaces in brittle geomaterials." Review of Scientific Instruments 80(12) - Tatone, B. and G. Grasselli (2012). 
"An Investigation of Discontinuity Roughness Scale Dependency Using High-Resolution Surface Measurements." Rock Mechanics and Rock Engineering: 1-25.

  20. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors

    PubMed Central

    2017-01-01

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest. PMID:28745040
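
    For orientation, the flat-electrode baseline referred to above is the parallel-plate relation

```latex
C_{\text{flat}} \;=\; \frac{\varepsilon_0 \varepsilon_r A}{d}
```

    with ε_r the relative permittivity of the film of thickness d and A the electrode area; the extended equation adds corrections that grow with the rms roughness amplitude and depend on the correlation length and roughness exponent of the self-affine electrode surface (its exact form is given in the paper and is not reproduced here).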

  1. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors.

    PubMed

    Torabi, Solmaz; Cherry, Megan; Duijnstee, Elisabeth A; Le Corre, Vincent M; Qiu, Li; Hummelen, Jan C; Palasantzas, George; Koster, L Jan Anton

    2017-08-16

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest.

  2. Boundary Layer Control for Hypersonic Airbreathing Vehicles

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Nowak, Robert J.; Horvath, Thomas J.

    2004-01-01

    Active and passive methods for tripping hypersonic boundary layers have been examined in NASA Langley Research Center wind tunnels using a Hyper-X model. This investigation assessed several concepts for forcing transition, including passive discrete roughness elements and active mass addition (or blowing), in the 20-Inch Mach 6 Air and the 31-Inch Mach 10 Air Tunnels. Heat transfer distributions obtained via phosphor thermography, shock system details, and surface streamline patterns were measured on a 0.333-scale model of the Hyper-X forebody. The comparisons between the active and passive methods for boundary layer control were conducted at test conditions that nearly match the Hyper-X nominal Mach 7 flight test-point of an angle-of-attack of 2-deg and length Reynolds number of 5.6 million. For passive roughness, the primary parametric variation was a range of trip heights within the calculated boundary layer thickness for several trip concepts. The passive roughness study resulted in a swept ramp configuration, scaled to be roughly 0.6 of the calculated boundary layer thickness, being selected for the Mach 7 flight vehicle. For the active blowing study, the manifold pressure was systematically varied (while monitoring the mass flow) for each configuration to determine the jet penetration height, with schlieren, and transition movement, with the phosphor system, for comparison to the passive results. All the blowing concepts tested, which included various rows of sonic orifices (holes), two- and three-dimensional slots, and random porosity, provided transition onset near the trip location with manifold stagnation pressures on the order of 40 times the model surface static pressure, which is adequate to ensure sonic jets. The present results indicate that the jet penetration height for blowing was roughly half the height required with passive roughness elements for an equivalent amount of transition movement.

  3. 21 CFR 26.15 - Monitoring continued equivalence.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.15 Monitoring continued equivalence. Monitoring activities for the purpose of maintaining equivalence shall include review... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Monitoring continued equivalence. 26.15 Section 26...

  4. 21 CFR 26.15 - Monitoring continued equivalence.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.15 Monitoring continued equivalence. Monitoring activities for the purpose of maintaining equivalence shall include review... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Monitoring continued equivalence. 26.15 Section 26...

  5. 21 CFR 26.15 - Monitoring continued equivalence.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.15 Monitoring continued equivalence. Monitoring activities for the purpose of maintaining equivalence shall include review... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Monitoring continued equivalence. 26.15 Section 26...

  6. Testing Einstein's theory of gravity in a millisecond pulsar triple system

    NASA Astrophysics Data System (ADS)

    Archibald, Anne

    2015-04-01

    Einstein's theory of gravity depends on a key postulate, the strong equivalence principle. This principle says, among other things, that all objects fall the same way, even objects with strong self-gravity. Almost every metric theory of gravity other than Einstein's general relativity violates the strong equivalence principle at some level. While the weak equivalence principle--for objects with negligible self-gravity--has been tested in the laboratory, the strong equivalence principle requires astrophysical tests. Lunar laser ranging provides the best current tests by measuring whether the Earth and the Moon fall the same way in the gravitational field of the Sun. These tests are limited by the weak self-gravity of the Earth: the gravitational binding energy (over c²) over the mass is only 4.6 × 10⁻¹⁰. By contrast, for neutron stars this same ratio is expected to be roughly 0.1. Thus the recently-discovered system PSR J0337+17, a hierarchical triple consisting of a millisecond pulsar and two white dwarfs, offers the possibility of a test of the strong equivalence principle that is more sensitive by a factor of 20 to 100 than the best existing test. I will describe our observations of this system and our progress towards such a test.

  7. Practicing What We Teach: Learning from Experience to Improve Adult Program Administration

    ERIC Educational Resources Information Center

    Jass, Lori K.

    2012-01-01

    Bethel University in St. Paul, Minnesota, comprises three primary units that each serve a distinct population: the College of Arts and Sciences (CAS) is a residential college for roughly 2,800 traditional-age undergraduates; the College of Adult and Professional Studies and Graduate School (CAPS/GS) serves roughly 2,200 adult learners at both the…

  8. Creating Cultures of Schooling: Historical and Conceptual Background of the KEEP/Rough Rock Collaboration.

    ERIC Educational Resources Information Center

    Jordan, Cathie

    1995-01-01

    Discusses the collaborative efforts of the Hawaiian Kamehameha Early Education Program (KEEP) and the Navajo Rough Rock Community School in Arizona to develop educational practices and strategies that would help minority-language children succeed in school. Examines the modification of KEEP strategies for use with Navajo children. (16 references)…

  9. Examination of Routine Practice Patterns in the Hospital Information Data Warehouse: Use of OLAP and Rough Set Analysis with Clinician Feedback

    PubMed Central

    Grant, Andrew; Grant, Gwyneth; Gagné, Jean; Blanchette, Carl; Comeau, Émilie; Brodeur, Guillaume; Dionne, Jonathon; Ayite, Alphonse; Synak, Piotr; Wroblewski, Jakub; Apanowitz, Cas

    2001-01-01

    The patient-centred electronic patient record enables retrospective analysis of practice patterns as one means to assist clinicians in adjusting and improving their practice. An interrogation of one year's data in the Sherbrooke University Hospital data warehouse, linking test use to Diagnostic Related Group (DRG), showed that one-third of patients used two-thirds of the diagnostic tests. Using rough set analysis, zones of repeated tests were demonstrated where results remained within stable limits. It was concluded that 30% of fluid and electrolyte testing was probably unnecessary. These findings led to an endorsement of changing the test request formats in the hospital information system from profiles to individual tests requiring justification.

  10. Positive and Negative Reinforcement Effects on Behavior in a Three-Person Microsociety.

    DTIC Science & Technology

    1983-12-01

    Per-hour earning potential was roughly equivalent among the groups. Prdr The consequences of completing a work trip were varied to assess the effects ...31 abandoned his previously established pattern. These effects are attributable, at least in part, to the style of alternating work that the... working under aversive control. These effects suggest that the functional properties of work (C-.h. consequences) were far more significant to the group

  11. The phase topology of a special case of Goryachev integrability in rigid body dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryabov, P. E., E-mail: orelryabov@mail.ru

    2014-07-31

    The phase topology of a special case of Goryachev integrability in the problem of motion of a rigid body in a fluid is investigated using the method of Boolean functions, which was developed by Kharlamov for algebraically separated systems. The bifurcation diagram of the moment map is found and the Fomenko invariant, which classifies the systems up to rough Liouville equivalence, is specified. Bibliography: 15 titles. (paper)

  12. Airborne Operations in World War II, European Theater

    DTIC Science & Technology

    1956-09-01

    GARDEN Gee Hamilcar HANDS UP Formation usually composed of two or more elements and roughly equivalent to a squadron Forward Visual Control Post Ground...USAF HISTORICAL STUDIES: NO. 97 AIRBORNE OPERATIONS IN WORLD WAR II, EUROPEAN THEATER By Dr. John C. Warren USAF Historical Division Research Studies...OMB control number. 1. REPORT DATE SEP 1956 2. REPORT TYPE 3. DATES COVERED - 4. TITLE AND SUBTITLE Airborne Operations in World War II 5a

  13. Pollutant emissions from flat-flame burners at high pressures

    NASA Technical Reports Server (NTRS)

    Maahs, H. G.; Miller, I. M.

    1980-01-01

    Maximum flame temperatures and pollutant emission measurements for NOx, CO, and UHC (unburned hydrocarbons) are reported for premixed methane air flat flames at constant total mass flow rate over the pressure range from 1.9 to 30 atm and for equivalence ratios from 0.84 to 1.12. For any given pressure, maxima typically occur in both the temperature and NOx emissions curves slightly to the lean side of stoichiometric conditions. The UHC emissions show minima at roughly the same equivalence ratios. The CO emissions, however, increase continually with increasing equivalence ratio. Flame temperature and NOx emissions decrease with increasing pressure, while the opposite is true for the CO and UHC emissions. The NOx data correlate reasonably well as a function of flame temperature only. Four flameholders, differing only slightly, were used. In general, the temperature and emissions data from these four flameholders are similar, but some differences also exist. These differences appear to be related to minor variations in the condition of the flameholder surfaces.

  14. Sodium Lauryl Sulfate Stimulates the Generation of Reactive Oxygen Species through Interactions with Cell Membranes.

    PubMed

    Mizutani, Taeko; Mori, Ryota; Hirayama, Misaki; Sagawa, Yuki; Shimizu, Kenji; Okano, Yuri; Masaki, Hitoshi

    2016-12-01

    Sodium lauryl sulfate (SLS), a representative anionic surfactant, is well-known to induce rough skin following single or multiple topical applications. The mechanism by which SLS induces rough skin is thought to result from the disruption of skin moisture function consisting of NMF and epidermal lipids. However, a recent study demonstrated that topically applied SLS easily penetrates into the living cell layers of the epidermis, which suggests that physiological alterations of keratinocytes might cause the SLS-induced rough skin. This study was conducted to clarify the effects of SLS on keratinocytes to demonstrate the contribution of SLS to the induction of rough skin. In addition, the potentials of other widely used anionic surfactants to induce rough skin were evaluated. HaCaT keratinocytes treated with SLS had increased levels of intracellular ROS and IL-1α secretion. Application of SLS on the surface of a reconstructed epidermal equivalent also showed the increased generation of ROS. Further, SLS-treated cells showed an increase of intracellular calpain activity associated with the increase of intracellular Ca²⁺ concentration. The increase of intracellular ROS was abolished by the addition of BAPTA-AM, a specific chelator of Ca²⁺. In addition, IL-1α also stimulated ROS generation by HaCaT keratinocytes. An ESR spin-labeling study demonstrated that SLS increased the fluidity of membranes of liposomes and cells. Together, those results indicate that SLS initially interacts with cell membranes, which results in the elevation of intracellular Ca²⁺ influx. Ca²⁺ stimulates the secretion of IL-1α due to the activation of calpain, and also increases ROS generation. IL-1α also stimulates ROS generation by HaCaT keratinocytes. We conclude from these results that the elevation of intracellular ROS levels is one of the causes of SLS-induced rough skin. Finally, among the other anionic surfactants tested, sodium lauryl phosphate has less potential to induce rough skin because of its lower generation of ROS.

  15. Aerodynamic performance of transonic and subsonic airfoils: Effects of surface roughness, turbulence intensity, Mach number, and streamline curvature-airfoil shape

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang

    The effects of surface roughness, turbulence intensity, Mach number, and streamline curvature-airfoil shape on the aerodynamic performance of turbine airfoils are investigated in compressible, high speed flows. The University of Utah Transonic Wind Tunnel is employed for the experimental part of the study. Two different test sections are designed to produce Mach numbers, Reynolds numbers, passage mass flow rates, and physical dimensions which match values along turbine blades in operating engines: (i) a nonturning test section with a symmetric airfoil, and (ii) a cascade test section with a cambered turbine vane. The nonuniform, irregular, three-dimensional surface roughness is characterized using the equivalent sand grain roughness size. Changing the airfoil surface roughness condition has a substantial effect on wake profiles of total pressure loss coefficients, normalized Mach number, normalized kinetic energy, and on the normalized and dimensional magnitudes of Integrated Aerodynamic Losses produced by the airfoils. Comparisons between results for a symmetric airfoil and a cambered vane show that roughness has more substantial effects on losses produced by the symmetric airfoil than by the cambered vane. Data are also provided that illustrate that larger loss magnitudes are generally present with flow turning and cambered airfoils than with symmetric airfoils. The wake turbulence structure of symmetric airfoils and cambered vanes is also studied experimentally. The effects of surface roughness and freestream turbulence levels on wake distributions of mean velocity, turbulence intensity, and power spectral density profiles and on vortex shedding frequencies are quantified one axial chord length downstream of the test airfoils. As the level of surface roughness increases, all wake profile quantities broaden significantly and nondimensional vortex shedding frequencies decrease. Wake profiles produced by the symmetric airfoil are more sensitive to variations of surface roughness and freestream turbulence, compared with data from the cambered vane airfoil. Stanton numbers, skin friction coefficients, aerodynamic losses, and Reynolds analogy behavior are numerically predicted for a turbine vane using FLUENT with a k-epsilon RNG model to show the effects of Mach number, mainstream turbulence level, and surface roughness. Comparisons with wake aerodynamic loss experimental data are made. Numerically predicted skin friction coefficients and Stanton numbers are also used to deduce Reynolds analogy behavior on the vane suction and pressure sides.

  16. Electro-thermal analysis of contact resistance

    NASA Astrophysics Data System (ADS)

    Pandey, Nitin; Jain, Ishant; Reddy, Sudhakar; Gulhane, Nitin P.

    2018-05-01

    Electro-mechanical characterization of copper samples is performed at the macroscopic level to understand the dependence of electrical contact resistance and temperature on surface roughness and contact pressure. For two different surface roughness levels of the samples, six levels of load are selected and varied to capture the bulk temperature rise and electrical contact resistance. Accordingly, the copper samples are modelled and analysed using COMSOL™ as a simulation package and the results are validated against the experiments. The interface temperature during simulation is obtained using the Mikic elastic correlation and by directly entering the experimental contact resistance value. The load values are varied and then reversed in a similar fashion to capture the hysteresis losses. The governing equations and assumptions underlying these models and their significance are examined, and possible justifications for the observed variations are discussed. An equivalent Greenwood model is also predicted by mapping the results of the experiment.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Eun-Kyeong; Yeong Kim, Ji; Sub Kim, Sang, E-mail: sangsub@inha.ac.kr

    We describe the preparation of superhydrophobic SiO2 layers through a combination of surface roughness and fluorination. Electrospraying SiO2 precursor solutions that were prepared by a sol-gel route and included trichloro(1H,1H,2H,2H-perfluorooctyl)silane as a fluorination source produced highly rough, fluorinated SiO2 layers. In sharp contrast to the fluorinated flat SiO2 layer, the fluorinated rough SiO2 layer showed much enhanced repellency toward liquid droplets of different surface tensions. The surface fraction and the work of adhesion of the superhydrophobic SiO2 layers were determined based on the Cassie-Baxter and Young-Dupre equations, respectively. The satisfactory long-term stability for 30 days, the ultraviolet resistance and the thermal stability up to 400 °C of the superhydrophobic SiO2 layers prepared in this work confirm a promising practical application. - Graphical abstract: A schematic illustration of the electrospray deposition used for preparing SiO2 layers. Shapes of liquid droplets of water, glycerol, coffee, juice and milk created on the fluorinated rough SiO2 layer deposited on a silicon wafer. Highlights: Superhydrophobic SiO2 layers are realized by a combination of surface roughness and fluorination. The fluorinated rough SiO2 layer shows enhanced repellency toward various liquid droplets. The wetting behavior is explained based on the Cassie-Baxter and Young-Dupre equations. The superhydrophobic SiO2 layers confirm a promising practical application.

  18. Effects of surface roughness and electrokinetic heterogeneity on electroosmotic flow in microchannel

    NASA Astrophysics Data System (ADS)

    Masilamani, Kannan; Ganguly, Suvankar; Feichtinger, Christian; Bartuschat, Dominik; Rüde, Ulrich

    2015-06-01

    In this paper, a hybrid lattice-Boltzmann and finite-difference (LB-FD) model is applied to simulate the effects of three-dimensional surface roughness and electrokinetic heterogeneity on electroosmotic flow (EOF) in a microchannel. The lattice-Boltzmann (LB) method has been employed to obtain the flow field and a finite-difference (FD) method is used to solve the Poisson-Boltzmann (PB) equation for the electrostatic potential distribution. Numerical simulation of flow through a square cross-section microchannel with designed roughness is conducted and the results are critically analysed. The effects of surface heterogeneity on the electroosmotic transport are investigated for different roughness height, width, roughness interval spacing, and roughness surface potential. Numerical simulations reveal that the presence of surface roughness changes the nature of electroosmotic transport through the microchannel. It is found that the electroosmotic velocity decreases with the increase in roughness height and the velocity profile becomes asymmetric. For the same height of the roughness elements, the EOF velocity rises with the increase in roughness width. For the heterogeneously charged rough channel, the velocity profile shows a distinct deviation from the conventional plug-like flow pattern. The simulation results also indicate locally induced flow vortices which can be utilized to enhance the flow and mixing within the microchannel. The present study has important implications towards electrokinetic flow control in the microchannel, and can provide an efficient way to design a microfluidic system of practical interest.
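
    The finite-difference ingredient can be illustrated with a one-dimensional, linearized (Debye–Hückel) version of the Poisson–Boltzmann step; this is a sketch under simplifying assumptions (symmetric walls at a fixed zeta potential, constant properties), not the paper's three-dimensional LB-FD solver. In a full EOF model, the resulting potential field typically enters the flow solution as an electric body force term.

```python
# Minimal 1-D sketch: linearized Poisson-Boltzmann equation d2(psi)/dy2 = kappa^2 psi
# between two walls held at a zeta potential, solved by finite differences.
import numpy as np

def edl_potential(zeta=-0.025, height=1e-6, kappa=1e7, n=201):
    """Potential profile across a channel of given height [m]; kappa = 1/Debye length."""
    y = np.linspace(0.0, height, n)
    dy = y[1] - y[0]
    m = n - 2                               # number of interior unknowns
    A = np.zeros((m, m))
    np.fill_diagonal(A, -2.0 - (kappa * dy) ** 2)   # main diagonal
    np.fill_diagonal(A[1:], 1.0)                    # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)                 # super-diagonal
    b = np.zeros(m)
    b[0] -= zeta                            # known wall value psi(0) = zeta
    b[-1] -= zeta                           # known wall value psi(H) = zeta
    psi = np.empty(n)
    psi[0] = psi[-1] = zeta
    psi[1:-1] = np.linalg.solve(A, b)
    return y, psi
```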

  19. ROMI 4.0: Rough mill simulator 4.0 users manual

    Treesearch

    R. Edward Thomas; Timo Grueneberg; Urs Buehlmann

    2015-01-01

    The Rough MIll simulator (ROMI Version 4.0) is a computer software package for personal computers (PCs) that simulates current industrial practices for rip-first, chop-first, and rip and chop-first lumber processing. This guide shows how to set up the software; design, implement, and execute simulations; and examine the results. ROMI 4.0 accepts cutting bills with as...

  20. Action Research, Stories and Practical Philosophy

    ERIC Educational Resources Information Center

    Cotton, Tony; Griffiths, Morwenna

    2007-01-01

    This collaborative piece written by a philosopher/action researcher and an action researcher/philosopher explores the use of practical philosophy as a tool in action research. The paper explores the connection to be made between what we refer to, roughly, as "theory" and "practice" (while never losing hold of either). The…

  1. Numerical study of the effects of lamp configuration and reactor wall roughness in an open channel water disinfection UV reactor.

    PubMed

    Sultan, Tipu

    2016-07-01

    This article describes the assessment of a numerical procedure used to determine the effects of UV lamp configuration and surface roughness on an open channel water disinfection UV reactor. The performance of the open channel water disinfection UV reactor was numerically analyzed on the basis of the performance indicator reduction equivalent dose (RED). The RED values were calculated as a function of the Reynolds number to monitor the performance. The flow through the open channel UV reactor was modelled using a k-ε model with scalable wall functions, a discrete ordinates (DO) model for the fluence rate calculation, a volume of fluid (VOF) model to locate the unknown free surface, a discrete phase model (DPM) to track the pathogen transport, and a modified law of the wall to incorporate the reactor wall roughness effects. The performance analysis was carried out using commercial CFD software (ANSYS Fluent 15.0). Four case studies were analyzed based on open channel UV reactor type (horizontal and vertical) and lamp configuration (parallel and staggered). The results show that the lamp configuration can play an important role in the performance of an open channel water disinfection UV reactor. The effects of the reactor wall roughness were Reynolds number dependent. The proposed methodology is useful for performance optimization of an open channel water disinfection UV reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.
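
    One common form of a roughness-modified law of the wall (shown as an assumed illustration; the paper's exact formulation and constants may differ) shifts the smooth-wall log law downward by an amount that grows with the dimensionless sand-grain roughness Ks+:

```python
# Sketch of a roughness-shifted log law: u+ = (1/k) ln(E y+) - dB,
# with dB = (1/k) ln(1 + Cs Ks+). E and Cs are typical default constants.
import numpy as np

KAPPA, E, CS = 0.41, 9.793, 0.5

def u_plus(y_plus, ks_plus):
    """Dimensionless velocity at y+ over a wall with sand-grain roughness Ks+."""
    delta_b = np.log(1.0 + CS * ks_plus) / KAPPA
    return np.log(E * y_plus) / KAPPA - delta_b

# Smooth wall vs. rough wall (Ks+ = 50) at y+ = 100
print(u_plus(100.0, 0.0), u_plus(100.0, 50.0))
```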

  2. Benefits of Leapfrogging to Superefficiency and Low Global Warming Potential Refrigerants in Room Air Conditioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nihar; Wei, Max; Letschert, Virginie

    2015-10-01

    Hydrofluorocarbons (HFCs), emitted from uses such as refrigerants and thermal insulating foam, are now the fastest growing greenhouse gases (GHGs), with global warming potentials (GWP) thousands of times higher than carbon dioxide (CO2). Because of the short lifetime of these molecules in the atmosphere, mitigating the amount of these short-lived climate pollutants (SLCPs) provides a faster path to climate change mitigation than control of CO2 alone. This has led to proposals from Africa, Europe, India, Island States, and North America to amend the Montreal Protocol on Substances that Deplete the Ozone Layer (Montreal Protocol) to phase down high-GWP HFCs. Simultaneously, energy efficiency market transformation programs such as standards, labeling and incentive programs are endeavoring to improve the energy efficiency of refrigeration and air conditioning equipment to provide life cycle cost, energy, GHG, and peak load savings. In this paper we provide an estimate of the magnitude of such GHG and peak electric load savings potential for room air conditioning if the refrigerant transition and energy efficiency improvement policies are implemented either separately or in parallel. We find that implementing HFC refrigerant transition and energy efficiency improvement policies in parallel for room air conditioning roughly doubles the benefit of either policy implemented separately. We estimate that shifting the 2030 world stock of room air conditioners from low efficiency technology using high-GWP refrigerants to higher efficiency technology and low-GWP refrigerants in parallel would save between 340-790 gigawatts (GW) of peak load globally, which is roughly equivalent to avoiding 680-1550 peak power plants of 500 MW each. This would save 0.85 GT/year in China, equivalent to over 8 Three Gorges dams, and over 0.32 GT/year in India, equivalent to roughly twice India's 100 GW solar mission target. While there is some uncertainty associated with emissions and growth projections, moving to efficient room air conditioning (~30% more efficient than current technology) in parallel with low-GWP refrigerants could avoid up to ~25 billion tonnes of CO2 in 2030, ~33 billion in 2040, and ~40 billion in 2050, i.e. cumulative savings of up to 98 billion tonnes of CO2 by 2050. Therefore, superefficient room ACs using low-GWP refrigerants merit serious consideration to maximize peak load reduction and GHG savings.

  3. CrIS-ATMS Retrievals Using an AIRS Science Team Version 6-like Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis C.; Iredell, Lena

    2014-01-01

    CrIS is the high spectral resolution infrared atmospheric sounder launched on Suomi-NPP in 2011. CrIS and ATMS together comprise the IR/MW sounding suite on Suomi-NPP. CrIS is functionally equivalent to AIRS, the high spectral resolution IR sounder launched on EOS Aqua in 2002, and ATMS is functionally equivalent to AMSU on EOS Aqua. CrIS is an interferometer and AIRS is a grating spectrometer. The spectral coverage, spectral resolution, and channel noise of CrIS are similar to those of AIRS, but CrIS spectral sampling is roughly twice as coarse as that of AIRS. AIRS has 2378 channels between 650 cm-1 and 2665 cm-1. CrIS has 1305 channels between 650 cm-1 and 2550 cm-1. The spatial resolution of CrIS is comparable to that of AIRS.

  4. Multilayer Relaxation and Surface Energies of Metallic Surfaces

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Rodriguez, Agustin M.; Ferrante, John

    1994-01-01

    The perpendicular and parallel multilayer relaxations of fcc (210) surfaces are studied using equivalent crystal theory (ECT). A comparison with experimental and theoretical results is made for Al(210). The effect of uncertainties in the input parameters on the magnitudes and ordering of surface relaxations for this semiempirical method is estimated. A new measure of surface roughness is proposed. Predictions for the multilayer relaxations and surface energies of the (210) face of Cu and Ni are also included.

  5. Spin Hall effect originated from fractal surface

    NASA Astrophysics Data System (ADS)

    Hajzadeh, I.; Mohseni, S. M.; Movahed, S. M. S.; Jafari, G. R.

    2018-05-01

    The spin Hall effect (SHE) has shown promising impact in the field of spintronics and magnonics from fundamental and practical points of view. This effect originates from several mechanisms of spin scatterers based on spin–orbit coupling (SOC) and also can be manipulated through the surface roughness. Here, the effect of correlated surface roughness on the SHE in metallic thin films with small SOC is investigated theoretically. Toward this, the self-affine fractal surface in the framework of the Born approximation is exploited. The surface roughness is described by the k-correlation model and is characterized by the roughness exponent H , the in-plane correlation length ξ and the rms roughness amplitude δ. It is found that the spin Hall angle in metallic thin film increases by two orders of magnitude when H decreases from H  =  1 to H  =  0. In addition, the source of SHE for surface roughness with Gaussian profile distribution function is found to be mainly the side jump scattering while that with a non-Gaussian profile suggests both of the side jump and skew scatterings are present. Our achievements address how details of the surface roughness profile can adjust the SHE in non-heavy metals.
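
    For orientation, the k-correlation (self-affine) roughness spectrum referred to above can be written, up to an H-dependent normalization constant, as

```latex
\langle |h(\mathbf{k})|^{2} \rangle \;\propto\; \frac{\delta^{2}\xi^{2}}{\left(1 + k^{2}\xi^{2}\right)^{1+H}}
```

    so that decreasing H shifts spectral weight toward large wavevectors (sharper, more jagged roughness), which is consistent with the reported growth of the spin Hall angle as H decreases from 1 to 0.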

  6. Fingerprinting the type of line edge roughness

    NASA Astrophysics Data System (ADS)

    Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.

    2017-06-01

    Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. EUV scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. As compared to X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For the control of the lithographic manufacturing process, a rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of regular geometry parameters of lamellar gratings from optical and Extreme Ultraviolet (EUV) scattering also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness was already studied analytically but for gratings with a periodic pseudoroughness, because of practical restrictions of the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si-gratings to be studied by EUV scatterometry. It includes one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution, two with a prevailing line edge roughness (LER) and another two with line width roughness (LWR), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the diffuse scatter angular distribution. Our experimental results are not described well by the present modelling approach based on small, periodically repeated domains.

  7. Assessing the Practical Equivalence of Conversions when Measurement Conditions Change

    ERIC Educational Resources Information Center

    Liu, Jinghua; Dorans, Neil J.

    2012-01-01

    At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…

  8. Effect Size Indices for Analyses of Measurement Equivalence: Understanding the Practical Importance of Differences between Groups

    ERIC Educational Resources Information Center

    Nye, Christopher D.; Drasgow, Fritz

    2011-01-01

    Because of the practical, theoretical, and legal implications of differential item functioning (DIF) for organizational assessments, studies of measurement equivalence are a necessary first step before scores can be compared across individuals from different groups. However, commonly recommended criteria for evaluating results from these analyses…

  9. Inner-outer interactions in a turbulent boundary layer overlying complex roughness

    NASA Astrophysics Data System (ADS)

    Pathikonda, Gokul; Christensen, Kenneth T.

    2017-04-01

    Hot-wire measurements were performed in a zero-pressure-gradient turbulent boundary layer overlying both a smooth and a rough wall for the purpose of investigating the details of inner-outer flow interactions. The roughness considered embodies a broad range of topographical scales arranged in an irregular manner and reflects the topographical complexity often encountered in practical flow systems. Single-probe point-wise measurements with a traversing probe were made at two different regions of the rough-wall flow, which was previously shown to be heterogeneous in the spanwise direction, to investigate the distribution of streamwise turbulent kinetic energy and large scale-small scale interactions. In addition, two-probe simultaneous measurements were conducted enabling investigation of inner-outer interactions, wherein the large scales were independently sampled in the outer layer. Roughness-induced changes to the near-wall behavior were investigated, particularly by contrasting the amplitude and frequency modulation effects of inner-outer interactions in the rough-wall flow with well-established smooth-wall flow phenomena. It was observed that the rough-wall flow exhibits both amplitude and frequency modulation features close to the wall in a manner very similar to smooth-wall flow, though the correlated nature of these effects was found to be more intense in the rough-wall flow. In particular, frequency modulation was found to illuminate these enhanced modulation effects in the rough-wall flow. The two-probe measurements helped in evaluating the suitability of the interaction-schematic recently proposed by Baars et al., Exp. Fluids 56, 1 (2015), 10.1007/s00348-014-1876-4 for rough-wall flows. This model was found to be suitable for the rough-wall flow considered herein, and it was found that frequency modulation is a "cleaner" measure of the inner-outer modulation interactions for this rough-wall flow.
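
    The standard single-point diagnostic behind such amplitude-modulation statements can be sketched as follows (an assumed illustration, not the authors' analysis code): the velocity signal is split at a scale-separation frequency, the envelope of the small scales is extracted with a Hilbert transform, and its large-scale content is correlated with the large-scale velocity. A frequency-modulation analogue replaces the envelope with a local instantaneous frequency of the small-scale signal.

```python
# Minimal sketch of the amplitude-modulation coefficient for a hot-wire time series.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulation_coefficient(u, fs, f_cut):
    """u: velocity time series; fs: sampling rate [Hz]; f_cut: scale-separation frequency [Hz]."""
    u = u - np.mean(u)
    b, a = butter(4, f_cut / (0.5 * fs), btype="low")
    u_large = filtfilt(b, a, u)              # large-scale component
    u_small = u - u_large                    # small-scale component
    envelope = np.abs(hilbert(u_small))      # small-scale envelope
    env_large = filtfilt(b, a, envelope)     # keep only its large-scale content
    return np.corrcoef(u_large, env_large)[0, 1]
```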

  10. Addressing scale dependence in roughness and morphometric statistics derived from point cloud data.

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Wheaton, J. M.; Hensleigh, J.; Grams, P. E.; Welcker, C. W.; Anderson, K.; Kaplinski, M. A.

    2015-12-01

    The heights of natural surfaces can be measured with such spatial density that almost the entire spectrum of physical roughness scales can be characterized, down to the morphological form and grain scales. With an ability to measure 'microtopography' comes a demand for analytical/computational tools for spatially explicit statistical characterization of surface roughness. Detrended standard deviation of surface heights is a popular means to create continuous maps of roughness from point cloud data, using moving windows and reporting window-centered statistics of variations from a trend surface. If 'roughness' is the statistical variation in the distribution of relief of a surface, then 'texture' is the frequency of change and spatial arrangement of roughness. The variance in surface height as a function of frequency obeys a power law. In consequence, roughness is dependent on the window size through which it is examined, which has a number of potential disadvantages: 1) the choice of window size becomes crucial, and obstructs comparisons between data; 2) if windows are large relative to multiple roughness scales, it is harder to discriminate between those scales; 3) if roughness is not scaled by the texture length scale, information on the spacing and clustering of roughness `elements' can be lost; and 4) such practice is not amenable to models describing the scattering of light and sound from rough natural surfaces. We discuss the relationship between roughness and texture. Some useful parameters which scale vertical roughness to characteristic horizontal length scales are suggested, with examples of bathymetric point clouds obtained using multibeam from two contrasting riverbeds, namely those of the Colorado River in Grand Canyon, and the Snake River in Hells Canyon. Such work, aside from automated texture characterization and texture segmentation, roughness and grain size calculation, might also be useful for feature detection and classification from point clouds.
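
    The window-centred detrended roughness statistic mentioned above can be sketched as follows for a gridded elevation array. This is a minimal illustration, not the authors' implementation: the window size, the local plane detrending, and the synthetic surface are assumptions.

    ```python
    import numpy as np

    def detrended_roughness(z, window=11):
        """Window-centred standard deviation of elevations about a local
        best-fit plane, as a simple grid-based roughness map."""
        half = window // 2
        ny, nx = z.shape
        rough = np.full_like(z, np.nan, dtype=float)
        # Local coordinates for the plane fit within each window
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(window * window)])
        for i in range(half, ny - half):
            for j in range(half, nx - half):
                patch = z[i - half:i + half + 1, j - half:j + half + 1].ravel()
                coeffs, *_ = np.linalg.lstsq(A, patch, rcond=None)
                residual = patch - A @ coeffs          # detrend by the local plane
                rough[i, j] = residual.std(ddof=1)     # roughness = residual std
        return rough

    # Synthetic surface: a gentle trend plus small-scale noise
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:120, 0:120]
    surface = 0.01 * x + 0.05 * rng.standard_normal((120, 120))
    print(np.nanmean(detrended_roughness(surface, window=11)))
    ```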

  11. Geologic framework of the regional ground-water flow system in the Upper Deschutes Basin, Oregon

    USGS Publications Warehouse

    Lite, Kenneth E.; Gannett, Marshall W.

    2002-12-10

    Geologic units in the Deschutes Basin were divided into several distinct hydrogeologic units. In some instances the units correspond to existing stratigraphic divisions. In other instances, hydrogeologic units correspond to different facies within a single stratigraphic unit or formation. The hydrogeologic units include Quaternary sediment, deposits of the Cascade Range and Newberry Volcano, four zones within the Deschutes Formation and age-equivalent rocks that roughly correspond with depositional environments, and pre-Deschutes-age strata.

  12. Ignition and Combustion Characteristics of Nanoscale Al/AgIO3: A Potential Energetic Biocidal System

    DTIC Science & Technology

    2011-01-01

    the actual particle morphology consists of thin platelets, roughly 1 mm in diameter. Silver iodide was purchased from Sigma Aldrich, and the size was...2008), and shows that mixing is limited by clumping of both ingredients. The AgIO3 has a platelet-like morphology, and could potentially mix...in this study is 80 nm from NanoTechnologies. The CuO in this study is 45 nm from Technanogy. Each sample was fuel rich in this study with equivalency

  13. Cloud computing and cloud security in China

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohe; Jiang, Cuenyun; Wang, Ruxin

    2018-04-01

    We live in the data age. It's not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006 and forecast a tenfold growth to 1.8 zettabytes by 2011. A zettabyte is 10²¹ bytes, or equivalently one thousand exabytes, one million petabytes, or one billion terabytes. That's roughly the same order of magnitude as one disk drive for every person in the world.

  14. Increasing the capacity for treatment of chemical plant wastewater by replacing existing suspended carrier media with Kaldnes Moving Bed media at a plant in Singapore.

    PubMed

    Wessman, F G; Yan Yuegen, E; Zheng, Q; He, G; Welander, T; Rusten, B

    2004-01-01

    The Kaldnes biomedia K1, which is used in the patented Kaldnes Moving Bed biofilm process, has been tested along with other types of biofilm carriers for biological pretreatment of a complex chemical industry wastewater. The main objective of the test was to find a biofilm carrier that could replace the existing suspended carrier media and at the same time increase the capacity of the existing roughing filter-activated sludge plant by 20% or more. At volumetric organic loads of 7.1 kg COD/m3/d the Kaldnes Moving Bed process achieved much higher removal rates and much lower effluent concentrations than roughing filters using other carriers. The Kaldnes roughing stage achieved more than 85% removal of organic carbon and more than 90% removal of BOD5 at the tested organic load, which was equivalent to a specific biofilm surface area load of 24 g COD/m2/d. Even for the combined roughing filter-activated sludge process, the Kaldnes carriers outperformed the other carriers, with 98% removal of organic carbon and 99.6% removal of BOD5. The Kaldnes train final effluent concentrations were only 22 mg FOC/L and 7 mg BOD5/L. Based on the successful pilot testing, the full-scale plant was upgraded with Kaldnes Moving Bed roughing filters. During normal operation the upgraded plant has easily met the discharge limits of 100 mg COD/L and 50 mg SS/L. For the month of September 2002, with organic loads between 100 and 115% of the design load for the second half of the month, average effluent concentrations were as low as 9 mg FOC/L, 51 mg COD/L and 12 mg SS/L.

  15. Results of the Imager for Mars Pathfinder windsock experiment

    USGS Publications Warehouse

    Sullivan, R.; Greeley, R.; Kraft, M.; Wilson, G.; Golombek, M.; Herkenhoff, K.; Murphy, J.; Smith, P.

    2000-01-01

    The Imager for Mars Pathfinder (IMP) windsock experiment measured wind speeds at three heights within 1.2 m of the Martian surface during Pathfinder landed operations. These wind data allowed direct measurement of near-surface wind profiles on Mars for the first time, including determination of aerodynamic roughness length and wind friction speeds. Winds were light during periods of windsock imaging, but data from the strongest breezes indicate aerodynamic roughness length of 3 cm at the landing site, with wind friction speeds reaching 1 m/s. Maximum wind friction speeds were about half of the threshold-of-motion friction speeds predicted for loose, fine-grained materials on smooth Martian terrain and about one third of the threshold-of-motion friction speeds predicted for the same size particles over terrain with aerodynamic roughness of 3 cm. Consistent with this, and suggesting that low wind speeds prevailed when the windsock array was not imaged and/or no particles were available for aeolian transport, no wind-related changes to the surface during mission operations have been recognized. The aerodynamic roughness length reported here implies that proposed deflation of fine particles around the landing site, or activation of duneforms seen by IMP and Sojourner, would require wind speeds >28 m/s at the Pathfinder top windsock height (or >31 m/s at the equivalent Viking wind sensor height of 1.6 m) and wind speeds >45 m/s above 10 m. These wind speeds would cause rock abrasion if a supply of durable particles were available for saltation. Previous analyses indicate that the Pathfinder landing site probably is rockier and rougher than many other plains units on Mars, so aerodynamic roughness length elsewhere probably is less than the 3-cm value reported for the Pathfinder site. Copyright 2000 by the American Geophysical Union.
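
    The determination of aerodynamic roughness length and friction speed from a few near-surface wind measurements typically rests on the neutral logarithmic wind profile, u(z) = (u*/κ) ln(z/z0). A minimal sketch of such a fit is given below; the sensor heights and wind values are illustrative and are not the IMP windsock data.

    ```python
    import numpy as np

    KAPPA = 0.4  # von Karman constant

    def fit_log_wind_profile(z, u):
        """Least-squares fit of the neutral log law u(z) = (u*/kappa)*ln(z/z0).

        Returns (u_star, z0). Assumes neutral stability and z well above z0.
        """
        a, b = np.polyfit(np.log(z), u, 1)   # u = a*ln(z) + b
        u_star = KAPPA * a
        z0 = np.exp(-b / a)
        return u_star, z0

    # Illustrative numbers only: three heights below 1.2 m and a roughness
    # length of a few centimetres, loosely in the spirit of the experiment.
    z = np.array([0.33, 0.62, 1.10])              # sensor heights [m]
    u_true_star, z0_true = 0.9, 0.03
    u = (u_true_star / KAPPA) * np.log(z / z0_true)
    u_star, z0 = fit_log_wind_profile(z, u)
    print(f"u* = {u_star:.2f} m/s, z0 = {z0*100:.1f} cm")
    ```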

  16. Superhydrophobicity of electrospray-synthesized fluorinated silica layers.

    PubMed

    Kim, Eun-Kyeong; Lee, Chul-Sung; Kim, Sang Sub

    2012-02-15

    The preparation of superhydrophobic SiO(2) layers through a combination of nanoscale surface roughness and a fluorination treatment is reported. Electrospraying SiO(2) precursor solutions that had been prepared by a sol-gel chemical route produced very rough SiO(2) layers. Subsequent fluorination treatment with a solution containing trichloro(1H,1H,2H,2H-perfluorooctyl)silane resulted in highly rough, fluorinated SiO(2) layers. The fluorinated rough SiO(2) layers exhibited excellent repellency toward various liquid droplets. In particular, a water contact angle of 168° was observed. On the basis of the Cassie-Baxter and Young-Dupré equations, the surface fraction and the work of adhesion of the rough, fluorinated SiO(2) layers were estimated. In light of their durability in water, ultraviolet resistance, and thermal stability, the superhydrophobic SiO(2) layers prepared in this work hold promise for a range of practical applications. Copyright © 2011 Elsevier Inc. All rights reserved.
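
    The Cassie-Baxter and Young-Dupré relations referred to above can be evaluated directly once the contact angles are known: cos θ* = f(cos θY + 1) - 1 gives the solid fraction f, and W = γ(1 + cos θ) gives the work of adhesion. The sketch below assumes an illustrative Young angle for the smooth fluorinated surface; it is not the authors' fitted value.

    ```python
    import numpy as np

    GAMMA_WATER = 0.0728  # surface tension of water at ~20 C [N/m]

    def cassie_baxter_fraction(theta_apparent_deg, theta_young_deg):
        """Solid fraction f from the Cassie-Baxter relation
        cos(theta*) = f*(cos(theta_Y) + 1) - 1."""
        ca, cy = np.cos(np.radians([theta_apparent_deg, theta_young_deg]))
        return (ca + 1.0) / (cy + 1.0)

    def work_of_adhesion(theta_deg, gamma=GAMMA_WATER):
        """Young-Dupre work of adhesion W = gamma*(1 + cos(theta)) [J/m^2]."""
        return gamma * (1.0 + np.cos(np.radians(theta_deg)))

    # Illustrative values: apparent contact angle 168 deg (as reported) and an
    # assumed Young angle of ~110 deg for the smooth fluorinated surface.
    f = cassie_baxter_fraction(168.0, 110.0)
    W = work_of_adhesion(168.0)
    print(f"solid fraction f = {f:.3f}, work of adhesion = {W*1000:.2f} mJ/m^2")
    ```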

  17. Assaults by Mentally Disordered Offenders in Prison: Equity and Equivalence.

    PubMed

    Hales, Heidi; Dixon, Amy; Newton, Zoe; Bartlett, Annie

    2016-06-01

    Managing the violent behaviour of mentally disordered offenders (MDO) is challenging in all jurisdictions. We describe the ethical framework and practical management of MDOs in England and Wales in the context of the move to equivalence of healthcare between hospital and prison. We consider the similarities and differences between prison and hospital management of the violent and challenging behaviours of MDOs. We argue that both types of institution can learn from each other and that equivalence of care should extend to equivalence of criminal proceedings in court and prisons for MDOs. We argue that any adjudication process in prison for MDOs is enhanced by the relevant involvement of mental health professionals and the articulation of the ethical principles underpinning health and criminal justice practices.

  18. Investigation of ellipsometric parameters of 2D microrough surfaces by FDTD.

    PubMed

    Qiu, J; Ran, D F; Liu, Y B; Liu, L H

    2016-07-10

    Ellipsometry is a powerful method for measuring the optical constants of materials and is very sensitive to surface roughness. In previous ellipsometric measurements of the optical constants of solid materials with rough surfaces, researchers frequently used the effective medium approximation (EMA), with the roughness already known, to fit the complex refractive index of the material. However, ignoring the correlation length, the other important parameter of rough surfaces, inevitably results in fitting errors. Hence it is necessary to consider the influence of both surface roughness and correlation length on the ellipsometric parameters Δ (phase difference) and Ψ (azimuth) characterizing practical systems. In this paper, the influence of the roughness of two-dimensional randomly microrough silicon surfaces (relative roughness σ/λ ranging from 0.001 to 0.025) on the ellipsometric parameters was simulated by the finite-difference time-domain method, which was validated against experimental results. The effects of incident angle, relative roughness, and correlation length were numerically investigated for two-dimensional Gaussian-distributed randomly microrough surfaces. The simulated results showed that, compared with a smooth surface, only tiny changes in the ellipsometric parameter Δ could be observed for a microrough silicon surface in the vicinity of the Brewster angle, whereas obvious changes in Ψ occur, especially in the vicinity of the Brewster angle. More differences between the ellipsometric parameters of the rough and smooth surfaces can be seen, especially in the vicinity of the Brewster angle, as the relative roughness σ/λ increases or the correlation length τ decreases. The results reveal that, when the optical constants of solid materials are measured by ellipsometry, smaller roughness, larger correlation length, and larger incident wavelength lead to higher measurement precision.
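
    For context, the ellipsometric parameters are defined through the ratio of the Fresnel reflection coefficients, ρ = r_p/r_s = tan(Ψ)exp(iΔ). The sketch below evaluates Ψ and Δ for an ideally smooth ambient/substrate interface; the silicon refractive index and incidence angle are illustrative assumptions, and sign conventions for Δ vary between instruments.

    ```python
    import numpy as np

    def ellipsometric_psi_delta(n_substrate, theta0_deg, n_ambient=1.0):
        """Psi and Delta (degrees) of an ideally smooth ambient/substrate
        interface from the Fresnel coefficients, rho = r_p / r_s."""
        th0 = np.radians(theta0_deg)
        n0, n1 = n_ambient, n_substrate                     # n1 may be complex
        cos0 = np.cos(th0)
        cos1 = np.sqrt(1.0 - (n0 * np.sin(th0) / n1) ** 2)  # complex Snell law
        r_p = (n1 * cos0 - n0 * cos1) / (n1 * cos0 + n0 * cos1)
        r_s = (n0 * cos0 - n1 * cos1) / (n0 * cos0 + n1 * cos1)
        rho = r_p / r_s
        psi = np.degrees(np.arctan(np.abs(rho)))
        delta = np.degrees(np.angle(rho))
        return psi, delta

    # Silicon near 633 nm (approximate complex index), incidence near 75 deg
    psi, delta = ellipsometric_psi_delta(3.88 - 0.02j, theta0_deg=75.0)
    print(f"Psi = {psi:.2f} deg, Delta = {delta:.2f} deg")
    ```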

  19. Scattering of electromagnetic waves from a body over a random rough surface

    NASA Astrophysics Data System (ADS)

    Ripoll, J.; Madrazo, A.; Nieto-Vesperinas, M.

    1997-02-01

    A numerical study is made of the effect on the angular distribution of mean far field intensity due to the presence of an arbitrary body located over a random rough surface. It is found that the presence of the body decreases the coherent backscattering peak produced by the surface roughness. Also, for low dielectric constants, the reflected intensity is practically equal to the sum of the individual reflected intensities of the body and the surface respectively, namely, interaction between both bodies is almost negligible. The full interaction between object and surface only appears when both bodies are highly reflective. Results are compared with the case when the body is buried beneath the surface, and are illustrated with a 2-D calculation of a cylinder either partially immersed or above a 2-D rough profile.

  20. Line edge roughness (LER) mitigation studies specific to interference-like lithography

    NASA Astrophysics Data System (ADS)

    Baylav, Burak; Estroff, Andrew; Xie, Peng; Smith, Bruce W.

    2013-04-01

    Line edge roughness (LER) is a common problem for most lithography approaches and is seen as the main resolution limiter for advanced technology nodes. There are several contributors to LER, such as chemical/optical shot noise, the random nature of acid diffusion, the development process, and the concentration of acid generator/base quencher. Since interference-like lithography (IL) is used to define one-directional gridded patterns, some LER mitigation approaches specific to IL-like imaging can be explored. Two methods investigated in this work toward this goal are (i) translational image averaging along the line direction and (ii) pupil plane filtering. Experiments regarding the former were performed on both interferometric and projection lithography systems. Projection lithography experiments showed a small reduction in the low/mid frequency LER value for image-averaged cases at a pitch of 150 nm (193 nm illumination, 0.93 NA), with less change for smaller pitches. Aerial image smearing did not significantly increase LER since it was directional. Simulation showed less than 1% reduction in NILS (compared to a static, smooth mask equivalent) with ideal alignment. In addition, a description of the effect of pupil plane filtering on the transfer of mask roughness is given. When astigmatism-like aberrations were introduced in the pupil, the transfer of mask roughness decreased at best focus. It is important to exclude the main diffraction orders from the filtering to prevent contrast and NILS loss. These ideas can be valuable as projection lithography approaches conditions similar to IL (e.g., strong RET methods).

  1. Effects of TEA·HCl hardening accelerator on the workability of cement-based materials

    NASA Astrophysics Data System (ADS)

    Pan, Wenhao; Ding, Zhaoyang; Chen, Yanwen

    2017-03-01

    The aim of this test was to investigate the influence of TEA·HCl on the workability of cement paste and concrete. Based on the features of the new hardening accelerator, an experimental analysis system was established using different dosages of hardening accelerator, and the feasibility of such an accelerator to satisfy the needs of practical engineering was verified. The results show that adding the hardening accelerator accelerates cement hydration; moreover, at a dosage of 0.04%, the setting time was the shortest, with initial and final setting times of 130 min and 180 min, respectively. The initial fluidity of cement paste with the accelerator was roughly equivalent to that of the blank. After 30 min, the fluidity loss decreased with increasing dosage, and the fluidity could even increase. The hardening accelerator enhanced the early workability of concrete; in particular, the 30-min slump loss improved significantly. The bleeding rate of concrete decreased significantly after adding TEA·HCl. The conclusion is that the new hardening accelerator can meet the workability requirements of cement-based materials in the optimum dosage range.

  2. Aeroelastic simulation of higher harmonic control

    NASA Technical Reports Server (NTRS)

    Robinson, Lawson H.; Friedmann, Peretz P.

    1994-01-01

    This report describes the development of an aeroelastic analysis of a helicopter rotor and its application to the simulation of helicopter vibration reduction through higher harmonic control (HHC). An improved finite-state, time-domain model of unsteady aerodynamics is developed to capture high frequency aerodynamic effects. An improved trim procedure is implemented which accounts for flap, lead-lag, and torsional deformations of the blade. The effect of unsteady aerodynamics is studied and it is found that its impact on blade aeroelastic stability and low frequency response is small, but it has a significant influence on rotor hub vibrations. Several different HHC algorithms are implemented on a hingeless rotor and their effectiveness in reducing hub vibratory shears is compared. All the controllers are found to be quite effective, but very different HHC inputs are required depending on the aerodynamic model used. Effects of HHC on rotor stability and power requirements are found to be quite small. Simulations of roughly equivalent articulated and hingeless rotors are carried out, and it is found that hingeless rotors can require considerably larger HHC inputs to reduce vibratory shears. This implies that the practical implementation of HHC on hingeless rotors might be considerably more difficult than on articulated rotors.

  3. Nondestructive, fast, and cost-effective image processing method for roughness measurement of randomly rough metallic surfaces.

    PubMed

    Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen

    2018-06-01

    In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, and expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters, such as image resolution and the filtering approach for elimination of long-wavelength surface undulations, on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both the stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution + cutoff. Under these conditions, the best and worst correlation coefficients (R²) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that the image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to stylus profilometry, particularly in online applications.
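
    A Gaussian-filtered profile decomposition of the kind mentioned above ("Gaussian convolution + cutoff") can be sketched as follows: the long-wavelength waviness is removed with a Gaussian low-pass, and Ra/Rq are computed from the residual. The cutoff-to-sigma mapping, the synthetic profile, and the absence of an image-to-height calibration are assumptions made for illustration, not the authors' procedure.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def roughness_from_profile(profile, cutoff_samples):
        """Split a height (or intensity-proxy) profile into waviness and
        roughness with a Gaussian filter, then return Ra and Rq."""
        # Gaussian low pass approximating a profile-filter cutoff
        waviness = gaussian_filter1d(profile.astype(float), sigma=cutoff_samples)
        roughness = profile - waviness                 # short-wavelength residual
        ra = np.mean(np.abs(roughness - roughness.mean()))
        rq = np.std(roughness)
        return ra, rq

    # Synthetic profile: long-wavelength undulation plus fine-scale roughness
    rng = np.random.default_rng(1)
    x = np.arange(4000)
    profile = 5.0 * np.sin(2 * np.pi * x / 1500) + 0.4 * rng.standard_normal(x.size)
    ra, rq = roughness_from_profile(profile, cutoff_samples=100)
    print(f"Ra = {ra:.3f}, Rq = {rq:.3f}  (same units as the input profile)")
    ```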

  4. A numerical approach for assessing effects of shear on equivalent permeability and nonlinear flow characteristics of 2-D fracture networks

    NASA Astrophysics Data System (ADS)

    Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan

    2018-01-01

    Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes can greatly enhance the permeability of single rock fractures, yet the shear effects on the hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characteristics of natural rock fractures, and the relationship between flow rate and pressure drop is no longer linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both linear and nonlinear regimes. DFNs considering fracture surface roughness and the spatial variation of aperture were generated using an originally developed code, DFNGEN. Numerical simulations solving the Navier-Stokes equations were performed to simulate the fluid flow through these DFNs. A fracture that cuts through each model was sheared, and by varying the shear and normal displacements, the effects of shear on the equivalent permeability and nonlinear flow characteristics of the DFNs were estimated. The results show that the critical condition quantifying the transition from a linear to a nonlinear flow regime is 10⁻⁴ < J < 10⁻³, where J is the hydraulic gradient. When the fluid flow is in the linear regime (i.e., J < 10⁻⁴), the relative deviation of equivalent permeability induced by shear, δ₂, is linearly correlated with J with small variations, while for fluid flow in the nonlinear regime (J > 10⁻³), δ₂ is nonlinearly correlated with J. A shear process can reduce the equivalent permeability in the orientation perpendicular to the sheared fracture by as much as 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, a mathematical expression for δ₂ is established to help choose the proper governing equations when solving fluid flow problems in fracture networks.
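
    The cubic law assumed in the parallel-plate fracture models discussed above states that the volumetric flow rate scales with the cube of the aperture, Q = w b³ ρ g J / (12 μ). A minimal sketch with illustrative parameters (not the DFN geometry of this study) is:

    ```python
    def cubic_law_flow_rate(aperture, width, hydraulic_gradient,
                            rho=998.0, g=9.81, mu=1.0e-3):
        """Volumetric flow rate [m^3/s] through an idealized parallel-plate
        fracture under the cubic law: Q = (w * b^3 * rho * g * J) / (12 * mu)."""
        return width * aperture ** 3 * rho * g * hydraulic_gradient / (12.0 * mu)

    # Illustrative: 0.5 mm aperture, 1 m wide fracture, J = 1e-4 (linear regime)
    print(cubic_law_flow_rate(aperture=0.5e-3, width=1.0, hydraulic_gradient=1e-4))
    ```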

  5. Effect of film slicks on near-surface wind

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail; Ermakov, Stanislav; Ostrovsky, Lev; Shomina, Olga

    2016-09-01

    The transient effects of horizontal variation of sea-surface wave roughness due to surfactant films on near-surface turbulent wind are studied theoretically and experimentally. Here we suggest two practical schemes for calculating variations of wind velocity profiles near the water surface, the average short-wave roughness of which is varying in space and time when a film slick is present. The schemes are based on a generalized two-layer model of turbulent air flow over a rough surface and on the solution of the continuous model involving the equation for turbulent kinetic energy of the air flow. Wave tank studies of wind flow over wind waves in the presence of film slicks are described and compared with theory.

  6. The Nature of Feedback in English: Teacher Practices

    ERIC Educational Resources Information Center

    Dargusch, Joanne

    2014-01-01

    This paper reports on the findings of a study that investigated formative assessment practices of Senior English teachers in the standards-based Queensland assessment system. This paper focuses in particular on the teachers' provision of feedback on rough draft summative assessment items. It identifies the links between assessment criteria and…

  7. Impact of roughness on the instability of a free-cooling granular gas

    NASA Astrophysics Data System (ADS)

    Garzó, Vicente; Santos, Andrés; Kremer, Gilberto M.

    2018-05-01

    A linear stability analysis of the hydrodynamic equations with respect to the homogeneous cooling state is carried out to identify the conditions for stability of a granular gas of rough hard spheres. The description is based on the results for the transport coefficients derived from the Boltzmann equation for inelastic rough hard spheres [Phys. Rev. E 90, 022205 (2014), 10.1103/PhysRevE.90.022205], which take into account the complete nonlinear dependence of the transport coefficients and the cooling rate on the coefficients of normal and tangential restitution. As expected, linear stability analysis shows that a doubly degenerate transversal (shear) mode and a longitudinal ("heat") mode are unstable with respect to long enough wavelength excitations. The instability is driven by the shear mode above a certain inelasticity threshold; at larger inelasticity, however, the instability is driven by the heat mode for an inelasticity-dependent range of medium roughness. Comparison with the case of a granular gas of inelastic smooth spheres confirms previous simulation results about the dual role played by surface friction: while small and large levels of roughness make the system less unstable than the frictionless system, the opposite happens at medium roughness. On the other hand, such an intermediate window of roughness values shrinks as inelasticity increases and eventually disappears at a certain value, beyond which the rough-sphere gas is always less unstable than the smooth-sphere gas. A comparison with some preliminary simulation results shows a very good agreement for conditions of practical interest.

  8. Cross-ethnic measurement equivalence of measures of depression, social anxiety, and worry.

    PubMed

    Hambrick, James P; Rodebaugh, Thomas L; Balsis, Steve; Woods, Carol M; Mendez, Julia L; Heimberg, Richard G

    2010-06-01

    Although study of clinical phenomena in individuals from different ethnic backgrounds has improved over the years, African American and Asian American individuals continue to be underrepresented in research samples. Without adequate psychometric data about how questionnaires perform in individuals from different ethnic samples, findings from both within and across groups are arguably uninterpretable. Analyses based on item response theory (IRT) allow us to make fine-grained comparisons of the ways individuals from different ethnic groups respond to clinical measures. This study compared response patterns of African American and Asian American undergraduates to White undergraduates on measures of depression, social anxiety, and worry. On the Beck Depression Inventory-II, response patterns for African American participants were roughly equivalent to the response patterns of White participants. On measures of worry and social anxiety, there were substantial differences, suggesting that the use of these measures in African American and Asian American populations may lead to biased conclusions.

  9. A Numerical Study of 2-D Surface Roughness Effects on the Growth of Wave Modes in Hypersonic Boundary Layers

    NASA Astrophysics Data System (ADS)

    Fong, Kahei Danny

    The current understanding and research efforts on surface roughness effects in hypersonic boundary-layer flows focus, almost exclusively, on how roughness elements trip a hypersonic boundary layer to turbulence. However, there were a few reports in the literature suggesting that roughness elements in hypersonic boundary-layer flows could sometimes suppress the transition process and delay the formation of turbulent flow. These reports were not common and had not attracted much attention from the research community. Furthermore, the mechanisms of how the delay and stabilization happened were unknown. A recent study by Duan et al. showed that when 2-D roughness elements were placed downstream of the so-called synchronization point, the unstable second-mode wave in a hypersonic boundary layer was damped. Since the second-mode wave is typically the most dangerous and dominant unstable mode in a hypersonic boundary layer for sharp geometries at a zero angle of attack, this result has pointed to an explanation on how roughness elements delay transition in a hypersonic boundary layer. Such an understanding can potentially have significant practical applications for the development of passive flow control techniques to suppress hypersonic boundary-layer transition, for the purpose of aero-heating reduction. Nevertheless, the previous study was preliminary because only one particular flow condition with one fixed roughness parameter was considered. The study also lacked an examination on the mechanism of the damping effect of the second mode by roughness. Hence, the objective of the current research is to conduct an extensive investigation of the effects of 2-D roughness elements on the growth of instability waves in a hypersonic boundary layer. The goal is to provide a full physical picture of how and when 2-D roughness elements stabilize a hypersonic boundary layer. Rigorous parametric studies using numerical simulation, linear stability theory (LST), and parabolized stability equation (PSE) are performed to ensure the fidelity of the data and to study the relevant flow physics. All results unanimously confirm the conclusion that the relative location of the synchronization point with respect to the roughness element determines the roughness effect on the second mode. Namely, a roughness placed upstream of the synchronization point amplifies the unstable waves while placing a roughness downstream of the synchronization point damps the second-mode waves. The parametric study also shows that a tall roughness element within the local boundary-layer thickness results in a stronger damping effect, while the effect of the roughness width is relatively insignificant compared with the other roughness parameters. On the other hand, the fact that both LST and PSE successfully predict the damping effect only by analyzing the meanflow suggests the mechanism of the damping is by the meanflow alteration due to the existence of roughness elements, rather than new mode generation. In addition to studying the unstable waves, the drag force and heating with and without roughness have been investigated by comparing the numerical simulation data with experimental correlations. It is shown that the increase in drag force generated by the Mach wave around a roughness element in a hypersonic boundary layer is insignificant compared to the reduction of drag force by suppressing turbulent flow. 
The study also shows that, for a cold wall flow which is the case for practical flight applications, the Stanton number decreases as roughness elements smooth out the temperature gradient in the wall-normal direction. Based on the knowledge of roughness elements damping the second mode gained from the current study, a novel passive transition control method using judiciously placed roughness elements has been developed, and patented, during the course of this research. The main idea of the control method is that, with a given geometry and flow condition, it is possible to find the most unstable second-mode frequency that can lead to transition. And by doing a theoretical analysis such as LST, the synchronization location for the most unstable frequency can be found. Roughness elements are then strategically placed downstream of the synchronization point to damp out this dangerous second-mode wave, thus stabilizing the boundary layer and suppressing the transition process. This method is later experimentally validated in Purdue's Mach 6 quiet wind tunnel. Overall, this research has not only provided details of when and how 2-D roughness stabilizes a hypersonic boundary layer, it also has led to a successful application of numerical simulation data to the development of a new roughness-based transition delay method, which could potentially have significant contributions to the design of future generation hypersonic vehicles.

  10. Relationships between aerodynamic roughness and land use and land cover in Baltimore, Maryland

    USGS Publications Warehouse

    Nicholas, F.W.; Lewis, J.E.

    1980-01-01

    Urbanization changes the radiative, thermal, hydrologic, and aerodynamic properties of the Earth's surface. Knowledge of these surface characteristics, therefore, is essential to urban climate analysis. Aerodynamic or surface roughness of urban areas is not well documented, however, because of practical constraints in measuring the wind profile in the presence of large buildings. Using an empirical method designed by Lettau, and an analysis of variance of surface roughness values calculated for 324 samples averaging 0.8 hectare (ha) of land use and land cover sample in Baltimore, Md., a strong statistical relation was found between aerodynamic roughness and urban land use and land cover types. Assessment of three land use and land cover systems indicates that some of these types have significantly different surface roughness characteristics. The tests further indicate that statistically significant differences exist in estimated surface roughness values when categories (classes) from different land use and land cover classification systems are used as surrogates. A Level III extension of the U.S. Geological Survey Level II land use and land cover classification system provided the most reliable results. An evaluation of the physical association between the aerodynamic properties of land use and land cover and the surface climate by numerical simulation of the surface energy balance indicates that changes in surface roughness within the range of values typical of the Level III categories induce important changes in the surface climate.
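
    Lettau's empirical method referred to above estimates the aerodynamic roughness length from obstacle geometry as z0 = 0.5 h s / S, where h is the mean obstacle height, s the mean silhouette (frontal) area per obstacle, and S the lot area per obstacle. The sketch below uses illustrative urban-block numbers, not the Baltimore sample values.

    ```python
    def lettau_roughness_length(mean_obstacle_height, silhouette_area, lot_area):
        """Lettau (1969) estimate of aerodynamic roughness length:
        z0 = 0.5 * h * s / S, with h the mean obstacle height, s the mean
        silhouette (frontal) area per obstacle, and S the lot area per obstacle."""
        return 0.5 * mean_obstacle_height * silhouette_area / lot_area

    # Illustrative residential-block numbers (assumed, not from the study):
    # 8 m high buildings, 60 m^2 frontal area each, one obstacle per 400 m^2
    print(f"z0 = {lettau_roughness_length(8.0, 60.0, 400.0):.2f} m")
    ```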

  11. A Call for Innovation: Reflective Practices and Clinical Curricula of US Army Special Operations Forces Medics.

    PubMed

    Rocklein, Kate

    2014-01-01

    Special Operations Forces (SOF) medics have written and published numerous practice reflections that intricately describe their practice environments, clinical dilemmas, and suggestions for teaching and practice. The lack of translation of SOF medics' experiential evidence to their curriculum has created a gap in evidence-based curriculum development. This study analyzed SOF medics' learning and practice patterns and compared them with the evidence in the interdisciplinary clinical literature. After framing the problem, the literature was reviewed to determine appropriate tools by which perceptions and attitudes toward reflection-centered curricula could be measured. A recognizable practice reflection was extracted from the published SOF clinical literature and presented in writing to self-identified SOF medics and medic instructors via a descriptive crossover design, to ensure possible biases were mitigated. To measure SOF medics' perceptions of reflection-based curricula, the Dundee Ready Education Environment Measure survey instrument was used, as it has validated psychometric properties and is used worldwide. SOF medics' average scores of perceptions of their medic education indicated positive, but not fully statistically significant, preferences for reflection-based curricula over traditional curricula. Keywords: Special Operations, medics, reflective practice, curricula. BACKGROUND: Special Operations Forces (SOF) medics practice in environments that are violent, austere, clandestine, and far removed from definitive hospital facilities. What was true almost 20 years ago, that ". . . academic demands of [Special Forces medic training] are roughly equivalent to those of an upper-level undergraduate curriculum in science or perhaps to those of first year medical school," is even more challenging today. During this study, medics, physicians, and educators within the SOF medical community publicly and privately (ergo, names were redacted) expressed the need for curricular changes to teach SOF medics about the worst of clinical scenarios, such as situations in which evacuation of critically injured Soldiers to higher echelons of care is not possible or is prolonged, due to combat engagements or other complications. These experts consistently describe the need for curriculum derived from experienced medics' practices, to guide force-wide knowledge acquisition and augment student medics' professional development. Given the investigator's clinical familiarity with SOF medics' practice and evidence, senior enlisted SOF medics and SOF medic instructors proposed that a doctoral-prepared nurse, whose clinical specialty was trauma, could spearhead academic focus and publication on the experiences and curriculum of SOF medics.

  12. Tropical Convective Outflow and Near Surface Equivalent Potential Temperatures

    NASA Technical Reports Server (NTRS)

    Folkins, Ian; Oltmans, Samuel J.; Thompson, Anne M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    We use clear sky heating rates to show that convective outflow in the tropics decreases rapidly with height between the 350 K and 360 K potential temperature surfaces (or between roughly 13 and 15 km). There is also a rapid fall-off in the pseudoequivalent potential temperature probability distribution of near surface air parcels between 350 K and 360 K. This suggests that the vertical variation of convective outflow in the upper tropical troposphere is to a large degree determined by the distribution of sub cloud layer entropy.

  13. Suitability of Sites for Hazardous Waste Disposal, Concord Naval Weapons Station, Concord, California.

    DTIC Science & Technology

    1987-09-01

    mainly as a band of low hills situated centrally within Clayton Valley. The old alluvium may be roughly equivalent to beds mapped northeast of Suisun Bay...this site is selected for further investigations. Landsliding is unlikely on the relatively gentle valley floor. The low position of the water ...full depth of 110.0 ft is given in Figure 17. Groundwater level is documented in Table 2. The piezometric surface for the tip at 105 ft is at 48 ft. A

  14. Flow Resistance Interactions on Hillslopes With Heterogeneous Attributes: Effects on Runoff Hydrograph Characteristics

    NASA Astrophysics Data System (ADS)

    Papanicolaou, Athanasios N.; Abban, Benjamin K. B.; Dermisis, Dimitrios C.; Giannopoulos, Christos P.; Flanagan, Dennis C.; Frankenberger, James R.; Wacha, Kenneth M.

    2018-01-01

    An improved modeling framework for capturing the effects of space and time-variant resistance to overland flow is developed for intensively managed landscapes. The framework builds on the WEPP model but it removes the limitations of the "equivalent" plane and time-invariant roughness assumption. The enhanced model therefore accounts for spatiotemporal changes in flow resistance along a hillslope due to changes in roughness, in profile curvature, and downslope variability. The model is used to quantify the degree of influence—from individual soil grains to aggregates, "isolated roughness elements," and vegetation—on overland flow characteristics under different storm magnitudes, downslope gradients, and profile curvatures. It was found that the net effects of land use change from vegetation to a bare surface resulted in hydrograph peaks that were up to 133% larger. Changes in hillslope profile curvature instead resulted in peak runoff rate changes that were only up to 16%. The stream power concept is utilized to develop a taxonomy that relates the influence of grains, isolated roughness elements, and vegetation, on overland flow under different storm magnitudes and hillslope gradients. Critical storm magnitudes and hillslope gradients were found beyond which the effects of these landscape attributes on the peak stream power were negligible. The results also highlight weaknesses of the space/time-invariant flow resistance assumption and demonstrate that assumptions on landscape terrain characteristics exert a strong control both on the shape and magnitude of hydrographs, with deviations reaching 65% in the peak runoff when space/time-variant resistance effects are ignored in some cases.
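
    The stream power concept used in the taxonomy above can be written, per unit bed area, as ω = ρ g q S, with q the discharge per unit width and S the energy slope. A minimal sketch with illustrative overland-flow numbers (not the model's internal formulation) is:

    ```python
    def stream_power_per_unit_area(unit_discharge, slope, rho=1000.0, g=9.81):
        """Stream power per unit bed area, omega = rho * g * q * S [W/m^2],
        with q the discharge per unit width [m^2/s] and S the energy slope [-]."""
        return rho * g * unit_discharge * slope

    # Illustrative overland-flow numbers: 2 L/s per metre width on a 5% hillslope
    print(f"omega = {stream_power_per_unit_area(0.002, 0.05):.2f} W/m^2")
    ```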

  15. TaC-coated graphite prepared via a wet ceramic process: Application to CVD susceptors for epitaxial growth of wide-bandgap semiconductors

    NASA Astrophysics Data System (ADS)

    Nakamura, Daisuke; Kimura, Taishi; Narita, Tetsuo; Suzumura, Akitoshi; Kimoto, Tsunenobu; Nakashima, Kenji

    2017-11-01

    A novel sintered tantalum carbide coating (SinTaC) prepared via a wet ceramic process is proposed as an approach to reducing the production cost and improving the crystal quality of bulk-grown crystals and epitaxially grown films of wide-bandgap semiconductors. Here, we verify the applicability of the SinTaC components as susceptors for chemical vapor deposition (CVD)-SiC and metal-organic chemical vapor deposition (MOCVD)-GaN epitaxial growth in terms of impurity incorporation from the SinTaC layers and also clarify the surface-roughness controllability of SinTaC layers and its advantage in CVD applications. The residual impurity elements in the SinTaC layers were confirmed to not severely incorporate into the CVD-SiC and MOCVD-GaN epilayers grown using the SinTaC susceptors. The quality of the epilayers was also confirmed to be equivalent to that of epilayers grown using conventional susceptors. Furthermore, the surface roughness of the SinTaC components was controllable over a wide range of average roughness (0.4 ≤ Ra ≤ 5 μm) and maximum height roughness (3 ≤ Rz ≤ 36 μm) through simple additional surface treatment procedures, and the surface-roughened SinTaC susceptor fabricated using these procedures was predicted to effectively reduce thermal stress on epi-wafers. These results confirm that SinTaC susceptors are applicable to epitaxial growth processes and are advantageous over conventional susceptor materials for reducing the epi-cost and improving the quality of epi-wafers.

  16. Summary of Drag Characteristics of Practical-Construction Wing Sections

    NASA Technical Reports Server (NTRS)

    Quinn, John H , Jr

    1948-01-01

    The effects of several parameters on the drag characteristics of practical-construction wing sections have been considered and evaluated. The effects considered were those of surface roughness, surface waviness, compressive load, and de-icers. The data were obtained from a number of tests in the Langley two-dimensional low-turbulence tunnels.

  17. Generation of nano roughness on fibrous materials by atmospheric plasma

    NASA Astrophysics Data System (ADS)

    Kulyk, I.; Scapinello, M.; Stefan, M.

    2012-12-01

    Atmospheric plasma technology finds novel applications in the textile industry. It eliminates the use of water and of hazardous liquid chemicals, making production much more eco-friendly and economically attractive. Owing to the chemical effects of atmospheric plasma, it allows the dyeing and laminating affinity of fabrics to be optimized, as well as anti-microbial treatments. Other important applications, such as increased mechanical resistance of fiber sleeves and yarns, anti-pilling properties of fabrics, and anti-shrinking properties of wool fabrics, were studied in this work. These results could be attributed to the generation of nano roughness on the fiber surface by atmospheric plasma. Nano roughness generation is studied extensively under different conditions. Alternative explanations for the important practical results on textile materials are also discussed.

  18. ROMI-RIP: Rough mill rip-first simulator. Forest Service general technical report (Final)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R.E.

    1995-07-01

    The ROugh Mill Rip-First Simulator (ROMI-RIP) is a computer software package that simulates the gang-ripping of lumber. ROMI-RIP was designed to closely simulate current machines and industrial practice. This simulator allows the user to perform `what if` analyses on various gang-rip-first rough mill operations with fixed, floating outer blade, and all-movable blade arbors. ROMI-RIP accepts cutting bills with up to 300 different part sizes. Plots of processed boards are easily viewed or printed. Detailed summaries of processing steps (number of rips and crosscuts) and yields (single boards or entire board files) can also be viewed or printed. ROMI-RIP requires IBM personal computers with 80286 or higher processors.

  19. Effect of Tooling Material on the Internal Surface Quality of Ti6Al4V Parts Fabricated by Hot Isostatic Pressing

    NASA Astrophysics Data System (ADS)

    Cai, Chao; Song, Bo; Wei, Qingsong; Yan, Wu; Xue, Pengju; Shi, Yusheng

    2017-01-01

    For the net-shape hot isostatic pressing (HIP) process, control of the internal surface roughness of as-HIPped parts remains a challenge for practical engineering. To reveal the evolution mechanism of the internal surface of the parts during the HIP process, the effect of different tooling materials (H13, T8, Cr12 steel, and graphite) as internal cores on the interfacial diffusion and surface roughness was systematically studied.

  20. Tension Strength, Failure Prediction and Damage Mechanisms in 2D Triaxial Braided Composites with Notch

    NASA Technical Reports Server (NTRS)

    Norman, Timothy L.; Anglin, Colin

    1995-01-01

    The unnotched and notched (open hole) tensile strength and failure mechanisms of two-dimensional (2D) triaxial braided composites were examined. The effect of notch size and notch position were investigated. Damage initiation and propagation in notched and unnotched coupons were also examined. Theory developed to predict the normal stress distribution near an open hole and failure for tape laminated composites was evaluated for its applicability to 2D triaxial braided textile composite materials. Four different fiber architectures were considered; braid angle, yarn and braider size, percentage of longitudinal yarns and braider angle varied. Tape laminates equivalent to textile composites were also constructed for comparison. Unnotched tape equivalents were stronger than braided textiles but exhibited greater notch sensitivity. Notched textiles and tape equivalents have roughly the same strength at large notch sizes. Two common damage mechanisms were found: braider yarn cracking and near notch longitudinal yarn splitting. Cracking was found to initiate in braider yarns in unnotched and notched coupons, and propagate in the direction of the braider yarns until failure. Damage initiation stress decreased with increasing braid angle. No significant differences in prediction of near notch strain between textile and tape equivalents could be detected for small braid angle, but the correlations were weak for textiles with large braid angle. Notch strength could not be predicted using existing anisotropic theory for braided textiles due to their insensitivity to notch.

  1. Inferring river properties with SWOT like data

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre-André; Monnier, Jérôme; Roux, Hélène

    2014-05-01

    Inverse problems in hydraulics are still open questions such as the estimation of river discharges. Remotely sensed measurements of hydrosystems can provide valuable information but adequate methods are still required to exploit it. The future Surface Water and Ocean Topography (SWOT) mission would provide new cartographic measurements of inland water surfaces. The highlight of SWOT will be its almost global coverage and temporal revisits on the order of 1 to 4 times per 22 days repeat cycle [1]. Lots of studies have shown the possibility of retrieving discharge given the river bathymetry or roughness and/or in situ time series. The new challenge is to use SWOT type data to inverse the triplet formed by the roughness, the bathymetry and the discharge. The method presented here is composed of two steps: following an inverse formulation from [2], the first step consists in retrieving an equivalent bathymetry profile of a river given one in situ depth measurement and SWOT like data of the water surface, that is to say water elevation, free surface slope and width. From this equivalent bathymetry, the second step consists in solving mass and Manning equation in the least square sense [3]. Nevertheless, for cases where no in situ measurement of water depth is available, it is still possible to solve a system formed by mass and Manning equations in the least square sense (or with other methods such as Bayesian ones, see e.g. [4]). We show that a good a priori knowledge of bathymetry and roughness is compulsory for such methods. Depending on this a priori knowledge, the inversion of the triplet (roughness, bathymetry, discharge) in SWOT context was evaluated on the Garonne River [5, 6]. The results are presented on 80 km of the Garonne River downstream of Toulouse in France [7]. An equivalent bathymetry is retrieved with less than 10% relative error with SWOT like observations. After that, encouraging results are obtained with less than 10% relative error on the identified discharge. References [1] E. Rodriguez, SWOT science requirements document, JPL document, JPL, 2012. [2] A. Gessese, K. Wa, and M. Sellier, Bathymetry reconstruction based on the zero-inertia shallow water approximation, Theoretical and Computational Fluid Dynamics, vol. 27, no. 5, pp. 721-732, 2013. [3] P. A. Garambois and J. Monnier, Inference of river properties from remotly sensed observations of water surface, under final redaction for HESS, 2014. [4] M. Durand, Sacramento river airswot discharge estimation scenario. http://swotdawg.wordpress.com/2013/04/18/sacramento-river-airswot-discharge-estimation-scenario/, 2013. [5] P. A. Garambois and H. Roux, Garonne River discharge estimation. http://swotdawg.wordpress.com/2013/07/01/garonne-river-discharge-estimation/, 2013. [6] P. A. Garambois and H. Roux, Sensitivity of discharge uncertainty to measurement errors, case of the Garonne River. http://swotdawg.wordpress.com/2013/07/01/sensitivity-of-discharge-uncertainty-to-measurement-errors-case-of-the-garonne-river/, 2013. [7] H. Roux and P. A. Garambois, Tests of reach averaging and manning equation on the Garonne River. http://swotdawg.wordpress.com/2013/07/01/tests-of-reach-averaging-and-manning-equation-on-the-garonne-river/, 2013.
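
    A much-simplified version of the second step described above, estimating a steady discharge from reach-averaged observations with the Manning equation under an a priori roughness, can be sketched as follows. The wide-channel approximation, the prior roughness value, and the reach data are assumptions made for illustration; they are not the Garonne configuration used in the study.

    ```python
    import numpy as np

    def manning_discharge(width, depth, slope, n):
        """Wide rectangular channel Manning discharge Q = (1/n) W H^(5/3) S^(1/2),
        using the wide-channel approximation R ~ H."""
        return (1.0 / n) * width * depth ** (5.0 / 3.0) * np.sqrt(slope)

    def estimate_steady_discharge(widths, depths, slopes, n_prior):
        """Least-squares estimate of a single steady discharge from several
        reach-averaged observations (here it reduces to the mean of the
        per-reach Manning estimates)."""
        q_reach = manning_discharge(widths, depths, slopes, n_prior)
        return q_reach.mean(), q_reach

    # Illustrative reach-averaged 'observations' (assumed values, not Garonne data)
    widths = np.array([150.0, 140.0, 160.0])   # water surface widths [m]
    depths = np.array([3.2, 3.5, 3.0])         # depths [m], from an a priori bathymetry
    slopes = np.array([4.0e-4, 3.5e-4, 4.5e-4])
    q_hat, q_reach = estimate_steady_discharge(widths, depths, slopes, n_prior=0.03)
    print(f"estimated discharge ~ {q_hat:.0f} m^3/s, per reach: {np.round(q_reach)}")
    ```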

  2. The malpractice premium costs of obstetrics.

    PubMed

    Norton, S A

    1997-01-01

    This study examined, in 1992, the variation in the level of malpractice premiums, and the incremental malpractice premium costs associated with the practice of obstetrics for family practitioners and obstetricians. On average, in 1992 obstetricians and family practitioners providing obstetric services paid malpractice premiums of roughly $44,000 and $16,000, respectively. The incremental increase in malpractice premium costs represented roughly 70% of the premium the physicians would have paid had they not provided obstetric services. These results suggest that for both family practitioners and obstetricians, there is a considerable premium penalty associated with providing obstetric services which may have implications for women's access to obstetric services. Moreover, the results make it clear that physicians practicing in different states, and different specialists within a state, may face very different malpractice premium costs.

  3. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…

  4. Measurement Equivalence of the Job Descriptive Index Across Chinese and American Workers: Results from Confirmatory Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Wang, Mo; Russell, Steven S.

    2005-01-01

    Despite increased awareness of practical issues in multinational data collection, few studies have addressed the issue of measurement equivalence across Western and Eastern cultures, especially using measures of job attitudes. Therefore, the measurement equivalence of the Job Descriptive Index (JDI) was examined across 2,638 Chinese workers and…

  5. Characterization of surface roughness effects on pressure drop in single-phase flow in minichannels

    NASA Astrophysics Data System (ADS)

    Kandlikar, Satish G.; Schmitt, Derek; Carrano, Andres L.; Taylor, James B.

    2005-10-01

    Roughness features on the walls of a channel affect the pressure drop of a fluid flowing through that channel. This roughness effect can be described by (i) flow area constriction and (ii) increase in the wall shear stress. Replotting Moody's friction factor chart with the constricted flow diameter results in a simplified plot and yields a single asymptotic value of friction factor for relative roughness values of ε/D > 0.03 in the fully developed turbulent region. After reviewing the literature, three new roughness parameters are proposed (maximum profile peak height Rp, mean spacing of profile irregularities RSm, and floor distance to mean line Fp). Three additional parameters are presented to consider the localized hydraulic diameter variation (maximum, minimum, and average) in future work. The roughness ε is then defined as Rp + Fp. This definition yields the same value of roughness as obtained from the sand-grain roughness [H. Darcy, Recherches Experimentales Relatives au Mouvement de L'Eau dans les Tuyaux (Mallet-Bachelier, Paris, France, 1857); J. T. Fanning, A Practical Treatise on Hydraulic and Water Supply Engineering (Van Nostrand, New York, 1877, revised ed. 1886); J. Nikuradse, "Laws of flow in rough pipes" ["Stromungsgesetze in Rauen Rohren," VDI-Forschungsheft 361 (1933)]; Beilage zu "Forschung auf dem Gebiete des Ingenieurwesens," Ausgabe B Band 4, English translation NACA Tech. Mem. 1292 (1937)]. Specific experiments are conducted using parallel sawtooth ridge elements, placed normal to the flow direction, in aligned and offset configurations in a 10.03 mm wide rectangular channel with variable gap (resulting hydraulic diameters of 325 μm to 1819 μm, with Reynolds numbers ranging from 200 to 7200 for air and 200 to 5700 for water). The use of the constricted flow diameter extends the applicability of the laminar friction factor equations to relative roughness values (sawtooth height) up to 14%. In the turbulent region, the aligned and offset roughness arrangements yield different results, indicating a need to further characterize the surface features. The laminar to turbulent transition is also seen to occur at lower Reynolds numbers with an increase in the relative roughness.
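
    The constricted flow diameter idea described above can be illustrated by evaluating a laminar friction factor with D_cf = D - 2ε, where ε = Rp + Fp. The sketch below uses the circular-tube value f = 64/Re purely for illustration; the rectangular channels of the study have a different Poiseuille number, and the dimensions are assumed.

    ```python
    def constricted_flow_friction(d_hydraulic, roughness_eps, velocity,
                                  rho=998.0, mu=1.0e-3):
        """Laminar Darcy friction factor evaluated with the constricted flow
        diameter D_cf = D - 2*eps (eps = Rp + Fp), using f = 64/Re for
        illustration (circular-tube value; rectangular ducts differ)."""
        d_cf = d_hydraulic - 2.0 * roughness_eps
        re_cf = rho * velocity * d_cf / mu
        return 64.0 / re_cf, d_cf, re_cf

    # Illustrative: 500 micron channel, 20 micron roughness, water at 1 m/s
    f, d_cf, re = constricted_flow_friction(500e-6, 20e-6, 1.0)
    print(f"D_cf = {d_cf*1e6:.0f} um, Re_cf = {re:.0f}, f = {f:.4f}")
    ```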

  6. Roughness influence on human blood drop spreading and splashing

    NASA Astrophysics Data System (ADS)

    Smith, Fiona; Buntsma, Naomi; Brutin, David

    2017-11-01

    The impact behaviour of complex fluid droplets is a topic that has been extensively studied but remains much debated. The Bloodstain Pattern Analysis (BPA) community encounters this scientific problem in daily practical cases, since bloodstains are used as evidence in crime scene reconstruction. We aim to provide fundamental explanations in the study of blood drip stains by investigating the influence of surface roughness and wettability on the splashing limit of droplets of blood, a non-Newtonian colloidal fluid. Droplets of blood impacting different surfaces perpendicularly at different velocities were recorded. The recordings were analysed, along with the surface characteristics, in order to find an empirical solution, since we found that roughness plays a major role in the threshold of the splashing/non-splashing behaviour of blood compared to the wettability. Moreover, it appears that roughness alters the deformation of the drip stains. These observations are key to relating the features of drip stains to the impact conditions, which would help answer some forensic questions.

  7. 21 CFR 26.6 - Equivalence assessment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS... draft programs for assessing the equivalence of the respective regulatory systems in terms of quality... inspection reports), joint training, and joint inspections for the purpose of assessing regulatory systems...

  8. Attempt at forming an expression of Manning's 'n' for Open Channel Flow

    NASA Astrophysics Data System (ADS)

    De, S. K.; Khosa, R.

    2016-12-01

    The study of open channel hydraulics finds application in diverse areas such as the design of river banks, bridges and other structures. Principal hydraulic elements used in these applications include surface water profiles and flow velocity, and these carry significant influences of fluid properties, channel properties and boundary conditions. As per current practice, friction influences are routinely captured in a single factor, commonly referred to as the roughness coefficient, and among the most widely used flow equations employing this coefficient is Manning's equation. At present, the Manning roughness coefficient is selected from existing tabulated data and accompanying pictures; as a result, the selection of this coefficient is inevitably subjective and a source of uncertainty in the application of transport models. In this study, an attempt has been made to develop a more rational and computationally feasible expression for the Manning constant 'n' that partially or fully eliminates the need to refer to a table when performing a computation. The development of this equation for the Manning constant uses the basic parameters of the flow and also accounts for influences such as vegetation and form roughness.
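
    For readers unfamiliar with the formula in question, Manning's equation in SI units is V = (1/n) R^(2/3) S^(1/2), with R the hydraulic radius and S the channel slope. The short sketch below applies it to a rectangular channel; the chosen dimensions and the tabulated n value are illustrative only and are not taken from the study.

```python
def manning_velocity(n, hydraulic_radius, slope):
    """Mean velocity from Manning's equation (SI units): V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def rectangular_channel_discharge(n, width, depth, slope):
    """Discharge Q = V * A for a rectangular channel of given width and flow depth."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    hydraulic_radius = area / wetted_perimeter
    return manning_velocity(n, hydraulic_radius, slope) * area

# Example: 5 m wide channel, 1.2 m flow depth, bed slope 0.001, tabulated n = 0.03 (natural channel)
print(rectangular_channel_discharge(n=0.03, width=5.0, depth=1.2, slope=0.001))
```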

  9. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.

  10. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE PAGES

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.; ...

    2017-07-17

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.

  11. Systematic review of emergency medicine clinical practice guidelines: Implications for research and policy.

    PubMed

    Venkatesh, Arjun K; Savage, Dan; Sandefur, Benjamin; Bernard, Kenneth R; Rothenberg, Craig; Schuur, Jeremiah D

    2017-01-01

    Over 25 years, emergency medicine in the United States has amassed a large evidence base that has been systematically assessed and interpreted through ACEP Clinical Policies. While not previously studied in emergency medicine, prior work has shown that nearly half of all recommendations in medical specialty practice guidelines may be based on limited or inconclusive evidence. We sought to describe the proportion of clinical practice guideline recommendations in emergency medicine that are based upon expert opinion and low-level evidence. We performed a systematic review of clinical practice guidelines (Clinical Policies) published by the American College of Emergency Physicians from January 1990 to January 2016. Standardized data were abstracted from each Clinical Policy, including the number and level of recommendations as well as the reported class of evidence. Primary outcomes were the proportion of Level C equivalent recommendations and Class III equivalent evidence. The primary analysis was limited to current Clinical Policies, while the secondary analysis included all Clinical Policies. A total of 54 Clinical Policies, including 421 recommendations and 2801 cited references, were included, with an average of 7.8 recommendations and 52 references per guideline. Of 19 current Clinical Policies, 13 of 141 (9.2%) recommendations were Level A, 57 (40.4%) Level B, and 71 (50.4%) Level C. Of 845 references in current Clinical Policies, 67 (7.9%) were Class I, 272 (32.3%) Class II, and 506 (59.9%) Class III equivalent. Among all Clinical Policies, 200 (47.5%) recommendations were Level C equivalent, and 1371 (48.9%) of references were Class III equivalent. Emergency medicine clinical practice guidelines are largely based on lower classes of evidence, and a majority of recommendations are based on expert opinion. Emergency medicine appears to suffer from an evidence gap that should be prioritized in the national research agenda and considered by policymakers prior to developing future quality standards.

  12. Parameterized Spectral Bathymetric Roughness Using the Nonequispaced Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Fabre, David Hanks

    The ocean and acoustic modeling community has specifically asked for roughness derived from bathymetry. An effort has been undertaken to provide what can be thought of as the high frequency content of bathymetry. By contrast, the low frequency content of bathymetry is the set of contours. The two-dimensional amplitude spectrum calculated with the nonequispaced fast Fourier transform (Kunis, 2006) is exploited as the statistic to provide several parameters of roughness following the method of Fox (1996). When an area is uniformly rough, it is termed isotropically rough. When an area exhibits lineation effects (as in a trough or a ridge line in the bathymetry), the term anisotropically rough is used. A predominant spatial azimuth of lineation summarizes anisotropic roughness. The power law model fit produces a roll-off parameter that also provides insight into the roughness of the area. These four parameters give rise to several derived parameters. Algorithmic accomplishments include reviving Fox's method (1985, 1996) and improving it with the possibly geophysically more appropriate nonequispaced fast Fourier transform. A new composite parameter, simply the overall integral length of the nonlinear parameterizing function, is used to make within-dataset comparisons. A synthetic dataset and six multibeam datasets covering practically all depth regimes have been analyzed with the tools that have been developed. Data-specific contributions include possibly discovering an aspect ratio isotropic cutoff level (less than 1.2) and showing a range of spectral fall-off values from about -0.5 for a sandy-bottomed Gulf of Mexico area to about -1.8 for a coral reef area just outside of Saipan harbor. We also rank the targeted type of dataset, the best resolution gridded datasets, from smoothest to roughest using a factor based on the kernel dimensions and a percentage from the windowing operation, multiplied by the overall integration length.
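
    A minimal sketch of the spectral-roughness idea is given below, assuming a regularly gridded bathymetry patch so that an ordinary FFT can stand in for the nonequispaced transform used in the study; the windowing choice and the simple log-log fit of the roll-off are illustrative, not the author's algorithm.

```python
import numpy as np

def spectral_rolloff(depth_grid, dx):
    """Power-law roll-off of the 2D amplitude spectrum of a gridded bathymetry patch.

    A plain FFT is used because the patch is assumed to be regularly gridded;
    a nonequispaced FFT would replace it for scattered soundings.
    """
    patch = depth_grid - depth_grid.mean()                       # remove the regional depth
    window = np.hanning(patch.shape[0])[:, None] * np.hanning(patch.shape[1])[None, :]
    amp = np.abs(np.fft.fftshift(np.fft.fft2(patch * window)))   # 2D amplitude spectrum
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(patch.shape[0], dx)),
                         np.fft.fftshift(np.fft.fftfreq(patch.shape[1], dx)), indexing="ij")
    k = np.hypot(kx, ky)
    mask = k > 0
    # Least-squares fit of log(amplitude) against log(wavenumber): the slope is the roll-off
    slope, _ = np.polyfit(np.log10(k[mask]), np.log10(amp[mask] + 1e-12), 1)
    return slope

# Synthetic rough patch with 10 m grid spacing (illustration only)
rng = np.random.default_rng(0)
print(spectral_rolloff(rng.normal(size=(128, 128)), dx=10.0))
```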

  13. Chemical Education in Bulgaria

    NASA Astrophysics Data System (ADS)

    Garkov, Vladimir N.

    1999-08-01

    The sociopolitical changes in Eastern Europe of the 1990s and the ongoing globalization of the chemical industry and chemical education prompted this analysis of the current status of chemical education in Bulgaria, which is not very different from the educational practices in the rest of Europe. The level of chemistry knowledge expected from all high-school graduates in Bulgaria is roughly equivalent to the general and organic chemistry courses for science majors at U.S. universities. The newly introduced four-year bachelor's degree curriculum (based on 15-week semesters) at the University of Sofia includes a core of 106 semester hours (labs counted as 1 hour each), 41 semester hours of electives, and 445 contact hours (11 weeks) of research, which ends with a thesis defense. The instructional techniques in Bulgaria are subject-centered and follow the hierarchical structure of knowledge in an integrated and unitary manner. In conclusion, the Bulgarian system of education in chemistry aims at preparing a scientifically literate citizenry and broadly trained chemists by imposing a very challenging and rigid curriculum with very few choices. It is speculated that the laissez-faire climate of free intellectual initiative seen only at American universities provides a more appropriate environment for talent encouragement and scientific innovation for overseas-educated undergraduate and graduate students than their home institutions.

  14. Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation

    NASA Astrophysics Data System (ADS)

    Lenard, Roger X.; Andrews, Dana G.

    2007-06-01

    Interstellar transportation to nearby star systems over periods shorter than the human lifetime requires speeds in the range of 0.1-0.15 c and relatively high accelerations. These speeds are not attainable using rockets, even with advanced fusion engines, because at these velocities the energy density of the spacecraft approaches the energy density of the fuel. Anti-matter engines are theoretically possible, but current physical limitations would have to be suspended to reach the mass densities required. Interstellar ramjets have not proven practicable, so this leaves beamed momentum propulsion or a continuously fueled Mag-Orion system as the remaining candidates. Deceleration is also a major issue, but part of the Mini-Mag Orion approach assists in solving this problem. This paper reviews the state of the art from Phase I and II SBIR work between Sandia National Laboratories and Andrews Space, applying our results to near-term interstellar travel. A 1000 T crewed spacecraft and propulsion system dry mass at 0.1c contains ~9×10^21 J. The author has generated technology requirements elsewhere for use of fission power reactors and conventional Brayton cycle machinery to propel a spacecraft using electric propulsion. Here we replace the electric power conversion, radiators, power generators and electric thrusters with a Mini-Mag Orion fission-fusion hybrid. Only a small fraction of fission fuel is actually carried with the spacecraft; the remainder of the propellant (macro-particles of fissionable material with a D-T core) is beamed to the spacecraft, and the total beam energy requirement for an interstellar probe mission is roughly 10^20 J, which would require the complete fissioning of 1000 tons of uranium assuming 35% power plant efficiency. This is roughly equivalent to a recurring cost per flight of 3.0 billion dollars in reactor grade enriched uranium at today's prices. Therefore, interstellar flight is an expensive proposition, but not unaffordable, if the nonrecurring costs of building the power plant can be minimized.

  15. A rapid and low noise switch from RANS to WMLES on curvilinear grids with compressible flow solvers

    NASA Astrophysics Data System (ADS)

    Deck, Sébastien; Weiss, Pierre-Elie; Renard, Nicolas

    2018-06-01

    A turbulent inflow for a rapid and low noise switch from RANS to Wall-Modelled LES on curvilinear grids with compressible flow solvers is presented. It can be embedded within the computational domain in practical applications with WMLES grids around three-dimensional geometries in a flexible zonal hybrid RANS/LES modelling context. It relies on a physics-motivated combination of Zonal Detached Eddy Simulation (ZDES) as the WMLES technique together with a Dynamic Forcing method processing the fluctuations caused by a Zonal Immersed Boundary Condition describing roughness elements. The performance in generating a physically-sound turbulent flow field with the proper mean skin friction and turbulent profiles after a short relaxation length is equivalent to more common inflow methods thanks to the generation of large-scale streamwise vorticity by the roughness elements. Comparisons in a low Mach-number zero-pressure-gradient flat-plate turbulent boundary layer up to Reθ = 6100 reveal that the pressure field is dominated by the spurious noise caused by the synthetic turbulence methods (Synthetic Eddy Method and White Noise injection), contrary to the new low-noise approach which may be used to obtain the low-frequency component of wall pressure and reproduce its intermittent nature. The robustness of the method is tested in the flow around a three-element airfoil with WMLES in the upper boundary layer near the trailing edge of the main element. In spite of the very short relaxation distance allowed, self-sustainable resolved turbulence is generated in the outer layer with significantly less spurious noise than with the approach involving White Noise. The ZDES grid count for this latter test case is more than two orders of magnitude lower than the Wall-Resolved LES requirement and a unique mesh is involved, which is much simpler than some multiple-mesh strategies devised for WMLES or turbulent inflow.

  16. Hydraulic properties of 3D rough-walled fractures during shearing: An experimental study

    NASA Astrophysics Data System (ADS)

    Yin, Qian; Ma, Guowei; Jing, Hongwen; Wang, Huidong; Su, Haijian; Wang, Yingchao; Liu, Richeng

    2017-12-01

    This study experimentally analyzed the influence of shear processes on nonlinear flow behavior through 3D rough-walled rock fractures. A high-precision apparatus was developed to perform stress-dependent fluid flow tests of fractured rocks. Then, water flow tests on rough-walled fractures with different mechanical displacements were conducted. At each shear level, the hydraulic pressure ranged from 0 to 0.6 MPa, and the normal load varied from 7 to 35 kN. The results show that (i) the relationship between the volumetric flow rate and hydraulic gradient of rough-walled fractures can be well fit using Forchheimer's law. Notably, both the linear and nonlinear coefficients in Forchheimer's law decrease during shearing; (ii) a sixth-order polynomial function is used to evaluate the transmissivity based on the Reynolds number of fractures during shearing. The transmissivity exhibits a decreasing trend as the Reynolds number increases and an increasing trend as the shear displacement increases; (iii) the critical hydraulic gradient, critical Reynolds number and equivalent hydraulic aperture of the rock fractures all increase as the shear displacement increases. When the shear displacement varies from 0 to 15 mm, the critical hydraulic gradient ranges from 0.3 to 2.2 for a normal load of 7 kN and increases to 1.8-8.6 for a normal load of 35 kN; and (iv) the Forchheimer law results are evaluated by plotting the normalized transmissivity of the fractures during shearing against the Reynolds number. An increase in the normal load shifts the fitted curves downward. Additionally, the Forchheimer coefficient β decreases with the shear displacement but increases with the applied normal load.
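
    Forchheimer's law relates the hydraulic gradient J to the volumetric flow rate Q through J = aQ + bQ², with a linear (viscous) coefficient and a nonlinear (inertial) coefficient. The sketch below fits both coefficients by least squares; the data values and function name are hypothetical, not measurements from the study.

```python
import numpy as np

def fit_forchheimer(flow_rate, hydraulic_gradient):
    """Least-squares fit of Forchheimer's law, J = a*Q + b*Q**2 (no intercept).

    Returns the linear coefficient a (viscous term) and nonlinear coefficient b (inertial term).
    """
    Q = np.asarray(flow_rate, dtype=float)
    J = np.asarray(hydraulic_gradient, dtype=float)
    design = np.column_stack([Q, Q ** 2])          # regressors Q and Q^2
    (a, b), *_ = np.linalg.lstsq(design, J, rcond=None)
    return a, b

# Hypothetical measurements: volumetric flow rate (m^3/s) vs hydraulic gradient (-)
Q = [1e-6, 2e-6, 4e-6, 8e-6, 1.6e-5]
J = [0.05, 0.11, 0.26, 0.68, 1.9]
print(fit_forchheimer(Q, J))
```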

  17. Surface-roughness considerations for atmospheric correction of ocean color sensors. I: The Rayleigh-scattering component.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm Lr(r) is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error depends significantly on the assumptions made regarding the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be of the same order of magnitude as that caused by uncertainties of ±15 mb in the surface atmospheric pressure or of ±50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.

  18. Origins and nature of non-Fickian transport through fractures

    NASA Astrophysics Data System (ADS)

    Wang, L.; Cardenas, M. B.

    2014-12-01

    Non-Fickian transport occurs across all scales within fractured and porous geological media. Fundamental understanding and appropriate characterization of non-Fickian transport through fractures is critical for understanding and predicting the fate of solutes and other scalars. We use both analytical and numerical modeling, including direct numerical simulation and particle tracking random walk, to investigate the origin of non-Fickian transport through both homogeneous and heterogeneous fractures. For the simple homogeneous fracture case, i.e., parallel plates, we theoretically derived a formula for the dynamic longitudinal dispersion (D) within Poiseuille flow. Using the closed-form expression for the theoretical D, we quantified the time (T) and length (L) scales separating preasymptotic and asymptotic dispersive transport, with T and L proportional to the aperture (b) of the parallel plates to the second and fourth orders, respectively. As for heterogeneous fractures, the fracture roughness and correlation length are closely associated with T and L, and thus indicate the origin of non-Fickian transport. Modeling solute transport through 2D rough-walled fractures with a continuous time random walk with a truncated power law shows that the degree of deviation from Fickian transport is proportional to fracture roughness. The estimated L for 2D rough-walled fractures is significantly longer than that derived from the formula for Poiseuille flow with equivalent b. Moreover, we artificially generated normally distributed 3D fractures with fixed correlation length but different fracture dimensions. Solute transport through 3D fractures was modeled with a particle tracking random walk algorithm. We found that transport transitions from non-Fickian to Fickian with increasing fracture dimensions, where the estimated L for the studied 3D fractures is related to the correlation length.
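
    For the parallel-plate case, the asymptotic (Taylor-Aris) dispersion coefficient for plane Poiseuille flow is commonly written D_eff = D_m + U²b²/(210 D_m), with b the full aperture. The sketch below evaluates it together with order-of-magnitude time and length scales for reaching the Fickian regime; the exact prefactors derived in the paper may differ, so this is an illustration of the scaling rather than the authors' formula.

```python
def taylor_dispersion_parallel_plates(mean_velocity, aperture, molecular_diffusivity):
    """Asymptotic longitudinal dispersion for plane Poiseuille flow (Taylor-Aris form).

    D_eff = D_m + U**2 * b**2 / (210 * D_m), with b the full aperture.
    The cross-sectional mixing time T ~ b**2 / D_m and the length L ~ U*T give rough
    scales separating pre-asymptotic from Fickian (asymptotic) transport.
    """
    U, b, Dm = mean_velocity, aperture, molecular_diffusivity
    D_eff = Dm + U ** 2 * b ** 2 / (210.0 * Dm)
    T_scale = b ** 2 / Dm        # time to sample the aperture by diffusion
    L_scale = U * T_scale        # distance advected over that time
    return D_eff, T_scale, L_scale

# Example: 1 mm aperture, 1 mm/s mean velocity, solute diffusivity 1e-9 m^2/s (illustrative values)
print(taylor_dispersion_parallel_plates(1e-3, 1e-3, 1e-9))
```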

  19. Single-scatter vector-wave scattering from surfaces with infinite slopes using the Kirchhoff approximation.

    PubMed

    Bruce, Neil C

    2008-08-01

    This paper presents a new formulation of the 3D Kirchhoff approximation that allows calculation of the scattering of vector waves from 2D rough surfaces containing structures with infinite slopes. This type of surface has applications, for example, in remote sensing and in testing or imaging of printed circuits. Some preliminary calculations for rectangular-shaped grooves in a plane are presented for the 2D surface method and are compared with the equivalent 1D surface calculations for the Kirchhoff and integral equation methods. Good agreement is found between the methods.

  20. Two photon excitation of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Pindzola, M. S.

    1977-01-01

    A standard perturbation expansion in the atom-radiation field interaction is used to calculate the two photon excitation cross section for the 1s²2s²2p⁴ ³P to 1s²2s²2p³(⁴S)3p ³P transition in atomic oxygen. The summation over bound and continuum intermediate states is handled by solving the equivalent inhomogeneous differential equation. Exact summation results differ by a factor of 2 from a rough estimate obtained by limiting the intermediate state summation to one bound state. Higher order electron correlation effects are also examined.

  1. Hydrodynamic collimation of gamma-ray-burst fireballs

    PubMed

    Levinson; Eichler

    2000-07-10

    Analytic solutions are presented for the hydrodynamic collimation of a relativistic fireball by a surrounding baryonic wind emanating from a torus. The opening angle is shown to be the ratio of the power output of the inner fireball to that of the exterior baryonic wind. The gamma-ray burst 990123 might thus be interpreted as a baryon-poor jet (BPJ) with an energy output of order 10^50 erg or less, collimated by a baryonic wind from a torus with an energy output of order 10^52.5 erg, roughly the geometric mean of the BPJ and its isotropic equivalent.

  2. Text-Messaging Practices and Links to General Spelling Skill: A Study of Australian Children

    ERIC Educational Resources Information Center

    Bushnell, Catherine; Kemp, Nenagh; Martin, Frances Heritage

    2011-01-01

    This study investigated 10- to 12-year-old Australian children's text-messaging practices and their relationship to traditional spelling ability. Of the 227 children tested, 82% reported sending text-messages; a median of 5 per day. Use of predictive and multi-press entry methods was roughly equal. Children produced a wide range of text-message…

  3. Optimizing best management practices to control anthropogenic sources of atmospheric phosphorus deposition to inland lakes.

    PubMed

    Weiss, Lee; Thé, Jesse; Winter, Jennifer; Gharabaghi, Bahram

    2018-04-18

    Excessive phosphorus loading to inland freshwater lakes around the globe has resulted in nuisance plant growth along waterfronts, degraded habitat for cold water fisheries, and impaired beaches, marinas and waterfront property. Direct atmospheric deposition of phosphorus can be a significant contributing source for inland lakes. The atmospheric deposition monitoring program for Lake Simcoe, Ontario indicates that roughly 20% of the annual total phosphorus load (2010-2014 period) is due to direct atmospheric deposition (both wet and dry) on the lake. This novel study presents a first-time application of the Genetic Algorithm (GA) methodology to optimize the application of best management practices (BMPs) related to agriculture and mobile sources in order to achieve atmospheric phosphorus reduction targets and restore the ecological health of the lake. The methodology takes into account the spatial distribution of the emission sources in the airshed, the complex atmospheric long-range transport and deposition processes, the cost and efficiency of common management practices, and social constraints related to the adoption of BMPs. The optimization scenarios suggest that optimal overall capital investments of approximately $2M, $4M, and $10M annually can achieve roughly 3, 4 and 5 tonnes of reduction in atmospheric P load to the lake, respectively. The exponential trend indicates diminishing returns for investment beyond roughly $3M per year and shows that focusing much of this investment in the upwind, nearshore area will significantly affect deposition to the lake. The optimization is based on a combination of the lowest-cost, most beneficial and socially acceptable management practices, supporting a science-informed strategy for promoting BMP implementation and adoption. The geospatial aspect of the optimization (i.e., proximity and location with respect to the lake) will help land managers encourage the use of these targeted best practices in areas that will benefit most from the phosphorus reduction approach.
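
    The genetic-algorithm idea can be illustrated with a deliberately simplified sketch: a binary genome selects a subset of BMPs, fitness is the phosphorus reduction achieved with a penalty for exceeding the budget, and standard selection, crossover and mutation operators evolve the population. The BMP costs, reduction values and budget below are entirely hypothetical, and the model omits the atmospheric transport, spatial and social terms used in the actual study.

```python
import random

# Hypothetical BMP candidates: (annual cost in $M, phosphorus reduction in tonnes/yr)
BMPS = [(0.4, 0.5), (0.6, 0.6), (1.0, 0.9), (1.5, 1.1), (0.3, 0.2), (0.8, 0.7), (2.0, 1.3)]
BUDGET = 4.0  # $M per year

def fitness(genome):
    """Total P reduction of the selected BMPs; heavily penalise budget violations."""
    cost = sum(c for g, (c, _) in zip(genome, BMPS) if g)
    reduction = sum(r for g, (_, r) in zip(genome, BMPS) if g)
    return reduction - 100.0 * max(0.0, cost - BUDGET)

def evolve(pop_size=50, generations=200, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in BMPS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(BMPS))             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```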

  4. Modification of equation of motion of fluid-conveying pipe for laminar and turbulent flow profiles

    NASA Astrophysics Data System (ADS)

    Guo, C. Q.; Zhang, C. H.; Païdoussis, M. P.

    2010-07-01

    Considering the non-uniformity of the flow velocity distribution in fluid-conveying pipes caused by the viscosity of real fluids, the centrifugal force term in the equation of motion of the pipe is modified for laminar and turbulent flow profiles. The flow-profile-modification factors are found to be 1.333, 1.015-1.040 and 1.035-1.055 for laminar flow in circular pipes, turbulent flow in smooth-wall circular pipes and turbulent flow in rough-wall circular pipes, respectively. The critical flow velocities for divergence in the above-mentioned three cases are found to be 13.4%, 0.74-1.9% and 1.7-2.6%, respectively, lower than that with plug flow, while those for flutter are even lower, which could reach 36% for the laminar flow profile. By introducing two new concepts of equivalent flow velocity and equivalent mass, fluid-conveying pipe problems with different flow profiles can be solved with the equation of motion for plug flow.
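
    The modification factor for the centrifugal force term can be recovered numerically as the momentum-flux correction β = (1/A)∫u² dA / U² over the pipe cross-section. The sketch below reproduces the laminar value of 4/3 ≈ 1.333 and a value near 1.02 for a 1/7 power-law turbulent profile, consistent with the ranges quoted above; the power-law profile is a textbook stand-in, not the velocity distributions used in the paper.

```python
import numpy as np

def profile_factor(exponent=None, n_r=200000):
    """Momentum-flux correction beta = (1/A) * integral(u^2 dA) / U^2 for a circular pipe."""
    dr = 1.0 / n_r
    r = (np.arange(n_r) + 0.5) * dr              # midpoints of r/R for a simple midpoint rule
    if exponent is None:
        u = 2.0 * (1.0 - r ** 2)                 # laminar Poiseuille profile, already mean-normalised
    else:
        u = (1.0 - r) ** (1.0 / exponent)        # 1/n power-law profile (rough turbulent model)
        u = u / (2.0 * np.sum(u * r) * dr)       # renormalise so that the mean velocity is 1
    return 2.0 * np.sum(u ** 2 * r) * dr

print(profile_factor())     # laminar: ~1.333
print(profile_factor(7))    # 1/7 power law: ~1.02, within the quoted turbulent range
```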

  5. LiTaO3 Shear Wave Resonator for Viscosity Measurement of Polymer Liquid in MHz Range

    NASA Astrophysics Data System (ADS)

    Bannai, Mai; Wakatsuki, Noboru

    2004-05-01

    We are studying the response of a strip-type LiTaO3 shear wave resonator in polymer liquids in the MHz range. The element size is small (1.0 × 7.4 × 0.49 mm³). The side surfaces of the resonator were covered with a highly viscous silicone rubber material. Using Newtonian fluid theory, the characteristic mechanical impedance of the shear wave in the liquid was derived for the equivalent circuit of the resonator. The analytical values for glycerin were roughly consistent with experiments using only 0.1 cm³ of liquid. The polymer liquid used for the measurement was silicone oil. The static viscosity ranged from 9.8 to 94,720 mPa·s. The resonance frequency change was from 0.05% to 0.07%. The resonance resistance change was from 57 Ω to 190 Ω. The experimental results were examined using Mason's equivalent circuit with a Maxwell model of a viscoelastic polymer.
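
    For a Newtonian liquid, the characteristic shear-wave mechanical impedance that enters such an equivalent circuit is Z = sqrt(jωρη), whose real and imaginary parts are both sqrt(ωρη/2). The sketch below evaluates it for glycerin-like values; the operating frequency and property values are illustrative assumptions, not taken from the paper.

```python
import cmath, math

def shear_wave_impedance(frequency_hz, density, viscosity_pa_s):
    """Characteristic shear-wave impedance of a Newtonian liquid, Z = sqrt(j*omega*rho*eta).

    Real and imaginary parts are equal: Re(Z) = Im(Z) = sqrt(omega*rho*eta/2).
    """
    omega = 2.0 * math.pi * frequency_hz
    return cmath.sqrt(1j * omega * density * viscosity_pa_s)

# Glycerin-like values near a few MHz: rho ~ 1260 kg/m^3, eta ~ 1.4 Pa.s, f ~ 3 MHz (illustrative)
print(shear_wave_impedance(3e6, 1260.0, 1.4))
```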

  6. Effect of open hole on tensile failure properties of 2D triaxial braided textile composites and tape equivalents

    NASA Technical Reports Server (NTRS)

    Norman, Timothy L.; Anglin, Colin; Gaskin, David; Patrick, Mike

    1995-01-01

    The unnotched and notched (open hole) tensile strength and failure mechanisms of two-dimensional (2D) triaxial braided composites were examined. The effects of notch size and notch position were investigated. Damage initiation and propagation in notched and unnotched coupons were also examined. Theory developed to predict the normal stress distribution near an open hole and failure for tape laminated composites was evaluated for its applicability to triaxial braided textile composite materials. Four fiber architectures were considered, with different combinations of braid angle, longitudinal and braider yarn size, and percentage of longitudinal yarns. Tape laminates equivalent to the textile composites were also constructed for comparison. Unnotched tape equivalents were stronger than braided textiles but exhibited greater notch sensitivity. Notched textiles and tape equivalents have roughly the same strength at large notch sizes. Two common damage mechanisms were found: braider yarn cracking and near-notch longitudinal yarn splitting. Cracking was found to initiate in braider yarns in unnotched and notched coupons and to propagate in the direction of the braider yarns until failure. Longitudinal yarn splitting occurred in three of the four architectures, those that were longitudinally fiber dominated. Damage initiation stress decreased with increasing braid angle. Agreement between measured and predicted near-notch stresses was generally good but weak for textiles with large braid angles. Notch strength could not be predicted using existing anisotropic theory for braided textiles due to their insensitivity to the notch.

  7. Equivalence Testing as a Tool for Fatigue Risk Management in Aviation.

    PubMed

    Wu, Lora J; Gander, Philippa H; van den Berg, Margo; Signal, T Leigh

    2018-04-01

    Many civilian aviation regulators favor evidence-based strategies that go beyond hours-of-service approaches for managing fatigue risk. Several countries now allow operations to be flown outside of flight and duty hour limitations, provided airlines demonstrate an alternative method of compliance that yields safety levels "at least equivalent to" the prescriptive regulations. Here we discuss equivalence testing in occupational fatigue risk management. We present suggested ratios/margins of practical equivalence when comparing operations inside and outside of prescriptive regulations for two common aviation safety performance indicators: total in-flight sleep duration and psychomotor vigilance task reaction speed. Suggested levels of practical equivalence, based on expertise coupled with evidence from field and laboratory studies, are ≤ 30 min in-flight sleep and ± 15% of reference response speed. Equivalence testing is illustrated in analyses of a within-subjects field study during an out-and-back long-range trip. During both sectors of their trip, 41 pilots were monitored via actigraphy, sleep diary, and a top-of-descent psychomotor vigilance task. Pilots were assigned to take rest breaks in a standard lie-flat bunk on one sector and in a bunk tapered 9° from hip to foot on the other sector. Total in-flight sleep duration (134 ± 53 vs. 135 ± 55 min) and mean reaction speed at top of descent (3.94 ± 0.58 vs. 3.77 ± 0.58) were equivalent after rest in the full vs. tapered bunk. Equivalence testing is a complementary statistical approach to difference testing when comparing levels of fatigue and performance in occupational settings and can be applied in transportation policy decision making. Wu LJ, Gander PH, van den Berg M, Signal TL. Equivalence testing as a tool for fatigue risk management in aviation. Aerosp Med Hum Perform. 2018; 89(4):383-388.
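
    Equivalence testing of paired observations against fixed margins can be done with two one-sided tests (TOST). The sketch below applies the abstract's ±30 min sleep margin to hypothetical paired sleep durations; the data, helper name and significance level are assumptions for illustration, not the study's analysis.

```python
import numpy as np
from scipy import stats

def paired_tost(x, y, lower, upper, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    Equivalence is concluded when the mean difference (x - y) lies inside
    (lower, upper) with both one-sided p-values below alpha.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() - lower) / se            # H0: mean difference <= lower margin
    t_upper = (d.mean() - upper) / se            # H0: mean difference >= upper margin
    p_lower = 1.0 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper) < alpha

# Illustrative in-flight sleep durations (min) for the same pilots in two bunk conditions,
# tested against the +/- 30 min practical-equivalence margin suggested in the abstract
standard = [120, 150, 100, 160, 140, 130, 155]
tapered  = [118, 148, 105, 150, 145, 128, 150]
print(paired_tost(standard, tapered, lower=-30.0, upper=30.0))
```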

  8. LET spectra measurements from the STS-35 CPDs

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Linear energy transfer (LET) spectra derived from automated track analysis system (ATAS) track parameter measurements for crew passive dosimeters (CPDs) flown with the astronauts on STS-35 are plotted. The spread between the seven individual spectra is typical of past manual measurements of sets of CPDs. This difference is probably due to the cumulative net shielding variations experienced by the CPDs as the astronauts carrying them went about their activities on the Space Shuttle. The STS-35 mission was launched on Dec. 2, 1990, at 28.5 degrees inclination and 352-km altitude. This is somewhat higher than the nominal 300-km flights, and the orbit intersects more of the high intensity trapped proton region in the South Atlantic Anomaly (SAA). However, in comparison with APD spectra measured on earlier lower altitude missions (STS-26, -29, -30, -32), the flux spectra are all roughly comparable. This may be due to the fact that the STS-35 mission took place close to solar maximum (Feb. 1990), or perhaps to shielding differences. The corresponding dose and dose equivalent spectra for this mission are shown. The effect of statistical fluctuations at the higher LET values, where track densities are small, is very noticeable. This results in an increased spread within the dose rate and dose equivalent rate spectra, as compared to the flux spectra. The contribution to dose and dose equivalent per measured track is much greater in the high LET region, and the differences, though numerically small, are heavily weighted in the integral spectra. The optimum measurement and characterization of the high LET tails of the spectra represent an important part of the research into plastic nuclear track detector (PNTD) response. The integral flux, dose rate, dose equivalent rate and mission dose equivalent for the seven astronauts are also given.

  9. The equivalence between dislocation pile-ups and cracks

    NASA Technical Reports Server (NTRS)

    Liu, H. W.; Gao, Q.

    1990-01-01

    Cracks and dislocation pile-ups are equivalent to each other. In this paper, the physical equivalence between cracks and pile-ups is delineated, and the relationships between crack-extension force, force on the leading dislocation, stress-intensity factor, and dislocation density are reviewed and summarized. These relations make it possible to extend quantitatively the recent advances in the concepts and practices of fracture mechanics to the studies of microfractures and microplastic deformations.

  10. Beneficial Effects of the Genus Aloe on Wound Healing, Cell Proliferation, and Differentiation of Epidermal Keratinocytes

    PubMed Central

    Uda, Junki; Kubo, Hirokazu; Nakajima, Yuka; Goto, Arisa; Akaki, Junji; Yoshida, Ikuyo; Matsuoka, Nobuya; Hayakawa, Takao

    2016-01-01

    Aloe has been used as a folk medicine because it has several important therapeutic properties. These include wound and burn healing, and Aloe is now used in a variety of commercially available topical medications for wound healing and skin care. However, its effects on epidermal keratinocytes remain largely unclear. Our data indicated that both Aloe vera gel (AVG) and Cape aloe extract (CAE) significantly improved wound healing in human primary epidermal keratinocytes (HPEKs) and a human skin equivalent model. In addition, flow cytometry analysis revealed that cell surface expressions of β1-, α6-, β4-integrin, and E-cadherin increased in HPEKs treated with AVG and CAE. These increases may contribute to cell migration and wound healing. Treatment with Aloe also resulted in significant changes in cell-cycle progression and in increases in cell number. Aloe increased gene expression of differentiation markers in HPEKs, suggesting roles for AVG and CAE in the improvement of keratinocyte function. Furthermore, human skin epidermal equivalents developed from HPEKs with medium containing Aloe were thicker than control equivalents, indicating the effectiveness of Aloe on enhancing epidermal development. Based on these results, both AVG and CAE have benefits in wound healing and in treatment of rough skin. PMID:27736988

  11. Evidence for montmorillonite or its compositional equivalent in Columbia Hills, Mars

    USGS Publications Warehouse

    Clark, B. C.; Arvidson, R. E.; Gellert, Ralf; Morris, R.V.; Ming, D. W.; Richter, L.; Ruff, S.W.; Michalski, J.R.; Farrand, W. H.; Yen, A. S.; Herkenhoff, K. E.; Li, R.; Squyres, S. W.; Schroder, C.; Klingelhofer, G.; Bell, J.F.

    2007-01-01

    During its exploration of the Columbia Hills, the Mars Exploration Rover "Spirit" encountered several similar samples that are distinctly different from Martian meteorites and known Gusev crater soils, rocks, and sediments. Occurring in a variety of contexts and locations, these "Independence class" samples are rough-textured, iron-poor (equivalent FeO ∼ 4 wt%), have high Al/Si ratios, and often contain unexpectedly high concentrations of one or more minor or trace elements (including Cr, Ni, Cu, Sr, and Y). Apart from accessory minerals, the major component common to these samples has a compositional profile of major and minor elements which is similar to the smectite montmorillonite, implicating this mineral, or its compositional equivalent. Infrared thermal emission spectra do not indicate the presence of crystalline smectite. One of these samples was found spatially associated with a ferric sulfate-enriched soil horizon, possibly indicating a genetic relationship between these disparate types of materials. Compared to the nearby Wishstone and Watchtower class rocks, major aqueous alteration involving mineral dissolution and mobilization with consequent depletions of certain elements is implied for this setting and may be undetectable by remote sensing from orbit because of the small scale of the occurrences and obscuration by mantling with soil and dust. Copyright 2007 by the American Geophysical Union.

  12. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
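
    A deliberately simplified stand-in for the proposed procedure is sketched below: equivalence limits are derived from the spread of reference-variety means, and an observed GM mean is checked against them. The real method uses a specific linear mixed model and several equivalence classes, so the numbers and the simple mean ± t·SD limits here are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

def equivalence_limits(reference_means, coverage=0.95):
    """Simplified equivalence limits from the spread of reference-variety means.

    The paper derives limits from a linear mixed model; here the between-variety
    standard deviation of the observed variety means is used directly as a stand-in.
    """
    m = np.asarray(reference_means, dtype=float)
    centre, sd = m.mean(), m.std(ddof=1)
    t = stats.t.ppf(0.5 + coverage / 2.0, df=m.size - 1)
    return centre - t * sd, centre + t * sd

# Hypothetical per-variety means of one compositional endpoint (e.g. protein, % dry matter)
reference = [9.1, 9.8, 10.4, 9.5, 10.1, 9.9, 10.6]
low, high = equivalence_limits(reference)
gm_mean = 10.2
print((low, high), "GM mean inside limits:", low <= gm_mean <= high)
```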

  13. Backscattering from a Gaussian distributed, perfectly conducting, rough surface

    NASA Technical Reports Server (NTRS)

    Brown, G. S.

    1977-01-01

    The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.

  14. Evidence of a metal-rich surface for the Asteroid (16) Psyche from interferometric observations in the thermal infrared

    NASA Astrophysics Data System (ADS)

    Matter, Alexis; Delbo, Marco; Carry, Benoit; Ligori, Sebastiano

    2013-09-01

    We describe the first determination of thermal properties and size of the M-type Asteroid (16) Psyche from interferometric observations obtained with the Mid-Infrared Interferometric Instrument (MIDI) of the Very Large Telescope Interferometer. We used a thermophysical model to interpret our interferometric data. Our analysis shows that Psyche has a low macroscopic surface roughness. Using a convex 3-D shape model obtained by Kaasalainen et al. (Kaasalainen, M., Torppa, J., Piironen, J. [2002]. Icarus 159, 369-395), we derived a volume-equivalent diameter for (16) Psyche of 247 ± 25 km or 238 ± 24 km, depending on the possible values of surface roughness. Our corresponding thermal inertia estimates are 133 or 114 J m^-2 s^-0.5 K^-1, with a total uncertainty estimated at 40 J m^-2 s^-0.5 K^-1. They are among the highest thermal inertia values ever measured for an asteroid of this size. We consider this as new evidence of a metal-rich surface for the Asteroid (16) Psyche.

  15. 7 CFR 1450.208 - Eligible practices.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... practices specified in the conservation plan, forest stewardship plan, or equivalent plan that meet all standards needed to cost-effectively establish: (1) Annual crops; (2) Non-woody perennial crops; and (3...

  16. 7 CFR 1450.208 - Eligible practices.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... practices specified in the conservation plan, forest stewardship plan, or equivalent plan that meet all standards needed to cost-effectively establish: (1) Annual crops; (2) Non-woody perennial crops; and (3...

  17. 7 CFR 1450.208 - Eligible practices.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... practices specified in the conservation plan, forest stewardship plan, or equivalent plan that meet all standards needed to cost-effectively establish: (1) Annual crops; (2) Non-woody perennial crops; and (3...

  18. 7 CFR 1450.208 - Eligible practices.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... practices specified in the conservation plan, forest stewardship plan, or equivalent plan that meet all standards needed to cost-effectively establish: (1) Annual crops; (2) Non-woody perennial crops; and (3...

  19. On the stability of von Kármán rotating-disk boundary layers with radial anisotropic surface roughness

    NASA Astrophysics Data System (ADS)

    Garrett, S. J.; Cooper, A. J.; Harris, J. H.; Özkan, M.; Segalini, A.; Thomas, P. J.

    2016-01-01

    We summarise results of a theoretical study investigating the distinct convective instability properties of steady boundary-layer flow over rough rotating disks. A generic roughness pattern of concentric circles with sinusoidal surface undulations in the radial direction is considered. The goal is to compare, for the first time, predictions obtained by means of two alternative, and fundamentally different, modelling approaches for surface roughness. The motivating rationale is to identify commonalities and isolate results that might potentially represent artefacts associated with the particular methodologies underlying one of the two modelling approaches. The most significant result of practical relevance is that both approaches predict an overall stabilising effect on the type I instability mode of rotating-disk flow. This mode leads to transition of the rotating-disk boundary layer and, more generally, the transition of boundary layers with a cross-flow profile. Stabilisation of the type I mode means that it may be possible to exploit surface roughness for laminar-flow control in boundary layers with a cross-flow component. However, we also find differences between the two sets of model predictions, some subtle and some substantial. These will represent criteria for establishing which of the two alternative approaches is more suitable to correctly describe experimental data when these become available.

  20. Influence of Selective Laser Melting Processing Parameters of Co-Cr-W Powders on the Roughness of Exterior Surfaces

    NASA Astrophysics Data System (ADS)

    Baciu, M. A.; Baciu, E. R.; Bejinariu, C.; Toma, S. L.; Danila, A.; Baciu, C.

    2018-06-01

    Selective Laser Melting (SLM) represents an Additive Manufacturing method widely used in medical practice, mainly in dental medicine. The powder of 59% Co, 25% Cr, 2.5% W alloy (Starbond CoS Powder 55, S&S Scheftner C, Germany) was processed (SLM) on a Realizer SLM 50 device (SLM Solution, Germany). After laser processing and simple sanding with Al2O3 or two-phase sanding (Al2O3 and glass balls), measurements of surface roughness were conducted. This paper presents the influences exercised by laser power (P = 60 W, 80 W and 100 W), the scanning speed (vscan = 333 mm/s, 500 mm/s and 1000 mm/s) and exposure time (te = 20 µs, 40 µs and 60 µs) on the roughness of surfaces obtained by SLM processing. Based on the experimental results obtained for roughness (Ra), some recommendations regarding the choice of favorable combinations among the values of technological parameters under study in order to obtain the surface quality necessary for subsequent applications of the processed parts (SLM) have been made.

  1. Roughness topographical effects on mean momentum and stress budgets in developed turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Aghaei Jouybari, Mostafa; Yuan, Junlin

    2017-11-01

    Direct numerical simulations of turbulent channel flows are carried out over two surfaces: a synthesized sand-grain surface and a realistic turbine roughness that is characterized by more prominent large-scale surface features. To separate the effects of the wall-normal variation of the roughness area fraction from the (true) variation of flow statistics, the governing equations are area-averaged using intrinsic averaging, contrary to the usual practice based on the total area (i.e., superficial averaging). Additional terms appear in the mean-momentum equation resulting from the wall-normal variation of the solid fraction and play a role in the near-wall balance. Results from surfaces with a step solidity function (e.g., cubes) will also be discussed. Compared to the sand grains, the turbine surface generates stronger form-induced fluctuations, despite a weaker dispersive shear stress. This is associated with more significant form-induced production (comparable to shear production) in the Reynolds stress budgets, weaker pressure work, and, consequently, a more anisotropic redistribution of turbulent kinetic energy in the roughness sublayer, which potentially leads to different turbulent responses between the two surfaces in non-equilibrium flows.
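
    The distinction between superficial and intrinsic averaging can be made concrete with a small sketch: at each wall-normal location, the superficial average divides by the total plane area while the intrinsic average divides by the fluid area only, the two differing by the local fluid fraction. The array shapes and the toy mask below are illustrative, not the simulation data.

```python
import numpy as np

def plane_averages(field, fluid_mask):
    """Superficial vs intrinsic plane averages of a 3-D field near a rough wall.

    field, fluid_mask : arrays of shape (ny, nx, nz); fluid_mask is 1 in fluid, 0 in solid.
    Superficial: sum over the plane divided by the total plane area.
    Intrinsic:   sum over the plane divided by the fluid area only (= superficial / fluid fraction).
    """
    total = fluid_mask.shape[1] * fluid_mask.shape[2]
    superficial = (field * fluid_mask).sum(axis=(1, 2)) / total
    fluid_fraction = fluid_mask.sum(axis=(1, 2)) / total
    intrinsic = np.where(fluid_fraction > 0, superficial / fluid_fraction, 0.0)
    return superficial, intrinsic, fluid_fraction

# Toy example: the lowest wall-normal plane is half blocked by roughness elements
rng = np.random.default_rng(1)
u = rng.random((4, 8, 8))
mask = np.ones_like(u)
mask[0, :, :4] = 0.0
print(plane_averages(u, mask)[2])   # fluid fraction per plane: 0.5 at the wall, 1.0 above
```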

  2. Fabrication of transparent superhydrophobic polytetrafluoroethylene coating

    NASA Astrophysics Data System (ADS)

    Alawajji, Raad A.; Kannarpady, Ganesh K.; Biris, Alexandru S.

    2018-06-01

    Polytetrafluoroethylene (PTFE) thin films were successfully deposited on glass substrates using pulsed laser deposition, with deposition times ranging from 30 to 120 minutes (min). The surface roughness of the films increased as deposition time increased, with micro/nanoscale roughness developing when deposition time increased beyond 60 min. This roughness made the surface superhydrophobic, with a contact angle of about 151.6° ± 1°. UV-vis spectroscopic analysis of the PTFE films revealed that they were highly transparent, up to ∼90% in the visible and near-infrared ranges. Furthermore, when the deposition time was increased, which increased the film thickness, the films were able to absorb 80-90% of ultraviolet light in the wavelength range <300 nm. The researchers used an X-ray photoelectron spectrometer to determine the chemical and elemental composition of the films' surfaces. Atomic force microscopy was used to determine the effect of surface roughness on the films' hydrophobicity. The fabricated superhydrophobic films have many potential practical uses, from self-cleaning materials to solar cell panel coatings. Additionally, the low dielectric properties of PTFE make the films ideal for communication antenna coatings and similar applications.

  3. A study of the interactions between glass-ionomer cement and S. sanguis biofilms

    NASA Astrophysics Data System (ADS)

    Hengtrakool, Chanotai

    Glass-ionomer cements (GIC) have been used for dental procedures for many years and more recently in other medical applications such as bone cements for bone reconstruction and as drug release agents. The postulated caries-preventive activities of GIC are thought to result from their sealing ability, remineralization potential and antibacterial effects. Extensive 'in vitro' investigations have attempted to quantify these effects. In this study, an artificial mouth model, simulating 'in vivo' conditions at the tooth surface, was used to achieve a better understanding of the interaction of oral bacteria with the cements. This study investigated the interaction of Streptococcus sanguis, a common mouth commensal, with two glass-ionomer formulations (one containing fluoride and the other without fluoride ion), with particular reference to bacterial growth and changes in surface roughness and hardness of the glass-ionomer cement with respect to time. Restorative materials with rough surfaces will promote bacterial accumulation 'in vivo', and plaque formation is one factor in surface degradation. The constant depth film fermenter (CDFF) permits the examination of these phenomena and was used to investigate glass-ionomer/S. sanguis biofilm interaction over periods up to 14 days. In conjunction with these studies, surface roughness was measured using a 3-dimensional laser profilometer and the surface hardness was evaluated using a micro-indenter. Fluoride release from the cement was measured over 84 days. The results showed that autoclaving the CDFF prior to bacterial inoculation did not appear to affect the long-term fluoride release of the GIC. Laser profilometry revealed that the initial roughness and surface area of the GICs were significantly greater than those of the hydroxyapatite control. S. sanguis viable counts were significantly reduced for both glass-ionomer formulations in the short term, the greater reduction being with the fluoride-containing GIC. S. sanguis biofilms produced hardness reductions on the surface of GIC:A similar to those observed with lactic acid at pH 5 and artificial saliva, whereas the effect on GIC:B was equivalent to that of lactic acid at pH between 4 and 5. GICs showed changes in surface roughness after removal of the biofilms. This indicates that while the S. sanguis biofilm is affected by the GIC, there is also an increase in roughness of the cement, indicating some degradation.

  4. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... using information such as service reports and customer complaints to eliminate potential causes of... standards of good practice and by qualified personnel. Test anomalies or irregularities shall be documented... designated as a reference or equivalent method, to ensure that all analyzers or samplers offered for sale...

  5. Compositional and Mechanical Properties of Peanuts Roasted to Equivalent Colors using Different Time/Temperature Combinations

    USDA-ARS?s Scientific Manuscript database

    Peanuts in North America and Europe are primarily consumed after dry roasting. Standard industry practice is to roast peanuts to a specific surface color (Hunter L-value) for a given application; however, equivalent surface colors can be attained using different roast temperature/time combinations,...

  6. Should non-disclosures be considered as morally equivalent to lies within the doctor–patient relationship?

    PubMed Central

    Cox, Caitriona L; Fritz, Zoe

    2016-01-01

    In modern practice, doctors who outright lie to their patients are often condemned, yet those who employ non-lying deceptions tend to be judged less critically. Some areas of non-disclosure have recently been challenged: not telling patients about resuscitation decisions; inadequately informing patients about risks of alternative procedures and withholding information about medical errors. Despite this, there remain many areas of clinical practice where non-disclosures of information are accepted, where lies about such information would not be. Using illustrative hypothetical situations, all based on common clinical practice, we explore the extent to which we should consider other deceptive practices in medicine to be morally equivalent to lying. We suggest that there is no significant moral difference between lying to a patient and intentionally withholding relevant information: non-disclosures could be subjected to Bok's ‘Test of Publicity’ to assess permissibility in the same way that lies are. The moral equivalence of lying and relevant non-disclosure is particularly compelling when the agent's motivations, and the consequences of the actions (from the patient's perspectives), are the same. We conclude that it is arbitrary to claim that there is anything inherently worse about lying to a patient to mislead them than intentionally deceiving them using other methods, such as euphemism or non-disclosure. We should question our intuition that non-lying deceptive practices in clinical practice are more permissible and should thus subject non-disclosures to the same scrutiny we afford to lies. PMID:27451425

  7. Assessment of the equivalence of a generic to a branded femoral stem

    PubMed Central

    Hothi, H.; Henckel, J.; Shearing, P.; Holme, T.; Cerquiglini, A.; Laura, A. Di; Atrey, A.; Skinner, J.; Hart, A.

    2017-01-01

    Aims The aim of this study was to compare the design of the generic OptiStem XTR femoral stem with the established Exeter femoral stem. Materials and Methods We obtained five boxed, as manufactured, implants of both designs at random (ten in total). Two examiners were blinded to the implant design and independently measured the mass, volume, trunnion surface topography, trunnion roughness, trunnion cone angle, Caput-Collum-Diaphyseal (CCD) angle, femoral offset, stem length, neck length, and the width and roughness of the polished stem shaft using peer-reviewed methods. We then compared the stems using these parameters. Results We found that the OptiStems were lighter (p < 0.001), had a rougher trunnion surface (p < 0.001) with a greater spacing and depth of the machined threads (p < 0.001), had greater trunnion cone angles (p = 0.007), and a smaller radius at the top of the trunnion (p = 0.007). There was no difference in stem volume (p = 0.643), CCD angle (p = 0.788), offset (p = 0.993), neck length (p = 0.344), stem length (p = 0.808), shaft width (p = 0.058 to 0.720) or roughness of the polished surface (p = 0.536). Conclusion This preliminary investigation found that whilst there were similarities between the two designs, the generic OptiStem is different to the branded Exeter design. Cite this article: Bone Joint J 2017;99-B:310–16. PMID:28249969

  8. Quantitative evaluation of performance of three-dimensional printed lenses

    NASA Astrophysics Data System (ADS)

    Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.

    2017-08-01

    We present an analysis of the shape, surface quality, and imaging capabilities of custom three-dimensional (3-D) printed lenses. 3-D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical, and rotationally nonsymmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes, such as grinding, polishing, and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing, etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root mean square (RMS) wavefront error, radii of curvature, and the arithmetic roughness average (Ra) profile of plastic and glass lenses. In addition, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate performance of 3-D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra<20 nm). The RMS wavefront error of 3-D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but, when measured within 63% of its clear aperture, the 3-D printed components' RMS wavefront error was comparable to glass lenses.

  9. Storage and release of organic carbon from glaciers and ice sheets

    NASA Astrophysics Data System (ADS)

    Hood, Eran; Battin, Tom J.; Fellman, Jason; O'Neel, Shad; Spencer, Robert G. M.

    2015-02-01

    Polar ice sheets and mountain glaciers, which cover roughly 11% of the Earth's land surface, store organic carbon from local and distant sources and then release it to downstream environments. Climate-driven changes to glacier runoff are expected to be larger than climate impacts on other components of the hydrological cycle, and may represent an important flux of organic carbon. A compilation of published data on dissolved organic carbon from glaciers across five continents reveals that mountain and polar glaciers represent a quantitatively important store of organic carbon. The Antarctic Ice Sheet is the repository of most of the roughly 6 petagrams (Pg) of organic carbon stored in glacier ice, but the annual release of glacier organic carbon is dominated by mountain glaciers in the case of dissolved organic carbon and the Greenland Ice Sheet in the case of particulate organic carbon. Climate change contributes to these fluxes: approximately 13% of the annual flux of glacier dissolved organic carbon is a result of glacier mass loss. These losses are expected to accelerate, leading to a cumulative loss of roughly 15 teragrams (Tg) of glacial dissolved organic carbon by 2050 due to climate change -- equivalent to about half of the annual flux of dissolved organic carbon from the Amazon River. Thus, glaciers constitute a key link between terrestrial and aquatic carbon fluxes, and will be of increasing importance in land-to-ocean fluxes of organic carbon in glacierized regions.

  10. Quantitative evaluation of performance of 3D printed lenses

    PubMed Central

    Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.

    2017-01-01

    We present an analysis of the shape, surface quality, and imaging capabilities of custom 3D printed lenses. 3D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical and rotationally non-symmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes such as grinding, polishing and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using patented Printoptical© technology that is based on an inkjet printing technique by comparing them to lenses made with traditional glass processing technologies (grinding, polishing etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root-mean-squared wavefront error, radii of curvature and the arithmetic average of the roughness profile (Ra) of plastic and glass lenses. Additionally, the imaging performance of selected pairs of lenses was tested using 1951 USAF resolution target. The results indicate performance of 3D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra < 20 nm). The RMS wavefront error of 3D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but when measured within 63% of its clear aperture, 3D printed components’ RMS wavefront error was comparable to glass lenses. PMID:29238114

  11. Storage and release of organic carbon from glaciers and ice sheets

    USGS Publications Warehouse

    Hood, Eran; Battin, Tom J.; Fellman, Jason; O'Neel, Shad; Spencer, Robert G. M.

    2015-01-01

    Polar ice sheets and mountain glaciers, which cover roughly 11% of the Earth's land surface, store organic carbon from local and distant sources and then release it to downstream environments. Climate-driven changes to glacier runoff are expected to be larger than climate impacts on other components of the hydrological cycle, and may represent an important flux of organic carbon. A compilation of published data on dissolved organic carbon from glaciers across five continents reveals that mountain and polar glaciers represent a quantitatively important store of organic carbon. The Antarctic Ice Sheet is the repository of most of the roughly 6 petagrams (Pg) of organic carbon stored in glacier ice, but the annual release of glacier organic carbon is dominated by mountain glaciers in the case of dissolved organic carbon and the Greenland Ice Sheet in the case of particulate organic carbon. Climate change contributes to these fluxes: approximately 13% of the annual flux of glacier dissolved organic carbon is a result of glacier mass loss. These losses are expected to accelerate, leading to a cumulative loss of roughly 15 teragrams (Tg) of glacial dissolved organic carbon by 2050 due to climate change — equivalent to about half of the annual flux of dissolved organic carbon from the Amazon River. Thus, glaciers constitute a key link between terrestrial and aquatic carbon fluxes, and will be of increasing importance in land-to-ocean fluxes of organic carbon in glacierized regions.

  12. Double and multiple contacts of similar elastic materials

    NASA Astrophysics Data System (ADS)

    Sundaram, Narayan K.

    Ongoing fretting fatigue research has focussed on developing robust contact mechanics solutions for complicated load histories involving normal, shear, moment and bulk loads. For certain indenter profiles and applied loads, the contact patch separates into two disconnected regions. Existing Singular Integral Equation (SIE) techniques do not address these situations. A fast numerical tool is developed to solve such problems for similar elastic materials for a wide range of profiles and load paths including applied moments and remote bulk-stress effects. This tool is then used to investigate two problems in double contacts. The first, to determine the shear configuration space for a biquadratic punch for the generalized Cattaneo-Mindlin problem. The second, to obtain quantitative estimates of the interaction between neighboring cylindrical contacts for both the applied normal load and partial slip problems up to the limits of validity of the halfspace assumption. In double contact problems without symmetry, obtaining a unique solution requires the satisfaction of a condition relating the contact ends, rigid-body rotation and profile function. This condition has the interpretation that a rigid-rod connecting the inner contact ends of an equivalent frictionless double contact of a rigid indenter and halfspace may only undergo rigid body motions. It is also found that the ends of stick-zones, local slips and remote-applied strains in double contact problems are related by an equation expressing tangential surface-displacement continuity. This equation is essential to solve partial-slip problems without contact equivalents. Even when neighboring cylindrical contacts may be treated as non-interacting for the purpose of determining the pressure tractions, this is not generally true if a shear load is applied. The mutual influence of neighboring contacts in partial slip problems is largest at small shear load fractions. For both the pressure and partial slip problems, the interactions are stronger with increasing strength of loading and contact proximity. A new contact algorithm is developed and the SIE method extended to tackle contact problems with an arbitrary number of contact patches with no approximations made about contact interactions. In the case of multiple contact problems determining the correct contact configuration is significantly more complicated than in double contacts, necessitating a new approach. Both the normal contact and partial slip problems are solved. The tool is then used to study contacts of regular rough cylinders, a flat with rounded punch with superimposed sinusoidal roughness and is also applied to analyze the contact of an experimental rough surface with a halfspace. The partial slip results for multiple-contacts are generally consistent with Cattaneo-Mindlin continuum scale results, in that the outermost contacts tend to be in full sliding. Lastly, the influence of plasticity on frictionless multiple contact problems is studied using FEM for two common steel and aluminum alloys. The key findings are that the plasticity decreases the peak pressure and increases both real and apparent contact areas, thus 'blunting' the sharp pressures caused by the contact asperities in pure elasticity. Further, it is found that contact plasticity effects and load for onset of first yield are strongly dependent on roughness amplitude, with higher plasticity effects and lower yield-onset load at higher roughness amplitudes.

  13. An intelligent knowledge mining model for kidney cancer using rough set theory.

    PubMed

    Durai, M A Saleem; Acharjya, D P; Kannan, A; Iyengar, N Ch Sriman Narayana

    2012-01-01

    Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis such as relative importance of symptoms, varied symptom pattern and the relation between diseases themselves. Rough set approach has two major advantages over the other methods. First, it can handle different types of data such as categorical, numerical etc. Secondly, it does not make any assumption like probability distribution function in stochastic modeling or membership grade function in fuzzy set theory. It involves pattern recognition through logical computational rules rather than approximating them through smooth mathematical functional forms. In this paper we use rough set theory as a data mining tool to derive useful patterns and rules for kidney cancer faulty diagnosis. In particular, the historical data of twenty five research hospitals and medical college is used for validation and the results show the practical viability of the proposed approach.

  14. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research is based on Design of Experiments (DOE) analysis. Response Surface Methodology (RSM), one of the most practical and effective techniques for developing a process model, is used in this work. Although RSM has been used for optimization of the laser process, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using the RSM process. The input parameters evaluated are focal length, power supply and cutting speed; the output responses are kerf width, surface roughness and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a laser cutting process model using the Taguchi L9 orthogonal array methodology was proposed.

  15. 75 FR 3753 - Tishomingo National Wildlife Refuge, Comprehensive Conservation Plan, Johnston County, OK

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... equivalent to existing restoration practices. Recreational opportunities would continue to be limited to... improved or expanded to accommodate more visitors. Current habitat management practices would continue... management practices would contribute to ongoing monitoring and modification of Refuge resources for years to...

  16. Equivalence relations in individuals with language limitations and mental retardation.

    PubMed Central

    O'Donnell, Jennifer; Saunders, Kathryn J

    2003-01-01

    The study of equivalence relations exhibited by individuals with mental retardation and language limitations holds the promise of providing information of both theoretical and practical significance. We reviewed the equivalence literature with this population, defined in terms of subjects having moderate, severe, or profound mental retardation. The literature includes 55 such individuals, most of whom showed positive outcomes on equivalence tests. The results to date suggest that naming skills are not necessary for positive equivalence test outcomes. Thus far, however, relatively few subjects with minimal language have been studied. Moreover, we suggest that the scientific contributions of studies in this area would be enhanced with better documentation of language skills and other subject characteristics. With recent advances in laboratory procedures for establishing the baseline performances necessary for equivalence tests, this research area is poised for rapid growth. PMID:13677612

  17. Beach profile modification and sediment transport by ice: an overlooked process on Lake Michigan

    USGS Publications Warehouse

    Barnes, P.W.; Kempema, E.W.; Reimnitz, E.; McCormick, M.; Weber, W.S.; Hayden, E.C.

    1993-01-01

    Coastal lake ice includes a belt of mobile brash and slush ice and a stable nearshore-ice complex (NIC). Sediment concentrations indicate that the NIC and the belt of brash and slush contain 180 to 280 t (113 to 175 m3) of sand per kilometer of coast. This static sediment load is roughly equivalent to the average amount of sand eroded from the bluffs and to the amount accumulating in the deep lake basin each year. Sediment is being rafted alongshore in the mobile brash and slush at rates of 10 to 30 cm/sec. -from Authors

  18. Improved treatment of global positioning system force parameters in precise orbit determination applications

    NASA Technical Reports Server (NTRS)

    Vigue, Y.; Lichten, S. M.; Muellerschoen, R. J.; Blewitt, G.; Heflin, M. B.

    1993-01-01

    Data collected from a worldwide 1992 experiment were processed at JPL to determine precise orbits for the satellites of the Global Positioning System (GPS). A filtering technique was tested to improve modeling of solar-radiation pressure force parameters for GPS satellites. The new approach improves orbit quality for eclipsing satellites by a factor of two, with typical results in the 25- to 50-cm range. The resultant GPS-based estimates for geocentric coordinates of the tracking sites, which include the three DSN sites, are accurate to 2 to 8 cm, roughly equivalent to 3 to 10 nrad of angular measure.

  19. Serial Escape System For Aircraft Crews

    NASA Technical Reports Server (NTRS)

    Wood, Kenneth E.

    1990-01-01

    Emergency escape system for aircraft and aerospace vehicles ejects up to seven crewmembers, one by one, within 120 s. Intended for emergencies in which disabled craft still in stable flight at no more than 220 kn (113 m/s) equivalent airspeed and sinking no faster than 110 ft/s (33.5 m/s) at altitudes up to 50,000 ft (15.2 km). Ejection rockets load themselves from magazine after each crewmember ejected. Jumpmaster queues other crewmembers and helps them position themselves on egress ramp. Rockets pull crewmembers clear of aircraft structure. Provides orderly, controlled exit and avoids ditching at sea or landing in rough terrain.

  20. Metal stocks and sustainability

    PubMed Central

    Gordon, R. B.; Bertram, M.; Graedel, T. E.

    2006-01-01

    The relative proportions of metal residing in ore in the lithosphere, in use in products providing services, and in waste deposits measure our progress from exclusive use of virgin ore toward full dependence on sustained use of recycled metal. In the U.S. at present, the copper contents of these three repositories are roughly equivalent, but metal in service continues to increase. Providing today's developed-country level of services for copper worldwide (as well as for zinc and, perhaps, platinum) would appear to require conversion of essentially all of the ore in the lithosphere to stock-in-use plus near-complete recycling of the metals from that point forward. PMID:16432205

  1. Study on steady state wind and turbulence environments. [structure of wakes near buildings

    NASA Technical Reports Server (NTRS)

    Brundidge, K. C.

    1977-01-01

    The structure of wakes and how this structure is related to the size and shape of buildings and other obstacles, and to ambient winds, was investigated. Mean values of natural atmospheric flow were obtained and used in conjunction with theoretical relationships developed by dimensional analysis to establish a model of the flow in the wake. Results indicate that conventional and V/STOL aircraft passing through the wake during takeoff and landing would experience not only a change in turbulence level, but also a change in mean wind speed of a magnitude roughly equivalent to that of the eddy components.

  2. Distal alluvial fan sediments in early Proterozoic red beds of the Wilgerivier formation, Waterberg Group, South Africa

    NASA Astrophysics Data System (ADS)

    Van Der Neut, M.; Eriksson, P. G.; Callaghan, C. C.

    The 1900-1700 Ma Waterberg Group belongs to a series of southern African cratonic cover sequences of roughly equivalent age. Red beds of the Wilgerivier Formation comprise sandstones, interbedded with subordinate conglomerates and minor mudrocks. These immature sedimentary rocks exhibit lenticular bedding, radial palaeocurrent patterns and features indicative of both streamflow and gravity-flow deposition. A distal wet alluvial fan palaeoenvironmental setting is envisaged, with fan-deltas forming where alluvial lobes prograded into a lacustrine basin. Intrastratal, diagenetic alteration of ferromagnesian detrital grains and ferruginous grain coatings led to the red colouration of the Wilgerivier sediments.

  3. How does sediment affect the hydraulics of bedrock-alluvial rivers?

    NASA Astrophysics Data System (ADS)

    Hodge, Rebecca; Hoey, Trevor; Maniatis, George; Leprêtre, Emilie

    2016-04-01

    Relationships between flow, sediment transport and channel morphology are relatively well established in coarse-grained alluvial channels. Developing equivalent relationships for bedrock-alluvial channels is complicated by the two different components that comprise the channel morphology: bedrock and sediment. These two components usually have very different response times to hydraulic forcing, meaning that the bedrock morphology may be inherited from previous conditions. The influence of changing sediment cover on channel morphology and roughness will depend on the relative magnitudes of the sediment size and the spatial variations in bedrock elevation. We report results from experiments in a 0.9m wide flume designed to quantify the interactions between flow and sediment patch morphology using two contrasting bedrock topographies. The first topography is a plane bed with sand-scale roughness, and the second is a 1:10 scale, 3D printed, model of a bedrock channel with spatially variable roughness (standard deviation of elevations = 12 mm in the flume). In all experiments, a sediment pulse was added to the flume (D50 between 7 and 15 mm) and sediment patches were allowed to stabilise under constant flow conditions. The flow was then incrementally increased in order to identify the discharges at which sediment patches and isolated grains were eroded. In the plane bed experiments ˜20% sediment cover is sufficient to alter the channel hydraulics through the increased roughness of the bed; this impact is expressed as the increased discharge at which isolated grains are entrained. In the scaled bed experiments, partial sediment cover decreased local flow velocities on a relatively smooth area of the bed. At the scale of the entire channel, the bed morphology, and the hydraulics induced by it, was a primary control on sediment cover stability at lower sediment inputs. At higher inputs, where sediment infilled the local bed topography, patches were relatively more stable, suggesting an increased impact on the hydraulics and the role of grain-grain interactions. We draw together these experiments using a theoretical framework to express the impact of sediment cover on channel roughness and hence hydraulics.

  4. 7 CFR 51.759 - U.S. No. 3.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... colored; (4) Rough texture, not seriously bumpy; (5) Similar varietal characteristics; and, (6) Slightly...

  5. The Environmental Cost of Misinformation: Why the Recommendation to Use Elevated Temperatures for Handwashing is Problematic

    PubMed Central

    Carrico, Amanda R.; Spoden, Micajah; Wallston, Kenneth A.; Vandenbergh, Michael P.

    2013-01-01

    Multiple government and health organizations recommend the use of warm or hot water in publications designed to educate the public on best practices for washing one’s hands. This is despite research suggesting that the use of an elevated water temperature does not improve handwashing efficacy, but can cause hand irritation. There is reason to believe that the perception that warm or hot water is more effective at cleaning one’s hands is pervasive, and may be one factor that is driving up unnecessary energy consumption and greenhouse gas emissions. We examine handwashing practices and beliefs about water temperature using a survey of 510 adults in the United States. The survey included measures of handwashing frequency, duration, the proportion of time an elevated temperature was used, and beliefs about water temperature and handwashing efficacy. We also estimate the energy consumed and resultant carbon dioxide equivalent emissions (CO2eq) in the U.S. due to the use of elevated temperatures during handwashing. Participants used an elevated temperature 64% of the time, causing 6.3 million metric tons (MMt) of CO2eq which is 0.1% of total annual emissions and 0.3% of commercial and residential sector emissions. Roughly 69% of the sample believed that elevated temperatures improve handwashing efficacy. Updating these beliefs could prevent 1 MMt of CO2eq annually, exceeding the total emissions from many industrial sources in the U.S. including the Lead and Zinc industries. In addition to causing skin irritation, the recommendation to use an elevated temperature during handwashing contributes to another major threat to public health—climate change. Health and consumer protection organizations should consider advocating for the use of a “comfortable” temperature rather than warm or hot water. PMID:23814480
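
    A quick consistency check on these figures (an illustrative calculation, not taken from the paper): if 6.3 MMt of CO2eq is 0.1% of total annual U.S. emissions, the implied total is 6.3 / 0.001 = 6,300 MMt, i.e. roughly 6.3 billion metric tons of CO2eq per year, which is in line with commonly cited totals for U.S. greenhouse gas emissions in that period.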

  6. 21 CFR 26.15 - Monitoring continued equivalence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM... COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.15 Monitoring... number of joint inspections; and the conduct of common training sessions. ...

  7. 21 CFR 26.15 - Monitoring continued equivalence.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM... COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.15 Monitoring... number of joint inspections; and the conduct of common training sessions. ...

  8. 40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... CATEGORIES National Emission Standards for Hazardous Air Pollutants: Organic Liquids Distribution (Non...

  9. 40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... CATEGORIES National Emission Standards for Hazardous Air Pollutants: Organic Liquids Distribution (Non...

  10. Calibration factors for the SNOOPY NP-100 neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Moscu, D. F.; McNeill, F. E.; Chase, J.

    2007-10-01

    Within CANDU nuclear power facilities, only a small fraction of workers are exposed to neutron radiation. For these individuals, roughly 4.5% of the total radiation equivalent dose is the result of exposure to neutrons. When this figure is considered across all workers receiving external exposure of any kind, only 0.25% of the total radiation equivalent dose is the result of exposure to neutrons. At many facilities, the NP-100 neutron dosimeter, manufactured by Canberra Industries Incorporated, is employed in both direct and indirect dosimetry methods. Also known as "SNOOPY", these detectors undergo calibration, which results in a calibration factor relating the neutron count rate to the ambient dose equivalent rate, using a standard Am-Be neutron source. Using measurements presented in a technical note, readings from the dosimeter for six different neutron fields in six source-detector orientations were used, to determine a calibration factor for each of these sources. The calibration factor depends on the neutron energy spectrum and the radiation weighting factor to link neutron fluence to equivalent dose. Although the neutron energy spectra measured in the CANDU workplace are quite different than that of the Am-Be calibration source, the calibration factor remains constant - within acceptable limits - regardless of the neutron source used in the calibration; for the specified calibration orientation and current radiation weighting factors. However, changing the value of the radiation weighting factors would result in changes to the calibration factor. In the event of changes to the radiation weighting factors, it will be necessary to assess whether a change to the calibration process or resulting calibration factor is warranted.
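
    A minimal sketch of how a count-rate calibration factor of this kind is applied in practice is given below; the function name and the numerical values are illustrative assumptions, not data from the technical note.

      # Illustrative only: convert a SNOOPY count rate into an ambient dose
      # equivalent rate using an Am-Be-derived calibration factor. The values
      # are assumed for demonstration, not measured calibration data.
      def ambient_dose_rate(count_rate_cps, cal_factor_cps_per_uSv_h):
          """Return the ambient dose equivalent rate H*(10) in uSv/h."""
          return count_rate_cps / cal_factor_cps_per_uSv_h

      cal_factor = 2.5   # counts per second per (uSv/h), assumed
      counts = 10.0      # measured count rate in counts per second, assumed
      print(ambient_dose_rate(counts, cal_factor), "uSv/h")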

  11. Distribution of uranium in the Bisbee district, Cochise County, Arizona

    USGS Publications Warehouse

    Wallace, Stewart R.

    1956-01-01

    The Bisbee district has been an important source of copper for many years, and substantial amounts of lead and zinc ore and minor amounts of manganese ore have been mined during certain periods. The copper deposits occur both as low-grade disseminated ore in the Sacramento Hill stock and as massive sulfide (and secondary oxide and carbonate) replacement bodies in Paleozoic limestones that are intruded by the stock and related igneous bodies. The lead-zinc production has come almost entirely from limestone replacement bodies. The disseminated ore exhibits no anomalous radioactivity, and samples from the Lavender pit contain from 0.002 to less than 0.001 percent equivalent uranium. The limestone replacement ores are distinctly radioactive and stoping areas can be readily distinguished from unmineralized ground on the basis of radioactivity alone. The equivalent uranium content of the copper replacement ores ranges from 0.002 to 0.014 percent and averages about 0.005 percent; the lead-zinc replacement ores average more than 0.007 percent equivalent uranium. Most of the uranium in the copper ores of the district is retained in the smelter slag of a residual concentrate; the slag contains about 0.009 percent equivalent uranium. Uranium carried off each day by acid mine drainage is roughly equal to 1 percent of that being added to the slag dump. Although the total amount of uranium in the district is large, no minable concentrations of ore-grade material are known; samples of relatively high-grade material represent only small fractions of tons at any one locality.

  12. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    NASA Astrophysics Data System (ADS)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has proved to be a serious threat to compressor performance during operation. Current research on fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not account for realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers that accounts for blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, a support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors that accounts for blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
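
    The surrogate-plus-Monte-Carlo step described above can be sketched in a few lines of Python; the roughness parameterization and the placeholder "cfd_efficiency" function below are assumptions for illustration only, not the authors' CFD model or data.

      # Sketch of SVR metamodel + Monte Carlo uncertainty analysis, assuming a
      # three-parameter roughness description and a cheap stand-in for the CFD solver.
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)

      def cfd_efficiency(roughness):
          # placeholder for an expensive CFD evaluation of impeller efficiency
          return 0.90 - 0.8 * roughness.mean() - 0.3 * roughness.std()

      # 1. Training data from a modest number of "CFD" runs
      X_train = rng.uniform(0.0, 0.02, size=(40, 3))   # roughness parameters (mm), assumed range
      y_train = np.array([cfd_efficiency(x) for x in X_train])

      # 2. Fit the SVR metamodel
      surrogate = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(X_train, y_train)

      # 3. Monte Carlo simulation on the cheap surrogate
      X_mc = rng.uniform(0.0, 0.02, size=(100_000, 3))
      eff = surrogate.predict(X_mc)
      print(f"mean efficiency {eff.mean():.4f}, std {eff.std():.4f}")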

  13. 7 CFR 51.1911 - Damaged.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... the fruit. Such scars damage the tomato when they are rough or deep, or when channels extend into the...

  14. Effects of surface polishing on the microstrain behavior of telescope mirror materials

    NASA Technical Reports Server (NTRS)

    Eul, W. A.; Woods, W. W.

    1973-01-01

    Rough ground silicic mirror substrate materials were found in previous investigations to exhibit significant surface yield. This effect was removed by surface etching, a procedure not normally employed in the finishing of optical telescope mirrors. The effects of fine grinding and polishing techniques as well as graded etching are investigated. Torsional shear measurements of yield strain versus stress are made on four candidate mirror substrate materials: polycrystalline silicon, ULE silica 7971, CER-VIT 101, and fused silica 7940. Commonly employed fine grinding and polishing practices are shown to remove a major portion of the surface yield found in rough ground mirror substrate materials.

  15. Using Unmanned Aerial Vehicle (UAV) for spatio-temporal monitoring of soil erosion and roughness in Chania, Crete, Greece

    NASA Astrophysics Data System (ADS)

    Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis

    2016-04-01

    This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main reasons for land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important and affects surface storage and infiltration. Soil roughness is one of the most susceptible to variation in time and space characteristics and depends on different parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed predicted flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution on a sub-decimeter level. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above the ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from UAV were compared in terms of vertical absolute accuracies with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D as well as for the analysis of the surrounding areas. The significant elevation changes from multi-temporal UAV elevation data were used for estimating diachronically soil loss and sediment delivery without installing sediment traps. Concerning roughness, statistical indicators of surface elevation point measurements were estimated and various parameters such as standard deviation of DTM, deviation of residual and standard deviation of prominence were calculated directly from the extracted DTM. Sophisticated statistical filters and elevation indices were developed to quantify both soil erosion and roughness. The applied methodology for monitoring both soil erosion and roughness provides an optimum way of reducing the existing gap between field scale and satellite scale. Keywords : UAV, soil, erosion, roughness, DTM
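
    As an illustration of the kind of elevation-based roughness indicator mentioned above (the file name and window size are assumptions, not the authors' processing chain), the standard deviation of locally detrended DTM elevations can be computed directly from a gridded elevation array:

      # Sketch: roughness as the standard deviation of residuals after removing
      # a local mean surface from a UAV-derived DTM stored as a NumPy grid.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def roughness_std(dtm, window=11):
          trend = uniform_filter(dtm, size=window)   # local mean (moving-average) surface
          return (dtm - trend).std()

      dtm = np.loadtxt("dtm_epoch1.txt")             # hypothetical exported DTM grid
      print("roughness (std of detrended elevations, m):", roughness_std(dtm))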

  16. 12 CFR Appendix A to Part 208 - Capital Adequacy Guidelines for State Member Banks: Risk-Based Measure

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... control over the entity makes it the functional equivalent of a subsidiary), or otherwise require the bank... Practices (Basle Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The... risk equivalent assets, and calculate risk-based capital ratios adjusted for market risk. The risk...

  17. 12 CFR Appendix A to Part 208 - Capital Adequacy Guidelines for State Member Banks: Risk-Based Measure

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... control over the entity makes it the functional equivalent of a subsidiary), or otherwise require the bank... Practices (Basle Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The... risk equivalent assets, and calculate risk-based capital ratios adjusted for market risk. The risk...

  18. 12 CFR Appendix A to Part 208 - Capital Adequacy Guidelines for State Member Banks: Risk-Based Measure

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... control over the entity makes it the functional equivalent of a subsidiary), or otherwise require the bank... Practices (Basle Supervisors' Committee) and endorsed by the Group of Ten Central Bank Governors. The... risk equivalent assets, and calculate risk-based capital ratios adjusted for market risk. The risk...

  19. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    NASA Technical Reports Server (NTRS)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data is used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period of April, 2002 - May, 2006. These estimates exhibit significant seasonal (30 +/- 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km(exp 3) of water over the 50-month span of data. We also combine GRACE and precipitation data set (CMAP, TRMM) to explore the relative contributions of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after anomalously rainy seasons, in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.

  20. Absolute response and noise equivalent power of cyclotron resonance-assisted InSb detectors at submillimeter wavelengths

    NASA Technical Reports Server (NTRS)

    Brown, E. R.; Wengler, M. J.; Phillips, T. G.

    1985-01-01

    Spectra of the responsivity and noise equivalent power (NEP) of liquid-helium-cooled InSb detectors are presented as a function of magnetic field over the spectral range 20-110 cm^-1. The measurements are all made using a Fourier transform spectrometer with thermal sources. The results show a discernible peak in the detector response at the conduction electron cyclotron resonance (CCR) frequency for magnetic fields as low as 3 kG. The magnitude of the responsivity at the resonance peaks is roughly constant with magnetic field and is comparable to the low-frequency hot-electron bolometer response. The NEP at the peaks is found to be comparable to the best long-wavelength results previously reported; for example, NEP = 4.5 x 10^-13 W/Hz^(1/2) was measured at 4.2 K, 6 kG, and 40 cm^-1. The InSb CCR will provide a much improved detector for laboratory spectroscopy, as compared with hot-electron bolometers, in the 20-100 cm^-1 range.

  1. Monitoring sodium removal and delivered dialysis by conductivity.

    PubMed

    Locatelli, F; Di Filippo, S; Manzoni, C; Corti, M; Andrulli, S; Pontoriero, G

    1995-11-01

    As cardiovascular stability and the delivery of the prescribed dialysis "dose" seem to be the main factors in determining the morbidity and mortality of hemodialysis patients today, it is of paramount importance to match hydro-sodium removal with interdialytic load and to verify the delivered dialysis at each session. A specially designed Biofeedback Module (BM--COT Hospal) allows the automatic determination of plasma water conductivity and effective ionic dialysance with no need for blood samples. Using the BM, we evaluated the validity of "conductivity kinetic modelling" (CKM) and the possibility that this may substitute for "sodium kinetic modelling". Moreover, we evaluated the "in vivo" relationship between ionic dialysance and effective urea clearance. Our results demonstrate that: 1) CKM makes it possible to obtain programmed end-dialysis plasma water conductivity with an error of less than +/- 0.14 mS/cm, roughly equivalent to a sodium concentration of +/- 1.4 mEq/L. 2) Ionic dialysance and effective urea clearance are not equivalent but, as the interrelationship between these is known, the BM allows the routine monitoring of delivered dialysis.
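
    The two figures quoted in result 1) imply a conversion slope of about 10 mEq/L of sodium per mS/cm of plasma water conductivity: an end-dialysis conductivity error of +/- 0.14 mS/cm times 10 (mEq/L)/(mS/cm) gives approximately +/- 1.4 mEq/L, which is the stated sodium equivalence.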

  2. Enhanced decomposition offsets enhanced productivity and soil carbon accumulation in coastal wetlands responding to climate change

    USGS Publications Warehouse

    Kirwan, M.L.; Blum, L.K.

    2011-01-01

    Coastal wetlands are responsible for about half of all carbon burial in oceans, and their persistence as a valuable ecosystem depends largely on the ability to accumulate organic material at rates equivalent to relative sea level rise. Recent work suggests that elevated CO2 and temperature warming will increase organic matter productivity and the ability of marshes to survive sea level rise. However, we find that organic decomposition rates increase by about 12% per degree of warming. Our measured temperature sensitivity is similar to studies from terrestrial systems, twice as high as the response of salt marsh productivity to temperature warming, and roughly equivalent to the productivity response associated with elevated CO2 in C3 marsh plants. Therefore, enhanced CO2 and warmer temperatures may actually make marshes less resilient to sea level rise, and tend to promote a release of soil carbon. Simple projections indicate that elevated temperatures will increase rates of sea level rise more than any acceleration in organic matter accumulation, suggesting the possibility of a positive feedback between climate, sea level rise, and carbon emissions in coastal environments.
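
    Restated as a temperature sensitivity coefficient (an equivalent way of expressing the 12% per degree figure, not an additional result from the study), a 12% increase per degree Celsius corresponds to Q10 = 1.12^10, which is approximately 3.1; decomposition roughly triples for a 10 degree C warming.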

  3. 16 CFR 1510.3 - Requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Requirements. 1510.3 Section 1510.3 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS... purposes, the English measurements shall be used. Metric equivalents are included for convenience.) Rattles...

  4. 78 FR 40407 - Structure and Practices of the Video Relay Service Program: Telecommunications Relay Services and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-05

    ...] Structure and Practices of the Video Relay Service Program: Telecommunications Relay Services and Speech-to... telecommunications relay services (TRS) program continues to offer functional equivalence to all eligible users and... Practices of the Video Relay Service Program; Telecommunications Relay Services and Speech-to-Speech...

  5. 3D fault curvature and fractal roughness: Insights for rupture dynamics and ground motions using a Discontinous Galerkin method

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Gabriel, Alice-Agnes

    2017-04-01

    Natural fault geometries are subject to a large degree of uncertainty. Their geometrical structure is not directly observable and may only be inferred from surface traces, or geophysical measurements. Most studies aiming at assessing the potential seismic hazard of natural faults rely on idealised shaped models, based on observable large-scale features. Yet, real faults are wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. Dynamic rupture simulations aim to capture the observed complexity of earthquake sources and ground-motions. From a numerical point of view, incorporating rough faults in such simulations is challenging - it requires optimised codes able to run efficiently on high-performance computers and simultaneously handle complex geometries. Physics-based rupture dynamics hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Moreover, the simulated ground-motions present many similarities with observed ground-motions records. Thus, such simulations may foster our understanding of earthquake source processes, and help deriving more accurate seismic hazard estimates. In this presentation, the software package SeisSol (www.seissol.org), based on an ADER-Discontinuous Galerkin scheme, is used to solve the spontaneous dynamic earthquake rupture problem. The usage of tetrahedral unstructured meshes naturally allows for complicated fault geometries. However, SeisSol's high-order discretisation in time and space is not particularly suited for small-scale fault roughness. We will demonstrate modelling conditions under which SeisSol resolves rupture dynamics on rough faults accurately. The strong impact of the geometric gradient of the fault surface on the rupture process is then shown in 3D simulations. Following, the benefits of explicitly modelling fault curvature and roughness, in distinction to prescribing heterogeneous initial stress conditions on a planar fault, is demonstrated. Furthermore, we show that rupture extend, rupture front coherency and rupture speed are highly dependent on the initial amplitude of stress acting on the fault, defined by the normalized prestress factor R, the ratio of the potential stress drop over the breakdown stress drop. The effects of fault complexity are particularly pronounced for lower R. By low-pass filtering a rough fault at several cut-off wavelengths, we then try to capture rupture complexity using a simplified fault geometry. We find that equivalent source dynamics can only be obtained using a scarcely filtered fault associated with a reduced stress level. To investigate the wavelength-dependent roughness effect, the fault geometry is bandpass-filtered over several spectral ranges. We show that geometric fluctuations cause rupture velocity fluctuations of similar length scale. The impact of fault geometry is especially pronounced when the rupture front velocity is near supershear. Roughness fluctuations significantly smaller than the rupture front characteristic dimension (cohesive zone size) affect only macroscopic rupture properties, thus, posing a minimum length scale limiting the required resolution of 3D fault complexity. Lastly, the effect of fault curvature and roughness on the simulated ground-motions is assessed. 
Despite employing a simple linear slip weakening friction law, the simulated ground-motions compare well with estimates from ground motions prediction equations, even at relatively high frequencies.
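
    For reference, with the linear slip-weakening friction mentioned above the relative prestress ratio is commonly written as R = (tau_0 - mu_d*sigma_n) / ((mu_s - mu_d)*sigma_n), i.e. the potential (dynamic) stress drop divided by the breakdown stress drop, where tau_0 is the initial shear traction, sigma_n the normal traction, and mu_s, mu_d the static and dynamic friction coefficients. This is the standard form of the definition; the exact normalization used in the simulations may differ.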

  6. Influence of Si wafer thinning processes on (sub)surface defects

    NASA Astrophysics Data System (ADS)

    Inoue, Fumihiro; Jourdain, Anne; Peng, Lan; Phommahaxay, Alain; De Vos, Joeri; Rebibis, Kenneth June; Miller, Andy; Sleeckx, Erik; Beyne, Eric; Uedono, Akira

    2017-05-01

    Wafer-to-wafer three-dimensional (3D) integration with minimal Si thickness can produce interacting multiple devices with significantly scaled vertical interconnections. Realizing such a thin 3D structure, however, depends critically on the surface and subsurface of the remaining backside Si after the thinning processes. The Si (sub)surface after mechanical grinding has already been characterized extensively down to a few dozen μm. Here, we extend the characterization of the Si (sub)surface to 5 μm thickness after the thinning process on dielectric-bonded wafers. The subsurface defects and damage layer were investigated after grinding, chemical mechanical polishing (CMP), wet etching and plasma dry etching. The (sub)surface defects were characterized using transmission microscopy, atomic force microscopy, and positron annihilation spectroscopy. Although grinding provides the fastest removal rate of Si, the resulting surface roughness was not compatible with subsequent processing. Furthermore, mechanical damage such as dislocations and amorphous Si cannot be reduced regardless of Si thickness and thin-wafer handling systems. CMP after grinding removed this grinding damage effectively, even though the removal amount is only 1 μm. For Si thinning towards 5 μm using grinding and CMP, the (sub)surface shows atomic-scale roughness with no detectable vacancies. For grinding + dry etch, vacancy defects were detected in the subsurface at around 0.5-2 μm. The surface finished by wet etching remains at the nm scale in the strained region. By inserting a CMP step between grinding and dry etch it is possible to significantly reduce not only the roughness but also the remaining subsurface vacancies. The surface produced by grinding + CMP + dry etching gives a monovacancy result equivalent to that of grinding + CMP. This combination of thinning processes allows the development of extremely thin 3D integration devices with minimal roughness and vacancy content.

  7. Chemicals from the Practice of Healthcare: Challenges and Unknowns Posed by Residues in the Environment

    EPA Science Inventory

    Medications have unique signatures - real and metaphorical fingerprints, footprints, and shadows. Signatures imparted by manufacturers use distinctive combinations of shapes, colors, and imprints. These serve as rough first tests to aid in visually identifying the types and quant...

  8. The difference between “equivalent” and “not different”

    DOE PAGES

    Anderson-Cook, Christine M.; Borror, Connie M.

    2015-10-27

    Often, experimenters wish to establish that populations of units can be considered equivalent to each other, in order to leverage improved knowledge about one population for characterizing the new population, or to establish the comparability of items. Equivalence tests have existed for many years, but their use in industry seems to have been largely restricted to biomedical applications, such as for assessing the equivalence of two drugs or protocols. We present the fundamentals of equivalence tests, compare them to traditional two-sample and ANOVA tests that are better suited to establishing differences in populations, and propose the use of a graphical summary to compare p-values across different thresholds of practically important differences.
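
    A minimal sketch of the two-one-sided-tests (TOST) form of an equivalence test, which is the standard way such tests are carried out; the simulated data and the +/- 0.5 practical-equivalence margin below are assumptions for demonstration, not taken from the paper.

      # TOST equivalence test: declare equivalence only if BOTH one-sided tests reject.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      x = rng.normal(10.0, 1.0, 30)   # population A (simulated)
      y = rng.normal(10.1, 1.0, 30)   # population B (simulated)
      delta = 0.5                     # largest difference considered practically unimportant

      diff = x.mean() - y.mean()
      se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
      df = len(x) + len(y) - 2        # pooled df (Welch df would also be reasonable)

      p_lower = stats.t.sf((diff + delta) / se, df)    # H0: mean difference <= -delta
      p_upper = stats.t.cdf((diff - delta) / se, df)   # H0: mean difference >= +delta
      p_tost = max(p_lower, p_upper)
      print(f"TOST p-value: {p_tost:.3f} (equivalent at the 5% level: {p_tost < 0.05})")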

  9. Preliminary wing model tests in the variable density wind tunnel of the National Advisory Committee for Aeronautics

    NASA Technical Reports Server (NTRS)

    Munk, Max M

    1926-01-01

    This report contains the results of a series of tests with three wing models. By changing the section of one of the models and painting the surface of another, the number of models tested was increased to five. The tests were made in order to obtain some general information on the air forces on wing sections at a high Reynolds number and in particular to make sure that the Reynolds number is really the important factor, and not other things like the roughness of the surface and the sharpness of the trailing edge. The few tests described in this report seem to indicate that the air forces at a high Reynolds number are not equivalent to respective air forces at a low Reynolds number (as in an ordinary atmospheric wind tunnel). The drag appears smaller at a high Reynolds number and the maximum lift is increased in some cases. The roughness of the surface and the sharpness of the trailing edge do not materially change the results, so that we feel confident that tests with systematic series of different wing sections will bring consistent results, important and highly useful to the designer.

  10. Evolutionary potential of root chemical defense: genetic correlations with shoot chemistry and plant growth.

    PubMed

    Parker, J D; Salminen, J-P; Agrawal, Anurag A

    2012-08-01

    Root herbivores can affect plant fitness, and roots often contain the same secondary metabolites that act as defenses in shoots, but the ecology and evolution of root chemical defense have been little investigated. Here, we investigated genetic variance, heritability, and correlations among defensive phenolic compounds in shoot vs. root tissues of common evening primrose, Oenothera biennis. Across 20 genotypes, there were roughly similar concentrations of total phenolics in shoots vs. roots, but the allocation of particular phenolics to shoots vs. roots varied along a continuum of genotype growth rate. Slow-growing genotypes allocated 2-fold more of the potential pro-oxidant oenothein B to shoots than roots, whereas fast-growing genotypes had roughly equivalent above and belowground concentrations. Phenolic concentrations in both roots and shoots were strongly heritable, with mostly positive patterns of genetic covariation. Nonetheless, there was genotype-specific variation in the presence/absence of two major ellagitannins (oenothein A and its precursor oenothein B), indicating two different chemotypes based on alterations in this chemical pathway. Overall, the presence of strong genetic variation in root defenses suggests ample scope for the evolution of these compounds as defenses against root herbivores.

  11. Resonance condition and low-frequency quasi-periodic oscillations of the outbursting source H1743-322

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Sandip K.; Mondal, Santanu; Debnath, Dipak

    2015-10-01

    It has long been proposed that low-frequency quasi-periodic oscillations (QPOs) in stellar-mass black holes or their equivalents in supermassive black holes are the result of resonances between infall and cooling timescales. We explicitly compute these two timescales in a generic situation to show that resonances are easily achieved. During an outburst of a transient black hole candidate, the accretion rate of the Keplerian disc as well as the geometry of the Comptonizing cloud change very rapidly. During some period, a resonance condition between the cooling timescale (predominantly by Comptonization) and the infall timescale of the Comptonizing cloud is roughly satisfied. This leads to low-frequency quasi-periodic oscillations (LFQPOs) of the Compton cloud and the consequent oscillation of hard X-rays. In this paper, we explicitly follow black hole candidate H1743-322 during its 2010 outburst. We compute the Compton cooling time and infall time over several days and show that QPOs take place when these two roughly agree within ˜50 per cent, i.e., the resonance condition is generally satisfied. We also confirm that for the sharper LFQPOs (i.e. higher Q-factors) the ratio of the two timescales is very close to 1.

  12. Tailoring optical properties of TiO2-Cr co-sputtered films using swift heavy ions

    NASA Astrophysics Data System (ADS)

    Gupta, Ratnesh; Sen, Sagar; Phase, D. M.; Avasthi, D. K.; Gupta, Ajay

    2018-05-01

    The effect of 100 MeV Au7+ ion irradiation on the structure and optical properties of Cr-doped TiO2 films has been studied using X-ray photoelectron spectroscopy, soft X-ray absorption spectroscopy, UV-Visible spectroscopy, X-ray reflectivity, and atomic force microscopy. X-ray reflectivity measurements implied that the film thickness decreases as a function of ion fluence while the surface roughness increases. The variation in surface roughness is well correlated with the AFM results. Ion irradiation decreases the band gap energy of the film. Swift heavy ion irradiation enhances the oxygen vacancies in the film, and the extra electrons in the vacancies act as donor-like states. In the valence band spectrum, there is a shift in the Ti 3d peak towards lower energies, and the shift is equivalent to the band gap energy obtained from the UV spectrum. Evidence for band bending is also provided by the corresponding Ti XPS peak, which exhibits a shift towards lower energy due to the downward band bending. X-ray absorption studies on the O K and Cr L3,2 edges clearly indicate that swift heavy ion irradiation induces the formation of Cr clusters in the TiO2 matrix.

  13. Investigation of rapidly solidified aluminum by using diamond turning and a magnetorheological finishing process

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Wei-Yao; Kuo, Ching-Hsiang; Abou-El-Hossein, Khaled; Otieno, Timothy

    2015-08-01

    Metal mirrors have long been used in optical applications. Aluminum 6061 in particular is often considered the preferred material for manufacturing optical components for ground-based astronomical applications, one reason being its high specific stiffness and excellent thermal properties. A large body of data exists for this material: commercially available aluminum 6061 finished by single point diamond turning (SPDT) and polishing can achieve surface roughness values of approximately 2 to 4 nm, which is adequate for applications in the infrared spectral range but not for shorter wavelengths. A novel aluminum material fabricated by a rapid solidification process, equivalent in grade to the conventional aluminum 6061 alloy, has been used in optical applications in recent years because of its smaller grain size. In this study, the surface quality of the rapidly solidified aluminum after single point diamond turning followed by a magnetorheological finishing (MRF) process is investigated and compared with conventional aluminum 6061. The surface roughness Ra was evaluated using white light interferometry. Finally, indicators such as the optimal combination of fabrication parameters and the resulting optical performance are discussed.

  14. An algorithm for simulating fracture of cohesive-frictional materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nukala, Phani K; Sampath, Rahul S; Barai, Pallab

    Fracture of disordered frictional granular materials is dominated by an interfacial failure response characterized by de-cohesion followed by frictional sliding. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until a failure threshold, which is then followed by a constant response at a threshold lower than the initial failure threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.
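
    As a sketch of the interfacial law the CFRFM assigns to each fuse, the following Python function reproduces the two-branch response described above: linear up to a failure threshold, then a constant plateau at a lower (frictional) threshold. All numerical parameter values are illustrative assumptions, not values from the paper.

    ```python
    def cfrfm_fuse_response(strain, stiffness=1.0, failure_strain=1.0, friction_fraction=0.6):
        """Two-branch fuse law: linear (cohesive) up to the failure threshold, then a
        constant frictional plateau below the initial failure stress.
        Parameter values are illustrative assumptions."""
        failure_stress = stiffness * failure_strain
        if strain <= failure_strain:
            return stiffness * strain                 # cohesive, linear stress-strain branch
        return friction_fraction * failure_stress     # frictional sliding at a lower threshold

    # Trace one fuse through and beyond its failure threshold.
    for eps in (0.25, 0.5, 1.0, 1.5, 2.0):
        print(f"strain = {eps:.2f} -> stress = {cfrfm_fuse_response(eps):.2f}")
    ```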

  15. Deployable reflector antenna performance optimization using automated surface correction and array-feed compensation

    NASA Technical Reports Server (NTRS)

    Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.

    1992-01-01

    Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.

  16. High school physics enrollments by socioeconomic status and type of class

    NASA Astrophysics Data System (ADS)

    White, Susan C.

    2016-01-01

    Since September, we have been examining the relationship between high school physics enrollments by race/ethnicity and socioeconomic status. We have seen that the number of seniors and the number of physics teachers is roughly evenly divided into each type of school: those where students are typically better off economically than their peers at other schools in the area, those where students' economic status is typical for the area, and those where students are worse off. We have seen that even though the number of seniors and the number of physics teachers is roughly equal, the number of students taking physics is not. As we see in the figure, the enrollments in various types of physics classes are not equivalent either. While the total number of students taking Physics First or conceptual physics is about the same, the number of students in advanced classes—honors, AP, or second-year physics—is heavily skewed toward the better off schools. It is hard to know the direction of any cause and effect, but it is clear the students attending better off schools are more likely to take physics and are more likely to take more advanced physics classes in high school.

  17. Testing the equivalence principle in the field of the Earth: Particle physics at masses below 1 μeV\\?

    NASA Astrophysics Data System (ADS)

    Adelberger, E. G.; Stubbs, C. W.; Heckel, B. R.; Su, Y.; Swanson, H. E.; Smith, G.; Gundlach, J. H.; Rogers, W. F.

    1990-11-01

    A sensitive, systematic search for feeble, macroscopic forces arising from the exchange of hypothetical ultra-low-mass bosons was made by observing the differential acceleration of two different test body pairs toward two different sources. Our differential accelerometer, a highly symmetric, continuously rotating torsion balance, incorporated several innovations that effectively suppressed systematic errors. All known sources of systematic error were demonstrated to be negligible in comparison to our fluctuating errors, which are roughly 7 times larger than the fundamental limit set by the fact that we observe an oscillator at room temperature with a given damping time. Our 1σ limits on the horizontal differential acceleration of Be/Al or Be/Cu test body pairs in the field of the Earth, Δa⊥ = (2.1 ± 2.1)×10^-11 cm s^-2 and Δa⊥ = (0.8 ± 1.7)×10^-11 cm s^-2, respectively, set improved bounds on Yukawa interactions mediated by bosons with masses ranging between m_b c^2 ≈ 3×10^-18 and m_b c^2 ≈ 1×10^-6 eV. For example, our constraints on infinite-range vector interactions with charges of B and of B-L are roughly 10 and 2 times more sensitive than those obtained by Roll, Krotkov, and Dicke using the field of the Sun. Furthermore, we set stringent constraints down to λ = 1 m, while those of solar experiments are weak for λ < 1 AU. In terms of the weak equivalence principle in the field of the Earth, our 1σ result corresponds to m_i/m_g(Cu) - m_i/m_g(Be) = (0.2 ± 1.0)×10^-11. Our results also yield stringent constraints on the nonsymmetric gravitation theory of Moffat and on the anomalous acceleration of antimatter in proposed "quantum gravity" models, and have implications for lunar-ranging tests of the strong equivalence principle. Our 1σ limit on the differential acceleration of Be/Al test body pairs toward a 1.5 Mg Pb laboratory source, Δa = (-0.15 ± 1.31)×10^-10 cm s^-2, provides constraints on Yukawa interactions with ranges down to 10 cm, and on interactions whose charge is B-2L.

  18. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.

  19. Collaboration, not competition: cost analysis of neonatal nurse practitioner plus neonatologist versus neonatologist-only care models.

    PubMed

    Bosque, Elena

    2015-04-01

    Although advanced practice in neonatal nursing is accepted and supported by the American Academy of Pediatrics and the National Association of Neonatal Nurse Practitioners, less than one-half of all states allow independent prescriptive authority for advanced practice nurse practitioners. The purpose of this study was to compare the costs of a collaborative practice model that includes a neonatal nurse practitioner (NNP) plus neonatologist (Neo) with a neonatologist-only (Neo-Only) practice in Washington state. Published Internet median salary figures from 3 sources were averaged to produce mean ± SD provider salaries, and costs for each care model were calculated in this descriptive, comparative study. Median NNP versus Neo salaries were $99,773 ± $5206 versus $228,871 ± $9654, respectively (P < .0001). The NNP + Neo model (5 NNP/3 Neo full-time equivalents [FTEs]) cost $1,185,475 versus $1,830,960 for the Neo-Only model (8 Neo FTEs). The NNP + Neo practice model with 8 FTEs therefore suggests a cost savings, with assumed equivalent reimbursement, of $645,485/year. These results may provide the impetus for more states to adopt broader scope-of-practice licensure for NNPs. These data may provide rationale for analysis of actual costs and outcomes of collaborative practice.
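
    The cost comparison reduces to simple arithmetic over salaries and FTE counts. The sketch below reproduces it from the figures quoted above; the small residual differences from the published totals presumably reflect rounding in the reported median salaries.

    ```python
    # Reproduce the care-model cost comparison from the reported median salaries and FTE mixes.
    nnp_salary = 99_773    # median NNP salary (USD/year)
    neo_salary = 228_871   # median neonatologist salary (USD/year)

    nnp_plus_neo = 5 * nnp_salary + 3 * neo_salary   # NNP + Neo model: 5 NNP and 3 Neo FTEs
    neo_only     = 8 * neo_salary                    # Neo-Only model: 8 Neo FTEs

    print(f"NNP + Neo: ${nnp_plus_neo:,}")             # ~1,185,478 (abstract: $1,185,475)
    print(f"Neo-Only:  ${neo_only:,}")                 # ~1,830,968 (abstract: $1,830,960)
    print(f"Savings:   ${neo_only - nnp_plus_neo:,}")  # ~645,490  (abstract: $645,485/year)
    ```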

  20. Form drag in rivers due to small-scale natural topographic features: 2. Irregular sequences

    USGS Publications Warehouse

    Kean, J.W.; Smith, J.D.

    2006-01-01

    The size, shape, and spacing of small-scale topographic features found on the boundaries of natural streams, rivers, and floodplains can be quite variable. Consequently, a procedure for determining the form drag on irregular sequences of different-sized topographic features is essential for calculating near-boundary flows and sediment transport. A method for carrying out such calculations is developed in this paper. This method builds on the work of Kean and Smith (2006), which describes the flow field for the simpler case of a regular sequence of identical topographic features. Both approaches model topographic features as two-dimensional elements with Gaussian-shaped cross sections defined in terms of three parameters. Field measurements of bank topography are used to show that (1) the magnitude of these shape parameters can vary greatly between adjacent topographic features and (2) the variability of these shape parameters follows a lognormal distribution. Simulations using an irregular set of topographic roughness elements show that the drag on an individual element is primarily controlled by the size and shape of the feature immediately upstream and that the spatial average of the boundary shear stress over a large set of randomly ordered elements is relatively insensitive to the sequence of the elements. In addition, a method to transform the topography of irregular surfaces into an equivalently rough surface of regularly spaced, identical topographic elements also is given. The methods described in this paper can be used to improve predictions of flow resistance in rivers as well as quantify bank roughness.
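
    To make the parameterization concrete, the sketch below draws an irregular sequence of two-dimensional, Gaussian-shaped roughness elements whose shape parameters vary lognormally from element to element, as the bank-topography measurements indicate. The parameter names and numerical values are assumptions for illustration, not those fitted in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_elements = 10

    # Lognormally distributed shape parameters for each Gaussian-shaped element
    # (median values and spreads are assumed, purely for illustration).
    height  = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n_elements)  # crest height (m)
    width   = rng.lognormal(mean=np.log(0.30), sigma=0.5, size=n_elements)  # Gaussian width (m)
    spacing = rng.lognormal(mean=np.log(1.00), sigma=0.5, size=n_elements)  # crest-to-crest spacing (m)

    def element_profile(x, h, w):
        """Gaussian-shaped cross section of a single roughness element."""
        return h * np.exp(-0.5 * (x / w) ** 2)

    # Assemble one realization of the irregular boundary profile.
    x = np.linspace(0.0, spacing.sum(), 2000)
    crests = np.cumsum(spacing)
    boundary = sum(element_profile(x - xc, h, w) for xc, h, w in zip(crests, height, width))
    print(boundary.max(), boundary.mean())
    ```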

  1. Comparison of remote video and diver's direct observations to quantify reef fishes feeding on benthos in coral and rocky reefs.

    PubMed

    Longo, G O; Floeter, S R

    2012-10-01

    This study compared remote underwater video and traditional direct diver observations to assess reef fish feeding impact on benthos across multiple functional groups within different trophic categories (e.g. herbivores, zoobenthivores and omnivores) and in two distinct reef systems: a subtropical rocky reef and a tropical coral reef. The two techniques were roughly equivalent, both detecting the species with higher feeding impact and recording similar bite rates, suggesting that reef fish feeding behaviour at the study areas is not strongly affected by the diver's presence. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  2. Configuring the Long-Baseline Neutrino Experiment

    NASA Astrophysics Data System (ADS)

    Barger, Vernon; Bhattacharya, Atri; Chatterjee, Animesh; Gandhi, Raj; Marfatia, Danny; Masud, Mehedi

    2014-01-01

    We study the neutrino oscillation physics performance of the Long-Baseline Neutrino Experiment in various configurations. In particular, we compare the case of a surface detector at the far site augmented by a near detector, to that with the far site detector placed deep underground but no near detector. In the latter case, information from atmospheric neutrino events is also utilized. For values of θ13 favored by reactor experiments and a 100 kt-yr exposure, we find roughly equivalent sensitivities to the neutrino mass hierarchy, the octant of θ23, and to CP violation. We also find that as the exposure is increased, the near detector helps increase the sensitivity to CP violation substantially more than atmospheric neutrinos.

  3. An estimate of the enroute noise of an advanced turboprop airplane NASA-TM-87302 E-3020 NAS 1.15:87302 HC A02/MF A01

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1986-01-01

    The enroute noise of an Advanced Turboprop powered aircraft was estimated. The estimated noise levels were roughly equivalent in annoyance to the noise 15.24 m from an automobile traveling at 80 km/h. It is felt that these levels would not elicit noise complaints from urban areas during the day but might be a slight annoyance in rural areas or in urban areas at night. Although the enroute noise is not considered a major problem, a reduction in the enroute noise could improve the acceptability of advanced turboprop airplanes.

  4. Assessment of candidate-expendable launch vehicles for large payloads

    NASA Technical Reports Server (NTRS)

    1984-01-01

    In recent years the U.S. Air Force and NASA conducted design studies of 3 expendable launch vehicle configurations that could serve as a backup to the space shuttle--the Titan 34D7/Centaur, the Atlas II/Centaur, and the shuttle-derived SRB-X--as well as studies of advanced shuttle-derived launch vehicles with much larger payload capabilities than the shuttle. The 3 candidate complementary launch vehicles are judged to be roughly equivalent in cost, development time, reliability, and payload-to-orbit performance. Advanced shuttle-derived vehicles are considered viable candidates to meet future heavy lift launch requirements; however, they do not appear likely to result in significant reduction in cost-per-pound to orbit.

  5. Student perceptions of methylphenidate abuse at a public liberal arts college.

    PubMed

    Babcock, Q; Byrne, T

    2000-11-01

    With the ever-increasing diagnosis of attention deficit hyperactivity disorder, methylphenidate has become readily accessible in the college environment. Several properties of methylphenidate indicate abuse liability. A survey regarding the recreational use of methylphenidate was distributed to the student body at a public, liberal arts college. More than 16% of the students reported they had tried methylphenidate recreationally, and 12.7% reported they had taken the drug intranasally. Use of the drug was more common among traditional students than among nontraditional students. Among traditional-age students, reports of methylphenidate use were roughly equivalent to reports of cocaine and amphetamine use. Environmental conditions characteristic of college student life may influence the recreational use of the drug.

  6. Longitudinal-bending mode micromotor using multilayer piezoelectric actuator.

    PubMed

    Yao, K; Koc, B; Uchino, K

    2001-07-01

    Longitudinal-bending mode ultrasonic motors with a diameter of 3 mm were fabricated using stacked multilayer piezoelectric actuators, which were developed in-house from hard lead zirconate titanate (PZT) ceramic. A bending vibration was converted from a longitudinal vibration with a longitudinal-bending coupler. The motors could be operated bidirectionally by changing the driving frequency. Their starting and braking torques were analyzed based on the transient velocity response. With a load with a moment of inertia of 2.5 × 10^-7 kg m^2, the motor showed a maximum starting torque of 127.5 μNm. The braking torque proved to be constant, independent of the motor's driving conditions, and was roughly equivalent to the maximum starting torque achievable with our micromotors.

  7. The 1088 A feature toward reddened stars

    NASA Technical Reports Server (NTRS)

    Federman, S. R.

    1986-01-01

    An analysis of the interstellar feature near 1088 Å in spectra obtained with the Copernicus satellite suggests that neutral chlorine is the absorber. The mean wavelength determined from 15 lines of sight, 1088.052 ± 0.023 Å, compares favorably with the chlorine line at 1088.062 Å. A strong correlation with Cl I 1347 Å indicates an oscillator strength for the 1088 Å line of 0.04. Above a threshold at N(H2) of roughly 10^19 cm^-2, the equivalent width of the 1088 Å feature varies approximately with N(H2). The variation with H2 is similar to the variation of Na I and C I with H2.

  8. Coupling efficiency of laser beam to multimode fiber

    NASA Astrophysics Data System (ADS)

    Niu, Jinfu; Xu, Jianqiu

    2007-06-01

    The coupling efficiency of a laser beam into a multimode fiber is derived from geometrical optics, and the relation between the maximum coupling efficiency and the beam propagation factor M^2 is analyzed. An equivalent factor M_F^2 for the multimode fiber is introduced to characterize its coupling capability. The coupling efficiency of the laser beam into the multimode fiber is then calculated as a function of the ratio M^2/M_F^2 using overlap-integral theory. The optimal coupling efficiency can be roughly estimated from the ratio of M^2 to M_F^2, but with a large error range; the deviation arises from the lack of information about the detailed phase and intensity profiles contained in the beam factor M^2.

  9. An evaluation of the effectiveness of S-5 scratch courses.

    DOT National Transportation Integrated Search

    1976-01-01

    A study was made of the practice of "scratching" the surface course of several new construction projects in Virginia. It was found that sections with the surface course placed in one lift produced a road roughness value of about 3 in./mile higher tha...

  10. Grassland, shrubland and savanna stewardship: where do we go from here?

    USDA-ARS?s Scientific Manuscript database

    Scientific efforts to understand grasslands, shrublands and savannas and thereby develop sustainable management practices are roughly 100 years old. What have we learned in that time? Several assumptions made by scientists and policymakers early in the 20th century have proved mistaken, resulting in...

  11. Hozhooji Hane' = Blessingway. First Edition.

    ERIC Educational Resources Information Center

    Hathale, Roger; Hadley, Linda

    The Rough Rock Medicinemen Training Program prepared this book to preserve Medicine practice of Navajo practitioners. Materials include Navajo transcriptions and English translations of lectures by Roger Hathale, a well known Medicineman. Written primarily for use by Navajo students at secondary and junior college levels, the book contains a full…

  12. Lateral ring metal elastic wheel absorbs shock loading

    NASA Technical Reports Server (NTRS)

    Galan, L.

    1966-01-01

    Lateral ring metal elastic wheel absorbs practically all shock loading when operated over extremely rough terrain and delivers only a negligible shock residue to associated suspension components. The wheel consists of a rigid aluminum assembly to which lateral titanium ring flexible elements with treads are attached.

  13. SU-E-T-567: Neutron Dose Equivalent Evaluation for Pencil Beam Scanning Proton Therapy with Apertures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, C; Nanjing University of Aeronautics and Astronautics, Nanjing; Schuemann, J

    Purpose: To determine the neutron contamination from the aperture in pencil beam scanning during proton therapy. Methods: A Monte Carlo based proton therapy research platform, TOPAS, and the UF-series hybrid pediatric phantoms were used to perform this study. First, pencil beam scanning (PBS) pediatric treatment plans with an average spot size of 10 mm at iso-center were created and optimized for three patients with and without apertures. The plans were then imported into TOPAS. A scripting method was developed to automatically replace the patient CT with a whole body phantom positioned according to the original plan iso-center. The neutron dose equivalent was calculated using organ-specific quality factors for two phantoms resembling a 4-year-old and a 14-year-old patient. Results: The neutron dose equivalent generated by the apertures in PBS is 4–10% of the total neutron dose equivalent for organs near the target, and roughly 40% for organs far from the target. Compared to the neutron dose equivalent from PBS without an aperture, the results show that the neutron dose equivalent with an aperture is reduced in organs near the target and moderately increased in organs located further from the target. This is due to the reduction of the proton dose around the edge of the CTV, which causes fewer neutrons to be generated in the patient. Conclusion: Clinically, for pediatric patients, one might consider adding an aperture to obtain a more conformal treatment plan if the spot size is too large. This work shows the somewhat surprising result that adding an aperture for beam scanning at facilities with large spot sizes reduces, rather than increases, the potential neutron background in regions near the target. Changran Geng is supported by the Chinese Scholarship Council (CSC) and the National Natural Science Foundation of China (Grant No. 11475087).
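
    A minimal sketch of the bookkeeping implied above: per-organ absorbed neutron doses are weighted by organ-specific quality factors to form a dose equivalent, and the aperture's share near and far from the target is taken from the percentages quoted in the abstract. The dose values and quality factors themselves are hypothetical placeholders.

    ```python
    # Hypothetical per-organ absorbed neutron doses (Gy) and quality factors.
    organ_dose_gy  = {"organ_near_target": 2.0e-4, "organ_far_from_target": 5.0e-5}
    quality_factor = {"organ_near_target": 10.0,   "organ_far_from_target": 10.0}

    # Organ dose equivalent (Sv) = absorbed dose x organ-specific quality factor.
    dose_equiv_sv = {o: organ_dose_gy[o] * quality_factor[o] for o in organ_dose_gy}

    # Aperture contribution: ~4-10% of the neutron dose equivalent near the target,
    # roughly 40% far from it (fractions taken from the abstract).
    aperture_fraction = {"organ_near_target": 0.07, "organ_far_from_target": 0.40}
    aperture_sv = {o: dose_equiv_sv[o] * aperture_fraction[o] for o in dose_equiv_sv}

    for organ in dose_equiv_sv:
        print(f"{organ}: total {dose_equiv_sv[organ]:.2e} Sv, from aperture {aperture_sv[organ]:.2e} Sv")
    ```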

  14. Using System Mass (SM), Equivalent Mass (EM), Equivalent System Mass (ESM) or Life Cycle Mass (LCM) in Advanced Life Support (ALS) Reporting

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2003-01-01

    The Advanced Life Support (ALS) program has used a single number, Equivalent System Mass (ESM), for both reporting progress and technology selection. ESM is the launch mass required to provide a space system and is an indicator of launch cost. ESM alone is inadequate for technology selection, which should include other metrics such as Technology Readiness Level (TRL) and Life Cycle Cost (LCC) and should also consider performance and risk. ESM has proven difficult to implement as a reporting metric, partly because it includes non-mass technology selection factors. Since it will not be used exclusively for technology selection, a new reporting metric can be made easier to compute and explain. Systems design trades off performance, cost, and risk, but a risk-weighted cost/benefit metric would be too complex to report. Since life support has fixed requirements, different systems usually have roughly equal performance. Risk is important, since failure can harm the crew, but it is difficult to treat simply. Cost is not easy to estimate, but preliminary space system cost estimates are usually based on mass, which is better estimated than cost. A mass-based cost estimate, similar to ESM, would be a good single reporting metric. The paper defines and compares four mass-based cost estimates: Equivalent Mass (EM), Equivalent System Mass (ESM), Life Cycle Mass (LCM), and System Mass (SM). EM is traditional in life support and includes mass, volume, power, cooling, and logistics. ESM is the specifically defined ALS metric, which adds crew time and possibly other cost factors to EM. LCM is a new metric, a mass-based estimate of LCC measured in mass units. SM includes only the factors of EM that are originally measured in mass: the hardware and logistics mass. All four mass-based metrics usually give similar comparisons. SM is by far the simplest to compute and easiest to explain.
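
    A sketch of how the mass-based metrics compared above could be computed. The equivalency factors (kg per m^3, per kW, and per crew-hour) are placeholders, not the ALS reference values; only the structure, with SM as hardware plus logistics mass, EM adding volume/power/cooling equivalents, and ESM adding a crew-time term, follows the definitions in the text.

    ```python
    def system_mass(hardware_kg, logistics_kg):
        """SM: only the factors originally measured in mass."""
        return hardware_kg + logistics_kg

    def equivalent_mass(hardware_kg, logistics_kg, volume_m3, power_kw, cooling_kw,
                        vol_eq=10.0, power_eq=100.0, cooling_eq=60.0):
        """EM: SM plus volume, power and cooling converted to launch-mass equivalents.
        The equivalency factors here are assumed placeholders (kg/m^3, kg/kW)."""
        return (system_mass(hardware_kg, logistics_kg)
                + volume_m3 * vol_eq + power_kw * power_eq + cooling_kw * cooling_eq)

    def equivalent_system_mass(em_kg, crew_time_hr, crew_time_eq=1.0):
        """ESM: EM plus a crew-time term (assumed kg per crew-hour)."""
        return em_kg + crew_time_hr * crew_time_eq

    sm  = system_mass(500.0, 120.0)
    em  = equivalent_mass(500.0, 120.0, volume_m3=2.0, power_kw=3.0, cooling_kw=3.0)
    esm = equivalent_system_mass(em, crew_time_hr=40.0)
    print(f"SM = {sm:.0f} kg, EM = {em:.0f} kg, ESM = {esm:.0f} kg")
    ```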

  15. Structural, Item, and Test Generalizability of the Psychopathy Checklist-Revised to Offenders with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Morrissey, Catrin; Cooke, David; Michie, Christine; Hollin, Clive; Hogue, Todd; Lindsay, William R.; Taylor, John L.

    2010-01-01

    The Psychopathy Checklist-Revised (PCL-R) is the most widely used measure of psychopathy in forensic clinical practice, but the generalizability of the measure to offenders with intellectual disabilities (ID) has not been clearly established. This study examined the structural equivalence and scalar equivalence of the PCL-R in a sample of 185 male…

  16. 10 CFR Appendix A to Part 40 - Criteria Relating to the Operation of Uranium Mills and the Disposition of Tailings or Wastes...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... associated with the sites, which is equivalent to, to the extent practicable, or more stringent than the... this appendix, the Commission will consider “practicable” and “reasonably achievable” as equivalent... formation, group of formations, or part of a formation capable of yielding a significant amount of ground...

  17. 40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... (Non-Gasoline) Pt. 63, Subpt. EEEE, Table 7 Table 7 to Subpart EEEE of Part 63—Initial Compliance With...

  18. 40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... (Non-Gasoline) Pt. 63, Subpt. EEEE, Table 7 Table 7 to Subpart EEEE of Part 63—Initial Compliance With...

  19. 40 CFR Table 7 to Subpart Eeee of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... equivalent control that meets the requirements in Table 4 to this subpart, item 1.a i. After emptying and... out a leak detection and repair program or equivalent control according to one of the subparts listed... (Non-Gasoline) Pt. 63, Subpt. EEEE, Table 7 Table 7 to Subpart EEEE of Part 63—Initial Compliance With...

  20. Equivalence between short-time biphasic and incompressible elastic material responses.

    PubMed

    Ateshian, Gerard A; Ellis, Benjamin J; Weiss, Jeffrey A

    2007-06-01

    Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response Δt

  1. Contesting the Equivalency of Continuous Sedation until Death and Physician-assisted Suicide/Euthanasia: A Commentary on LiPuma.

    PubMed

    Raho, Joseph A; Miccinesi, Guido

    2015-10-01

    Patients who are imminently dying sometimes experience symptoms refractory to traditional palliative interventions, and in rare cases, continuous sedation is offered. Samuel H. LiPuma, in a recent article in this Journal, argues that continuous sedation until death is equivalent to physician-assisted suicide/euthanasia based on a higher brain neocortical definition of death. We contest his position that continuous sedation involves killing and offer four objections to the equivalency thesis. First, sedation practices are proportional in a way that physician-assisted suicide/euthanasia is not. Second, continuous sedation may not entirely abolish consciousness. Third, LiPuma's particular version of higher brain neocortical death relies on an implausibly weak construal of irreversibility--a position that is especially problematic in the case of continuous sedation. Finally, we explain why continuous sedation until death is not functionally equivalent to neocortical death and, hence, physician-assisted suicide/euthanasia. Concluding remarks review the differences between these two end-of-life practices. © The Author 2015. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Femtosecond laser ablated durable superhydrophobic PTFE films with micro-through-holes for oil/water separation: Separating oil from water and corrosive solutions

    NASA Astrophysics Data System (ADS)

    Yong, Jiale; Fang, Yao; Chen, Feng; Huo, Jinglan; Yang, Qing; Bian, Hao; Du, Guangqing; Hou, Xun

    2016-12-01

    Separating mixtures of water and oil with superhydrophobic porous materials has attracted increasing research interest; however, the surface microstructures and chemical composition of such materials are easily destroyed in harsh environments, causing the materials to lose their superhydrophobicity as well as their oil/water separation function. In this paper, rough microstructures were formed on a polytetrafluoroethylene (PTFE) sheet by femtosecond laser treatment. The rough surfaces showed durable superhydrophobicity and ultralow water adhesion even after prolonged storage in various harsh environments, including strong acid, strong alkali, and high temperature. An array of micro-through-holes was then generated in the rough superhydrophobic PTFE film by a subsequent mechanical drilling process. The resultant sample was successfully applied to oil/water separation owing to its combined superhydrophobicity and superoleophilicity. The designed separation system is also very efficient at separating mixtures of oil and corrosive acid/alkali solutions, demonstrating strong potential for practical application.

  3. Ingestion of an Oral Hyaluronan Solution Improves Skin Hydration, Wrinkle Reduction, Elasticity, and Skin Roughness: Results of a Clinical Study.

    PubMed

    Göllner, Imke; Voss, Werner; von Hehn, Ulrike; Kammerer, Susanne

    2017-10-01

    Intake of oral supplements with the aim of a cutaneous antiaging effect is increasingly common. Hyaluronic acid (HA) is a promising candidate, as it is the key factor for preserving tissue hydration. In our practice study, we evaluated the effect of an oral HA preparation diluted in a cascade-fermented organic whole food concentrate supplemented with biotin, vitamin C, copper, and zinc (Regulatpro Hyaluron) on skin moisture content, elasticity, skin roughness, and wrinkle depths. Twenty female subjects with healthy skin in the age group of 45 to 60 years took the product once daily for 40 days. Different skin parameters were objectively assessed before the first intake, after 20 and after 40 days. Intake of the HA solution led to significant increases in skin elasticity and skin hydration, and to a significant decrease in skin roughness and wrinkle depths. The supplement was well tolerated; no side effects were noted throughout the study.

  4. Ingestion of an Oral Hyaluronan Solution Improves Skin Hydration, Wrinkle Reduction, Elasticity, and Skin Roughness: Results of a Clinical Study

    PubMed Central

    Göllner, Imke; Voss, Werner; von Hehn, Ulrike; Kammerer, Susanne

    2017-01-01

    Intake of oral supplements with the aim of a cutaneous antiaging effect is increasingly common. Hyaluronic acid (HA) is a promising candidate, as it is the key factor for preserving tissue hydration. In our practice study, we evaluated the effect of an oral HA preparation diluted in a cascade-fermented organic whole food concentrate supplemented with biotin, vitamin C, copper, and zinc (Regulatpro Hyaluron) on skin moisture content, elasticity, skin roughness, and wrinkle depths. Twenty female subjects with healthy skin in the age group of 45 to 60 years took the product once daily for 40 days. Different skin parameters were objectively assessed before the first intake, after 20 and after 40 days. Intake of the HA solution led to significant increases in skin elasticity and skin hydration, and to a significant decrease in skin roughness and wrinkle depths. The supplement was well tolerated; no side effects were noted throughout the study. PMID:29228816

  5. Effect of drop volume and surface statistics on the superhydrophobicity of randomly rough substrates

    NASA Astrophysics Data System (ADS)

    Afferrante, L.; Carbone, G.

    2018-01-01

    In this paper, a simple theoretical approach is developed with the aim of evaluating the shape, interfacial pressure, apparent contact angle and contact area of liquid drops gently deposited on randomly rough surfaces. This method can be useful to characterize the superhydrophobic properties of rough substrates, and to investigate the contact behavior of impacting drops. We assume that (i) the size of the apparent liquid-solid contact area is much larger than the micromorphology of the substrate, and (ii) a composite interface is always formed at the microscale. Results show that the apparent contact angle and liquid-solid area fraction are slightly influenced by the drop volume only at relatively high values of the root mean square roughness h_rms, whereas the effect of volume is practically negligible at small h_rms. The main statistical quantity affecting the superhydrophobic properties is found to be the Wenzel roughness parameter r_W, which depends on the average slope of the surface heights. Moreover, transition from the Cassie-Baxter state to the Wenzel one is observed when r_W falls below a certain critical value, and theoretical predictions are found to be in good agreement with experimental data. Finally, the present method can be conveniently exploited to evaluate the occurrence of pinning phenomena in the case of impacting drops, as the Wenzel critical pressure for liquid penetration gives an estimation of the maximum impact pressure tolerated by the surface without pinning occurring.
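
    Since the Wenzel roughness parameter r_W is the key statistical quantity here, the sketch below estimates it for a synthetic random surface as the ratio of true to projected area, which grows with the average slope of the surface heights. The uncorrelated Gaussian height field and the grid dimensions are assumptions chosen purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, dx = 256, 1.0e-6                          # grid points and spacing (m), assumed
    h = rng.normal(scale=0.2e-6, size=(n, n))    # assumed uncorrelated height field, h_rms = 0.2 um

    # Wenzel roughness parameter: mean ratio of true to projected area,
    # r_W = <sqrt(1 + |grad h|^2)>, which depends on the average surface slope.
    hx, hy = np.gradient(h, dx)
    r_w = np.mean(np.sqrt(1.0 + hx**2 + hy**2))
    print(f"Wenzel roughness parameter r_W ~ {r_w:.4f}")
    ```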

  6. Space radiation dosimetry in low-Earth orbit and beyond.

    PubMed

    Benton, E R; Benton, E V

    2001-09-01

    Space radiation dosimetry presents one of the greatest challenges in the discipline of radiation protection. This is a result of both the highly complex nature of the radiation fields encountered in low-Earth orbit (LEO) and interplanetary space and of the constraints imposed by spaceflight on instrument design. This paper reviews the sources and composition of the space radiation environment in LEO as well as beyond the Earth's magnetosphere. A review of much of the dosimetric data that have been gathered over the last four decades of human space flight is presented. The different factors affecting the radiation exposures of astronauts and cosmonauts aboard the International Space Station (ISS) are emphasized. Measurements made aboard the Mir Orbital Station have highlighted the importance of both secondary particle production within the structure of spacecraft and the effect of shielding on both crew dose and dose equivalent. Roughly half the dose on ISS is expected to come from trapped protons and half from galactic cosmic rays (GCRs). The dearth of neutron measurements aboard LEO spacecraft and the difficulty inherent in making such measurements have led to large uncertainties in estimates of the neutron contribution to total dose equivalent. Except for a limited number of measurements made aboard the Apollo lunar missions, no crew dosimetry has been conducted beyond the Earth's magnetosphere. At the present time we are forced to rely on model-based estimates of crew dose and dose equivalent when planning for interplanetary missions, such as a mission to Mars. While space crews in LEO are unlikely to exceed the exposure limits recommended by such groups as the NCRP, dose equivalents of the same order as the recommended limits are likely over the course of a human mission to Mars. © 2001 Elsevier Science B.V. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borthakur, Sanchayeeta; Heckman, Timothy; Tumlinson, Jason

    We present a study exploring the nature and properties of the circumgalactic medium (CGM) and its connection to the atomic gas content in the interstellar medium (ISM) of galaxies as traced by the H i 21 cm line. Our sample includes 45 low-z (0.026–0.049) galaxies from the GALEX Arecibo SDSS Survey (Galaxy Evolution Explorer/Arecibo/Sloan Digital Sky Survey). Their CGM was probed via absorption in the spectra of background quasi-stellar objects at impact parameters of 63–231 kpc. The spectra were obtained with the Cosmic Origins Spectrograph aboard the Hubble Space Telescope. We detected neutral hydrogen (Lyα absorption lines) in the CGM of 92% of the galaxies. We find that the radial profile of the CGM as traced by the Lyα equivalent width can be fit as an exponential with a scale length of roughly the virial radius of the dark matter halo. We found no correlation between the orientation of the sightline relative to the galaxy's major axis and the Lyα equivalent width. The velocity spread of the circumgalactic gas is consistent with that seen in the atomic gas in the ISM. We find a strong correlation (99.8% confidence) between the gas fraction (M(H i)/M_⋆) and the impact-parameter-corrected Lyα equivalent width. This is stronger than the analogous correlation between corrected Lyα equivalent width and specific star formation rate (SFR/M_⋆) (97.5% confidence). These results imply a physical connection between the H i disk and the CGM, which is on scales an order of magnitude larger. This is consistent with the picture in which the H i disk is nourished by accretion of gas from the CGM.
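
    The exponential radial profile reported above is easy to evaluate directly. In the sketch below the normalization and virial radius are assumed placeholder values; only the functional form, EW(b) = EW_0 exp(-b/R_vir) with the scale length set to roughly the virial radius, follows the abstract.

    ```python
    import numpy as np

    EW_0  = 1.0     # assumed Ly-alpha equivalent width at zero impact parameter (Angstrom)
    R_vir = 200.0   # assumed dark matter halo virial radius (kpc)

    def lyman_alpha_ew(b_kpc):
        """Exponential CGM profile with scale length ~ the virial radius."""
        return EW_0 * np.exp(-b_kpc / R_vir)

    for b in (63.0, 150.0, 231.0):   # impact parameters spanning the sampled 63-231 kpc range
        print(f"b = {b:5.1f} kpc -> predicted EW ~ {lyman_alpha_ew(b):.2f} Angstrom")
    ```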

  8. Dynamic and Geometric Analyses of Nudaurelia capensis ωVirus Maturation Reveal the Energy Landscape of Particle Transitions

    PubMed Central

    Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.

    2014-01-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180

  9. Dynamic and geometric analyses of Nudaurelia capensis ω virus maturation reveal the energy landscape of particle transitions.

    PubMed

    Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E

    2014-04-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Modeling dynamic processes at stage of formation of parts previously subjected to high-energy laser effects

    NASA Astrophysics Data System (ADS)

    Efimov, A. E.; Maksarov, V. V.; Timofeev, D. Y.

    2018-03-01

    The present paper assesses, via simulation modeling, the impact of the technological system on workpiece roughness and shape accuracy. For this purpose, a theory was formulated and a mathematical model was developed to describe self-oscillations in the system. In accordance with this theory and model, a method of suppressing the oscillations, based on high-energy laser irradiation of the workpiece prior to further machining, is proposed. Modeling the transient behaviour of the system indicated a tendency toward reduced self-oscillations in unstable processing modes, which in practical implementation has a positive effect on workpiece roughness and accuracy.

  11. Novel Approach to Surface Plasmon Resonance: A Third Dimension in Data Interpretation Through Surface Roughness Changes.

    PubMed

    Manole, Claudiu Constantin; Pîrvu, C; Maury, F; Demetrescu, I

    2016-06-01

    In a Surface Plasmon Resonance (SPR) experiment, two key parameters are classically recorded: the time and the SPR reflectivity angle. This paper brings a third key parameter into focus: the SPR reflectivity itself, which is shown to be related to changes in surface roughness. Practical investigations of (i) gold anodizing and (ii) polypyrrole film growth in the presence of oxalic acid are detailed under potentiostatic conditions. These experimental results reveal the potential of using the SPR technique to investigate real-time changes both on the gold surface and in the gold film itself. This extends the versatility of the technique, in particular as a sensitive in-situ diagnostic tool.

  12. Should non-disclosures be considered as morally equivalent to lies within the doctor-patient relationship?

    PubMed

    Cox, Caitriona L; Fritz, Zoe

    2016-10-01

    In modern practice, doctors who outright lie to their patients are often condemned, yet those who employ non-lying deceptions tend to be judged less critically. Some areas of non-disclosure have recently been challenged: not telling patients about resuscitation decisions; inadequately informing patients about risks of alternative procedures and withholding information about medical errors. Despite this, there remain many areas of clinical practice where non-disclosures of information are accepted, where lies about such information would not be. Using illustrative hypothetical situations, all based on common clinical practice, we explore the extent to which we should consider other deceptive practices in medicine to be morally equivalent to lying. We suggest that there is no significant moral difference between lying to a patient and intentionally withholding relevant information: non-disclosures could be subjected to Bok's 'Test of Publicity' to assess permissibility in the same way that lies are. The moral equivalence of lying and relevant non-disclosure is particularly compelling when the agent's motivations, and the consequences of the actions (from the patient's perspectives), are the same. We conclude that it is arbitrary to claim that there is anything inherently worse about lying to a patient to mislead them than intentionally deceiving them using other methods, such as euphemism or non-disclosure. We should question our intuition that non-lying deceptive practices in clinical practice are more permissible and should thus subject non-disclosures to the same scrutiny we afford to lies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. Culturally Responsive Leadership: Best Practice in Integrating Immigrant Students

    ERIC Educational Resources Information Center

    Magno, Cathyrn; Schiff, Margo

    2010-01-01

    Immigration to the US has fluctuated with economic conditions and with American immigration policies. Historically, most immigrants settled in urban areas, while suburban towns and their schools remained almost completely homogeneous and overwhelmingly White. Today, roughly one in five children in the US comes from an immigrant home, altering the…

  14. Motivational Determinants for Adult Learning.

    ERIC Educational Resources Information Center

    Dubin, Samuel S.; George, John L.

    A concern with the motivational behavior for keeping up-to-date, a learning process, is presented. The half-life of a professional's competence is described as the time after completion of professional training when, because of new developments, practicing professionals have become roughly half as competent as they were upon graduation to meet the…

  15. Impact of surface coal mining on soil hydraulic properties

    Treesearch

    X. Liu; J. Q. Wu; P. W. Conrad; S. Dun; C. S. Todd; R. L. McNearny; William Elliot; H. Rhee; P. Clark

    2016-01-01

    Soil erosion is strongly related to soil hydraulic properties. Understanding how surface coal mining affects these properties is therefore important in developing effective management practices to control erosion during reclamation. To determine the impact of mining activities on soil hydraulic properties, soils from undisturbed areas, areas of roughly graded mine...

  16. Seasonal Bias of Retrieved Ice Cloud Optical Properties Based on MISR and MODIS Measurements

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hioki, S.; Yang, P.; Di Girolamo, L.; Fu, D.

    2017-12-01

    The precise estimation of two important cloud optical and microphysical properties, cloud particle optical thickness and cloud particle effective radius, is fundamental in the study of radiative energy budget and hydrological cycle. In retrieving these two properties, an appropriate selection of ice particle surface roughness is important because it substantially affects the single-scattering properties. At present, using a predetermined ice particle shape without spatial and temporal variations is a common practice in satellite-based retrieval. This approach leads to substantial uncertainties in retrievals. The cloud radiances measured by each of the cameras of the Multi-angle Imaging SpectroRadiometer (MISR) instrument are used to estimate spherical albedo values at different scattering angles. By analyzing the directional distribution of estimated spherical albedo values, the degree of ice particle surface roughness is estimated. With an optimal degree of ice particle roughness, cloud optical thickness and effective radius are retrieved based on a bi-spectral shortwave technique in conjunction with two Moderate Resolution Imaging Spectroradiometer (MODIS) bands centered at 0.86 and 2.13 μm. The seasonal biases of retrieved cloud optical and microphysical properties, caused by the uncertainties in ice particle roughness, are investigated by using one year of MISR-MODIS fused data.

  17. Development and Validation of a Photonumeric Scale for Evaluation of Facial Skin Texture

    PubMed Central

    Carruthers, Alastair; Hardas, Bhushan; Murphy, Diane K.; Carruthers, Jean; Jones, Derek; Sykes, Jonathan M.; Creutz, Lela; Marx, Ann; Dill, Sara

    2016-01-01

    BACKGROUND A validated scale is needed for objective and reproducible comparisons of facial skin roughness before and after aesthetic treatment in practice and in clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Skin Roughness Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 290) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically meaningful difference (mean [95% confidence interval] absolute score difference 1.09 [0.96–1.23] for clinically different image pairs and 0.53 [0.38–0.67] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (weighted kappa = 0.83). Interrater agreement was almost perfect during the second rating session (0.81, primary end point). CONCLUSION The Allergan Skin Roughness Scale is a validated and reliable scale for physician rating of midface skin roughness. PMID:27661744

  18. Could Crop Height Affect the Wind Resource at Agriculturally Productive Wind Farm Sites?

    NASA Astrophysics Data System (ADS)

    Vanderwende, Brian; Lundquist, Julie K.

    2016-03-01

    The collocation of cropland and wind turbines in the US Midwest region introduces complex meteorological interactions that could influence both agriculture and wind-power production. Crop management practices may affect the wind resource through alterations of land-surface properties. We use the weather research and forecasting (WRF) model to estimate the impact of crop height variations on the wind resource in the presence of a large turbine array. A hypothetical wind farm consisting of 121 1.8-MW turbines is represented using the WRF model wind-farm parametrization. We represent the impact of selecting soybeans rather than maize by altering the aerodynamic roughness length in a region approximately 65 times larger than that occupied by the turbine array. Roughness lengths of 0.1 and 0.25 m represent the mature soy crop and a mature maize crop, respectively. In all but the most stable atmospheric conditions, statistically significant hub-height wind-speed increases and rotor-layer wind-shear reductions result from switching from maize to soybeans. Based on simulations for the entire month of August 2013, wind-farm energy output increases by 14 %, which would yield a significant monetary gain. Further investigation is required to determine the optimal size, shape, and crop height of the roughness modification to maximize the economic benefit and minimize the cost of such crop-management practices. These considerations must be balanced by other influences on crop choice such as soil requirements and commodity prices.
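
    A rough illustration of why the lower roughness of soybeans favors the wind resource: under a neutral logarithmic wind profile anchored to a fixed wind speed above the rotor layer, the hub-height wind speed is higher over the smoother canopy. The anchor height, anchor speed, and hub height below are assumptions for illustration; the study itself uses the WRF wind-farm parametrization, not this simple profile.

    ```python
    import numpy as np

    z_top, u_top = 500.0, 10.0   # assumed anchor height (m) and wind speed there (m/s)
    z_hub = 80.0                 # assumed turbine hub height (m)

    def hub_speed(z0):
        """Hub-height speed from a neutral log profile matched to (z_top, u_top)."""
        return u_top * np.log(z_hub / z0) / np.log(z_top / z0)

    for z0, crop in ((0.10, "soybeans"), (0.25, "maize")):
        print(f"z0 = {z0:.2f} m ({crop}): hub-height wind speed ~ {hub_speed(z0):.2f} m/s")
    ```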

  19. Could crop height affect the wind resource at agriculturally productive wind farm sites?

    DOE PAGES

    Vanderwende, Brian; Lundquist, Julie K.

    2015-11-07

    The collocation of cropland and wind turbines in the US Midwest region introduces complex meteorological interactions that could influence both agriculture and wind-power production. Crop management practices may affect the wind resource through alterations of land-surface properties. We use the weather research and forecasting (WRF) model to estimate the impact of crop height variations on the wind resource in the presence of a large turbine array. A hypothetical wind farm consisting of 121 1.8-MW turbines is represented using the WRF model wind-farm parametrization. We represent the impact of selecting soybeans rather than maize by altering the aerodynamic roughness length in a region approximately 65 times larger than that occupied by the turbine array. Roughness lengths of 0.1 and 0.25 m represent the mature soy crop and a mature maize crop, respectively. In all but the most stable atmospheric conditions, statistically significant hub-height wind-speed increases and rotor-layer wind-shear reductions result from switching from maize to soybeans. Based on simulations for the entire month of August 2013, wind-farm energy output increases by 14 %, which would yield a significant monetary gain. Further investigation is required to determine the optimal size, shape, and crop height of the roughness modification to maximize the economic benefit and minimize the cost of such crop-management practices. As a result, these considerations must be balanced by other influences on crop choice such as soil requirements and commodity prices.

  20. Could crop height affect the wind resource at agriculturally productive wind farm sites?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderwende, Brian; Lundquist, Julie K.

    The collocation of cropland and wind turbines in the US Midwest region introduces complex meteorological interactions that could influence both agriculture and wind-power production. Crop management practices may affect the wind resource through alterations of land-surface properties. We use the weather research and forecasting (WRF) model to estimate the impact of crop height variations on the wind resource in the presence of a large turbine array. A hypothetical wind farm consisting of 121 1.8-MW turbines is represented using the WRF model wind-farm parametrization. We represent the impact of selecting soybeans rather than maize by altering the aerodynamic roughness length in a region approximately 65 times larger than that occupied by the turbine array. Roughness lengths of 0.1 and 0.25 m represent the mature soy crop and a mature maize crop, respectively. In all but the most stable atmospheric conditions, statistically significant hub-height wind-speed increases and rotor-layer wind-shear reductions result from switching from maize to soybeans. Based on simulations for the entire month of August 2013, wind-farm energy output increases by 14 %, which would yield a significant monetary gain. Further investigation is required to determine the optimal size, shape, and crop height of the roughness modification to maximize the economic benefit and minimize the cost of such crop-management practices. As a result, these considerations must be balanced by other influences on crop choice such as soil requirements and commodity prices.

  1. Expert perspectives on Western European prison health services: do ageing prisoners receive equivalent care?

    PubMed

    Bretschneider, Wiebke; Elger, Bernice Simone

    2014-09-01

    Health care in prison and particularly the health care of older prisoners are increasingly important topics due to the growth of the ageing prisoner population. The aim of this paper is to gain insight into the approaches used in the provision of equivalent health care to ageing prisoners and to confront the intuitive definition of equivalent care and the practical and ethical challenges that have been experienced by individuals working in this field. Forty interviews took place with experts working in the prison setting from three Western European countries to discover their views on prison health care. Experts indicated that the provision of equivalent care in prison is difficult mostly due to four factors: variability of care in different prisons, gatekeeper systems, lack of personnel, and delays in providing access. This lack of equivalence can be fixed by allocating adequate budgets and developing standards for health care in prison.

  2. A study of microwave downconverters operating in the Ku band

    NASA Technical Reports Server (NTRS)

    Fellers, R. G.; Simpson, T. L.; Tseng, B.

    1982-01-01

    A computer program for parametric amplifier design is developed with special emphasis on practical design considerations for microwave integrated circuit degenerate amplifiers. Precision measurement techniques are developed to obtain a more realistic varactor equivalent circuit. The existing theory of a parametric amplifier is modified to include the equivalent circuit, and microwave properties, such as loss characteristics and circuit discontinuities, are investigated.

  3. Effects of Nursing Students' Practices Using Smartphone Videos on Fundamental Nursing Skills, Self-Efficacy, and Learning Satisfaction in South Korea

    ERIC Educational Resources Information Center

    Jeong, HyeSun

    2017-01-01

    This is a quasi-experimental study with a non-equivalent group pre-test and post-test designed to investigate the effects of learning with smartphone video recordings in fundamental nursing practice. General "intramuscular injection" practice for sophomore nursing students was given to the experimental and control groups for two weeks.…

  4. Dual nature of localization in guiding systems with randomly corrugated boundaries: Anderson-type versus entropic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Yu.V., E-mail: yutarasov@ire.kharkov.ua; Shostenko, L.D.

    A unified theory for the conductance of an infinitely long multimode quantum wire whose finite segment has randomly rough lateral boundaries is developed. It enables one to rigorously take account of all feasible mechanisms of wave scattering, both related to boundary roughness and to contacts between the wire rough section and the perfect leads within the same technical frameworks. The rough part of the conducting wire is shown to act as a mode-specific randomly modulated effective potential barrier whose height is governed essentially by the asperity slope. The mean height of the barrier, which is proportional to the average slope squared, specifies the number of conducting channels. Under relatively small asperity amplitude this number can take on arbitrarily small, up to zero, values if the asperities are sufficiently sharp. The consecutive channel cut-off that arises when the asperity sharpness increases can be regarded as a kind of localization, which is not related to the disorder per se but rather is of entropic or (equivalently) geometric origin. The fluctuating part of the effective barrier results in two fundamentally different types of guided wave scattering, viz., inter- and intramode scattering. The intermode scattering is shown to be for the most part very strong except in the cases of (a) extremely smooth asperities, (b) excessively small length of the corrugated segment, and (c) the asperities sharp enough for only one conducting channel to remain in the wire. Under strong intermode scattering, a new set of conducting channels develops in the corrugated waveguide, which have the form of asymptotically decoupled extended modes subject to individual solely intramode random potentials. In view of this fact, two transport regimes only are realizable in randomly corrugated multimode waveguides, specifically, the ballistic and the localized regime, the latter characteristic of one-dimensional random systems. Two kinds of localization are thus shown to coexist in waveguide-like systems with randomly corrugated boundaries, specifically, the entropic localization and the one-dimensional Anderson (disorder-driven) localization. If the particular mode propagates across the rough segment ballistically, the Fabry–Pérot-type oscillations should be observed in the conductance, which are suppressed for the mode transferred in the Anderson-localized regime.

  5. A review of factors that affect contact angle and implications for flotation practice.

    PubMed

    Chau, T T; Bruckard, W J; Koh, P T L; Nguyen, A V

    2009-09-30

    Contact angle and the wetting behaviour of solid particles are influenced by many physical and chemical factors such as surface roughness and heterogeneity as well as particle shape and size. A significant amount of effort has been invested in order to probe the correlation between these factors and surface wettability. Some of the key investigations reported in the literature are reviewed here. It is clear from the papers reviewed that, depending on many experimental conditions such as the size of the surface heterogeneities and asperities, surface cleanliness, and the resolution of measuring equipment and data interpretation, obtaining meaningful contact angle values is extremely difficult and such values are reliant on careful experimental control. Surface wetting behaviour depends not only on surface texture (roughness and particle shape) and surface chemistry (heterogeneity) but also on hydrodynamic conditions in the preparation route. The inability to distinguish the effects of each factor may be due to the interplay and/or overlap of two or more factors in each system. From this review, it was concluded that: Surface geometry (and surface roughness of different scales) can be used to tune the contact angle; with increasing surface roughness the apparent contact angle decreases for hydrophilic materials and increases for hydrophobic materials. For non-ideal surfaces, such as mineral surfaces in the flotation process, kinetics plays a more important role than thermodynamics in dictating wettability. Particle size encountered in flotation (10-200 μm) showed no significant effect on contact angle but has a strong effect on flotation rate constant. There is a lack of a rigid quantitative correlation between factors affecting wetting, wetting behaviour and contact angle on minerals, and hence their implications for the flotation process. Specifically, universal correlation of contact angle to flotation recovery is still difficult to predict from first principles. Other advanced techniques and measures complementary to contact angle will be essential to establish the link between research and practice in flotation.
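
    One relation commonly invoked for the roughness trend summarized above is the Wenzel equation; the sketch below assumes ideal homogeneous (Wenzel-state) wetting and uses illustrative intrinsic angles and roughness ratios.

```python
# Minimal sketch of the Wenzel relation cos(theta_apparent) = r * cos(theta_Young),
# assuming ideal homogeneous (Wenzel-state) wetting. r >= 1 is the ratio of true
# to projected surface area; the values below are illustrative.
import numpy as np

def wenzel_angle(theta_young_deg, r):
    c = np.clip(r * np.cos(np.radians(theta_young_deg)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

for theta in (60.0, 110.0):            # hydrophilic vs hydrophobic intrinsic angles
    for r in (1.0, 1.5, 2.0):          # increasing roughness ratio
        print(f"theta_Y = {theta:5.1f} deg, r = {r:.1f} -> "
              f"apparent angle {wenzel_angle(theta, r):5.1f} deg")
```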

  6. 21 CFR 26.7 - Participation in the equivalence assessment and determination.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... determination. 26.7 Section 26.7 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE... EUROPEAN COMMUNITY Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.7...

  7. Human Systems Modeling and Simulation

    DTIC Science & Technology

    2005-12-01

    the formation of new practices. Mimesis is a process of observing manifested behavior, then building a practice that can produce a semblance of what...produce an image of – the behavior of others. Equivalently, mimesis is the foundational learning mechanism. o E.g., we become ourselves, and adapt to

  8. The application of robotics to microlaryngeal laser surgery.

    PubMed

    Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M

    2015-06-01

    To evaluate the performance of human subjects, using a prototype robotic micromanipulator controller in a simulated, microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed superior to subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superior to experienced laser surgeons using the industry standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  9. Analysis of fluid flow and solute transport through a single fracture with variable apertures intersecting a canister: Comparison between fractal and Gaussian fractures

    NASA Astrophysics Data System (ADS)

    Liu, L.; Neretnieks, I.

    Canisters with spent nuclear fuel will be deposited in fractured crystalline rock in the Swedish concept for a final repository. The fractures intersect the canister holes at different angles and they have variable apertures and therefore locally varying flowrates. Our previous model with fractures with a constant aperture and a 90° intersection angle is now extended to arbitrary intersection angles and stochastically variable apertures. It is shown that the previous basic model can be simply amended to account for these effects. More importantly, it has been found that the distributions of the volumetric and the equivalent flow rates are all close to the Normal for both fractal and Gaussian fractures, with the mean of the distribution of the volumetric flow rate being determined solely by the hydraulic aperture, and that of the equivalent flow rate being determined by the mechanical aperture. Moreover, the standard deviation of the volumetric flow rates of the many realizations increases with increasing roughness and spatial correlation length of the aperture field, and so does that of the equivalent flow rates. Thus, two simple statistical relations can be developed to describe the stochastic properties of fluid flow and solute transport through a single fracture with spatially variable apertures. This obviates, then, the need to simulate each fracture that intersects a canister in great detail, and allows the use of complex fractures also in very large fracture network models used in performance assessment.
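
    The statement that the mean volumetric flow rate is set by the hydraulic aperture rests on the parallel-plate (cubic-law) relation; a minimal sketch with illustrative aperture, viscosity, and pressure-gradient values (not data from the study) is given below.

```python
# Minimal sketch of the parallel-plate ("cubic law") relation behind the role of
# the hydraulic aperture. Aperture, fracture width, viscosity, and pressure
# gradient are illustrative values, not data from the study.
mu = 1.0e-3        # water viscosity, Pa*s
b_h = 100e-6       # hydraulic aperture, m (100 micrometres, assumed)
width = 1.0        # fracture width transverse to flow, m
dpdx = 10.0        # pressure gradient along the fracture, Pa/m

Q = width * b_h**3 / (12.0 * mu) * dpdx   # volumetric flow rate, m^3/s
print(f"Q = {Q:.3e} m^3/s")
# Because Q scales with the cube of the aperture, modest aperture variability
# translates into a wide spread of local flow rates.
```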

  10. The effects of varying injection rates in Osage County, Oklahoma, on the 2016 Mw5.8 Pawnee earthquake

    USGS Publications Warehouse

    Barbour, Andrew J.; Norbeck, Jack H.; Rubinstein, Justin L.

    2017-01-01

    The 2016 Mw 5.8 Pawnee earthquake occurred in a region with active wastewater injection into a basal formation group. Prior to the earthquake, fluid injection rates at most wells were relatively steady, but newly collected data show significant increases in injection rate in the years leading up to the earthquake. For the same time period, the total volumes of injected wastewater were roughly equivalent between variable-rate and constant-rate wells. To understand the possible influence of these changes in injection, we simulate the variable-rate injection history and its constant-rate equivalent in a layered poroelastic half-space to explore the interplay between pore-pressure effects and poroelastic effects on the fault leading up to the mainshock. In both cases, poroelastic stresses contribute a significant proportion of Coulomb failure stresses on the fault compared to pore-pressure increases alone, but the resulting changes in seismicity rate, calculated using a rate-and-state frictional model, are many times larger when poroelastic effects are included, owing to enhanced stressing rates. In particular, the variable-rate simulation predicts more than an order of magnitude increase in seismicity rate above background rates compared to the constant-rate simulation with equivalent volume. The observed cumulative density of earthquakes prior to the mainshock within 10 km of the injection source exhibits remarkable agreement with seismicity predicted by the variable-rate injection case.

  11. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

    Hot spot identification (HSID) aims to identify potential sites-roadway segments, intersections, crosswalks, interchanges, ramps, etc.-with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high risk site as safe (false negative), and consequently lead to the misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor injury and property damage only (PDO) crashes, challenges of accounting for crash severity into the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to the society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with the preponderance-of-zeros problem or right-skewed datasets. Copyright © 2014 Elsevier Ltd. All rights reserved.
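
    A minimal sketch of the two ingredients described above, an equivalent-PDO score followed by a quantile-regression fit, is given below using statsmodels; the severity weights, column names, and input file are hypothetical placeholders rather than the Korean data or weights used in the paper.

```python
# Minimal sketch of the two ingredients described above: an equivalent-PDO
# (EPDO) score and a quantile regression fit. Severity weights, column names,
# and the data file are hypothetical placeholders, not values from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("segments.csv")   # hypothetical file: one row per road segment

# Hypothetical EPDO weights by crash severity (fatal, serious, minor, PDO).
w = {"fatal": 12.0, "serious": 5.0, "minor": 2.0, "pdo": 1.0}
df["epdo"] = sum(df[k] * v for k, v in w.items())

# Model a high quantile of EPDO crashes instead of the mean.
model = smf.quantreg("epdo ~ aadt + seg_length", df)
fit90 = model.fit(q=0.90)

# Segments whose observed EPDO exceeds their fitted 90th percentile are
# flagged as candidate hot spots.
df["hotspot"] = df["epdo"] > fit90.predict(df)
print(df.loc[df["hotspot"], ["segment_id", "epdo"]].head())
```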

  12. Testing a Method for Quantifying the Output of Implantable Middle Ear Hearing Devices

    PubMed Central

    Rosowski, J.J.; Chien, W.; Ravicz, M.E.; Merchant, S.N.

    2008-01-01

    This report describes tests of a standard practice for quantifying the performance of implantable middle ear hearing devices (also known as implantable hearing aids). The standard and these tests were initiated by the Food and Drug Administration of the United States Government. The tests involved measurements on two hearing devices, one commercially available and the other home built, that were implanted into ears removed from human cadavers. The tests were conducted to investigate the utility of the practice and its outcome measures: the equivalent ear canal sound pressure transfer function that relates electrically driven middle ear velocities to the equivalent sound pressure needed to produce those velocities, and the maximum effective ear canal sound pressure. The practice calls for measurements in cadaveric ears in order to account for the varied anatomy and function of different human middle ears. PMID:17406105

  13. Accounting for the professional work of pathologists performing autopsies.

    PubMed

    Sinard, John H

    2013-02-01

    With an increasing trend toward fee-code-based methods of measuring the clinical professional productivity of pathologists, those pathologists whose clinical activities include the performance of autopsies have been disadvantaged by the lack of generally accepted workload equivalents for autopsy performance and supervision. To develop recommended benchmarks to account for this important and often overlooked professional activity. Based on the professional experience of members of the Autopsy Committee of the College of American Pathologists, a survey of autopsy pathologists, and the limited additional material available in the literature, we developed recommended workload equivalents for the professional work associated with performing an autopsy, which we elected to express as multiples of established Current Procedural Terminology codes. As represented in Table 3, we recommend that the professional work associated with a full adult autopsy be equivalent to 5.5 × 88309-26. Additional professional credit of 1.5 × 88309-26 should be added for evaluation of the brain and for a detailed clinical-pathologic discussion. The corresponding value for a fetal/neonatal autopsy is 4.0 × 88309-26. Although we recognize that autopsy practices vary significantly from institution to institution, it is hoped that our proposed guidelines will be a valuable starting point that individual practices can then adapt, taking into account the specifics of their practice environment.

  14. Changing practice: are memes the answer?

    PubMed

    Pediani, R; Walsh, M

    Nurses are insistent that they have a great deal more to offer than being merely doctors' handmaidens. This article examines how nursing education and practice can be changed by increasing our knowledge of 'memes'--the cultural equivalent of genes--and the ways traditional beliefs are passed down to generations of nurses.

  15. EQUIVALENCE BETWEEN SHORT-TIME BIPHASIC AND INCOMPRESSIBLE ELASTIC MATERIAL RESPONSES

    PubMed Central

    Ateshian, Gerard A.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2009-01-01

    Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response δt ≪ Δ²/(‖C₄‖‖K‖), where Δ is a characteristic dimension, C₄ is the elasticity tensor and K is the hydraulic permeability tensor of the solid matrix. Certain notes of caution are provided with regard to implementation issues, particularly when finite element formulations of incompressible elasticity employ an uncoupled strain energy function consisting of additive deviatoric and volumetric components. PMID:17536908
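
    The quoted validity window can be put into numbers with representative cartilage properties; the values below are typical order-of-magnitude assumptions for illustration, not parameters taken from the paper.

```python
# Order-of-magnitude check of the short-time validity window dt << Delta^2/(|C4||K|).
# The modulus, permeability, and thickness below are typical cartilage values
# assumed for illustration, not values taken from the paper.
delta = 1.0e-3   # characteristic dimension, m (~1 mm cartilage layer)
c4 = 1.0e6       # elasticity tensor norm, Pa (~1 MPa)
k = 1.0e-15      # hydraulic permeability, m^4 N^-1 s^-1

t_char = delta**2 / (c4 * k)
print(f"characteristic gel-diffusion time ~ {t_char:.0f} s")
# ~1000 s here, so loading events lasting seconds are well inside the
# "short-time" regime where the incompressible elastic analysis applies.
```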

  16. Practical substrate and apparatus for static and continuous monitoring by surface-enhanced raman spectroscopy

    DOEpatents

    Vo-Dinh, Tuan

    1987-01-01

    A substrate for use in surface-enhanced Raman spectroscopy (SERS) is disclosed, comprising a support, preferably flexible, coated with roughness-imparting microbodies and a metallized overcoating. Also disclosed is apparatus for using the aforesaid substrate in continuous and static SERS trace analyses, especially of organic compounds.

  17. USE OF SUPPLEMENTARY CEMENTITIOUS MATERIALS IN HIGH PERFORMANCE, CO2 SEQUESTERING CONSTRUCTION MATERIAL - PHASE I

    EPA Science Inventory

    Problem Statement:  The worldwide manufacture and use of Portland cement for use in concrete accounts for roughly 5 percent of global CO2 emissions. A common practice in the production of Portland cement concrete (PCC) to reduce CO

  18. Automatic counting and classification of bacterial colonies using hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Detection and counting of bacterial colonies on agar plates is a routine microbiology practice to get a rough estimate of the number of viable cells in a sample. There have been a variety of different automatic colony counting systems and software algorithms mainly based on color or gray-scale pictu...
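
    A minimal grayscale sketch of the thresholding-and-labelling step that such colony counters typically build on is shown below; the hyperspectral classification described in the record is not reproduced, and the file name, threshold, and size cutoff are placeholders.

```python
# Minimal grayscale sketch of the thresholding-and-labelling step that colony
# counters typically build on. File name, threshold, and size cutoff are
# placeholders; the hyperspectral classification stage is not shown.
import numpy as np
from imageio.v3 import imread
from scipy import ndimage

img = imread("plate.png").astype(float)   # hypothetical agar-plate image
if img.ndim == 3:                         # collapse RGB to a grayscale intensity
    img = img[..., :3].mean(axis=-1)

mask = img > 120                          # assumed intensity threshold for colonies

labels, n = ndimage.label(mask)           # connected-component labelling
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
colonies = int(np.sum(sizes > 20))        # ignore specks smaller than 20 pixels
print(f"colonies counted: {colonies}")
```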

  19. The Goetz Plan: A Practical Smoking Cessation Program for College Students

    ERIC Educational Resources Information Center

    Krohn, Franklin B.; Goetz, Kristin M.

    2005-01-01

    Tobacco smoking is responsible for approximately 434,000 deaths per year in the United States (Fact Sheet, 1993). College students represent a large portion of the smoking public. Roughly 70% of college students have tried smoking (Everett & Husten, 1999). There are various methods available to assist in smoking cessation, some being, nicotine…

  20. Fusing Classroom Theory and Practical Experience: Syllabus Construction in a Broadcast Journalism Experience.

    ERIC Educational Resources Information Center

    Reppert, James E.

    This paper contends that it is essential that broadcast journalism courses possess a roughly equal balance between hands-on radio, television, and electronic news gathering assignments and analytical term papers. The importance of students writing and analyzing mass communication issues and personalities cannot be overstated in a highly…

  1. Action Research. Case Studies in TESOL Practice Series.

    ERIC Educational Resources Information Center

    Edge, Julian, Ed.

    Chapter titles in this book include the following: "Attitude and Access: Building a New Teaching/Learning Community in TESOL" (Julian Edge); "Here It Is, Rough Though It May Be: Basic Computer for ESL" (Alison Perkins); "An 'It's Not Action Research Yet, but I'm Getting There' Approach to Teaching Writing" (Neil Cowie); "Early Reflections:…

  2. Crossing the Threshhold: Successful Learning Provision for Homeless People.

    ERIC Educational Resources Information Center

    Cameron, Helen; McKaig, Wendy; Taylor, Sue

    This guide tells the story of a successful collaboration between The City Literary Institute and homelessness agencies to create an arts-based learning program for homeless people in central London. It identifies guidelines and good practice to stimulate similar work in other locations with problems of homelessness and rough sleeping. The guide is…

  3. Sea Surface Salinity and Wind Retrieval Algorithm Using Combined Passive-Active L-Band Microwave Data

    NASA Technical Reports Server (NTRS)

    Yueh, Simon H.; Chaubell, Mario J.

    2011-01-01

    Aquarius is a combined passive/active L-band microwave instrument developed to map the salinity field at the surface of the ocean from space. The data will support studies of the coupling between ocean circulation, the global water cycle, and climate. The primary science objective of this mission is to monitor the seasonal and interannual variation of the large scale features of the surface salinity field in the open ocean with a spatial resolution of 150 kilometers and a retrieval accuracy of 0.2 practical salinity units globally on a monthly basis. The measurement principle is based on the response of the L-band (1.413 gigahertz) sea surface brightness temperatures (T_B) to sea surface salinity. To achieve the required 0.2 practical salinity units accuracy, the impact of sea surface roughness (e.g. wind-generated ripples and waves) along with several factors on the observed brightness temperature has to be corrected to better than a few tenths of a degree Kelvin. To this end, Aquarius includes a scatterometer to help correct for this surface roughness effect.

  4. Health spending in the 1980's: Integration of clinical practice patterns with management

    PubMed Central

    Freeland, Mark S.; Schendler, Carol E.

    1984-01-01

    Health care spending in the United States more than tripled between 1972 and 1982, increasing from $94 billion to $322 billion. This growth substantially outpaced overall growth in the economy. National health expenditures are projected to reach approximately $690 billion in 1990 and consume roughly 12 percent of the gross national product. Government spending for health care is projected to reach $294 billion by 1990, with the Federal Government paying 72 percent. The Medicare prospective payment system and increasing competition in the health services sector are providing incentives to integrate clinical practice patterns with improved management practices. PMID:10310595

  5. Evaluating the Whitening and Microstructural Effects of a Novel Whitening Strip on Porcelain and Composite Dental Materials

    PubMed Central

    Takesh, Thair; Sargsyan, Anik; Lee, Matthew; Anbarani, Afarin; Ho, Jessica; Wilder-Smith, Petra

    2017-01-01

    Aims The aim of this project was to evaluate the effects of 2 different whitening strips on color, microstructure and roughness of tea stained porcelain and composite surfaces. Methods 54 porcelain and 72 composite chips served as samples for timed application of over-the-counter (OTC) test or control dental whitening strips. Chips were divided randomly into three groups of 18 porcelain and 24 composite chips each. Of these groups, 1 porcelain and 1 composite set served as controls. The remaining 2 groups were randomized to treatment with either Oral Essentials® Whitening Strips or Crest® 3D White Whitestrips™. Sample surface structure was examined by light microscopy, profilometry and Scanning Electron Microscopy (SEM). Additionally, a reflectance spectrophotometer was used to assess color changes in the porcelain and composite samples over 24 hours of whitening. Data points were analyzed at each time point using ANOVA. Results In the light microscopy and SEM images, no discrete physical defects were observed in any of the samples at any time points. However, high-resolution SEM images showed an appearance of increased surface roughness in all composite samples. Using profilometry, significantly increased post-whitening roughness was documented in the composite samples exposed to the control bleaching strips. Composite samples underwent a significant and equivalent shift in color following exposure to Crest® 3D White Whitestrips™ and Oral Essentials® Whitening Strips. Conclusions A novel commercial tooth whitening strip demonstrated a comparable bleaching effect to a widely used OTC whitening strip. Neither whitening strip caused physical defects in the sample surfaces. However, the control strip caused roughening of the composite samples whereas the test strip did not. PMID:29226023

  6. Validation of a Novel Technique and Evaluation of the Surface Free Energy of Food

    PubMed Central

    Senturk Parreidt, Tugce; Schmid, Markus; Hauser, Carolin

    2017-01-01

    Characterizing the physical properties of a surface is largely dependent on determining the contact angle exhibited by a liquid. Contact angles on the surfaces of rough and irregularly-shaped food samples are difficult to measure using a contact angle meter (goniometer). As a consequence, values for the surface energy and its components can be mismeasured. The aim of this work was to use a novel contact angle measurement method, namely the snake-based ImageJ program, to accurately measure the contact angles of rough and irregular shapes, such as food samples, and so enable more accurate calculation of the surface energy of food materials. In order to validate the novel technique, the contact angles of three different test liquids on four different smooth polymer films were measured using both the ImageJ software with the DropSnake plugin and the widely used contact angle meter. The distributions of the values obtained by the two methods were different. Therefore, the contact angles, surface energies, and polar and dispersive components of plastic films obtained using the ImageJ program and the Drop Shape Analyzer (DSA) were interpreted with the help of simple linear regression analysis. As case studies, the superficial characteristics of strawberry and endive salad epicarp were measured with the ImageJ program and the results were interpreted with the Drop Shape Analyzer equivalent according to our regression models. The data indicated that the ImageJ program can be successfully used for contact angle determination of rough and strongly hydrophobic surfaces, such as strawberry epicarp. However, for the special geometry of droplets on slightly hydrophobic surfaces, such as salad leaves, the program code interpolation part can be altered. PMID:28425932

  7. Evaluation of the Surface Characteristics of Various Implant Abutment Materials Using Confocal Microscopy and White Light Interferometry.

    PubMed

    Park, Jun-Beom; Yang, Seung-Min; Ko, Youngkyung

    2015-12-01

    The purpose of this study was to evaluate the surface characteristics of various implant abutment materials, such as titanium alloy (Ti6Al4V; Ma), machined cobalt-chrome-molybdenum alloy (CCM), titanium nitride coating on a titanium alloy disc (TiN), anodic oxidized titanium alloy disc (AO), composite resin coating on a titanium alloy disc (Res), and zirconia disc (Zr), using confocal microscopy and white light interferometry. Measurements from the 2 methods were evaluated to see if these methods would give equivalent results. The precision of measurements was evaluated by the coefficient of variation. Five discs each of Ma, CCM, TiN, AO, Res, and Zr were used. The surface roughness was evaluated by confocal laser microscopy and white light interferometry. Confocal microscopy showed that the Res group showed significantly greater Ra, Rq, Rz, Sa, Sq, and Sz values compared with those of the Ma group (P < 0.05). The white light interferometry results showed that the Res group had significantly higher Ra, Rq, Rz, Rt, Sa, Sq, Sz, and Sdr values compared with the Ma group (P < 0.05). All the roughness parameters obtained from the 2 methods differed, and the Sa values of the Zr group from confocal microscopy were greater by 0.163 μm than those obtained by white light interferometry. The least difference was seen in the TiN group, where the difference was 0.058 μm. Roughness parameters of different abutment materials varied significantly. Precision of measurement differed according to the characteristics of the material used. White light interferometry could be recommended for measurement of TiN and AO. Confocal microscopy gave more precise measurements for Ma and CCM groups. The optical characteristics of the surface should be considered before choosing the examination method.
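
    For reference, the line-profile parameters reported above (Ra, Rq, Rz) are simple statistics of the measured height profile; the sketch below computes them from a synthetic profile and uses a simplified peak-to-valley definition of Rz rather than the ten-point definition used by metrology standards.

```python
# Minimal sketch of the line-profile roughness parameters reported above,
# computed from a synthetic height profile. Rz is simplified here to the total
# peak-to-valley height of the mean-subtracted profile rather than the
# ten-point/zonal definitions used by metrology standards.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(scale=0.2, size=2000)   # synthetic profile heights, micrometres

z = z - z.mean()                       # heights relative to the mean line
Ra = np.mean(np.abs(z))                # arithmetic mean deviation
Rq = np.sqrt(np.mean(z**2))            # root-mean-square roughness
Rz = z.max() - z.min()                 # simplified peak-to-valley height

print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um, Rz = {Rz:.3f} um")
```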

  8. Hierarchical Nanoparticle Topography in Amphiphilic Copolymer Films Controlled by Thermodynamics and Dynamics

    PubMed Central

    Caporizzo, M. A.; Ezzibdeh, R. M.

    2016-01-01

    This study systematically investigates how polymer composition changes nanoparticle (NP) grafting and diffusion in solvated random copolymer thin films. By thermal annealing from 135 to 200 °C, thin films with a range of hydrophobicity are generated by varying acrylic acid content from 2% (SAA2) to 29% (SAA29). Poly(styrene-random-tert butyl acrylate) films, 100 nm thick, that are partially converted to poly(styrene-random-acrylic acid), SAA, reversibly swell in ethanol solutions containing amine-functionalized SiO2 nanoparticles with a diameter of 45 nm. The thermodynamics and kinetics of NP grafting are directly controlled by the AA content in the SAA films. At low AA content, namely SAA4, NP attachment saturates at a monolayer, consistent with a low solubility of NPs in SAA4 due to a weakly negative χ parameter. When the AA content exceeds 4%, NPs sink into the film to form multilayers. These films exhibit hierarchical surface roughness with a RMS roughness greater than the NP size. Using a quartz crystal microbalance, NP incorporation in the film is found to saturate after a mass equivalence of about 3 close-packed layers of NPs have been incorporated within the SAA. The kinetics of NP grafting is observed to scale with AA content. The surface roughness is greatest at intermediate times (5–20 min) for SAA13 films, which also exhibit superhydrophobic wetting. Because clustering and aggregation of the NPs within SAA29 films reduce film transparency, SAA13 films provide both maximum hydrophobicity and transparency. The method in this study is widely applicable because it can be applied to many substrate types, can cover large areas, and retains the amine functionality of the particles which allows for subsequent chemical modification. PMID:25689222

  9. Hardware accelerator for molecular dynamics: MDGRAPE-2

    NASA Astrophysics Data System (ADS)

    Susukita, Ryutaro; Ebisuzaki, Toshikazu; Elmegreen, Bruce G.; Furusawa, Hideaki; Kato, Kenya; Kawai, Atsushi; Kobayashi, Yoshinao; Koishi, Takahiro; McNiven, Geoffrey D.; Narumi, Tetsu; Yasuoka, Kenji

    2003-10-01

    We developed MDGRAPE-2, a hardware accelerator that calculates forces at high speed in molecular dynamics (MD) simulations. MDGRAPE-2 is connected to a PC or a workstation as an extension board. The sustained performance of one MDGRAPE-2 board is 15 Gflops, roughly equivalent to the peak performance of the fastest supercomputer processing element. One board is able to calculate all forces between 10 000 particles in 0.28 s (i.e. 310,000 time steps per day). If 16 boards are connected to one computer and operated in parallel, this calculation speed becomes ~10 times faster. In addition to MD, MDGRAPE-2 can be applied to gravitational N-body simulations, the vortex method and smoothed particle hydrodynamics in computational fluid dynamics.
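
    The quoted throughput figures can be cross-checked with a few lines of arithmetic; the per-interaction flop count at the end is an implied estimate, not a number stated by the authors.

```python
# Quick consistency check of the quoted throughput. The per-interaction flop
# count derived at the end is only an implied estimate, not a stated figure.
n_particles = 10_000
t_step = 0.28                        # s per full force evaluation (one board)
flops = 15e9                         # sustained board performance, flop/s

steps_per_day = 86_400 / t_step
pairs = n_particles ** 2             # all pairwise interactions, as implied above
flops_per_pair = flops * t_step / pairs

print(f"steps per day ~ {steps_per_day:,.0f}")   # ~309,000 (quoted: 310,000)
print(f"implied work  ~ {flops_per_pair:.0f} flop per pair interaction")
```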

  10. Locating the Gribov horizon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Fei; Qin, Si-Xue; Roberts, Craig D.

    We explore whether a tree-level expression for the gluon two-point function, supposed to express effects of an horizon term introduced to eliminate the Gribov ambiguity, is consistent with the propagator obtained in simulations of lattice-regularised quantum chromodynamics (QCD). In doing so, we insist that the gluon two-point function obey constraints that ensure a minimal level of consistency with parton-like behaviour at ultraviolet momenta. In consequence, we are led to a position which supports a conjecture that the gluon mass and horizon scale are equivalent emergent mass scales, each with a value of roughly 0.5 GeV; and wherefrom it appears plausible that the dynamical generation of a running gluon mass may alone be sufficient to remove the Gribov ambiguity.

  11. Locating the Gribov horizon

    DOE PAGES

    Gao, Fei; Qin, Si-Xue; Roberts, Craig D.; ...

    2018-02-08

    We explore whether a tree-level expression for the gluon two-point function, supposed to express effects of an horizon term introduced to eliminate the Gribov ambiguity, is consistent with the propagator obtained in simulations of lattice-regularised quantum chromodynamics (QCD). In doing so, we insist that the gluon two-point function obey constraints that ensure a minimal level of consistency with parton-like behaviour at ultraviolet momenta. In consequence, we are led to a position which supports a conjecture that the gluon mass and horizon scale are equivalent emergent mass scales, each with a value of roughly 0.5 GeV; and wherefrom it appears plausible that the dynamical generation of a running gluon mass may alone be sufficient to remove the Gribov ambiguity.

  12. Automation and robotics and related technology issues for Space Station customer servicing

    NASA Technical Reports Server (NTRS)

    Cline, Helmut P.

    1987-01-01

    Several flight servicing support elements are discussed within the context of the Space Station. Particular attention is given to the servicing facility, the mobile servicing center, and the flight telerobotic servicer (FTS). The role that automation and robotics can play in the design and operation of each of these elements is discussed. It is noted that the FTS, which is currently being developed by NASA, will evolve to increasing levels of autonomy to allow for the virtual elimination of routine EVA. Some of the features of the FTS will probably be: dual manipulator arms having reach and dexterity roughly equivalent to that of an EVA-suited astronaut, force reflection capability allowing efficient teleoperation, and capability of operating from a variety of support systems.

  13. Super-Alfvénic translation of a field-reversed configuration into a large-bore dielectric chamber

    NASA Astrophysics Data System (ADS)

    Sekiguchi, J.; Asai, T.; Takahashi, T.

    2018-01-01

    An experimental device to demonstrate additional heating and control methods for a field-reversed configuration (FRC) has been developed. The newly developed device, named FRC Amplification via Translation (FAT), has a field-reversed theta-pinch plasma source and a low-elongation dielectric (transparent quartz) confinement chamber with quasi-static confinement field. In the initial experiments on the FAT device, FRC translation and trapping were successfully demonstrated. Although the typical elongation of the trapped FRC in the confinement region was roughly three, no disruptive global instability, such as tilt, was observed. The FAT device increases the latitude to perform translation-related experiments, such as those concerning inductive current drive, equivalent neutral beam injection effects, and wave applications.

  14. Limits of Kirchhoff's Laws in Plasmonics.

    PubMed

    Razinskas, Gary; Biagioni, Paolo; Hecht, Bert

    2018-01-30

    The validity of Kirchhoff's laws in plasmonic nanocircuitry is investigated by studying a junction of plasmonic two-wire transmission lines. We find that Kirchhoff's laws are valid for sufficiently small values of a phenomenological parameter κ relating the geometrical parameters of the transmission line with the effective wavelength of the guided mode. Beyond this regime, for large values of the phenomenological parameter, increasing deviations occur and the equivalent impedance description (Kirchhoff's laws) can only provide rough, but nevertheless useful, guidelines for the design of more complex plasmonic circuitry. As an example we investigate a system composed of a two-wire transmission line and a nanoantenna as the load. By addition of a parallel stub designed according to Kirchhoff's laws we achieve maximum signal transfer to the nanoantenna.
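
    The stub-design step mentioned at the end can be sketched with ordinary lossless transmission-line formulas; the characteristic impedance, load impedance, and normalized lengths below are illustrative placeholders, not the plasmonic two-wire values from the paper.

```python
# Sketch of single-stub matching with ordinary lossless transmission-line
# formulas. The characteristic impedance and load impedance are illustrative
# placeholders, not the plasmonic two-wire values from the paper; lengths are
# given in units of the guided wavelength.
import numpy as np

Z0, ZL = 150.0, 60.0 - 40.0j   # assumed line and nanoantenna-load impedances
beta = 2 * np.pi               # propagation constant for lengths in wavelengths

def z_in(ZL, length):
    """Input impedance of a lossless line of given length terminated in ZL."""
    t = np.tan(beta * length)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

def gamma(Z):
    """Magnitude of the reflection coefficient seen from a Z0 source."""
    return abs((Z - Z0) / (Z + Z0))

def matched_impedance(d, l):
    """Line of length d to the load, with a shorted stub of length l in parallel."""
    z_stub = 1j * Z0 * np.tan(beta * l)
    return 1.0 / (1.0 / z_in(ZL, d) + 1.0 / z_stub)

# Coarse grid search for the stub position d and stub length l that minimize
# the reflection back toward the source (classic single-stub matching).
grid = np.linspace(0.01, 0.49, 97)
d, l = min(((d, l) for d in grid for l in grid),
           key=lambda dl: gamma(matched_impedance(*dl)))

print(f"no stub: |Gamma| = {gamma(ZL):.2f}")
print(f"stub at d = {d:.3f} lam, length {l:.3f} lam: "
      f"|Gamma| = {gamma(matched_impedance(d, l)):.2f}")
```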

  15. Classification of billiard motions in domains bounded by confocal parabolas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fokicheva, V V

    2014-08-01

    We consider the billiard dynamical system in a domain bounded by confocal parabolas. We describe such domains in which the billiard problem can be correctly stated. In each such domain we prove the integrability for the system, analyse the arising Liouville foliation, and calculate the invariant of Liouville equivalence--the so-called marked molecule. It turns out that billiard systems in certain parabolic domains have the same closures of solutions (integral trajectories) as the systems of Goryachev-Chaplygin-Sretenskii and Joukowski at suitable energy levels. We also describe the billiard motion in noncompact domains bounded by confocal parabolas, namely, we describe the topology of the Liouville foliation in terms of rough molecules. Bibliography: 16 titles.

  16. Wind-tunnel test of an articulated helicopter rotor model with several tip shapes

    NASA Technical Reports Server (NTRS)

    Berry, J. D.; Mineck, R. E.

    1980-01-01

    Six interchangeable tip shapes were tested: a square (baseline) tip, an ogee tip, a subwing tip, a swept tip, a winglet tip, and a short ogee tip. In hover at the lower rotational speeds the swept, ogee, and short ogee tips had about the same torque coefficient, and the subwing and winglet tips had a larger torque coefficient than the baseline square tip blades. The ogee and swept tip blades required less torque coefficient at lower rotational speeds and roughly equivalent torque coefficient at higher rotational speeds compared with the baseline square tip blades in forward flight. The short ogee tip required higher torque coefficient at higher lift coefficients than the baseline square tip blade in the forward flight test condition.

  17. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  18. Enhanced decomposition offsets enhanced productivity and soil carbon accumulation in coastal wetlands responding to climate change

    USGS Publications Warehouse

    Kirwan, M.L.; Blum, L.K.

    2011-01-01

    Coastal wetlands are responsible for about half of all carbon burial in oceans, and their persistence as a valuable ecosystem depends largely on the ability to accumulate organic material at rates equivalent to relative sea level rise. Recent work suggests that elevated CO2 and temperature warming will increase organic matter productivity and the ability of marshes to survive sea level rise. However, we find that organic decomposition rates increase by about 12% per degree of warming. Our measured temperature sensitivity is similar to studies from terrestrial systems, twice as high as the response of salt marsh productivity to temperature warming, and roughly equivalent to the productivity response associated with elevated CO2 in C3 marsh plants. Therefore, enhanced CO2 and warmer temperatures may actually make marshes less resilient to sea level rise, and tend to promote a release of soil carbon. Simple projections indicate that elevated temperatures will increase rates of sea level rise more than any acceleration in organic matter accumulation, suggesting the possibility of a positive feedback between climate, sea level rise, and carbon emissions in coastal environments. © 2011 Author(s).
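
    The stated temperature sensitivity can be re-expressed as an approximate Q10 for comparison with the terrestrial literature, assuming the 12% per-degree increase compounds multiplicatively.

```python
# Re-expressing "about 12% faster decomposition per degree of warming" as a
# Q10 value, assuming the per-degree increase compounds multiplicatively.
per_degree = 1.12
q10 = per_degree ** 10
print(f"implied Q10 ~ {q10:.1f}")   # ~3.1, i.e. roughly 3x faster per 10 C of warming
```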

  19. [Generalization of money-handling through training in equivalence relationships].

    PubMed

    Vives-Montero, Carmen; Valero-Aguayo, Luis; Ascanio, Lourdes

    2011-02-01

    This research used a matching-to-sample procedure and equivalence learning process with language and verbal tasks. In the study, an application of the equivalence relationship of money was used with several kinds of euro coins presented. The sample consisted of 16 children (8 in the experimental group and 8 in the control group) aged 5 years. The prerequisite behaviors, the identification of coins and the practical use of different euro coins, were assessed in the pre and post phases for both groups. The children in the experimental group performed an equivalence task using the matching-to-sample procedure. This consisted of a stimulus sample and four matching stimuli, using a series of euro coins with equivalent value in each set. The children in the control group did not undergo this training process. The results showed large variability in the children's performance on the equivalence tests. The experimental group showed the greatest pre- to post-test changes, which were statistically significant. They also showed a greater generalization in the identification of money and in the use of euro coins than the control group. The implications for educational training and the characteristics of the procedure used here for coin equivalence are discussed.

  20. The equivalence of computerized and paper-and-pencil psychological instruments: implications for measures of negative affect.

    PubMed

    Schulenberg, S E; Yutrzenka, B A

    1999-05-01

    The use of computerized psychological assessment is a growing practice among contemporary mental health professionals. Many popular and frequently used paper-and-pencil instruments have been adapted into computerized versions. Although equivalence for many instruments has been evaluated and supported, this issue is far from resolved. This literature review deals with recent research findings that suggest that computer aversion negatively impacts computerized assessment, particularly as it relates to measures of negative affect. There is a dearth of equivalence studies that take into account computer aversion's potential impact on the measurement of negative affect. Recommendations are offered for future research in this area.

  1. New equivalent-electrical circuit model and a practical measurement method for human body impedance.

    PubMed

    Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako

    2015-01-01

    Human body impedance analysis is an effective tool to extract electrical information from tissues in the human body. This paper presents a new measurement method of impedance using armpit electrode and a new equivalent circuit model for the human body. The lowest impedance was measured by using an LCR meter and six electrodes including armpit electrodes. The electrical equivalent circuit model for the cell consists of resistance R and capacitance C. The R represents electrical resistance of the liquid of the inside and outside of the cell, and the C represents high frequency conductance of the cell membrane. We propose an equivalent circuit model which consists of five parallel high frequency-passing CR circuits. The proposed equivalent circuit represents alpha distribution in the impedance measured at a lower frequency range due to ion current of the outside of the cell, and beta distribution at a high frequency range due to the cell membrane and the liquid inside cell. The calculated values by using the proposed equivalent circuit model were consistent with the measured values for the human body impedance.
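
    A minimal numerical sketch of the proposed topology, as read from the abstract (five series R-C, high-frequency-passing branches connected in parallel), is given below; the component values are illustrative placeholders, not fitted body-impedance data.

```python
# Minimal numerical sketch of the topology as read from the abstract: five
# series R-C ("high-frequency-passing") branches connected in parallel.
# Component values are illustrative placeholders, not fitted body-impedance data.
import numpy as np

R = np.array([500.0, 300.0, 200.0, 100.0, 50.0])   # ohms (assumed)
C = np.array([10e-9, 50e-9, 200e-9, 1e-6, 5e-6])   # farads (assumed)

f = np.logspace(2, 6, 200)                         # 100 Hz .. 1 MHz
w = 2 * np.pi * f

# Each branch: Z_i = R_i + 1/(j*w*C_i); branches combine in parallel.
Z_branches = R[None, :] + 1.0 / (1j * w[:, None] * C[None, :])
Z_total = 1.0 / np.sum(1.0 / Z_branches, axis=1)

print(f"|Z| at 100 Hz : {abs(Z_total[0]):8.1f} ohm")
print(f"|Z| at 1 MHz  : {abs(Z_total[-1]):8.1f} ohm")
# |Z| falls with frequency, mimicking the alpha (low-frequency) and beta
# (high-frequency) dispersions described above.
```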

  2. A decade of neural networks: Practical applications and prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina (Editor); Thakoor, Anil (Editor)

    1994-01-01

    On May 11-13, 1994, JPL's Center for Space Microelectronics Technology (CSMT) hosted a neural network workshop entitled, 'A Decade of Neural Networks: Practical Applications and Prospects,' sponsored by DOD and NASA. The past ten years of renewed activity in neural network research has brought the technology to a crossroads regarding the overall scope of its future practical applicability. The purpose of the workshop was to bring together the sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and development prospects, with emphasis on practical applications. Of the 93 participants, roughly 15% were from government agencies, 30% were from industry, 20% were from universities, and 35% were from Federally Funded Research and Development Centers (FFRDC's).

  3. Backside Wear Analysis of Retrieved Acetabular Liners with a Press-Fit Locking Mechanism in Comparison to Wear Simulation In Vitro.

    PubMed

    Puente Reyna, Ana Laura; Jäger, Marcus; Floerkemeier, Thilo; Frecher, Sven; Delank, Karl-Stefan; Schilling, Christoph; Grupp, Thomas M

    2016-01-01

    Backside wear due to micromotion and poor conformity between the liner and its titanium alloy shell may contribute to the high rates of retroacetabular osteolysis and consequent aseptic loosening. The purpose of our study was to understand the wear process on the backside of polyethylene liners from two acetabular cup systems, whose locking mechanism is based on a press-fit cone in combination with a rough titanium conical inner surface on the fixation area. A direct comparison between in vitro wear simulator tests (equivalent to 3 years of use) and retrieved liners (average 13.1 months in situ) was done in order to evaluate the backside wear characteristics and behavior of these systems. Similar wear scores between in vitro tested and retrieved liners were observed. The results showed that this locking mechanism did not significantly produce wear marks at the backside of the polyethylene liners due to micromotion. In all the analyzed liners, the most common wear modes observed were small scratches at the cranial fixation zone directly below the rough titanium inner surface of the shell. It was concluded that most of the wear marks were produced during the insertion and removal of the liner, rather than during its time in situ.

  4. Chemical etching of stainless steel 301 for improving performance of electrochemical capacitors in aqueous electrolyte

    NASA Astrophysics Data System (ADS)

    Jeżowski, P.; Nowicki, M.; Grzeszkowiak, M.; Czajka, R.; Béguin, F.

    2015-04-01

    The main purpose of the study was to increase the surface roughness of stainless steel 301 current collectors by etching, in order to improve the electrochemical performance of electrical double-layer capacitors (EDLC) in 1 mol L-1 lithium sulphate electrolyte. Etching was realized in 1:3:30 (HNO3:HCl:H2O) solution with times varying up to 10 min. For the considered 15 μm thick foil and a mass loss around 0.4 wt.%, pitting was uniform, with diameter of pits ranging from 100 to 300 nm. Atomic force microscopy (AFM) showed an increase of average surface roughness (Ra) from 5 nm for the as-received stainless steel foil to 24 nm for the pitted material. Electrochemical impedance spectroscopy realized on EDLCs with coated electrodes either on as-received or pitted foil in 1 mol L-1 Li2SO4 gave equivalent distributed resistance (EDR) of 8 Ω and 2 Ω, respectively, demonstrating a substantial improvement of collector/electrode interface after pitting. Correlatively, the EDLCs with pitted collector displayed a better charge propagation and low ohmic losses even at relatively high current of 20 A g-1. Hence, chemical pitting of stainless steel current collectors is an appropriate method for optimising the performance of EDLCs in neutral aqueous electrolyte.

  5. Surface Pre-treatment for Thermally Sprayed ZnAl15 Coatings

    NASA Astrophysics Data System (ADS)

    Bobzin, K.; Öte, M.; Knoch, M. A.

    2017-02-01

    Pre-treatment of substrates is an important step in thermal spraying. It is widely accepted that mechanical interlocking is the dominant adhesion mechanism for most substrate-coating combinations. To prevent premature failure, minimum coating adhesion strength, surface preparation grades, and roughness parameters are often specified. For corrosion-protection coatings for offshore wind turbines, an adhesion strength ≥ 5 MPa is commonly assumed to ensure adhesion over service lifetime. In order to fulfill this requirement, Rz > 80 µm and a preparation grade of Sa3 are common specifications. In this study, the necessity of these requirements is investigated using the widely used combination of twin-wire arc-sprayed ZnAl15 on S355J2 + N as a test case. By using different blasting media and parameters, the correlation between coating adhesion and roughness parameters is analyzed. The adhesion strength of these systems is measured using a test method allowing measurements on real parts. The results are compared to DIN EN 582:1993, the European equivalent of ASTM-C633. In another series of experiments, the influence of surface pre-treatment grades Sa2.5 and Sa3 is considered. By combining the results of these three sets of experiments, a guideline for surface pre-treatment and adhesion testing on real parts is proposed for the considered system.

  6. Superwetting and aptamer functionalized shrink-induced high surface area electrochemical sensors.

    PubMed

    Hauke, A; Kumar, L S Selva; Kim, M Y; Pegan, J; Khine, M; Li, H; Plaxco, K W; Heikenfeld, J

    2017-08-15

    Electrochemical sensing is moving to the forefront of point-of-care and wearable molecular sensing technologies due to the ability to miniaturize the required equipment, a critical advantage over optical methods in this field. Electrochemical sensors that employ roughness to increase their microscopic surface area offer a strategy for combatting the loss in signal associated with the loss of macroscopic surface area upon miniaturization. A simple, low-cost method of creating such roughness has emerged with the development of shrink-induced high surface area electrodes. Building on this approach, we demonstrate here a greater than 12-fold enhancement in electrochemically active surface area over conventional electrodes of equivalent on-chip footprint areas. This two-fold improvement on previous performance is obtained via the creation of a superwetting surface condition facilitated by a dissolvable polymer coating. As a test bed to illustrate the utility of this approach, we further show that electrochemical aptamer-based sensors exhibit exceptional signal strength (signal-to-noise) and excellent signal gain (relative change in signal upon target binding) when deployed on these shrink electrodes. Indeed, the 330% gain we observe for a kanamycin sensor is 2-fold greater than that seen on planar gold electrodes. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Experimental and Numerical Study of the Influence of Substrate Surface Preparation on Adhesion Mechanisms of Aluminum Cold Spray Coatings on 300M Steel Substrates

    NASA Astrophysics Data System (ADS)

    Nastic, A.; Vijay, M.; Tieu, A.; Rahmati, S.; Jodoin, B.

    2017-10-01

    The effect of substrate surface topography on the creation of metallurgical bonds and mechanical anchoring points has been studied for the cold spray deposition of pure aluminum on 300M steel substrate material. The coating adhesion strength showed a significant decrease from 31.0 ± 5.7 MPa on polished substrates to 6.9 ± 2.0 MPa for substrates with roughness of 2.2 ± 0.5 μm. Strengths in the vicinity of 45 MPa were reached for coatings deposited onto forced pulsed waterjet treated surfaces with roughnesses larger than 33.8 μm. Finite element analysis confirmed that mechanical anchoring is the sole contributor to coating adhesion strength for all surface treatments except polished surfaces. Grit embedment has been shown to be non-detrimental to coating adhesion for the current deposited material combination. The particle deformation process during impacts has been studied through finite element analysis using the Preston-Tonks-Wallace (PTW) constitutive model. The obtained equivalent plastic strain (PEEQ), temperature, contact pressure and velocity vector were correlated to the particle ability to form metallurgical bonds. Favorable conditions for metallurgical bonding were found to be highest for particles deposited on polished substrates, as confirmed by fracture surface analysis.

  8. High-speed photon-counting x-ray computed tomography system utilizing a multipixel photon counter

    NASA Astrophysics Data System (ADS)

    Sato, Eiichi; Enomoto, Toshiyuki; Watanabe, Manabu; Hitomi, Keitaro; Takahashi, Kiyomi; Sato, Shigehiro; Ogawa, Akiro; Onagawa, Jun

    2009-07-01

    High-speed photon counting is useful for discriminating photon energy and for decreasing the absorbed dose for patients in medical radiography, and such counting can be used to construct an x-ray computed tomography (CT) system. A photon-counting x-ray CT system is of the first-generation type and consists of an x-ray generator, a turntable, a translation stage, a two-stage controller, a multipixel photon counter (MPPC) module, a 1.0-mm-thick LSO crystal (scintillator), a counter card (CC), and a personal computer (PC). Tomography is accomplished by repeating the linear scanning and the rotation of an object, and projection curves of the object are obtained by the linear scanning using the detector consisting of a MPPC module and the LSO. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. The lower level of the photon energy is roughly determined by a comparator circuit in the module, and the unit of the level is the photon equivalent (pe). Thus, the average photon energy of the x-ray spectra increases with the lower-level voltage of the comparator. The maximum count rate was approximately 20 Mcps, and energy-discriminated CT was roughly carried out.
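    First-generation (translate-rotate) scanning of this kind produces one projection curve per rotation step; stacking them gives a sinogram from which the image is recovered by filtered back-projection. The sketch below runs that reconstruction on a synthetic phantom with scikit-image; it illustrates the acquisition geometry only, not the photon-counting electronics, and the phantom and angle set are assumptions.

```python
# Hedged sketch: first-generation CT geometry - projections at many angles form a
# sinogram, reconstructed by filtered back-projection. Synthetic phantom only.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # small phantom standing in for the object
angles = np.linspace(0.0, 180.0, 180, endpoint=False)  # one linear scan per rotation step

sinogram = radon(image, theta=angles)                  # projection curves (one column per angle)
reconstruction = iradon(sinogram, theta=angles)        # filtered back-projection

err = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"sinogram shape: {sinogram.shape}, reconstruction RMS error: {err:.4f}")
```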

  9. Determination of the Wave Parameters from the Statistical Characteristics of the Image of a Linear Test Object

    NASA Astrophysics Data System (ADS)

    Weber, V. L.

    2018-03-01

    We statistically analyze the images of the objects of the "light-line" and "half-plane" types which are observed through a randomly irregular air-water interface. The expressions for the correlation function of fluctuations of the image of an object given in the form of a luminous half-plane are found. The possibility of determining the spatial and temporal correlation functions of the slopes of a rough water surface from these relationships is shown. The problem of the probability of intersection of a small arbitrarily oriented line segment by the contour image of a luminous straight line is solved. Using the results of solving this problem, we show the possibility of determining the values of the curvature variances of a rough water surface. A practical method for obtaining an image of a rectilinear luminous object in the light rays reflected from the rough surface is proposed. It is theoretically shown that such an object can be synthesized by temporal accumulation of the image of a point source of light rapidly moving in the horizontal plane with respect to the water surface.

  10. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is commonly used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
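    As a rough, hedged sketch of this approach (not the author's code), the snippet below fits quantile regressions to a simulated two-group treatment-control design with heterogeneous variances and checks whether the confidence interval on the group difference at each quantile falls inside a chosen equivalence region; the ±0.5 margin and the simulated data are illustrative assumptions.

```python
# Hedged sketch: quantile-regression equivalence check for a two-group design.
# The equivalence margin (+/-0.5) and the simulated data are illustrative assumptions.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 200
group = np.repeat([0, 1], n)                           # 0 = control, 1 = treatment
y = rng.lognormal(mean=0.0, sigma=0.5 + 0.3 * group)   # heterogeneous distributions

X = sm.add_constant(group.astype(float))               # intercept + group indicator
margin = 0.5                                           # assumed equivalence region for the difference

for q in (0.5, 0.75, 0.9):                             # compare several quantiles, not just means
    res = QuantReg(y, X).fit(q=q)
    diff = res.params[1]                               # treatment-control difference at quantile q
    lo_ci, hi_ci = res.conf_int(alpha=0.10)[1]         # 90% two-sided = two 95% one-sided bounds
    equivalent = (lo_ci > -margin) and (hi_ci < margin)
    print(f"q={q:.2f}  diff={diff:+.3f}  CI=({lo_ci:+.3f}, {hi_ci:+.3f})  equivalent={equivalent}")
```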

  11. Unstable optical resonator loss calculations using the Prony method.

    PubMed

    Siegman, A E; Miller, H Y

    1970-12-01

    The eigenvalues for all the significant low-order resonant modes of an unstable optical resonator with circular mirrors are computed using an eigenvalue method called the Prony method. A general equivalence relation is also given, by means of which one can obtain the design parameters for a single-ended unstable resonator of the type usually employed in practical lasers, from the calculated or tabulated values for an equivalent symmetric or double-ended unstable resonator.
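    The Prony step itself is generic: given a sequence that behaves like a sum of geometric terms (for the resonator, successive field iterates whose ratios encode the mode eigenvalues), it recovers those ratios by linear prediction followed by a polynomial root solve. The sketch below is a minimal numpy illustration on synthetic data, not the resonator computation from the paper.

```python
# Minimal Prony-method sketch: recover the ratios (eigenvalues) that generate a
# sequence s[k] = sum_i c_i * lam_i**k, from the samples alone. Synthetic data;
# in the resonator problem s[k] would be successive iterates of the field.
import numpy as np

true_lams = np.array([0.85 + 0.30j, 0.55 - 0.10j, 0.30 + 0.05j])
coeffs = np.array([1.0, 0.7, 0.4])
K = 40
k = np.arange(K)
s = (coeffs[None, :] * true_lams[None, :] ** k[:, None]).sum(axis=1)

p = len(true_lams)                      # model order (assumed known here)
# Linear prediction: s[k] = -(a1*s[k-1] + ... + ap*s[k-p]) for k >= p
A = np.column_stack([s[p - j - 1:K - j - 1] for j in range(p)])
b = s[p:]
a, *_ = np.linalg.lstsq(A, -b, rcond=None)

# The eigenvalues are the roots of z^p + a1*z^(p-1) + ... + ap
lams = np.roots(np.concatenate(([1.0], a)))
print("recovered:", np.sort_complex(lams))
print("true     :", np.sort_complex(true_lams))
```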

  12. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least square regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%; whereas, the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
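    A minimal sketch of the same idea, regressing the equivalence ratio on full, unsubtracted spectra with PLS and leave-one-out cross-validation, is shown below; the synthetic spectra, the number of latent variables, and all array sizes are illustrative assumptions rather than values from the study.

```python
# Hedged sketch: PLS regression of equivalence ratio on raw chemiluminescence spectra,
# validated with leave-one-out cross-validation. Synthetic data for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_flames, n_wavelengths = 45, 300          # e.g. 9 equivalence ratios x 5 replications (assumed)
phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)
wl = np.linspace(250.0, 600.0, n_wavelengths)

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy spectra: two emission bands whose relative strength varies with phi, plus a
# broad background and noise (standing in for OH*/CH* bands on a CO2* continuum).
X = (np.outer(2.0 - phi, band(310.0, 8.0))       # OH*-like band
     + np.outer(phi, band(431.0, 8.0))           # CH*-like band
     + 0.3 * band(450.0, 120.0)                  # broad background, no subtraction needed
     + 0.02 * rng.standard_normal((n_flames, n_wavelengths)))

pls = PLSRegression(n_components=5)              # number of latent variables: assumed
phi_loo = cross_val_predict(pls, X, phi, cv=LeaveOneOut()).ravel()
rmse = np.sqrt(np.mean((phi_loo - phi) ** 2))
print(f"LOO-CV RMSE in equivalence ratio: {rmse:.3f}")
```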

  13. Could Crop Height Impact the Wind Resource at Agriculturally Productive Wind Farm Sites?

    NASA Astrophysics Data System (ADS)

    Vanderwende, B. J.; Lundquist, J. K.

    2013-12-01

    The agriculture-intensive United States Midwest and Great Plains regions feature some of the best wind resources in the nation. Collocation of cropland and wind turbines introduces complex meteorological interactions that could affect both agriculture and wind power production. Crop management practices may modify the wind resource through alterations of land-surface properties. In this study, we used the Weather Research and Forecasting (WRF) model to estimate the impact of crop height variations on the wind resource in the presence of a large turbine array. We parameterized a hypothetical array of 121 1.8 MW turbines at the site of the 2011 Crop/Wind-energy Experiment field campaign using the WRF wind farm parameterization. We estimated the impact of crop choices on power production by altering the aerodynamic roughness length in a region approximately 65 times larger than that occupied by the turbine array. Roughness lengths of 10 cm and 25 cm represent a mature soy crop and a mature corn crop respectively. Results suggest that the presence of the mature corn crop reduces hub-height wind speeds and increases rotor-layer wind shear, even in the presence of a large wind farm which itself modifies the flow. During the night, the influence of the surface was dependent on the boundary layer stability, with strong stability inhibiting the surface drag from modifying the wind resource aloft. Further investigation is required to determine the optimal size, shape, and crop height of the roughness modification to maximize the economic benefit and minimize the cost of such crop management practices.
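    The leading-order effect of the roughness-length change can be illustrated with the neutral logarithmic wind profile, a much simpler stand-in for the WRF surface-layer physics actually used: anchoring the same wind above the rotor, a larger z0 gives a lower hub-height speed and a larger rotor-layer shear. The anchor wind, hub height, and rotor extent below are assumptions for illustration.

```python
# Hedged sketch: neutral log-law profile u(z) = (u*/kappa) ln(z/z0) for the two
# roughness lengths quoted above (soy: 0.10 m, corn: 0.25 m). The anchor wind,
# hub height and rotor-layer extent are illustrative assumptions.
import numpy as np

kappa = 0.4
z_top, u_top = 200.0, 10.0            # assumed common 10 m/s wind at 200 m, above the rotor
hub, z_lo, z_hi = 80.0, 40.0, 120.0   # assumed hub height and rotor-layer extent (m)

for z0, crop in [(0.10, "soy"), (0.25, "corn")]:
    u_star = kappa * u_top / np.log(z_top / z0)               # friction velocity from anchor wind
    u_hub = (u_star / kappa) * np.log(hub / z0)
    u_lo = (u_star / kappa) * np.log(z_lo / z0)
    u_hi = (u_star / kappa) * np.log(z_hi / z0)
    alpha = np.log(u_hi / u_lo) / np.log(z_hi / z_lo)          # equivalent power-law shear exponent
    print(f"{crop:4s}: z0={z0:.2f} m  hub-height wind={u_hub:.2f} m/s  shear exponent={alpha:.3f}")
```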

  14. Does college alcohol consumption impact employment upon graduation? Findings from a prospective study.

    PubMed

    Bamberger, Peter A; Koopmann, Jaclyn; Wang, Mo; Larimer, Mary; Nahum-Shani, Inbal; Geisner, Irene; Bacharach, Samuel B

    2018-01-01

    [Correction Notice: An Erratum for this article was reported in Vol 103(1) of Journal of Applied Psychology (see record 2017-44578-001). In the article, the authors incorrectly used the term "probability" instead of the term "odds" when relating to the impact of drinking in college on post-graduation employment. The abstract should note "a roughly 10% reduction in the odds...", and in the 2nd paragraph of the Discussion section, (a) "a roughly 10% lower probability" should be "a roughly 10% lower odds", and (b) "their probability of full-time employment upon graduation is roughly 6% lower than..." should be "their odds of full-time employment upon graduation is roughly 6% lower than..." All versions of this article have been corrected.] Although scholars have extensively studied the impact of academic and vocational factors on college students' employment upon graduation, we still know little as to how students' health-related behaviors influence such outcomes. Focusing on student alcohol use as a widely prevalent, health-related behavior, in the current study, we examined the employment implications of student drinking behavior. Drawing from literature examining the productivity effects of drinking and research on job search, we posited that modal quantity and frequency of alcohol consumption, as well as the frequency of heavy episodic drinking (HED) adversely impact the probability of employment upon graduation. Using data from 827 graduating seniors from 4 geographically diverse universities in the United States collected in the context of a prospective study design, we found modal alcohol consumption to have no adverse effect on the likelihood of employment upon graduation. However, we did find a significant adverse effect for the frequency of heavy drinking, with the data suggesting a roughly 10% reduction in the odds of employment upon graduation among college seniors who reported engaging in the average level of HED. The theoretical and practical implications of these findings are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
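    Since the correction above turns on the distinction between odds and probability, a short worked conversion makes the difference concrete; the baseline employment probability used below is an assumption for illustration, not a figure from the study.

```python
# Hedged sketch: why "10% lower odds" is not the same as "10% lower probability".
# The baseline employment probability is an assumption for illustration only.
p_base = 0.60                          # assumed baseline probability of employment
odds_base = p_base / (1.0 - p_base)    # 1.5

odds_new = 0.90 * odds_base            # "roughly 10% reduction in the odds"
p_new = odds_new / (1.0 + odds_new)    # convert the reduced odds back to a probability

print(f"baseline: p={p_base:.3f}, odds={odds_base:.3f}")
print(f"after 10% odds reduction: p={p_new:.3f} "
      f"({100 * (p_base - p_new) / p_base:.1f}% lower probability)")
```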

  15. Writing That's Worth Reading: A Practical Guide For Writers Of Medical Articles Part One: How to Love Librarians and Become Immortal

    PubMed Central

    McCaffery, Margaret

    1980-01-01

    Family practice is an expanding field which is beginning to produce its own literature. In order to make the job of writing an article easier, the author needs a method of organizing material and a checklist of things to remember when submitting the article for publication. This series of articles will cover the process of writing from rough notes to finished product, and will conclude with a description of the review process—what happens to an article after submission. PMID:21293639

  16. Turbulent flame propagation in partially premixed flames

    NASA Technical Reports Server (NTRS)

    Poinsot, T.; Veynante, D.; Trouve, A.; Ruetsch, G.

    1996-01-01

    Turbulent premixed flame propagation is essential in many practical devices. In the past, fundamental and modeling studies of propagating flames have generally focused on turbulent flame propagation in mixtures of homogeneous composition, i.e. a mixture where the fuel-oxidizer mass ratio, or equivalence ratio, is uniform. This situation corresponds to the ideal case of perfect premixing between fuel and oxidizer. In practical situations, however, deviations from this ideal case occur frequently. In stratified reciprocating engines, fuel injection and large-scale flow motions are fine-tuned to create a mean gradient of equivalence ratio in the combustion chamber which provides additional control on combustion performance. In aircraft engines, combustion occurs with fuel and secondary air injected at various locations resulting in a nonuniform equivalence ratio. In both examples, mean values of the equivalence ratio can exhibit strong spatial and temporal variations. These variations in mixture composition are particularly significant in engines that use direct fuel injection into the combustion chamber. In this case, the liquid fuel does not always completely vaporize and mix before combustion occurs, resulting in persistent rich and lean pockets into which the turbulent flame propagates. From a practical point of view, there are several basic and important issues regarding partially premixed combustion that need to be resolved. Two such issues are how reactant composition inhomogeneities affect the laminar and turbulent flame speeds, and how the burnt gas temperature varies as a function of these inhomogeneities. Knowledge of the flame speed is critical in optimizing combustion performance, and the minimization of pollutant emissions relies heavily on the temperature in the burnt gases. Another application of partially premixed combustion is found in the field of active control of turbulent combustion. One possible technique of active control consists of pulsating the fuel flow rate and thereby modulating the equivalence ratio (Bloxsidge et al. 1987). Models of partially premixed combustion would be extremely useful in addressing all these questions related to practical systems. Unfortunately, the lack of a fundamental understanding regarding partially premixed combustion has resulted in an absence of models which accurately capture the complex nature of these flames. Previous work on partially premixed combustion has focused primarily on laminar triple flames. Triple flames correspond to an extreme case where fuel and oxidizer are initially totally separated (Veynante et al. 1994 and Ruetsch et al. 1995). These flames have a nontrivial propagation speed and are believed to be a key element in the stabilization process of jet diffusion flames. Different theories have also been proposed in the literature to describe a turbulent flame propagating in a mixture with variable equivalence ratio (Muller et al. 1994), but few validations are available. The objective of the present study is to provide basic information on the effects of partial premixing in turbulent combustion. In the following, we use direct numerical simulations to study laminar and turbulent flame propagation with variable equivalence ratio.

  17. Scaling of Sediment Dynamics in a Reach-Scale Laboratory Model of a Sand-Bed Stream with Riparian Vegetation

    NASA Astrophysics Data System (ADS)

    Gorrick, S.; Rodriguez, J. F.

    2011-12-01

    A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach, as well as the successes and challenges of the strategy are outlined. Firstly a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion. Thus the fixed banks in the model were not a drastic simplification. Next, and as in other studies, the flow velocity and turbulence measurements were collected in separate fixed bed experiments. The scaling of flow in these experiments was simply maintained by matching the Froude number and roughness levels. The subsequent movable bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical; in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3) including coal and apricot pips with a particle size distribution similar to that of the field site. Overall the moveable bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of bedforms and resulting drag can return similar levels of roughness to those in the field site.
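    Matching the dimensionless bed shear stress (Shields parameter) and the particle Reynolds number is the core of the sediment-scaling argument above; the sketch below evaluates both for an assumed field sand and an assumed lightweight model sediment with R = 0.3, using Re_p = sqrt(R g d) d / nu as the particle Reynolds number. The grain sizes and shear velocities are illustrative, not the study's values.

```python
# Hedged sketch: compare Shields parameter and particle Reynolds number for an assumed
# field sand (R = 1.65) and an assumed lightweight model sediment (R = 0.3).
import numpy as np

g, nu = 9.81, 1.0e-6                        # gravity (m/s^2), kinematic viscosity of water (m^2/s)

def shields(u_star, R, d):
    return u_star**2 / (R * g * d)          # dimensionless bed shear stress

def particle_reynolds(R, d):
    return np.sqrt(R * g * d) * d / nu      # Re_p = sqrt(R g d) d / nu

field = dict(name="field sand", R=1.65, d=0.4e-3, u_star=0.050)   # assumed prototype values
model = dict(name="model coal", R=0.30, d=0.5e-3, u_star=0.024)   # assumed 1:16 flume values

for s in (field, model):
    theta = shields(s["u_star"], s["R"], s["d"])
    rep = particle_reynolds(s["R"], s["d"])
    print(f"{s['name']:11s}: Shields={theta:.3f}  Re_p={rep:.1f}")
```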

  18. Student Performance on Practical Gross Anatomy Examinations Is Not Affected by Assessment Modality

    ERIC Educational Resources Information Center

    Meyer, Amanda J.; Innes, Stanley I.; Stomski, Norman J.; Armson, Anthony J.

    2016-01-01

    Anatomical education is becoming modernized, not only in its teaching and learning, but also in its assessment formats. Traditional "steeplechase" examinations are being replaced with online gross anatomy examinations. The aims of this study were to: (1) determine if online anatomy practical examinations are equivalent to traditional…

  19. Characterization and development of truck load spectra and growth factors for current and future pavement design practices in Louisiana : technical summary report.

    DOT National Transportation Integrated Search

    2011-07-01

    Current roadway pavement design practices follow the standards set by the American Society of State : Highway and Transportation Officials (AASHTO), which require the use of an equivalent single axle load : (ESAL-18 kip single axle load) for design t...

  20. Equivalence of Students' Scores on Timed and Untimed Anatomy Practical Examinations

    ERIC Educational Resources Information Center

    Zhang, Guiyun; Fenderson, Bruce A.; Schmidt, Richard R.; Veloski, J. Jon

    2013-01-01

    Untimed examinations are popular with students because there is a perception that first impressions may be incorrect, and that difficult questions require more time for reflection. In this report, we tested the hypothesis that timed anatomy practical examinations are inherently more difficult than untimed examinations. Students in the Doctor of…

  1. 21 CFR 26.45 - Monitoring continued equivalence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS, AND CERTAIN MEDICAL DEVICE PRODUCT EVALUATION REPORTS: UNITED STATES AND THE EUROPEAN...

  2. Cost of Transformation among Primary Care Practices Participating in a Medical Home Pilot.

    PubMed

    Martsolf, Grant R; Kandrack, Ryan; Gabbay, Robert A; Friedberg, Mark W

    2016-07-01

    Medical home initiatives encourage primary care practices to invest in new structural capabilities such as patient registries and information technology, but little is known about the costs of these investments. To estimate costs of transformation incurred by primary care practices participating in a medical home pilot. We interviewed practice leaders in order to identify changes practices had undertaken due to medical home transformation. Based on the principles of activity-based costing, we estimated the costs of additional personnel and other investments associated with these changes. The Pennsylvania Chronic Care Initiative (PACCI), a statewide multi-payer medical home pilot. Twelve practices that participated in the PACCI. One-time and ongoing yearly costs attributed to medical home transformation. Practices incurred median one-time transformation-associated costs of $30,991 per practice (range, $7694 to $117,810), equivalent to $9814 per clinician ($1497 to $57,476) and $8 per patient ($1 to $30). Median ongoing yearly costs associated with transformation were $147,573 per practice (range, $83,829 to $346,603), equivalent to $64,768 per clinician ($18,585 to $93,856) and $30 per patient ($8 to $136). Care management activities accounted for over 60% of practices' transformation-associated costs. Per-clinician and per-patient transformation costs were greater for small and independent practices than for large and system-affiliated practices. Error in interviewee recall could affect estimates. Transformation costs in other medical home interventions may be different. The costs of medical home transformation vary widely, creating potential financial challenges for primary care practices-especially those that are small and independent. Tailored subsidies from payers may help practices make these investments. Agency for Healthcare Research and Quality.

  3. Specialty Payment Model Opportunities and Assessment: Oncology Simulation Report.

    PubMed

    White, Chapin; Chan, Chris; Huckfeldt, Peter J; Kofner, Aaron; Mulcahy, Andrew W; Pollak, Julia; Popescu, Ioana; Timbie, Justin W; Hussey, Peter S

    2015-07-15

    This article describes the results of a simulation analysis of a payment model for specialty oncology services that is being developed for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare & Medicaid Services (CMS). CMS asked MITRE and RAND to conduct simulation analyses to preview some of the possible impacts of the payment model and to inform design decisions related to the model. The simulation analysis used an episode-level dataset based on Medicare fee-for-service (FFS) claims for historical oncology episodes provided to Medicare FFS beneficiaries in 2010. Under the proposed model, participating practices would continue to receive FFS payments, would also receive per-beneficiary per-month care management payments for episodes lasting up to six months, and would be eligible for performance-based payments based on per-episode spending for attributed episodes relative to a per-episode spending target. The simulation offers several insights into the proposed payment model for oncology: (1) The care management payments used in the simulation analysis-$960 total per six-month episode-represent only 4 percent of projected average total spending per episode (around $27,000 in 2016), but they are large relative to the FFS revenues of participating oncology practices, which are projected to be around $2,000 per oncology episode. By themselves, the care management payments would increase physician practices' Medicare revenues by roughly 50 percent on average. This represents a substantial new outlay for the Medicare program and a substantial new source of revenues for oncology practices. (2) For the Medicare program to break even, participating oncology practices would have to reduce utilization and intensity by roughly 4 percent. (3) The break-even point can be reduced if the care management payments are reduced or if the performance-based payments are reduced.
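    The break-even arithmetic behind insights (1)-(3) follows directly from the figures quoted in the abstract; the short calculation below simply re-derives the roughly 50 percent revenue increase and the roughly 4 percent utilization reduction from those numbers.

```python
# Re-deriving the simulation's headline arithmetic from the figures quoted above.
care_mgmt_per_episode = 960.0      # care management payment per six-month episode
total_spend_per_episode = 27000.0  # projected average total Medicare spend per episode (2016)
ffs_revenue_per_episode = 2000.0   # projected practice FFS revenue per oncology episode

share_of_spend = care_mgmt_per_episode / total_spend_per_episode     # ~0.036 -> "about 4%"
revenue_increase = care_mgmt_per_episode / ffs_revenue_per_episode   # ~0.48  -> "roughly 50%"
break_even_reduction = share_of_spend    # Medicare breaks even when utilization/intensity
                                         # falls by the same share of episode spending (~4%)

print(f"care management as share of episode spend: {share_of_spend:.1%}")
print(f"practice revenue increase:                 {revenue_increase:.1%}")
print(f"utilization cut needed to break even:      {break_even_reduction:.1%}")
```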

  4. Chapter 4: Low compaction grading to enhance reforestation success on coal surface mines

    Treesearch

    R. Sweigard; J. Burger; C. Zipper; J. Skousen; C. Barton; P. Angel

    2017-01-01

    This Forest Reclamation Advisory describes final-grading techniques for reclaiming coal surface mines to forest postmining land uses. Final grading that leaves a loose soil and a rough surface increases survival of planted seedlings and forest productivity. Such practices are often less costly than traditional "smooth grading" while meeting the requirements...

  5. A Head Start Control Group. Part of the Final Report.

    ERIC Educational Resources Information Center

    Cunningham, Grover

    A study was conducted to determine if the observed changes in Head Start children were related to the practice effects inherent in a test-retest situation. The "control" group consisted of 64 children who had been eligible for a Head Start program. They roughly matched a group of Head Start (HS) children in IQ scores, age, and…

  6. Paradoxes of Social Networking in a Structured Web 2.0 Language Learning Community

    ERIC Educational Resources Information Center

    Loiseau, Mathieu; Zourou, Katerina

    2012-01-01

    This paper critically inquires into social networking as a set of mechanisms and associated practices developed in a structured Web 2.0 language learning community. This type of community can be roughly described as learning spaces featuring (more or less) structured language learning resources displaying at least some notions of language learning…

  7. Study of overlength on red oak lumber drying quality and rough mill yield

    Treesearch

    Brian Bond; Janice Wiedenbeck

    2006-01-01

    Lumber stacking practices can directly affect drying defects, drying rate, and moisture content uniformity. The effect of overlength on drying is generally thought to be detrimental, yet large volumes of overlength lumber are used by secondary manufacturers. Managers of secondary manufacturing facilities need quantitative information to assist them in determining if...

  8. CROMAX : a crosscut-first computer simulation program to determine cutting yield

    Treesearch

    Pamela J. Giese; Jeanne D. Danielson

    1983-01-01

    CROMAX simulates crosscut-first, then rip operations as commonly practiced in furniture manufacture. This program calculates cutting yields from individual boards based on board size and defect location. Such information can be useful in predicting yield from various grades and grade mixes thereby allowing for better management decisions in the rough mill. The computer...

  9. [Comments on Nigel Wiseman's "A Practical Dictionary of Chinese Medicine"--on Wiseman's literal translation].

    PubMed

    Xie, Zhu-fan; Liu, Gan-zhong; Lu, Wei-bo; Fang, Tingyu; Zhang, Qingrong; Wang, Tai; Wang, Kui

    2005-10-01

    Comments were made on the word-for-word literal translation method used by Mr. Nigel Wiseman in A Practical Dictionary of Chinese Medicine. He believes that only literal translation can reflect Chinese medical concepts accurately. The so-called "word-for-word" translation is actually "English-word-for-Chinese-character" translation. First, he made a list of Single Characters with English Equivalents, and then he replaced each character of Chinese medical terms with the assigned English equivalent. Many English terms thus produced are confusing. The defect of the word-for-word literal translation stems from the erroneous idea that the single character constitutes the basic element of meaning corresponding to the notion of "word" in English, and the meaning of a disyllabic or polysyllabic Chinese word is simply the addition of the meanings of the two or more characters. Another big mistake is the negligence of the polysemy of Chinese characters. One or two English equivalents can by no means cover all the various meanings of a polysemous character as a monosyllabic word. Various examples were cited from this dictionary to illustrate the mistakes.

  10. Multiple scattering in the remote sensing of natural surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wen-Hao; Weeks, R.; Gillespie, A.R.

    1996-07-01

    Radiosity models predict the amount of light scattered many times (multiple scattering) among scene elements in addition to light interacting with a surface only once (direct reflectance). Such models are little used in remote sensing studies because they require accurate digital terrain models and, typically, large amounts of computer time. We have developed a practical radiosity model that runs relatively quickly within suitable accuracy limits, and have used it to explore problems caused by multiple-scattering in image calibration, terrain correction, and surface roughness estimation for optical images. We applied the radiosity model to real topographic surfaces sampled at two very different spatial scales: 30 m (rugged mountains) and 1 cm (cobbles and gravel on an alluvial fan). The magnitude of the multiple-scattering (MS) effect varies with solar illumination geometry, surface reflectivity, sky illumination and surface roughness. At the coarse scale, for typical illumination geometries, as much as 20% of the image can be significantly affected (>5%) by MS, which can account for as much as approximately 10% of the radiance from sunlit slopes, and much more for shadowed slopes, otherwise illuminated only by skylight. At the fine scale, radiance from as much as 30-40% of the scene can have a significant MS component, and the MS contribution is locally as high as approximately 70%, although integrating to the meter scale reduces this limit to approximately 10%. Because the amount of MS increases with reflectivity as well as roughness, MS effects will distort the shape of reflectance spectra as well as changing their overall amplitude. The change is proportional to surface roughness. Our results have significant implications for determining reflectivity and surface roughness in remote sensing.

  11. Robust laser-based detection of Lamb waves using photo-EMF sensors

    NASA Astrophysics Data System (ADS)

    Klein, Marvin B.; Bacher, Gerald D.

    1998-03-01

    Lamb waves are easily generated and detected using laser techniques. It has been shown that both symmetric and antisymmetric modes can be produced, using single-spot and phased array generation. Detection has been demonstrated with Michelson interferometers, but these instruments cannot function effectively on rough surfaces. By contrast, the confocal Fabry-Perot interferometer can interrogate rough surfaces, but generally is not practical for operation below 300 kHz. In this paper we will present Lamb wave data on a number of parts using a robust, adaptive receiver based on photo-emf detection. This receiver has useful sensitivity down to at least 100 kHz, can process speckled beams and can be easily configured to measure both out-of-plane and in-plane motion with a single probe beam.

  12. European Heritage Landscapes. An Account of the Conference on Planning and Management in European Naturparke/Parcs Naturels/National Parks (U.K.) and Equivalent Category "C" Reserves (Losehill Hall, Castleton, England, September 26-30, 1977).

    ERIC Educational Resources Information Center

    Smith, Roland

    Presented are the proceedings of the Conference on Planning and Management in European National Parks and equivalent Category "C" reserves held at the Peak National Park Study Center, Castleton, England, in 1977. Fifty-two representatives from 16 countries focused on practical solutions to management and planning problems in national parks. (BT)

  13. 21 CFR 26.9 - Equivalence determination.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS... to in appendix D of this subpart, and a demonstrated pattern of consistent performance in accordance...

  14. 21 CFR 26.9 - Equivalence determination.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS... to in appendix D of this subpart, and a demonstrated pattern of consistent performance in accordance...

  15. 21 CFR 26.2 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS, AND CERTAIN... determination of the equivalence of the regulatory systems of the parties, which is the cornerstone of this...

  16. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).

  17. An approach for including the stiffness and damping of elastohydrodynamic point contacts in deep groove ball bearing equilibrium models

    NASA Astrophysics Data System (ADS)

    Nonato, Fábio; Cavalca, Katia L.

    2014-12-01

    This work presents a methodology for including the Elastohydrodynamic (EHD) film effects to a lateral vibration model of a deep groove ball bearing by using a novel approximation for the EHD contacts by a set of equivalent nonlinear spring and viscous damper. The fitting of the equivalent contact model used the results of a transient multi-level finite difference EHD algorithm to adjust the dynamic parameters. The comparison between the approximated model and the finite difference simulated results showed a suitable representation of the stationary and dynamic contact behaviors. The linear damping hypothesis could be shown as a rough representation of the actual hysteretic behavior of the EHD contact. Nevertheless, the overall accuracy of the model was not impaired by the use of such approximation. Further on, the inclusion of the equivalent EHD contact model is equated for both the restoring and the dissipative components of the bearing's lateral dynamics. The derived model was used to investigate the effects of the rolling element bearing lubrication on the vibration response of a rotor's lumped parameter model. The fluid film stiffening effect, previously only observable by experimentation, could be quantified using the proposed model, as well as the portion of the bearing damping provided by the EHD fluid film. Results from a laboratory rotor-bearing test rig were used to indirectly validate the proposed contact approximation. A finite element model of the rotor accounting for the lubricated bearing formulation adequately portrayed the frequency content of the bearing orbits observed on the test rig.

  18. Analytical Design of Terminally Guided Missiles.

    DTIC Science & Technology

    1980-01-02

    Equivalent Dominant Poles and Zeros Using Industrial Specifications," Trans. on Industrial Electronics and Control Instrumentation, Vol. IECI-26, No...The relaxation of the sampling period requirement and the flexibility of our new method facilitate the practical industrial implementation and...with the Guidance and Control Directorate, U.S. Army Missile Command, Redstone Arsenal, Alabama 35809. I. INTRODUCTION Most practical industrial circuits

  19. Music Education from Birth to Five: An Examination of Early Childhood Educators' Music Teaching Practices

    ERIC Educational Resources Information Center

    Bolduc, Jonathan; Evrard, Melanie

    2017-01-01

    Children from birth to five are generally enthusiastic about music. However, because many early-childhood educators (ECEs) feel that they have insufficient knowledge to foster musical development, music education practices are not equivalent across ECEs. This study aimed to identify and determine the frequency of music activities used by ECEs. In…

  20. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  1. EG Andromedae: A New Orbit and Additional Evidence for a Photoionized Wind

    NASA Astrophysics Data System (ADS)

    Kenyon, Scott J.; Garcia, Michael R.

    2016-07-01

    We analyze a roughly 20 yr set of spectroscopic observations for the symbiotic binary EG And. Radial velocities derived from echelle spectra are best fit with a circular orbit having an orbital period of P = 483.3 ± 1.6 days and semi-amplitude K = 7.34 ± 0.07 km s-1. Combined with previous data, these observations rule out an elliptical orbit at the 10σ level. Equivalent widths of H I Balmer emission lines and various absorption features vary in phase with the orbital period. Relative to the radius of the red giant primary, the apparent size of the H II region is consistent with a model where a hot secondary star with effective temperature T_h ≈ 75,000 K ionizes the wind from the red giant.
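    A circular-orbit fit of this kind reduces to fitting a single sinusoid for the period, semi-amplitude, and systemic velocity. The sketch below does so with scipy on synthetic velocities generated from the quoted P and K; the epochs, noise level, systemic velocity, and phase are assumptions, not the published data.

```python
# Hedged sketch: circular-orbit radial-velocity fit v(t) = gamma + K*sin(2*pi*(t - t0)/P).
# Synthetic data built around the quoted P = 483.3 d and K = 7.34 km/s; epochs, noise,
# systemic velocity gamma and phase are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
P_true, K_true, gamma_true, t0_true = 483.3, 7.34, 0.0, 120.0

t = np.sort(rng.uniform(0.0, 20 * 365.25, 80))           # ~20 yr of epochs (days), assumed
v = gamma_true + K_true * np.sin(2 * np.pi * (t - t0_true) / P_true)
v += rng.normal(0.0, 0.3, t.size)                        # assumed 0.3 km/s measurement noise

def rv(t, P, K, gamma, t0):
    return gamma + K * np.sin(2 * np.pi * (t - t0) / P)

p0 = [480.0, 7.0, np.median(v), 100.0]                   # starting guesses near expected values
popt, pcov = curve_fit(rv, t, v, p0=p0)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["P", "K", "gamma", "t0"], popt, perr):
    print(f"{name:5s} = {val:10.3f} +/- {err:.3f}")
```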

  2. Design of Compact Wilkinson Power Divider with Harmonic Suppression using T-Shaped Resonators

    NASA Astrophysics Data System (ADS)

    Siahkamari, Hesam; Yasoubi, Zahra; Jahanbakhshi, Maryam; Mousavi, Seyed Mohammad Hadi; Siahkamari, Payam; Nouri, Mohammad Ehsan; Azami, Sajad; Azadi, Rasoul

    2018-04-01

    A novel, size-reduced Wilkinson power divider with harmonic suppression is designed by embedding two identical T-shaped resonators in the conventional Wilkinson power divider. Moreover, the LC equivalent circuit and its relevant formulas are provided. To substantiate the functionality and soundness of the design, a microstrip implementation operating at 1 GHz with second- to eighth-harmonic suppression is developed. The proposed circuit is considerably smaller than the conventional circuit (roughly 55% of its size). Simulation and measurement results for the proposed scheme, which are highly consistent with one another, indicate an insertion loss of about 3.1 dB, an input return loss of 20 dB, and an isolation of 20 dB, while sustaining high power-handling capability over the Wilkinson power divider.
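    For orientation, the sketch below computes the textbook element values of a conventional 1 GHz Wilkinson divider and a lumped LC pi-equivalent of each quarter-wave arm; these are standard starting values, not the paper's T-shaped-resonator design equations.

```python
# Hedged sketch: textbook values for a conventional 1 GHz Wilkinson divider and a lumped
# LC pi-equivalent of each quarter-wave arm (series L = Z/omega, shunt C = 1/(omega*Z)).
# Standard starting values only, not the T-shaped-resonator formulas from the paper.
import math

Z0 = 50.0                # system impedance (ohm)
f0 = 1.0e9               # design frequency (Hz)
omega = 2 * math.pi * f0

Z_arm = Z0 * math.sqrt(2.0)      # quarter-wave arm impedance, ~70.7 ohm
R_iso = 2.0 * Z0                 # isolation resistor, 100 ohm

L_series = Z_arm / omega         # series inductor of the pi-equivalent
C_shunt = 1.0 / (omega * Z_arm)  # each shunt capacitor of the pi-equivalent

print(f"arm impedance   : {Z_arm:6.1f} ohm")
print(f"isolation R     : {R_iso:6.1f} ohm")
print(f"pi-equivalent L : {L_series * 1e9:6.2f} nH")
print(f"pi-equivalent C : {C_shunt * 1e12:6.2f} pF")
```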

  3. A comparison of upper mantle subcontinental electrical conductivity for North America, Europe, and Asia.

    USGS Publications Warehouse

    Campbell, W.H.; Schiffmacher, E.R.

    1986-01-01

    Spherical harmonic analysis coefficients of the external and internal parts of the quiet-day geomagnetic field variations (Sq), separated for the N American, European, Central Asian and E Asian regions, were used to determine conductivity profiles to depths of about 600 km by the Schmucker equivalent-substitute conductor method. All 3 regions showed a roughly exponential increase of conductivity with depth. Distinct discontinuities seemed to be evident near 255-300 km and near 450-600 km. Regional differences in the conductivity profiles were shown by the functional fittings to the data. For depths less than about 275 km, the N American conductivities seemed to be significantly higher than the other regions. For depths greater than about 300 km, the E Asian conductivities were largest. -Authors

  4. The Iron Abundance of IOTA Herculis From Ultraviolet Iron Lines

    NASA Astrophysics Data System (ADS)

    Grigsby, J.; Mulliss, C.; Baer, G.

    1995-03-01

    We have obtained (Adelman 1992, 1993, private communication) coadded, high-resolution IUE spectra of Iota Herculis (B3 IV) in both short wavelength (SWP) and long wavelength (LWP) regions. The spectra span the ultraviolet spectrum from 110-300 nm and have a SNR of roughly 30-50; they are described in Adelman et al. (1993, ApJ 419, 276). Abundance indicators were 54 lines of Fe II and 26 lines of Fe III whose atomic parameters have been measured in the laboratory. LTE synthetic spectra for comparison with observations were produced with the Kurucz model atmosphere and spectral synthesis codes ATLAS9/SYNTHE (Kurucz 1979, ApJS 40, 1; Kurucz and Avrett 1981, SAO Special Report 391). Model parameters were chosen from the literature: effective temperature = 17500 K, log g = 3.75, v sin i = 11 km/s, and turbulent velocity = 0 km/s (Peters and Polidan 1985, in IAU Symposium 111, ed. D. S. Hayes et al. (Dordrecht: Reidel), 417). We determined the equivalent widths of the chosen lines by fitting Gaussian profiles to the lines and by measuring the equivalent widths of the Gaussians. We derived abundances by fitting a straight line to a plot of observed equivalent widths vs. synthetic equivalent widths; we adjusted the iron abundance of the models until a slope of unity was achieved. The abundances derived from the different ionization stages are in agreement: Fe II lines indicate an iron abundance that is 34 +15/-10% of the solar value ([Fe/H] = -0.47 +0.16/-0.15 dex), while from Fe III lines we obtain 34 +/- 10% ([Fe/H] = -0.47 +0.11/-0.15 dex). A search of the literature suggests that no previous investigations of this star's iron abundance have found agreement between the different ionization stages. We thank Saul Adelman for his generous assistance, and the Faculty Research Fund Board of Wittenberg University for support of this research.
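    Measuring an equivalent width from a Gaussian fit, as described above, amounts to integrating the fitted absorption: for a continuum-normalized line of depth d and standard deviation σ, EW = d·σ·√(2π). The sketch below reproduces that step on a synthetic line; the wavelength grid, line depth, and noise level are illustrative assumptions, not values from these spectra.

```python
# Hedged sketch: equivalent width of an absorption line from a Gaussian fit to a
# continuum-normalized profile. EW = (depth/continuum) * sigma * sqrt(2*pi). Synthetic line.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
wl = np.linspace(259.0, 261.0, 400)              # wavelength grid (nm), assumed
depth_true, center_true, sigma_true = 0.35, 260.0, 0.02

def profile(wl, depth, center, sigma, continuum):
    return continuum - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

flux = profile(wl, depth_true, center_true, sigma_true, 1.0)
flux += rng.normal(0.0, 0.02, wl.size)           # noise giving SNR ~ 50, similar to the spectra above

popt, _ = curve_fit(profile, wl, flux, p0=[0.3, 260.0, 0.03, 1.0])
depth, center, sigma, cont = popt
ew = depth / cont * abs(sigma) * np.sqrt(2 * np.pi)    # equivalent width in nm
ew_true = depth_true * sigma_true * np.sqrt(2 * np.pi)
print(f"fitted EW = {ew * 1000:.1f} pm (true {ew_true * 1000:.1f} pm)")
```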

  5. Sediment movement from forest road systems-roads: a major contributor to erosion and stream sedimentation

    Treesearch

    Johnny M. Grace

    2002-01-01

    Nonpoint source pollution is a major concern related to natural resource management throughout the United States. Undisturbed forest lands typically have minimal erosion, less than 0.13 ton/acre (0.30 ton/hectare), due to the increased cover and surface roughness found in these areas. However, disturbances caused by forest management practices can result in...

  6. Forum: The Future of Instructional Communication. Guns on Campus: Creating Research to Inform Practice

    ERIC Educational Resources Information Center

    Horan, Sean M.; Bryant, Leah E.

    2017-01-01

    This essay encourages research on the presence of guns in college classrooms. Readers are asked to consider that around 11 million individuals have concealed carry permits (Crime Prevention Research Center, 2014), and roughly 20% of states allow concealed guns on campus (e.g., Anderson, 2016). Given the centrality of guns in our culture, and…

  7. Implementation of a roughness element to trip transition in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Boudet, J.; Monier, J.-F.; Gao, F.

    2015-02-01

    In aerodynamics, the laminar or turbulent regime of a boundary layer has a strong influence on friction or heat transfer. In practical applications, it is sometimes necessary to trip the transition to turbulence, and a common way is by use of a roughness element (e.g. a step) on the wall. The present paper is concerned with the numerical implementation of such a trip in large-eddy simulations. The study is carried out on a flat-plate boundary layer configuration, with Reynolds number Re_x = 1.3×10^6. First, this work brings the opportunity to introduce a practical methodology to assess convergence in large-eddy simulations. Second, concerning the trip implementation, a volume source term is proposed and is shown to yield a smoother and faster transition than a grid step. Moreover, it is easier to implement and more adaptable. Finally, two subgrid-scale models are tested: the WALE model of Nicoud and Ducros (Flow Turbul. Combust., vol. 62, 1999) and the shear-improved Smagorinsky model of Lévêque et al. (J. Fluid Mech., vol. 570, 2007). Both models allow transition, but the former appears to yield a faster transition and a better prediction of friction in the turbulent regime.

  8. Faraday instability on patterned surfaces

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Rubinstein, Gregory; Jacobi, Ian; Stone, Howard

    2013-11-01

    We show how micro-scale surface patterning can be used to control the onset of the Faraday instability in thin liquid films. It is well known that when a liquid film on a planar substrate is subject to sufficient vibrational accelerations, the free surface destabilizes, exhibiting a family of non-linear standing waves. This instability remains a canonical problem in the study of spontaneous pattern formation, but also has practical uses. For example, the surface waves induced by the Faraday instability have been studied as a means of enhanced damping for mechanical vibrations (Genevaux et al. 2009). Also the streaming within the unstable layer has been used as a method for distributing heterogeneous cell cultures on growth medium (Takagi et al. 2002). In each of these applications, the roughness of the substrate significantly affects the unstable flow field. We consider the effect of patterned substrates on the onset and behavior of the Faraday instability over a range of pattern geometries and feature heights where the liquid layer is thicker than the pattern height. Also, we describe a physical model for the influence of patterned roughness on the destabilization of a liquid layer in order to improve the design of practical systems which exploit the Faraday instability.

  9. Water Quality Assessment in the Harbin Reach of the Songhuajiang River (China) Based on a Fuzzy Rough Set and an Attribute Recognition Theoretical Model

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2014-01-01

    A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
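    The entropy-weighting step used alongside the attribute recognition model is straightforward to reproduce in outline: parameters whose normalized values vary more across samples receive larger weights. The sketch below illustrates only that step on a synthetic data matrix for the Reduct A parameters; it is not the study's dataset, nor its fuzzy rough set reduction or attribute recognition model.

```python
# Hedged sketch of the entropy weighting step: weights for monitoring parameters are
# derived from how much each parameter varies across samples. The 10x4 data matrix
# (samples x Reduct A parameters) is synthetic, not the study's measurements.
import numpy as np

rng = np.random.default_rng(4)
params = ["BOD5", "NH3-N", "TP", "F. coli"]
X = rng.uniform(0.2, 1.0, size=(10, len(params)))    # assumed normalized monthly measurements

# Normalize each column to proportions, then compute Shannon entropy per parameter.
P = X / X.sum(axis=0)
n = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)

# Parameters with lower entropy (more variation between samples) get larger weights.
weights = (1.0 - entropy) / (1.0 - entropy).sum()
for name, w in zip(params, weights):
    print(f"{name:8s} weight = {w:.3f}")
```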

  10. 21 CFR 26.10 - Regulatory authorities not listed as currently equivalent.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS, AND CERTAIN MEDICAL DEVICE PRODUCT EVALUATION REPORTS: UNITED STATES AND THE...

  11. 14 CFR 29.1323 - Airspeed indicating system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... minimum practicable instrument calibration error when the corresponding pitot and static pressures are... pitot tube or an equivalent means of preventing malfunction due to icing. [Doc. No. 5084, 29 FR 16150...

  12. Accidental needle sticks, the Occupational Safety and Health Administration, and the fallacy of public policy.

    PubMed

    Wolf, Bruce L; Marks, Albert; Fahrenholz, John M

    2006-07-01

    Current Occupational Safety and Health Administration (OSHA) guidelines mandate the use of safety needles when allergy injections are given. Safety needles for intradermal testing remain optional. Whether safety needles reduce the number of accidental needle sticks (ANSs) in the outpatient setting has yet to be proven. To determine the rate of ANSs with new (safety) needles vs old needles used in allergy immunotherapy and intradermal testing. Allergy practices from 22 states were surveyed by e-mail. Seventy practices (28%) responded to the survey. Twice as many ANSs occurred in practices giving immunotherapy when using new needles vs old needles (P < .01). The rate of ANSs was roughly the same for intradermal testing with new needles vs old needles. These findings further question whether OSHA's guidelines for safety needle use in outpatient practice need revision and if allergy practices might be excluded from the requirement to use safety needles.

  13. Mapping Farming Practices in Belgian Intensive Cropping Systems from Sentinel-1 SAR Time Series

    NASA Astrophysics Data System (ADS)

    Chome, G.; Baret, P. V.; Defourny, P.

    2016-08-01

    The environmental impact of the so-called conventional farming system calls for new farming practices reducing negative externalities. Emerging farming practices such as no-till and new inter-cropping management are promising tracks. The development of methods to characterize crop management across an entire region and to understand their spatial dimension offers opportunities to accompany the transition towards a more sustainable agriculture. This research takes advantage of the unmatched polarimetric and temporal resolutions of Sentinel-1 SAR C-band to develop a method to identify farming practices at the parcel level. To this end, the detection of changes in backscattering due to surface roughness modification (tillage, inter-crop cover destruction, ...) is used to detect the farming management. The final results are compared to a reference dataset collected through an intensive field campaign. Finally, the performance is discussed in the perspective of monitoring cropping-system practices through remote sensing.

  14. A practical perspective on remedial ethics: Minnesota Board of Dentistry.

    PubMed

    Mensing, Candace; Shragg, Marshall

    2009-01-01

    The President and Executive Director of the Minnesota Board of Dentistry describe how the Bebeau course in ethics, for dentists referred because of ethical lapses, is used as part of the disciplinary process in the state. It is understood that breaches of ethical standards are a complex phenomenon, often engaged in by practitioners who know that they are doing wrong but nevertheless choose to do so. Typical patterns of transgression that result in referral to Dr. Bebeau's course include inappropriate billing practices, improper relations with staff or patients, questionable advertising, substandard care, "rough behavior," and gaps in infection control.

  15. Restriction of neck flexion using soft cervical collars: a preliminary study

    PubMed Central

    Aker, Peter D; Randoll, Martine; Rheault, Chantal; O’Connor, Sandra

    1991-01-01

    This study investigates the use of dropped neck flexion as a manoeuvre to test the restrictive abilities of two different types of soft collars, an Airway soft cervical collar and a handmade cervical rough. The range of neck flexion of 40 asymptomatic subjects aged 20-29 was assessed, both with and without collar wear, using a Spinal Rangiometer. Dropped neck flexion is described as possibly being more representative of the type of movement that a patient with neck pain will undergo, and hence a more useful manoeuvre to employ when testing for the restrictive abilities of soft cervical collars. The mean dropped flexion was 64 degrees without collar wear, 58 degrees with the Airway soft collar, and 34 degrees with the cervical rough. Only the cervical rough provided both statistically (p < 0.001) and clinically (> 15°) significant restriction of dropped neck flexion. The comfort, preparation time, and ease of application of each of these collars is not addressed in this study, and may reflect on use in clinical practice. This preliminary study provides insight and pilot data for future studies in this area.

  16. Investigation of turbulent wedges generated by different single surface roughness elements

    NASA Astrophysics Data System (ADS)

    Traphan, Dominik; Meinlschmidt, Peter; Lutz, Otto; Peinke, Joachim; Gülker, Gerd

    2013-11-01

    It is known that small faults on rotor blades of wind turbines can cause significant power loss. In order to better understand the governing physical effects, in this experimental study, the formation of a turbulent wedge over a flat plate induced by single surface roughness elements is under investigation. The experiments are performed at different ambient pressure gradients, thus allowing conclusions about the formation of a turbulent wedge over an airfoil. With respect to typical initial faults on operating airfoils, the roughness elements are modified in both size and shape (raised or recessed). Non-intrusive experimental methods, such as stereoscopic PIV and LDA, enable investigations based on temporally and spatially highly resolved velocity measurements. In this way, a spectral analysis of the turbulent boundary layer is performed and differences in coherent structures within the wedge are identified. These findings are correlated with global measurements of the wedge carried out by infrared thermography. This correlation aims to enable distinguishing the cause and main properties of a turbulent wedge by the easily applicable method of infrared thermography, which is of practical relevance in the field of condition monitoring of wind turbines.

  17. Perturbation Theory for Scattering from Multilayers with Randomly Rough Fractal Interfaces: Remote Sensing Applications.

    PubMed

    Imperatore, Pasquale; Iodice, Antonio; Riccio, Daniele

    2017-12-27

    A general, approximate perturbation method, able to provide closed-form expressions of scattering from a layered structure with an arbitrary number of rough interfaces, has been recently developed. Such a method provides a unique tool for the characterization of radar response patterns of natural rough multilayers. In order to show that, here, for the first time in a journal paper, we describe the application of the developed perturbation theory to fractal interfaces; we then employ the perturbative method solution to analyze the scattering from real-world layered structures of practical interest in remote sensing applications. We focus on the dependence of normalized radar cross section on geometrical and physical properties of the considered scenarios, and we choose two classes of natural stratifications: wet paleosoil covered by a low-loss dry sand layer and a sea-ice layer above water with dry snow cover. Results are in accordance with the experimental evidence available in the literature for the low-loss dry sand layer, and they may provide useful indications about the actual ability of remote sensing instruments to perform sub-surface sensing for different sensor and scene parameters.

  18. Tool wear compensation scheme for DTM

    NASA Astrophysics Data System (ADS)

    Sandeep, K.; Rao, U. S.; Balasubramaniam, R.

    2018-04-01

    This paper aims to monitor tool wear in diamond turn machining (DTM), assess the effects of tool wear on the accuracy of the machined component, and develop a compensation methodology to enhance the size and shape accuracy of a hemispherical cup. To find the change in the centre and radius of the tool with increasing tool wear, a MATLAB program is used. In practice, x-offsets are readjusted by the DTM operator to obtain the desired accuracy in the cup; the results of the theoretical model show that the changes in radius and z-offset are insignificant, whereas the x-offset is proportional to the tool wear, which is what is assumed when resetting the tool offset. Since the tool profile could not be measured directly, the program was applied to the cup profile data. If no error due to the slide and spindle of the DTM is assumed, then any wear in the tool will be reflected in the cup profile. Because the cup data contain surface roughness, random noise similar to surface waviness is added. It is observed that surface roughness affects the centre and radius, but the pattern of centre shift with increasing tool wear remains similar to the ideal condition, i.e. without surface roughness.
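
    As a rough illustration of the centre-and-radius estimation described above, an algebraic (Kasa) least-squares circle fit to sampled profile points is sketched below in Python. This is a generic sketch, not the authors' MATLAB program; the nominal radius and noise level are hypothetical.

      import numpy as np

      def fit_circle(x, z):
          """Algebraic (Kasa) least-squares circle fit: returns centre (a, b) and radius r."""
          # Solve x^2 + z^2 = 2*a*x + 2*b*z + c in the least-squares sense.
          A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
          y = x**2 + z**2
          (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
          return a, b, np.sqrt(c + a**2 + b**2)

      # Hypothetical hemispherical-cup profile: nominal radius 10 mm plus waviness-like noise.
      theta = np.linspace(0.05, np.pi - 0.05, 400)
      x = 10.0 * np.cos(theta) + 0.002 * np.random.randn(theta.size)
      z = 10.0 * np.sin(theta) + 0.002 * np.random.randn(theta.size)
      print(fit_circle(x, z))   # centre near (0, 0), radius near 10 mm

    Tracking how the fitted centre and radius drift between successive cups would then play the role of the wear indicator discussed in the abstract.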

  19. Perturbation Theory for Scattering from Multilayers with Randomly Rough Fractal Interfaces: Remote Sensing Applications

    PubMed Central

    2017-01-01

    A general, approximate perturbation method, able to provide closed-form expressions of scattering from a layered structure with an arbitrary number of rough interfaces, has been recently developed. Such a method provides a unique tool for the characterization of radar response patterns of natural rough multilayers. In order to show that, here, for the first time in a journal paper, we describe the application of the developed perturbation theory to fractal interfaces; we then employ the perturbative method solution to analyze the scattering from real-world layered structures of practical interest in remote sensing applications. We focus on the dependence of normalized radar cross section on geometrical and physical properties of the considered scenarios, and we choose two classes of natural stratifications: wet paleosoil covered by a low-loss dry sand layer and a sea-ice layer above water with dry snow cover. Results are in accordance with the experimental evidence available in the literature for the low-loss dry sand layer, and they may provide useful indications about the actual ability of remote sensing instruments to perform sub-surface sensing for different sensor and scene parameters. PMID:29280979

  20. Update to core reporting practices in structural equation modeling.

    PubMed

    Schreiber, James B

    This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling." As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Effects of Multimedia on Knowledge, Understanding, Skills, Practice and Confidence in Environmental Sustainability: A Non-Equivalent Pre-Test-Post-Test, Quasi Experimental Design

    ERIC Educational Resources Information Center

    Jena, Ananta Kumar; Bhattacharjee, Satarupa; Langthasa, Pimily

    2015-01-01

    The study aimed to evaluate the outcomes for local community members, secondary school students, and university students who participated in the multimedia programme with reference to knowledge, understanding, skills, practice, and confidence in environmental sustainability. About two hundred students participated in this multimedia programme.…

  2. State government regulation of forestry practices applied to nonfederal forests: extent and intensity of agency involvement

    Treesearch

    Paul V. Ellefson; Michael A. Kilgore; James E. Granskog

    2006-01-01

    In 2003, 276 state governmental agencies regulated forestry practices applied to nonfederal forests. Fifty-four percent of these agencies were moderately to extensively involved in such regulation, and 68% engaged in moderate to extensive regulatory coordination with a state's lead forestry agency. The agencies employed an estimated 1,047 full-time equivalents (...

  3. [Applications of habitat equivalency analysis in ecological damage assessment of oil spill incident].

    PubMed

    Yang, Yin; Han, Da-xiong; Wang, Hai-yan

    2011-08-01

    Habitat equivalency analysis (HEA) is one of the methods commonly used by the U.S. National Oceanic and Atmospheric Administration in natural resources damage assessment, but it is rarely applied in China. Based on the theory of HEA and the assessment practices of domestic oil spill incidents, a modification of the HEA was made in this paper and applied to calculate the habitat value in oil spill incidents. According to the data collected from an oil spill incident in China, the modified HEA was applied in a case study to scale the compensatory restoration. By introducing an ecological service equivalent factor to convert between different habitat types, the value of the injured habitats in the ecological damage assessment of the oil spill incident was obtained.
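
    For orientation, the core HEA balance (in a generic, schematic form, not the paper's modified procedure) equates the discounted service losses at the injured site with the discounted service gains from compensatory restoration:

      A_J \sum_t (1+r)^{-t} L_t \;=\; A_R \sum_t (1+r)^{-t} G_t
      \quad\Longrightarrow\quad
      A_R \;=\; A_J \,\frac{\sum_t (1+r)^{-t} L_t}{\sum_t (1+r)^{-t} G_t},

    where A_J is the injured area, L_t the fractional service loss in year t, G_t the fractional service gain per unit of restored habitat, A_R the required compensatory-restoration area, and r the discount rate. The ecological service equivalent factor mentioned above would enter as an additional multiplier when the injured and restored habitats are of different types.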

  4. Threshold flux-controlled memristor model and its equivalent circuit implementation

    NASA Astrophysics Data System (ADS)

    Wu, Hua-Gan; Bao, Bo-Cheng; Chen, Mo

    2014-11-01

    Modeling a memristor is an effective way to explore memristor properties, because memristor devices are still not commercially available to most researchers. In this paper, a physical memristive device is assumed to exist whose ionic drift direction is perpendicular to the direction of the applied voltage; based on this assumption, and by analogy with the HP charge-controlled memristor model, a novel threshold flux-controlled memristor model with a window function is proposed. The fingerprints of the proposed model are analyzed. In particular, a practical equivalent circuit of the proposed model is realized, from which the corresponding experimental fingerprints are captured. The equivalent circuit of the threshold memristor model is appropriate for various memristor-based breadboard experiments.
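
    For context only, a generic flux-controlled memristor and one common way of adding a threshold can be written as follows; this is a textbook-style sketch, not the authors' specific model, which additionally employs a window function:

      i(t) = W(\varphi)\, v(t), \qquad W(\varphi) = \frac{dq(\varphi)}{d\varphi},
      \qquad
      \frac{d\varphi}{dt} =
      \begin{cases}
        v(t), & |v(t)| > V_{\mathrm{th}},\\
        0, & \text{otherwise,}
      \end{cases}

    where W is the memductance, V_th is an assumed activation threshold below which the device state is effectively frozen, and φ here denotes the internal state (an "effective flux") rather than the full time integral of the voltage.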

  5. Fabrication and characterization of medical grade polyurethane composite catheters for near-infrared imaging.

    PubMed

    Stevenson, André T; Reese, Laura M; Hill, Tanner K; McGuire, Jeffrey; Mohs, Aaron M; Shekhar, Raj; Bickford, Lissett R; Whittington, Abby R

    2015-06-01

    Peripherally inserted central catheters (PICCs) are hollow polymeric tubes that transport nutrients, blood and medications to neonates. To determine proper PICC placement, frequent X-ray imaging of neonates is performed. Because X-rays pose severe health risks to neonates, safer alternatives are needed. We hypothesize that near infrared (NIR) polymer composites can be fabricated into catheters by incorporating a fluorescent dye (IRDye 800CW) and visualized using NIR imaging. To fabricate catheters, polymer and dye are dry mixed and pressed, sectioned, and extruded to produce hollow tubes. We analyzed surface roughness, stiffness, dye retention, NIR contrast intensity, and biocompatibility. The extrusion process did not significantly alter the mechanical properties of the polymer composites. Over a period of 23 days, only 6.35 ± 5.08% dye leached out of catheters. The addition of 0.025 wt% dye resulted in a 14-fold contrast enhancement producing clear PICC images at 1 cm under a tissue equivalent. The addition of IRDye 800CW did not alter the biocompatibility of the polymer and did not increase adhesion of cells to the surface. We successfully demonstrated that catheters can be imaged without the use of harmful radiation and still maintain the same properties as the unaltered medical grade equivalent. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Does temporal contiguity moderate contingency learning in a speeded performance task?

    PubMed

    Schmidt, James R; De Houwer, Jan

    2012-01-01

    In four experiments, we varied the time between the onset of distracting nonwords and target colour words in a word-word version of the colour-word contingency learning paradigm. Contingencies were created by pairing a distractor nonword more often with one target colour word than with other colour words. A contingency effect corresponds to faster responses to the target colour word on high-contingency trials (i.e., distractor nonword followed by the target colour word with which it appears most often) than on low-contingency trials (i.e., distractor nonword followed by a target colour word with which it appears only occasionally). Roughly equivalent-sized contingency effects were found at stimulus-onset asynchronies (SOAs) of 50, 250, and 450 ms in Experiment 1, and 50, 500, and 1,000 ms in Experiment 2. In Experiment 3, a contingency effect was observed at SOAs of -50, -200, and -350 ms. In Experiment 4, interstimulus interval (ISI) was varied along with SOA, and learning was equivalent for 200-, 700-, and 1,200-ms SOAs. Together, these experiments suggest that the distracting stimulus does not need to be presented in close temporal contiguity with the response to induce learning. Relations to past research on causal judgement and implications for further contingency learning research are discussed.

  7. Safety and Tolerability of Essential Oil from Cinnamomum zeylanicum Blume Leaves with Action on Oral Candidosis and Its Effect on the Physical Properties of the Acrylic Resin

    PubMed Central

    Oliveira, Julyana de Araújo; da Silva, Ingrid Carla Guedes; Trindade, Leonardo Antunes; Lima, Edeltrudes Oliveira; Carlo, Hugo Lemes; Cavalcanti, Alessandro Leite; de Castro, Ricardo Dias

    2014-01-01

    The anti-Candida activity of essential oil from Cinnamomum zeylanicum Blume, as well as its effect on the roughness and hardness of the acrylic resin used in dental prostheses, was assessed. The safety and tolerability of the test product were assessed through a phase I clinical trial involving users of removable dentures. Minimum inhibitory concentration (MIC) and minimum fungicidal concentrations (MFC) were determined against twelve Candida strains. Acrylic resin specimens were exposed to artificial saliva (GI), C. zeylanicum (GII), and nystatin (GIII) for 15 days. Data were submitted to ANOVA and Tukey posttest (α = 5%). For the phase I clinical trial, 15 healthy patients used solution of C. zeylanicum at MIC (15 days, 3 times a day) and were submitted to clinical and mycological examinations. C. zeylanicum showed anti-Candida activity, with MIC = 625.0 µg/mL being equivalent to MFC. Nystatin caused greater increase in roughness and decreased the hardness of the material (P < 0.0001), with no significant differences between GI and GII. As regards the clinical trial, no adverse clinical signs were observed after intervention. The substance tested had a satisfactory level of safety and tolerability, supporting new advances involving the clinical use of essential oil from C. zeylanicum. PMID:25574178

  8. The Value of Wetlands in Protecting Southeast Louisiana from Hurricane Storm Surges

    PubMed Central

    Barbier, Edward B.; Georgiou, Ioannis Y.; Enchelmeyer, Brian; Reed, Denise J.

    2013-01-01

    The Indian Ocean tsunami in 2004 and Hurricanes Katrina and Rita in 2005 have spurred global interest in the role of coastal wetlands and vegetation in reducing storm surge and flood damages. Evidence that coastal wetlands reduce storm surge and attenuate waves is often cited in support of restoring Gulf Coast wetlands to protect coastal communities and property from hurricane damage. Yet interdisciplinary studies combining hydrodynamic and economic analysis to explore this relationship for temperate marshes in the Gulf are lacking. By combining hydrodynamic analysis of simulated hurricane storm surges and economic valuation of expected property damages, we show that the presence of coastal marshes and their vegetation has a demonstrable effect on reducing storm surge levels, thus generating significant values in terms of protecting property in southeast Louisiana. Simulations for four storms along a sea to land transect show that surge levels decline with wetland continuity and vegetation roughness. Regressions confirm that wetland continuity and vegetation along the transect are effective in reducing storm surge levels. A 0.1 increase in wetland continuity per meter reduces property damages for the average affected area analyzed in southeast Louisiana, which includes New Orleans, by $99-$133, and a 0.001 increase in vegetation roughness decreases damages by $24-$43. These reduced damages are equivalent to saving 3 to 5 and 1 to 2 properties per storm for the average area, respectively. PMID:23536815

  9. The value of wetlands in protecting southeast louisiana from hurricane storm surges.

    PubMed

    Barbier, Edward B; Georgiou, Ioannis Y; Enchelmeyer, Brian; Reed, Denise J

    2013-01-01

    The Indian Ocean tsunami in 2004 and Hurricanes Katrina and Rita in 2005 have spurred global interest in the role of coastal wetlands and vegetation in reducing storm surge and flood damages. Evidence that coastal wetlands reduce storm surge and attenuate waves is often cited in support of restoring Gulf Coast wetlands to protect coastal communities and property from hurricane damage. Yet interdisciplinary studies combining hydrodynamic and economic analysis to explore this relationship for temperate marshes in the Gulf are lacking. By combining hydrodynamic analysis of simulated hurricane storm surges and economic valuation of expected property damages, we show that the presence of coastal marshes and their vegetation has a demonstrable effect on reducing storm surge levels, thus generating significant values in terms of protecting property in southeast Louisiana. Simulations for four storms along a sea to land transect show that surge levels decline with wetland continuity and vegetation roughness. Regressions confirm that wetland continuity and vegetation along the transect are effective in reducing storm surge levels. A 0.1 increase in wetland continuity per meter reduces property damages for the average affected area analyzed in southeast Louisiana, which includes New Orleans, by $99-$133, and a 0.001 increase in vegetation roughness decreases damages by $24-$43. These reduced damages are equivalent to saving 3 to 5 and 1 to 2 properties per storm for the average area, respectively.

  10. Anterior urethral stricture review

    PubMed Central

    Stein, Marshall J.

    2013-01-01

    Male anterior urethral stricture disease is a commonly encountered condition that presents to many urologists. According to a National Practice Survey of Board Certified Urologists in the United States, most urologists treat on average 6-20 urethral strictures yearly. Many of those same urologists surveyed treat with repeated dilation or internal urethrotomy, despite continual recurrence of the urethral stricture. In point of fact, urethroplasty, despite its high success rate, is underutilized by many practicing urologists. Roughly half of practicing urologists do not perform urethroplasty in the United States. Clearly, the reconstructive ladder for urethral stricture management that was previously described in the literature may no longer apply in the modern era. The following article reviews the etiology, diagnosis, management and comparisons of treatment options for anterior urethral strictures. PMID:26816721

  11. Evaluating performance in three-dimensional fluorescence microscopy

    PubMed Central

    MURRAY, JOHN M; APPLETON, PAUL L; SWEDLOW, JASON R; WATERS, JENNIFER C

    2007-01-01

    In biological fluorescence microscopy, image contrast is often degraded by a high background arising from out of focus regions of the specimen. This background can be greatly reduced or eliminated by several modes of thick specimen microscopy, including techniques such as 3-D deconvolution and confocal. There has been a great deal of interest and some confusion about which of these methods is ‘better’, in principle or in practice. The motivation for the experiments reported here is to establish some rough guidelines for choosing the most appropriate method of microscopy for a given biological specimen. The approach is to compare the efficiency of photon collection, the image contrast and the signal-to-noise ratio achieved by the different methods at equivalent illumination, using a specimen in which the amount of out of focus background is adjustable over the range encountered with biological samples. We compared spot scanning confocal, spinning disk confocal and wide-field/deconvolution (WFD) microscopes and find that the ratio of out of focus background to in-focus signal can be used to predict which method of microscopy will provide the most useful image. We also find that the precision of measurements of net fluorescence yield is very much lower than expected for all modes of microscopy. Our analysis enabled a clear, quantitative delineation of the appropriate use of different imaging modes relative to the ratio of out-of-focus background to in-focus signal, and defines an upper limit to the useful range of the three most common modes of imaging. PMID:18045334
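
    A minimal way to see why the background-to-signal ratio is the deciding quantity is the photon-limited (Poisson) signal-to-noise ratio of a background-subtracted measurement; the sketch below is illustrative only, is not the calibration procedure used in the study, and the photon counts are assumed values.

      import numpy as np

      def photon_limited_snr(signal, background):
          """Poisson-limited SNR of an in-focus signal sitting on a known background (photon counts)."""
          # Var(total counts) = signal + background for Poisson statistics.
          return signal / np.sqrt(signal + background)

      signal = 1000.0                        # hypothetical in-focus photons per pixel
      for ratio in (0.1, 1.0, 10.0, 100.0):  # out-of-focus background / in-focus signal
          print(ratio, photon_limited_snr(signal, ratio * signal))

    As the ratio grows, the SNR of a wide-field measurement collapses even though the in-focus signal is unchanged, which is the regime in which optically sectioning (confocal) methods become preferable.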

  12. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
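
    To make the broadcast-minus-precise comparison concrete, a simplified global-average SISRE computation is sketched below. The radial and along-/cross-track weights are constellation-dependent; the weight values and the error time series used here are assumptions for illustration, not results of the study.

      import numpy as np

      def sisre(dr, da, dc, dt, w_r=0.98, w_ac=0.14):
          """Instantaneous SIS range error from orbit errors (radial dr, along-track da,
          cross-track dc) and clock error dt, all in metres; w_r, w_ac are assumed weights."""
          return np.sqrt((w_r * dr - dt) ** 2 + w_ac ** 2 * (da ** 2 + dc ** 2))

      # Hypothetical per-epoch broadcast-minus-precise differences (metres) for one satellite:
      dr = np.array([0.05, -0.10, 0.02])
      da = np.array([0.30, 0.25, -0.40])
      dc = np.array([0.20, -0.15, 0.10])
      dt = np.array([0.10, 0.05, -0.08])
      print(np.sqrt(np.mean(sisre(dr, da, dc, dt) ** 2)))   # RMS SISRE over the epochs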

  13. 45 CFR 60.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... its subdivision, a Board of Dentistry or its subdivision, or an equivalent body as determined by the... practice dentistry by a State (or who, without authority, holds himself or herself out to be so authorized...

  14. Equivalent circuit models for interpreting impedance perturbation spectroscopy data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell

    2004-07-01

    As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
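
    As one concrete example of the kind of model implied, the impedance of a Butterworth-Van Dyke equivalent circuit (a static capacitance C0 in parallel with a motional R1-L1-C1 branch), commonly used for piezoelectric elements near a resonance, can be evaluated directly; the component values below are hypothetical and this is not necessarily the specific circuit family used in the paper.

      import numpy as np

      def bvd_impedance(f, c0, r1, l1, c1):
          """Impedance of a Butterworth-Van Dyke circuit: C0 in parallel with series R1-L1-C1."""
          w = 2 * np.pi * f
          z_motional = r1 + 1j * w * l1 + 1 / (1j * w * c1)
          return 1 / (1j * w * c0 + 1 / z_motional)

      f = np.linspace(50e3, 150e3, 2001)   # Hz, hypothetical frequency sweep
      z = bvd_impedance(f, c0=10e-9, r1=50.0, l1=10e-3, c1=0.3e-9)
      print(f[np.argmin(np.abs(z))], f[np.argmax(np.abs(z))])   # series / parallel resonances

    Fitting measured impedance spectra to such a parametric model (e.g. by nonlinear regression, as noted above) turns shifts in the fitted R, L and C values into candidate indicators of stiffness change, mass loading or added damping.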

  15. Newark Kids Count 2007: A City Profile of Child Well-Being

    ERIC Educational Resources Information Center

    Association for Children of New Jersey, 2007

    2007-01-01

    For at least 40 years, Newark's name has been practically synonymous with poverty and crime. Its troubled image had roots in reality. For too many families, Newark has been a hard place to raise children. For too many children, it has been a rough place to grow up. But there are signs of change, as found in "Newark Kids Count 2007," an…

  16. Geomorphic, flood, and groundwater-flow characteristics of Bayfield Peninsula streams, Wisconsin, and implications for brook-trout habitat

    USGS Publications Warehouse

    Fitzpatrick, Faith A.; Peppler, Marie C.; Saad, David A.; Pratt, Dennis M.; Lenz, Bernard N.

    2015-01-01

    Available brook-trout habitat is dependent on the locations of groundwater upwellings, the sizes of flood peaks, and sediment loads. Management practices that focus on reducing or slowing runoff from upland areas and increasing channel roughness have potential to reduce flood peaks, erosion, and sedimentation and improve brook-trout habitat in all Bayfield Peninsula streams.

  17. Preferences at the University of Virginia: Racial and Ethnic Preferences in Undergraduate Admissions, 1996 and 1999.

    ERIC Educational Resources Information Center

    Lerner, Robert; Nagai, Althea K.

    This study contains an analysis of admissions practices at the University of Virginia (UVA), originally reported in 1996 and updated for this report. It contains improved data from 1996 and additional data for 1999. Among the findings are that Whites and Asians admitted to the UVA have roughly the same verbal Scholastic Assessment Test (SAT)…

  18. Evaluating poverty grass (Danthonia spicata L.) for use in tees, fairways, or rough areas in golf courses in the midwest

    Treesearch

    Nadia E. Navarrete-Tindall; Brad Fresenburg; J.W. Van Sambeek

    2007-01-01

    Poverty grass (Danthonia spicata L.), a native, cool-season perennial bunchgrass with wide distribution in the United States, is being evaluated for its suitability for use on golf courses. The goal is to identify practices to improve seed germination and successfully establish field plots as monocultures or with other native species to mimic natural...

  19. Specialty Payment Model Opportunities and Assessment

    PubMed Central

    White, Chapin; Chan, Chris; Huckfeldt, Peter J.; Kofner, Aaron; Mulcahy, Andrew W.; Pollak, Julia; Popescu, Ioana; Timbie, Justin W.; Hussey, Peter S.

    2015-01-01

    Abstract This article describes the results of a simulation analysis of a payment model for specialty oncology services that is being developed for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare & Medicaid Services (CMS). CMS asked MITRE and RAND to conduct simulation analyses to preview some of the possible impacts of the payment model and to inform design decisions related to the model. The simulation analysis used an episode-level dataset based on Medicare fee-for-service (FFS) claims for historical oncology episodes provided to Medicare FFS beneficiaries in 2010. Under the proposed model, participating practices would continue to receive FFS payments, would also receive per-beneficiary per-month care management payments for episodes lasting up to six months, and would be eligible for performance-based payments based on per-episode spending for attributed episodes relative to a per-episode spending target. The simulation offers several insights into the proposed payment model for oncology: (1) The care management payments used in the simulation analysis—$960 total per six-month episode—represent only 4 percent of projected average total spending per episode (around $27,000 in 2016), but they are large relative to the FFS revenues of participating oncology practices, which are projected to be around $2,000 per oncology episode. By themselves, the care management payments would increase physician practices’ Medicare revenues by roughly 50 percent on average. This represents a substantial new outlay for the Medicare program and a substantial new source of revenues for oncology practices. (2) For the Medicare program to break even, participating oncology practices would have to reduce utilization and intensity by roughly 4 percent. (3) The break-even point can be reduced if the care management payments are reduced or if the performance-based payments are reduced. PMID:28083365
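
    A back-of-envelope reading of the figures quoted above (rounded, for illustration only):

      \$960 / \$27{,}000 \approx 3.6\%  \quad\text{(care management payment as a share of total episode spending)}
      \$960 / \$2{,}000  \approx 48\%   \quad\text{(increase relative to practices' FFS revenue per episode)}
      \$960 / \$27{,}000 \approx 4\%    \quad\text{(approximate spending reduction needed for Medicare to break even)}

    which is consistent with the roughly 4 percent and roughly 50 percent figures reported in the simulation.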

  20. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive power function and discuss sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose power up to 20%, depending on the value of the dispersion parameter.
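
    The role of the dispersion parameter can be seen from the negative binomial mean-variance relation; the sample-size scaling shown here is only a heuristic for a single rate comparison, not the three-arm power function derived in the paper. With mean \mu and dispersion \kappa,

      \mathrm{Var}(Y) = \mu + \kappa\,\mu^{2},
      \qquad
      \frac{n_{\mathrm{NB}}}{n_{\mathrm{Poisson}}} \;\approx\; \frac{\mu + \kappa\,\mu^{2}}{\mu} \;=\; 1 + \kappa\,\mu,

    so treating genuinely overdispersed counts as Poisson understates the required sample size (equivalently, overstates power) by a factor that grows with both \kappa and \mu.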

  1. MENTAL NURSING. LESSON PLANS PREPARED BY PRACTICAL NURSING INSTRUCTORS FOLLOWING JOINT CONFERENCE HELD AT THE UNIVERSITY OF TENNESSEE, KNOXVILLE.

    ERIC Educational Resources Information Center

    Tennessee State Board for Vocational Education, Murfreesboro. Vocational Curriculum Lab.

    THE LESSON PLANS FOR A UNIT ON MENTAL NURSING IN THE PRACTICAL NURSE EDUCATION PROGRAM WERE DEVELOPED BY A GROUP OF REGISTERED NURSES HOLDING TENNESSEE TEACHING CERTIFICATES. STUDENTS SELECTED FOR THE PROGRAM SHOULD BE HIGH SCHOOL GRADUATES OR EQUIVALENT. THE LESSONS DESIGNED FOR USE BY A REGISTERED NURSE CERTIFIED FOR TEACHING GIVE OBJECTIVES,…

  2. 41 CFR 102-74.185 - What heating and cooling policy must Federal agencies follow in Federal facilities?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... overall energy efficient and economical manner; (b) Maintain temperatures to maximize customer satisfaction by conforming to local commercial equivalent temperature levels and operating practices; (c) Set...

  3. 7 CFR 1450.213 - Levels and rates for establishment payments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... establishing non-woody perennial crops and woody perennial crops specified in the conservation plan, forest stewardship plan, or equivalent plan. (b) The average cost of performing a practice may be determined by CCC...

  4. Model helicopter performance degradation with simulated ice shapes

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Korkan, Kenneth D.

    1987-01-01

    An experimental program using a commercially available model helicopter has been conducted in the Texas A&M University Subsonic Wind Tunnel to investigate main rotor performance degradation due to generic ice. The simulated ice, including both primary and secondary formations, was scaled by chord from previously documented artificial ice accretions. Base and iced performance data were gathered as functions of fuselage incidence, blade collective pitch, main rotor rotational velocity, and freestream velocity. It was observed that the presence of simulated ice tends to decrease the lift to equivalent drag ratio, as well as thrust coefficient for the range of velocity ratios tested. Also, increases in torque coefficient due to the generic ice formations were observed. Evaluation of the data has indicated that the addition of roughness due to secondary ice formations is crucial for proper evaluation of the degradation in main rotor performance.

  5. Integrating Terrain Maps Into a Reactive Navigation Strategy

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Werger, Barry; Seraji, Homayoun

    2006-01-01

    An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
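
    A toy version of a fuzzy traversability rule is sketched below; the membership-function shapes, variable ranges and rule base are assumptions chosen for illustration and are not the ones used in the cited work.

      def tri(x, a, b, c):
          """Triangular membership function peaking at b over the support (a, c)."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def traversability(roughness, slope_deg):
          """Fuzzy traversability in [0, 1]: 1 = easy going, 0 = impassable (toy rule base)."""
          rough_high = tri(roughness, 0.4, 1.0, 1.6)      # roughness normalized to [0, 1]
          slope_steep = tri(slope_deg, 15.0, 30.0, 45.0)  # membership of "steep" slope
          return 1.0 - max(rough_high, slope_steep)       # fuzzy OR of the hazard conditions

      print(traversability(0.2, 5.0))    # smooth, nearly flat terrain -> close to 1
      print(traversability(0.9, 25.0))   # rough and fairly steep terrain -> much lower

    A regional map of such indices, one value per terrain cell, is the kind of data structure a reactive planner can consult when weighing candidate headings.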

  6. 1:6000 Scale (6K) Quadrangles developed by USEPA to Support Reconnaissance, and Tactical and Strategic Planning for Emergency Responses and Homeland Security Events (Region 9 Extract)

    EPA Pesticide Factsheets

    Reference quads for emergency response reconnaissance developed for use by the US Environmental Protection Agency. Grid cells are based on densification of the USGS Quarterquad (1:12,000 scale or 12K) grids for the continental United States, Alaska, Hawaii and Puerto Rico and are roughly equivalent to 1:6000 scale (6K) quadrangles approximately 2 miles long on each side. Note: This file is a regional subset that has been extracted from a national 6K quad file. Each regional extract includes a 20 mile buffer of tiles around each EPA Region. To access the national layer (size is greater than 80MB), go to https://edg.epa.gov/data/Public/OLEM/6kquads_epa.zip.

  7. Saturn Apollo Program

    NASA Image and Video Library

    1965-03-01

    The S-IC-T stage was hoisted into the S-IC static test stand at the Marshall Space Flight Center. The S-IC-T stage was a static test vehicle not intended for flight. It was ground tested repeatedly over a period of many months to prove the vehicle's propulsion system. The 280,000-pound stage, 138 feet long and 33 feet in diameter, housed the fuel and liquid oxygen tanks that held a total of 4,400,000 pounds of liquid oxygen and kerosene. The two tanks were connected by a 26-foot-long intertank section. Other parts of the booster included the forward skirt and the thrust structure, on which the engines were to be mounted. Five F-1 engines, each weighing 10 tons, gave the booster a total thrust of 7,500,000 pounds, roughly equivalent to 160 million horsepower.

  8. Saturn Apollo Program

    NASA Image and Video Library

    1965-03-01

    The S-IC-T stage is hoisted into the S-IC static test stand at the Marshall Space Flight Center. The S-IC-T stage is a static test vehicle not intended for flight. It was ground tested repeatedly over a period of many months proving the vehicle's propulsion system. The 280,000-pound stage, 138 feet long and 33 feet in diameter, houses the fuel and liquid oxygen tanks that hold a total of 4,400,000 pounds of liquid oxygen and kerosene. The two tanks are connected by a 26-foot-long intertank section. Other parts of the booster included the forward skirt and the thrust structure, on which the engines were to be mounted. Five F-1 engines, each weighing 10 tons, gave the booster a total thrust of 7,500,000 pounds, roughly equivalent to 160 million horsepower.

  9. Saturn Apollo Program

    NASA Image and Video Library

    1965-03-01

    The S-IC-T stage was hoisted into the S-IC Static Test Stand at the Marshall Space Flight Center. The S-IC-T stage was a static test vehicle, not intended for flight. It was ground tested repeatedly over a period of many months to prove the vehicle's propulsion system. The 280,000-pound stage, 138 feet long and 33 feet in diameter, housed the fuel and liquid oxygen tanks that held a total of 4,400,000 pounds of liquid oxygen and kerosene. The two tanks were connected by a 26-foot intertank section. Other parts of the booster included the forward skirt and the thrust structure, on which the engines were to be mounted. Five F-1 engines, each weighing 10 tons, gave the booster a total thrust of 7,500,000 pounds, roughly equivalent to 160 million horsepower.

  10. Using Rasch Analysis to Test the Cross-Cultural Item Equivalence of the Harvard Trauma Questionnaire and the Hopkins Symptom Checklist Across Vietnamese and Cambodian Immigrant Mothers

    PubMed Central

    Choi, Yoonsun; Mericle, Amy; Harachi, Tracy W.

    2012-01-01

    A major challenge in conducting assessments in ethnically and culturally diverse populations, especially using translated instruments, is the possibility that measures developed for a given construct in one particular group may not be assessing the same construct in other groups. Using a Rasch analysis, this study examined the item equivalence of two psychiatric measures, the Harvard Trauma Questionnaire (HTQ), measuring traumatic experience, and the Hopkins Symptom Checklist (HSCL), assessing depression symptoms across Vietnamese- and Cambodian American mothers, using data from the Cross-Cultural Families (CCF) Project. The majority of items were equivalent across the two groups, particularly on the HTQ. However, some items were endorsed differently by the two groups, and thus are not equivalent, suggesting Cambodian and Vietnamese immigrants may manifest certain aspects of trauma and depression differently. Implications of these similarities and differences for practice and the use of IRT in this arena are discussed. PMID:16385149

  11. A profile of Australian nuclear medicine technologist practice.

    PubMed

    Adams, Edwina J; Cox, Jennifer M; Adamson, Barbara J; Schofield, Deborah J

    2008-01-01

    Nuclear medicine in Australia has encountered significant change over the past 30 years, with a move to privately owned practices, technological advances and the transfer of education of the nuclear medicine technologist (NMT) from technical college apprenticeships to university degrees. Currently, shortages of nuclear medicine technologists are reported in some states of Australia. It is not known whether changes in NMT practice or the type of centre in which an NMT works have an influence on retention of staff. The primary objective of this survey was to establish a profile of NMT practice in Australia, with the aim of producing baseline data that could be used in further research to establish levels of retention and job satisfaction. Chief technologists in three states of Australia were invited to respond to a written questionnaire. The questionnaire included data about staffing levels, imaging modalities, procedures performed, and movement of staff. Findings presented will relate to the profile of practice data only. Forty-eight (54%) chief technologists responded to the questionnaire with 73% working in privately owned practices. The majority of centres employ up to two full-time equivalent nuclear medicine technologists and have two gamma cameras and one full-time equivalent nuclear medicine physician. Most centres perform a limited range of studies with bone scans predominating. More than half the centres make some use of a centralized radiopharmacy service. Further research is required to determine how these changes may impact on workplace satisfaction and in turn, on retention.

  12. Comparison of the Pentacam equivalent keratometry reading and IOL Master keratometry measurement in intraocular lens power calculations.

    PubMed

    Karunaratne, Nicholas

    2013-12-01

    To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master, Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  13. Surface properties of Ti-6Al-4V alloy part I: Surface roughness and apparent surface free energy.

    PubMed

    Yan, Yingdi; Chibowski, Emil; Szcześ, Aleksandra

    2017-01-01

    Titanium (Ti) and its alloys are the most often used implant materials in dental treatment and orthopedics. The topography and wettability of the surface play an important role in film formation, protein adhesion, subsequent osseointegration and even the service life of the inserted implant. In this paper, we prepared Ti-6Al-4V alloy samples using different smoothing and polishing materials as well as air plasma treatment, on which the contact angles of water, formamide and diiodomethane were measured. The apparent surface free energy was then calculated using four different approaches (CAH, LWAB, O-W and Neumann's Equation of State). From the LWAB approach the components of the surface free energy were obtained, which shed more light on the wetting properties of the sample surfaces. The surface roughness of the prepared samples was investigated with the help of an optical profilometer and AFM. It was of interest whether the surface roughness affects the apparent surface free energy. It was found that the polar interactions, the electron-donor parameter of the energy, and the work of water adhesion increased with decreasing roughness of the surfaces. Moreover, short (1 min) plasma treatment caused a decrease in the surface hydrophilic character, while longer (10 min) treatment caused a significant increase in the polar interactions and the work of water adhesion. Although the Ti-6Al-4V alloy has been investigated many times, to our knowledge no paper has so far been published in which the surface roughness and changes in the surface free energy of the alloy were compared quantitatively to such an extent. This approach delivers better knowledge of the surface properties of differently smoothed and polished samples, which may be helpful in facilitating cell adhesion, proliferation and mineralization. The results obtained are therefore also of potential practical significance. Copyright © 2016 Elsevier B.V. All rights reserved.
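
    To make one of the four approaches concrete, the Owens-Wendt (O-W) decomposition can be solved from the contact angles of two probe liquids. The liquid surface-tension components below are commonly tabulated literature values and the contact angles are hypothetical, so this is a sketch of the method rather than a reproduction of the paper's calculation.

      import numpy as np

      def owens_wendt(liquids, thetas_deg):
          """Solve gamma_L*(1 + cos(theta)) = 2*sqrt(gs_d*gl_d) + 2*sqrt(gs_p*gl_p)
          for the solid's dispersive (gs_d) and polar (gs_p) components, in mJ/m^2."""
          rows, rhs = [], []
          for (gl, gl_d, gl_p), th in zip(liquids, thetas_deg):
              rows.append([2 * np.sqrt(gl_d), 2 * np.sqrt(gl_p)])
              rhs.append(gl * (1 + np.cos(np.radians(th))))
          (sq_d, sq_p), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
          return sq_d**2, sq_p**2

      # (total, dispersive, polar) surface tension in mJ/m^2 -- assumed literature values.
      water = (72.8, 21.8, 51.0)
      diiodomethane = (50.8, 50.8, 0.0)
      print(owens_wendt([water, diiodomethane], thetas_deg=[75.0, 45.0]))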

  14. Statistics of zero crossings in rough interfaces with fractional elasticity

    NASA Astrophysics Data System (ADS)

    Zamorategui, Arturo L.; Lecomte, Vivien; Kolton, Alejandro B.

    2018-04-01

    We study numerically the distribution of zero crossings in one-dimensional elastic interfaces described by an overdamped Langevin dynamics with periodic boundary conditions. We model the elastic forces with a Riesz-Feller fractional Laplacian of order z = 1 + 2ζ, such that the interfaces spontaneously relax, with a dynamical exponent z, to a self-affine geometry with roughness exponent ζ. By continuously increasing from ζ = -1/2 (macroscopically flat interface described by independent Ornstein-Uhlenbeck processes [Phys. Rev. 36, 823 (1930), 10.1103/PhysRev.36.823]) to ζ = 3/2 (super-rough Mullins-Herring interface), three different regimes are identified: (I) -1/2 < ζ < 0, (II) 0 < ζ < 1, and (III) 1 < ζ < 3/2. Starting from a flat initial condition, the mean number of zeros of the discretized interface (I) decays exponentially in time and reaches an extensive value in the system size, or decays as a power law towards (II) a subextensive or (III) an intensive value. In the steady state, the distribution of intervals between zeros changes from an exponential decay in (I) to a power-law decay P(ℓ) ~ ℓ^(-γ) in (II) and (III). While in (II) γ = 1 - θ, with θ = 1 - ζ the steady-state persistence exponent, in (III) we obtain γ = 3 - 2ζ, different from the exponent γ = 1 expected from the prediction θ = 0 for infinite super-rough interfaces with ζ > 1. The effect on P(ℓ) of short-scale smoothening is also analyzed numerically and analytically. A tight relation between the mean interval, the mean width of the interface, and the density of zeros is also reported. The results drawn from our analysis of rough interfaces subject to particular boundary conditions or constraints, along with discretization effects, are relevant for the practical analysis of zeros in interface imaging experiments or in numerical analysis.

  15. PATHOGEN EQUIVALENCY COMMITTEE (MCEARD)

    EPA Science Inventory

    Science Questions:

    MYP Science Question: What is the current state of management practices for biosolids production and application, and how can those be made more effective?

    Research Questions: Are there innovative or alternative sludge disinfection processes that...

  16. Green Power Equivalency Calculator - Calculations and References

    EPA Pesticide Factsheets

    Green power products eligible to be certified by an independent third party against national standards. As a matter of best practice and consumer protection, EPA strongly encourages organizations to purchase these types of certified green power products.

  17. Characterization of LANDSAT-4 TM and MSS Image Quality for Interpretation of Agricultural and Forest Resources

    NASA Technical Reports Server (NTRS)

    Degloria, S. D.; Colwell, R. N.

    1984-01-01

    Systematic analysis of both image and numeric data shows that the overall spectral, spatial, and radiometric quality of the TM data is excellent. Spectral variations in fallow fields are due to the variability in soil moisture and surface roughness resulting from the various stages of field preparation for small grains production. Spectrally, the addition of the first TM short wave infrared band (Band 5) significantly enhanced the ability to discriminate different crop types. Bands 1, 5, and 6 contain saturated pixels due to high albedo effects, low moisture conditions, and high radiant temperatures of granite and dry, bare soil on south facing slopes, respectively. Spatially, the twofold decrease in interpixel distance and fourfold decrease in area per pixel between the TM and MSS allow for improved discrimination of small fields, boundary conditions, road and stream networks in rough terrain, and small forest clearings resulting from various forest management practices.

  18. Angles in the Sky?

    NASA Astrophysics Data System (ADS)

    Behr, Bradford

    2005-09-01

    Tycho Brahe lived and worked in the late 1500s, before the telescope was invented. He made highly accurate observations of the positions of planets, stars, and comets using large angle-measuring devices of his own design. You can use his techniques to observe the sky as well. For example, the degree, a common unit of measurement in astronomy, can be measured by holding your fist at arm's length up to the sky. Open your fist and observe the distance across the sky covered by the width of your pinky fingernail. That is, roughly, a degree! After some practice, and knowing that one degree of sky rotation corresponds to four minutes of time (the sky turns 360° in about 24 hours, or 15° per hour), you can measure elapsed time by measuring the angle through which the Moon appears to have moved and multiplying that number by four; a drift of roughly 5°, for instance, corresponds to about 20 minutes. You can also figure distances and sizes of things. These are not precise measurements, but rough estimates that can give you a "close-enough" answer.

  19. Measurement of pattern roughness and local size variation using CD-SEM: current status

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori

    2018-03-01

    Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for LER measurement today are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges in low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. Advanced PSD prediction methods such as the multi-taper method are effective in suppressing sampling noise within the line edge being analyzed, while a sufficient number of lines is still required for suppressing line-to-line variation. Two types of SEM noise mitigation methods, the "apparent noise floor" subtraction method and LER-noise decomposition using regression analysis, are verified to successfully mitigate SEM noise from PSD curves. These results are extended to LCDU measurement to clarify the impact of SEM noise and sampling noise on LCDU.
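
    A much-simplified version of the noise-floor subtraction idea is sketched below with synthetic data; the paper's actual edge-detection, multi-taper estimation and LER-noise decomposition are considerably more involved, and all numbers here are assumptions.

      import numpy as np

      def mean_periodogram(lines):
          """Average |FFT|^2 periodogram of mean-subtracted edge profiles (arbitrary units)."""
          spectra = []
          for x in lines:
              x = x - x.mean()
              spectra.append(np.abs(np.fft.rfft(x)) ** 2 / x.size)
          return np.mean(spectra, axis=0)

      rng = np.random.default_rng(0)
      n_pts, n_lines = 2048, 50

      def synthetic_edge():
          # Smooth (correlated) "true" roughness plus white SEM-like measurement noise.
          rough = np.convolve(rng.normal(0.0, 1.0, n_pts), np.ones(32) / 32, mode="same")
          return rough + rng.normal(0.0, 0.5, n_pts)

      psd = mean_periodogram([synthetic_edge() for _ in range(n_lines)])
      noise_floor = psd[-psd.size // 8:].mean()            # high-frequency plateau ~ SEM noise
      psd_corrected = np.clip(psd - noise_floor, 0.0, None)
      print(noise_floor, psd_corrected[:5])

    Averaging over many lines suppresses the sampling noise of individual periodograms, while subtracting the apparent high-frequency plateau removes, to first order, the white SEM-noise contribution, mirroring the two noise sources discussed above.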

  20. Semi-active suspension for automotive application

    NASA Astrophysics Data System (ADS)

    Venhovens, Paul J. T.; Devlugt, Alex R.

    The theoretical considerations for evaluating semi-active damping systems, with respect to semi-active suspension control and Kalman filtering, are discussed in terms of the software. Some prototype hardware developments are proposed. A significant improvement in ride comfort, indicated by root-mean-square body acceleration values and frequency responses, can be obtained using a switchable damper system with two settings. Nevertheless, the improvement is accompanied by an increase in dynamic tire load variations. The main benefit of semi-active suspensions is the potential for changing the low-frequency section of the transfer function. In practice this will support the impression of extra driving stability. It is advisable to apply an adaptive control strategy, like the (extended) skyhook version, switching more to the 'comfort' setting for straight running on smooth or moderately rough roads, and switching to 'road holding' for handling maneuvers and, possibly, rough roads and discrete, severe events like potholes.
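
    The two-state skyhook switching law referred to above can be written compactly; the damping values and sign convention below are illustrative, not those of the prototype hardware.

      def skyhook_damper_force(v_body, v_wheel, c_high=3000.0, c_low=300.0):
          """Two-state skyhook law: select the firm setting only when the damper can
          dissipate body motion, i.e. when body velocity and damper velocity agree in sign.
          Velocities in m/s (positive upward); returns the damper force in N."""
          v_damper = v_body - v_wheel                      # relative (suspension) velocity
          c = c_high if v_body * v_damper > 0.0 else c_low
          return -c * v_damper

      print(skyhook_damper_force(0.2, -0.1))   # body moving up, suspension extending -> firm setting
      print(skyhook_damper_force(0.2, 0.5))    # body and damper velocities oppose -> soft setting

    An adaptive variant of the kind advocated above would, in addition, bias the choice of c_high and c_low towards comfort on smooth roads and towards road holding during manoeuvres or severe events.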

  1. Correlation of surface site formation to nanoisland growth in the electrochemical roughening of Pt(111)

    NASA Astrophysics Data System (ADS)

    Jacobse, Leon; Huang, Yi-Fan; Koper, Marc T. M.; Rost, Marcel J.

    2018-03-01

    Platinum plays a central role in a variety of electrochemical devices and its practical use depends on the prevention of electrode degradation. However, understanding the underlying atomic processes under conditions of repeated oxidation and reduction inducing irreversible surface structure changes has proved challenging. Here, we examine the correlation between the evolution of the electrochemical signal of Pt(111) and its surface roughening by simultaneously performing cyclic voltammetry and in situ electrochemical scanning tunnelling microscopy (EC-STM). We identify a `nucleation and early growth' regime of nanoisland formation, and a `late growth' regime after island coalescence, which continues up to at least 170 cycles. The correlation analysis shows that each step site that is created in the `late growth' regime contributes equally strongly to both the electrochemical and the roughness evolution. In contrast, in the `nucleation and early growth' regime, created step sites contribute to the roughness, but not to the electrochemical signal.

  2. Engine Non-Containment: The UK CAA View

    NASA Technical Reports Server (NTRS)

    Gunstone, G. L.

    1977-01-01

    Airworthiness accidents account for roughly one quarter of the total number of accidents to public transport turbojet aircraft. The most reliable, practicable, and cost-effective means of minimizing damage outside the confines of the nacelle is to make the aircraft design invulnerable to any debris which may affect the aircraft. A failure model was developed for use by aircraft builders in measuring the freedom from catastrophe factor of their design.

  3. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how, and explain why, an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
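
    A minimal numpy sketch of an unrestricted WLS meta-regression with inverse-variance weights is given below; the effect sizes, standard errors and moderator are simulated, and this is an illustration of the estimator's form, not the authors' code.

      import numpy as np

      def wls_mra(effect, se, moderator):
          """Unrestricted WLS meta-regression: weight by 1/SE^2 and estimate the
          multiplicative scale from the data instead of fixing it to 1 (fixed effect)."""
          X = np.column_stack([np.ones_like(effect), moderator])
          w = 1.0 / se**2
          XtWX = X.T @ (w[:, None] * X)
          beta = np.linalg.solve(XtWX, X.T @ (w * effect))
          resid = effect - X @ beta
          scale = (w * resid**2).sum() / (effect.size - X.shape[1])
          se_beta = np.sqrt(np.diag(scale * np.linalg.inv(XtWX)))
          return beta, se_beta

      rng = np.random.default_rng(1)
      se = rng.uniform(0.05, 0.3, 40)               # hypothetical standard errors
      mod = rng.normal(0.0, 1.0, 40)                # hypothetical moderator
      effect = 0.2 + 0.1 * mod + rng.normal(0.0, se)
      print(wls_mra(effect, se, mod))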

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nahum, T.; Dodiuk, H.; Dotan, A.

    Superhydrophobic surfaces with contact angle (CA) > 150° and sliding angle (SA) < 10° have aroused curiosity over the years owing to their various applications. Superhydrophobicity can be obtained by tailoring the chemistry and the roughness of the surface, mimicking the Lotus flower. Most superhydrophobic surfaces based on secondary bonding lose their roughness in harsh conditions and are unsuitable for practical applications. Photoreactive SiO2 nanoparticles (NPs) based on benzophenone (BP) can be a very effective tool for the formation of reactive species that function as a molecular bridge, forming covalent bonds between the NP and any polymer matrix with C-C and C-H bonds. The present work focused on a thermoset radiation-curing urethane acrylate. Upon UV irradiation, reactive excited nπ* triplet benzophenone species are formed and react through hydrogen abstraction to form ketyl radicals, which interact with radicals from the UV-irradiated polymer matrix to yield covalent bonding. Roughness was achieved by dipping the substrate in a SiO2@BP NP dispersion followed by irradiation. Fluoroalkylsilane was used to obtain a hydrophobic top layer. AFM nanomanipulation was used to verify the immobilization of the NPs. Durability was evaluated using airflow at 300 km/h. Preliminary results indicate the formation of superhydrophobic surfaces (CA > 150° and SA < 10°) with improved stability.

  5. Standardization and validation of a parallel form of the verbal and non-verbal recognition memory test in an Italian population sample.

    PubMed

    Smirni, Daniela; Smirni, Pietro; Di Martino, Giovanni; Cipolotti, Lisa; Oliveri, Massimiliano; Turriziani, Patrizia

    2018-05-04

    In the neuropsychological assessment of several neurological conditions, evaluation of recognition memory is required. Recognition seems to be more appropriate than recall for studying verbal and non-verbal memory, because interference from psychological and emotional disorders is less relevant in recognition than in recall memory paradigms. In many neurological disorders, longitudinal repeated assessments are needed to monitor the effectiveness of rehabilitation programs or pharmacological treatments on the recovery of memory. In order to contain the practice effect in repeated neuropsychological evaluations, the use of parallel forms of the tests is necessary. Having two parallel forms of the same test, which keep administration procedures and scoring constant, is a great advantage both in clinical practice, for the monitoring of memory disorders, and in experimental practice, to allow the repeated evaluation of memory in healthy and neurological subjects. The first aim of the present study was to provide normative values in an Italian sample (n = 160) for a parallel form of a verbal and non-verbal recognition memory battery. Multiple regression analysis revealed significant effects of age and education on recognition memory performance, whereas sex did not reach a significant probability level. Inferential cutoffs were determined and equivalent scores computed. Secondly, the study aimed to validate the equivalence of the two parallel forms of the Recognition Memory Test. The correlation analyses between the total scores of the two versions of the test and between the three subtasks revealed that the two forms are parallel and the subtasks are equivalent in difficulty.

  6. Effects of language of assessment on the measurement of acculturation: measurement equivalence and cultural frame switching.

    PubMed

    Schwartz, Seth J; Benet-Martínez, Verónica; Knight, George P; Unger, Jennifer B; Zamboanga, Byron L; Des Rosiers, Sabrina E; Stephens, Dionne P; Huang, Shi; Szapocznik, José

    2014-03-01

    The present study used a randomized design, with fully bilingual Hispanic participants from the Miami area, to investigate 2 sets of research questions. First, we sought to ascertain the extent to which measures of acculturation (Hispanic and U.S. practices, values, and identifications) satisfied criteria for linguistic measurement equivalence. Second, we sought to examine whether cultural frame switching would emerge--that is, whether latent acculturation mean scores for U.S. acculturation would be higher among participants randomized to complete measures in English and whether latent acculturation mean scores for Hispanic acculturation would be higher among participants randomized to complete measures in Spanish. A sample of 722 Hispanic students from a Hispanic-serving university participated in the study. Participants were first asked to complete translation tasks to verify that they were fully bilingual. Based on ratings from 2 independent coders, 574 participants (79.5% of the sample) qualified as fully bilingual and were randomized to complete the acculturation measures in either English or Spanish. Theoretically relevant criterion measures--self-esteem, depressive symptoms, and personal identity--were also administered in the randomized language. Measurement equivalence analyses indicated that all of the acculturation measures--Hispanic and U.S. practices, values, and identifications--met criteria for configural, weak/metric, strong/scalar, and convergent validity equivalence. These findings indicate that data generated using acculturation measures can, at least under some conditions, be combined or compared across languages of administration. Few latent mean differences emerged. These results are discussed in terms of the measurement of acculturation in linguistically diverse populations. 2014 APA

  7. Biomechanical Strength of Retrograde Fixation in Proximal Third Scaphoid Fractures.

    PubMed

    Daly, Charles A; Boden, Allison L; Hutton, William C; Gottschalk, Michael B

    2018-04-01

    Current techniques for fixation of proximal pole scaphoid fractures utilize antegrade fixation via a dorsal approach, endangering the delicate vascular supply of the dorsal scaphoid. Volar and dorsal approaches demonstrate equivalent clinical outcomes in scaphoid wrist fractures, but no study has evaluated the biomechanical strength of fixation for fractures of the proximal pole. This study compares the biomechanical strength of antegrade and retrograde fixation for fractures of the proximal pole of the scaphoid. A simulated proximal pole scaphoid fracture was produced in 22 matched cadaveric scaphoids, which were then assigned randomly to either antegrade or retrograde fixation with a cannulated headless compression screw. Cyclic loading and load-to-failure testing were performed, and screw length, number of cycles, and maximum load sustained were recorded. There were no significant differences in average screw length (25.5 mm vs 25.6 mm, P = .934), average number of cyclic loading cycles (3738 vs 3847, P = .552), average load to failure (348 N vs 371 N, P = .357), or number of catastrophic failures observed between the antegrade and retrograde fixation groups (3 in each). Practical equivalence between the 2 groups was calculated, and the 2 groups were demonstrated to be practically equivalent (upper threshold P = .010). For this model of proximal pole scaphoid wrist fractures, antegrade and retrograde screw configurations have been shown to be equivalent in terms of biomechanical strength. With further clinical study, we hope surgeons will be able to base their choice of fixation technique on approaches to bone grafting, concern for tenuous blood supply, and surgeon preference, without fear of poor biomechanical properties.
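
    The abstract reports a practical-equivalence result without detailing the procedure; one common way to test equivalence is the two one-sided tests (TOST) approach sketched below. The samples and the ±60 N equivalence margin are illustrative assumptions, not values from the study.

    ```python
    # Hedged TOST equivalence sketch on synthetic load-to-failure data (N).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    antegrade = rng.normal(348, 80, 11)      # synthetic samples, not the cadaveric data
    retrograde = rng.normal(371, 80, 11)
    margin = 60.0                            # hypothetical equivalence margin (N)

    diff = antegrade.mean() - retrograde.mean()
    se = np.sqrt(antegrade.var(ddof=1) / len(antegrade) +
                 retrograde.var(ddof=1) / len(retrograde))
    df = len(antegrade) + len(retrograde) - 2

    t_lower = (diff + margin) / se           # H0: true difference <= -margin
    t_upper = (diff - margin) / se           # H0: true difference >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    print(max(p_lower, p_upper))             # equivalence is claimed if both one-sided p's are small
    ```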

  8. Effects of Language of Assessment on the Measurement of Acculturation: Measurement Equivalence and Cultural Frame Switching

    PubMed Central

    Schwartz, Seth J.; Benet-Martínez, Verónica; Knight, George P.; Unger, Jennifer B.; Zamboanga, Byron L.; Des Rosiers, Sabrina E.; Stephens, Dionne; Huang, Shi; Szapocznik, José

    2014-01-01

    The present study used a randomized design, with fully bilingual Hispanic participants from the Miami area, to investigate two sets of research questions. First, we sought to ascertain the extent to which measures of acculturation (heritage and U.S. practices, values, and identifications) satisfied criteria for linguistic measurement equivalence. Second, we sought to examine whether cultural frame switching would emerge – that is, whether latent acculturation mean scores for U.S. acculturation would be higher among participants randomized to complete measures in English, and whether latent acculturation mean scores for Hispanic acculturation would be higher among participants randomized to complete measures in Spanish. A sample of 722 Hispanic students from a Hispanic-serving university participated in the study. Participants were first asked to complete translation tasks to verify that they were fully bilingual. Based on ratings from two independent coders, 574 participants (79.5% of the sample) qualified as fully bilingual and were randomized to complete the acculturation measures in either English or Spanish. Theoretically relevant criterion measures – self-esteem, depressive symptoms, and personal identity – were also administered in the randomized language. Measurement equivalence analyses indicated that all of the acculturation measures – Hispanic and U.S. practices, values, and identifications – met criteria for configural, weak/metric, strong/scalar, and convergent validity equivalence. These findings indicate that data generated using acculturation measures can, at least under some conditions, be combined or compared across languages of administration. Few latent mean differences emerged. These results are discussed in terms of the measurement of acculturation in linguistically diverse populations. PMID:24188146

  9. MEMS 3-DoF gyroscope design, modeling and simulation through equivalent circuit lumped parameter model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.

    Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3-degree-of-freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components that are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling, and the simulations are presented in this paper. The behavior of the equivalent lumped models derived for the proposed device design is simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. Drive mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient modeling technique for the analysis of complex MEMS devices such as 3-DoF gyroscopes. The technique is an alternative to the complex and time-consuming coupled-field finite element analysis (FEA) previously used.
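
    The lumped equivalent-circuit idea rests on the standard mechanical-electrical analogy (mass ↔ inductance, compliance ↔ capacitance, damping ↔ resistance). The sketch below is not the authors' model; the mass and stiffness values are hypothetical, chosen only so that the resulting resonance lands near the 1.59 kHz figure quoted above, and it simply confirms that the mechanical and electrical formulations give the same frequency.

    ```python
    # Mechanical resonance vs. its electrical equivalent, with hypothetical parameters.
    import math

    m = 2.0e-7      # proof-mass (kg), hypothetical value
    k = 20.0        # suspension stiffness (N/m), hypothetical value

    f_mech = math.sqrt(k / m) / (2 * math.pi)       # mechanical resonance frequency

    L_eq = m        # in the impedance analogy the equivalent inductance equals the mass
    C_eq = 1.0 / k  # and the equivalent capacitance equals the compliance
    f_elec = 1.0 / (2 * math.pi * math.sqrt(L_eq * C_eq))

    print(f_mech, f_elec)   # identical by construction, ~1.59 kHz for these values
    ```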

  10. Comparison of alternative designs for reducing complex neurons to equivalent cables.

    PubMed

    Burke, R E

    2000-01-01

    Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.

  11. Stress distribution in and equivalent width of flanges of wide, thin-wall steel beams

    NASA Technical Reports Server (NTRS)

    Winter, George

    1940-01-01

    The use of different forms of wide-flange, thin-wall steel beams is becoming increasingly widespread. Part of the information necessary for a rational design of such members is knowledge of the stress distribution in, and the equivalent width of, the flanges of such beams. This problem is analyzed in this paper on the basis of the theory of plane stress. As a result, tables and curves are given from which the equivalent width of any given beam can be read directly for use in practical design. An investigation is given of the limitations of this analysis due to the fact that extremely wide and thin flanges tend to curve out of their plane toward the neutral axis. A summary of test data confirms the analytical results very satisfactorily.

  12. A review on equivalent magnetic noise of magnetoelectric laminate sensors

    PubMed Central

    Wang, Y. J.; Gao, J. Q.; Li, M. H.; Shen, Y.; Hasanyan, D.; Li, J. F.; Viehland, D.

    2014-01-01

    Since the turn of the millennium, multi-phase magnetoelectric (ME) composites have been subject to attention and development, and giant ME effects have been found in laminate composites of piezoelectric and magnetostrictive layers. From an application perspective, the practical usefulness of a magnetic sensor is determined not only by the output signal of the sensor in response to an incident magnetic field, but also by the equivalent magnetic noise generated in the absence of such an incident field. Here, a short review of developments in equivalent magnetic noise reduction for ME sensors is presented. This review focuses on internal noise, the analysis of the noise contributions and a summary of noise reduction strategies. Furthermore, external vibration noise is also discussed. The review concludes with an outlook on future possibilities and scientific challenges in the field of ME magnetic sensors. PMID:24421380

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Shang-Lung; Chu, Tieh-Chi; Lin, Yung-Chien

    Purpose: A polymethylmethacrylate (PMMA) slab is one of the most commonly used phantoms for studying breast dosimetry in mammography. The purpose of this study was to evaluate the equivalence between exposure factors acquired from PMMA slabs and from patient cases of different age groups of Taiwanese women in mammography. Methods: This study included 3910 craniocaudal screen/film mammograms of Taiwanese women acquired on one mammographic unit. The tube loading, compressed breast thickness (CBT), compression force, tube voltage, and target/filter combination for each mammogram were collected for all patients. The glandularity and the equivalent thickness of PMMA were determined for each breast using the exposure factors of the breast in combination with experimental measurements from breast-tissue-equivalent attenuation slabs. Equivalent thicknesses of PMMA to the breasts of Taiwanese women were then estimated. Results: The average ± standard deviation CBT and breast glandularity in this study were 4.2 ± 1.0 cm and 54% ± 23%, respectively. The average equivalent PMMA thickness was 4.0 ± 0.7 cm. PMMA slabs producing equivalent exposure factors as in the breasts of Taiwanese women were determined for the age groups 30-49 yr and 50-69 yr. For the 4-cm PMMA slab, the CBT and glandularity values of the equivalent breast were 4.1 cm and 65%, respectively, for the age group 30-49 yr and 4.4 cm and 44%, respectively, for the age group 50-69 yr. Conclusions: The average thickness of PMMA slabs producing the same exposure factors as observed in a large group of Taiwanese women is less than that reported for American women. The results from this study can provide useful information for determining a suitable thickness of PMMA for mammographic dose surveys in Taiwan. The equivalence of PMMA slabs and the breasts of Taiwanese women is provided to allow average glandular dose assessment in clinical practice.

  14. Food, energy, and water in an era of disappearing snow

    NASA Astrophysics Data System (ADS)

    Mote, P.; Lettenmaier, D. P.; Li, S.; Xiao, M.

    2017-12-01

    Mountain snowpack stores a significant quantity of water in the western US, accumulating during the wet season and melting during the dry summers and supplying more than 65% of the water used for irrigated agriculture, energy production (both hydropower and thermal), and municipal and industrial uses. The importance of snow to western agriculture is demonstrated by the fact that most snow monitoring is performed by the US Department of Agriculture. In a paper published in 2005, we showed that roughly 70% of monitoring sites showed decreasing trends through 2002. Now, with 14 additional years of data, over 90% of snow monitoring sites with long records across the western US show declines through 2016, of which 33% are significant (vs 5% expected by chance) and 2% are significant and positive (vs 5% expected by chance). Declining trends are observed across all months, states, and climates, but are largest in spring, in the Pacific states, and in locations with mild winter climate. We corroborate and extend these observations using a gridded hydrology model, which also allows a robust estimate of total western snowpack and its decline. Averaged across the western US, the decline in total April 1 snow water equivalent since mid-century is roughly 15-30% or 25-50 km3, comparable in volume to the West's largest man-made reservoir, Lake Mead. In the absence of rapid reductions in emissions of greenhouse gases, these losses will accelerate; snow losses on this scale demonstrate the necessity of rethinking water storage, policy, and usage.
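
    The per-site trend accounting described above can be illustrated with a small, fully synthetic sketch: fit a linear trend to each site's April 1 snow water equivalent series and count how many sites decline and how many of those declines are significant at p < 0.05. The series below are random and do not reproduce the study's records.

    ```python
    # Synthetic per-site trend screening, loosely mirroring the accounting described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_sites, n_years = 200, 60
    years = np.arange(n_years)

    declining = sig_declining = 0
    for _ in range(n_sites):
        swe = 100 - 0.3 * years + rng.normal(0, 15, n_years)   # synthetic SWE series (cm)
        res = stats.linregress(years, swe)
        if res.slope < 0:
            declining += 1
            if res.pvalue < 0.05:
                sig_declining += 1

    print(declining / n_sites, sig_declining / n_sites)   # fraction declining, fraction significant
    ```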

  15. Pseudospectral calculation of helium wave functions, expectation values, and oscillator strength

    NASA Astrophysics Data System (ADS)

    Grabowski, Paul E.; Chernoff, David F.

    2011-10-01

    We show that the pseudospectral method is a powerful tool for finding precise solutions of Schrödinger’s equation for two-electron atoms with general angular momentum. Realizing the method’s full promise for atomic calculations requires special handling of singularities due to two-particle Coulomb interactions. We give a prescription for choosing coordinates and subdomains whose efficacy we illustrate by solving several challenging problems. One test centers on the determination of the nonrelativistic electric dipole oscillator strength for the helium 11S→21P transition. The result achieved, 0.27616499(27), is comparable to the best in the literature. The formally equivalent length, velocity, and acceleration expressions for the oscillator strength all yield roughly the same accuracy. We also calculate a diverse set of helium ground-state expectation values, reaching near state-of-the-art accuracy without the necessity of implementing any special-purpose numerics. These successes imply that general matrix elements are directly and reliably calculable with pseudospectral methods. A striking result is that all the relevant quantities tested in this paper—energy eigenvalues, S-state expectation values and a bound-bound dipole transition between the lowest energy S and P states—converge exponentially with increasing resolution and at roughly the same rate. Each individual calculation samples and weights the configuration space wave function uniquely but all behave in a qualitatively similar manner. These results suggest that the method has great promise for similarly accurate treatment of few-particle systems.

  16. Land Ice Freshwater Budget of the Arctic and North Atlantic Oceans: 1. Data, Methods, and Results

    NASA Astrophysics Data System (ADS)

    Bamber, J. L.; Tedstone, A. J.; King, M. D.; Howat, I. M.; Enderlin, E. M.; van den Broeke, M. R.; Noel, B.

    2018-03-01

    The freshwater budget of the Arctic and sub-polar North Atlantic Oceans has been changing due, primarily, to increased river runoff, declining sea ice and enhanced melting of Arctic land ice. Since the mid-1990s this latter component has experienced a pronounced increase. We use a combination of satellite observations of glacier flow speed and regional climate modeling to reconstruct the land ice freshwater flux from the Greenland ice sheet and Arctic glaciers and ice caps for the period 1958-2016. The cumulative freshwater flux anomaly exceeded 6,300 ± 316 km3 by 2016. This is roughly twice the estimate of a previous analysis that did not include glaciers and ice caps outside of Greenland and which extended only to 2010. From 2010 onward, the total freshwater flux is about 1,300 km3/yr, equivalent to 0.04 Sv, which is roughly 40% of the estimated total runoff to the Arctic for the same time period. Not all of this flux will reach areas of deep convection or Arctic and Sub-Arctic seas. We note, however, that the largest freshwater flux anomalies, grouped by ocean basin, are located in Baffin Bay and Davis Strait. The land ice freshwater flux displays a strong seasonal cycle with summer time values typically around five times larger than the annual mean. This will be important for understanding the impact of these fluxes on fjord circulation, stratification, and the biogeochemistry of, and nutrient delivery to, coastal waters.
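
    The quoted conversion from a volumetric flux to Sverdrups is simple arithmetic and can be checked directly (1 Sv = 10^6 m^3/s):

    ```python
    # Convert ~1,300 km^3/yr of freshwater flux to Sverdrups.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    flux_km3_per_yr = 1300.0
    flux_m3_per_s = flux_km3_per_yr * 1e9 / SECONDS_PER_YEAR
    print(flux_m3_per_s / 1e6)   # ~0.04 Sv, as quoted in the abstract
    ```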

  17. Effects of spacing of item repetitions in continuous recognition memory: does item retrieval difficulty promote item retention in older adults?

    PubMed

    Kılıç, Aslı; Hoyer, William J; Howard, Marc W

    2013-01-01

    BACKGROUND/STUDY CONTEXT: Older adults exhibit an age-related deficit in item memory as a function of the length of the retention interval, but older adults and young adults usually show roughly equivalent benefits due to the spacing of item repetitions in continuous memory tasks. The current experiment investigates the seemingly paradoxical effects of retention interval and spacing in young and older adults using a continuous recognition memory procedure. Fifty young adults and 52 older adults gave memory confidence ratings to words that were presented once (P1), twice (P2), or three times (P3), and the effects of the lag length and retention interval were assessed at P2 and at P3, respectively. Response times at P2 were disproportionately longer for older adults than for younger adults as a function of the number of items occurring between P1 and P2, suggestive of age-related loss in item memory. Ratings of confidence in memory responses revealed that older adults remembered fewer items at P2 with a high degree of certainty. Confidence ratings given at P3 suggested that young and older adults derived equivalent benefits from the spacing between P1 and P2. Findings of this study support theoretical accounts that suggest that recursive reminding and/or item retrieval difficulty promote item retention in older adults.

  18. Persistent monolayer-scale chemical ordering in Si1-xGex heteroepitaxial films during surface roughening and strain relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amatya, J. M.; Floro, J. A.

    2015-12-28

    Chemical ordering in semiconductor alloys could modify thermal and electronic transport, with potential benefits to thermoelectric properties. Here, metastable ordering that occurs during heteroepitaxial growth of Si1-xGex thin film alloys on Si(001) and Ge(001) substrates is investigated. A parametric study was performed to study how strain, surface roughness, and growth parameters affect the order parameter during the alloy growth. The order parameter for the alloy films was carefully quantified using x-ray diffraction, taking into account an often-overlooked issue associated with the presence of multiple spatial variants associated with ordering along equivalent <111> directions. Optimal ordering was observed in the films having the smoothest surfaces. Extended strain relaxation is suggested to reduce the apparent order through creation of anti-phase boundaries. Ordering surprisingly persists even when the film surface extensively roughens to form (105) facets. Growth on deliberately miscut Si(001) surfaces does not affect the volume-averaged order parameter but does impact the relative volume fractions of the equivalent ordered variants in a manner consistent with geometrically necessary changes in step populations. These results provide somewhat self-contradictory implications for the role of step edges in controlling the ordering process, indicating that our understanding is still incomplete.

  19. TECHNOLOGIES FOR MONITORING AND ...

    EPA Pesticide Factsheets

    A demonstration of technologies for determining the presence of dioxin and dioxin-like compounds in soil and sediment was conducted under EPA's Superfund Innovative Technology Evaluation Program in Saginaw, Michigan, in April 2004. This report describes the evaluation of Wako Pure Chemical Industries' Dioxin ELISA Kit. The kit is an immunoassay technique that reports toxicity equivalents (TEQ) of dioxins/furans. The sample units are in pg/g 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) equivalents (EQ). The technology results were compared to high-resolution mass spectrometry (HRMS) TEQ results generated using EPA Method 1613B. The Wako results were biased both positively and negatively relative to HRMS results. The technology's estimated method detection limit was 83-201 pg/g 2,3,7,8-TCDD EQ, but this should be considered a rough estimate. Results from this demonstration suggest that the Wako kit could be an effective screening tool for determining sample results above and below 20 pg/g TEQ, and even more effective as a screen for samples above and below 50 pg/g TEQ, particularly considering that the cost to analyze the 209 demonstration samples was significantly less than that of the reference laboratory ($150,294 vs. $213,580), and all samples were analyzed on-site in 9 days (in comparison to the reference laboratory, which took 8 months). The objective of this program is to promote the acceptance and use of innovative field technologies by providing well-documented per

  20. Chemical Action of Halogenated Agents in Fire Extinguishing

    NASA Technical Reports Server (NTRS)

    Belles, Frank E.

    1955-01-01

    The action of halogenated agents in preventing flame propagation in fuel-air mixtures in laboratory tests is discussed in terms of a possible chemical mechanism. The mechanism chosen is that of chain-breaking reactions between agent and active particles (hydrogen and oxygen atoms and hydroxyl radicals). Data from the literature on the flammability peaks of n-heptane agent-air mixtures are treated. Ratings of agent effectiveness in terms of the fuel equivalent of the agent, based on both fuel and agent concentrations at the peak, are proposed as preferable to ratings in terms of agent concentration alone. These fuel-equivalent ratings are roughly correlated by reactivities assigned to halogen and hydrogen atoms in the agent molecules. It is concluded that the presence of hydrogen in an agent need not reduce its fire-fighting ability, provided there is enough halogen to make the agent nonflammable. A method is presented for estimating from quenching-distance data a rate constant for the reaction of agent with active particles. A quantitative result is obtained for methyl bromide. This rate constant predicts the observed peak concentration of methyl bromide quite well. However, more data are needed to prove the validity of the method. The assumption that halogenated agents act mainly by chain-breaking reactions with active particles is consistent with the experimental facts and should help guide the selection of agents for further tests.

  1. A multicolour graph as a complete topological invariant for Ω-stable flows without periodic trajectories on surfaces

    NASA Astrophysics Data System (ADS)

    Kruglov, V. E.; Malyshev, D. S.; Pochinka, O. V.

    2018-01-01

    Studying the dynamics of a flow on surfaces by partitioning the phase space into cells with the same limit behaviour of trajectories within a cell goes back to the classical papers of Andronov, Pontryagin, Leontovich and Maier. The types of cells (the number of which is finite) and how the cells adjoin one another completely determine the topological equivalence class of a flow with finitely many special trajectories. If one trajectory is chosen in every cell of a rough flow without periodic orbits, then the cells are partitioned into so-called triangular regions of the same type. A combinatorial description of such a partition gives rise to the three-colour Oshemkov-Sharko graph, the vertices of which correspond to the triangular regions, and the edges to separatrices connecting them. Oshemkov and Sharko proved that such flows are topologically equivalent if and only if the three-colour graphs of the flows are isomorphic, and described an algorithm of distinguishing three-colour graphs. But their algorithm is not efficient with respect to graph theory. In the present paper, we describe the dynamics of Ω-stable flows without periodic trajectories on surfaces in the language of four-colour graphs, present an efficient algorithm for distinguishing such graphs, and develop a realization of a flow from some abstract graph. Bibliography: 17 titles.

  2. Non-LTE line formation of Fe in late-type stars - III. 3D non-LTE analysis of metal-poor stars

    NASA Astrophysics Data System (ADS)

    Amarsi, A. M.; Lind, K.; Asplund, M.; Barklem, P. S.; Collet, R.

    2016-12-01

    As one of the most important elements in astronomy, iron abundance determinations need to be as accurate as possible. We investigate the accuracy of spectroscopic iron abundance analyses using archetypal metal-poor stars. We perform detailed 3D non-LTE radiative transfer calculations based on 3D hydrodynamic STAGGER model atmospheres, and employ a new model atom that includes new quantum-mechanical neutral hydrogen collisional rate coefficients. With the exception of the red giant HD122563, we find that the 3D non-LTE models achieve Fe I/Fe II excitation and ionization balance as well as not having any trends with equivalent width to within modelling uncertainties of 0.05 dex, all without having to invoke any microturbulent broadening; for HD122563 we predict that the current best parallax-based surface gravity is overestimated by 0.5 dex. Using a 3D non-LTE analysis, we infer iron abundances from the 3D model atmospheres that are roughly 0.1 dex higher than corresponding abundances from 1D MARCS model atmospheres; these differences go in the same direction as the non-LTE effects themselves. We make available grids of departure coefficients, equivalent widths and abundance corrections, calculated on 1D MARCS model atmospheres and horizontally and temporally averaged 3D STAGGER model atmospheres.

  3. The practical impact of elastohydrodynamic lubrication

    NASA Technical Reports Server (NTRS)

    Anderson, W. J.

    1978-01-01

    The use of elastohydrodynamics in the analysis of rolling element bearings is discussed. Relationships for minimum film thickness and tractive force were incorporated into computer codes and used for bearing performance prediction. The lambda parameter (ratio of film thickness to composite surface roughness) was shown to be important in predicting bearing life and failure mode. Results indicate that at values of lambda below 3 failure modes other than the classic subsurface initiated fatigue can occur.

  4. Pursuing sustainable productivity with millions of smallholder farmers.

    PubMed

    Cui, Zhenling; Zhang, Hongyan; Chen, Xinping; Zhang, Chaochun; Ma, Wenqi; Huang, Chengdong; Zhang, Weifeng; Mi, Guohua; Miao, Yuxin; Li, Xiaolin; Gao, Qiang; Yang, Jianchang; Wang, Zhaohui; Ye, Youliang; Guo, Shiwei; Lu, Jianwei; Huang, Jianliang; Lv, Shihua; Sun, Yixiang; Liu, Yuanying; Peng, Xianlong; Ren, Jun; Li, Shiqing; Deng, Xiping; Shi, Xiaojun; Zhang, Qiang; Yang, Zhiping; Tang, Li; Wei, Changzhou; Jia, Liangliang; Zhang, Jiwang; He, Mingrong; Tong, Yanan; Tang, Qiyuan; Zhong, Xuhua; Liu, Zhaohui; Cao, Ning; Kou, Changlin; Ying, Hao; Yin, Yulong; Jiao, Xiaoqiang; Zhang, Qingsong; Fan, Mingsheng; Jiang, Rongfeng; Zhang, Fusuo; Dou, Zhengxia

    2018-03-15

    Sustainably feeding a growing population is a grand challenge, and one that is particularly difficult in regions that are dominated by smallholder farming. Despite local successes, mobilizing vast smallholder communities with science- and evidence-based management practices to simultaneously address production and pollution problems has been infeasible. Here we report the outcome of concerted efforts in engaging millions of Chinese smallholder farmers to adopt enhanced management practices for greater yield and environmental performance. First, we conducted field trials across China's major agroecological zones to develop locally applicable recommendations using a comprehensive decision-support program. Engaging farmers to adopt those recommendations involved the collaboration of a core network of 1,152 researchers with numerous extension agents and agribusiness personnel. From 2005 to 2015, about 20.9 million farmers in 452 counties adopted enhanced management practices in fields with a total of 37.7 million cumulative hectares over the years. Average yields (maize, rice and wheat) increased by 10.8-11.5%, generating a net grain output of 33 million tonnes (Mt). At the same time, application of nitrogen decreased by 14.7-18.1%, saving 1.2 Mt of nitrogen fertilizers. The increased grain output and decreased nitrogen fertilizer use were equivalent to US$12.2 billion. Estimated reactive nitrogen losses averaged 4.5-4.7 kg nitrogen per Megagram (Mg) with the intervention compared to 6.0-6.4 kg nitrogen per Mg without. Greenhouse gas emissions were 328 kg, 812 kg and 434 kg CO2 equivalent per Mg of maize, rice and wheat produced, respectively, compared to 422 kg, 941 kg and 549 kg CO2 equivalent per Mg without the intervention. On the basis of a large-scale survey (8.6 million farmer participants) and scenario analyses, we further demonstrate the potential impacts of implementing the enhanced management practices on China's food security and sustainability outlook.

  5. Pursuing sustainable productivity with millions of smallholder farmers

    NASA Astrophysics Data System (ADS)

    Cui, Zhenling; Zhang, Hongyan; Chen, Xinping; Zhang, Chaochun; Ma, Wenqi; Huang, Chengdong; Zhang, Weifeng; Mi, Guohua; Miao, Yuxin; Li, Xiaolin; Gao, Qiang; Yang, Jianchang; Wang, Zhaohui; Ye, Youliang; Guo, Shiwei; Lu, Jianwei; Huang, Jianliang; Lv, Shihua; Sun, Yixiang; Liu, Yuanying; Peng, Xianlong; Ren, Jun; Li, Shiqing; Deng, Xiping; Shi, Xiaojun; Zhang, Qiang; Yang, Zhiping; Tang, Li; Wei, Changzhou; Jia, Liangliang; Zhang, Jiwang; He, Mingrong; Tong, Yanan; Tang, Qiyuan; Zhong, Xuhua; Liu, Zhaohui; Cao, Ning; Kou, Changlin; Ying, Hao; Yin, Yulong; Jiao, Xiaoqiang; Zhang, Qingsong; Fan, Mingsheng; Jiang, Rongfeng; Zhang, Fusuo; Dou, Zhengxia

    2018-03-01

    Sustainably feeding a growing population is a grand challenge, and one that is particularly difficult in regions that are dominated by smallholder farming. Despite local successes, mobilizing vast smallholder communities with science- and evidence-based management practices to simultaneously address production and pollution problems has been infeasible. Here we report the outcome of concerted efforts in engaging millions of Chinese smallholder farmers to adopt enhanced management practices for greater yield and environmental performance. First, we conducted field trials across China’s major agroecological zones to develop locally applicable recommendations using a comprehensive decision-support program. Engaging farmers to adopt those recommendations involved the collaboration of a core network of 1,152 researchers with numerous extension agents and agribusiness personnel. From 2005 to 2015, about 20.9 million farmers in 452 counties adopted enhanced management practices in fields with a total of 37.7 million cumulative hectares over the years. Average yields (maize, rice and wheat) increased by 10.8–11.5%, generating a net grain output of 33 million tonnes (Mt). At the same time, application of nitrogen decreased by 14.7–18.1%, saving 1.2 Mt of nitrogen fertilizers. The increased grain output and decreased nitrogen fertilizer use were equivalent to US$12.2 billion. Estimated reactive nitrogen losses averaged 4.5–4.7 kg nitrogen per Megagram (Mg) with the intervention compared to 6.0–6.4 kg nitrogen per Mg without. Greenhouse gas emissions were 328 kg, 812 kg and 434 kg CO2 equivalent per Mg of maize, rice and wheat produced, respectively, compared to 422 kg, 941 kg and 549 kg CO2 equivalent per Mg without the intervention. On the basis of a large-scale survey (8.6 million farmer participants) and scenario analyses, we further demonstrate the potential impacts of implementing the enhanced management practices on China’s food security and sustainability outlook.

  6. The Einstein Slew Survey

    NASA Technical Reports Server (NTRS)

    Elvis, Martin; Plummer, David; Schachter, Jonathan; Fabbiano, G.

    1992-01-01

    A catalog of 819 sources detected in the Einstein IPC Slew Survey of the X-ray sky is presented; 313 of the sources were not previously known as X-ray sources. Typical count rates are 0.1 IPC count/s, roughly equivalent to a flux of 3 x 10 exp -12 ergs/sq cm s. The sources have positional uncertainties of 1.2 arcmin (90 percent confidence) radius, based on a subset of 452 sources identified with previously known pointlike X-ray sources (i.e., extent less than 3 arcmin). Identifications based on a number of existing catalogs of X-ray and optical objects are proposed for 637 of the sources, 78 percent of the survey (within a 3-arcmin error radius) including 133 identifications of new X-ray sources. A public identification data base for the Slew Survey sources will be maintained at CfA, and contributions to this data base are invited.

  7. Chemical composition of individual aerosol particles from working areas in a nickel refinery.

    PubMed

    Höflich, B L; Wentzel, M; Ortner, H M; Weinbruch, S; Skogstad, A; Hetland, S; Thomassen, Y; Chaschin, V P; Nieboer, E

    2000-06-01

    Individual aerosol particles (n = 1170) collected at work stations in a nickel refinery were analyzed by wavelength-dispersive electron-probe microanalysis. By placing arbitrary restrictions on the contents of sulfur and silicon, the particles could be divided into four main groups. Scanning electron images indicated that most of the particles examined were relatively small (≤2 μm, equivalent projected area diameter), and that their morphology suggested formation from a melt. There was an absence of well-defined phases and simple stoichiometries, indicating that exposures to pure substances such as nickel subsulfide or specific oxides appeared not to occur. Although the elemental composition of particles varied greatly, a rough association was evident with the known elemental content of the refinery intermediates. The implications of the findings for aerosol speciation measurements, toxicological studies and interpretation of adverse health effects are explored.

  8. On vegetation mapping in Alaska using LANDSAT imagery with primary concerns for method and purpose in satellite image-based vegetation and land-use mapping and the visual interpretation of imagery in photographic format

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A simulated color infrared LANDSAT image covering the western Seward Peninsula was used for identifying and mapping vegetation by direct visual examination. The 1:1,083,400 scale print used was prepared by a color additive process using positive transparencies from MSS bands 4, 5, and 7. Seven color classes were recognized. A vegetation map of 3200 sq km area just west of Fairbanks, Alaska was made. Five colors were recognized on the image and identified to vegetation types roughly equivalent to formations in the UNESCO classification: orange - broadleaf deciduous forest; gray - needleleaf evergreen forest; light violet - subarctic alpine tundra vegetation; violet - broadleaf deciduous shrub thicket; and dull violet - bog vegetation.

  9. Spatially-resolved Spectroscopy of the IC443 Pulsar Wind Nebula and Environs

    NASA Technical Reports Server (NTRS)

    Swartz, D. A.; Weisskopf, M. C.; Zavlin, V. E.; Bucciantini, N.; Clarke, T. E.; Karovska, M.; Pavlov, G. G.; O'Dell, S. L.; vanderHorst, A J.; Yukita, M.

    2013-01-01

    Deep Chandra ACIS observations of the region around the putative pulsar, CXOU J061705.3+222117, in the supernova remnant IC443 reveal, for the first time, a ring-like morphology surrounding the pulsar and a jet-like structure oriented roughly north-south across the ring and through the pulsar location. The observations further confirm that (1) the spectrum and flux of the central object are consistent with a rotation-powered pulsar interpretation, (2) the non-thermal surrounding nebula is likely powered by the pulsar wind, and (3) the thermal-dominated spectrum at greater distances is consistent with emission from the supernova remnant. The cometary shape of the nebula, suggesting motion towards the southwest (or, equivalently, flow of ambient medium to the northeast), appears to be subsonic; there is no evidence for a strong bow shock, and the circular ring is not distorted by motion through the ambient medium.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chlistunoff, Jerzy; Pivovar, Bryan

    In this paper, the oxygen reduction reaction (ORR) at the interface between platinum and Nafion of 1100 equivalent weight was studied as a function of temperature (20–80 °C), humidity (10–100%), scan rate, the manner in which the Nafion film was deposited, and the state of the Pt surface, using ultramicroelectrodes employing cyclic voltammetry and chronoamperometry. ORR on smooth electrodes was strongly inhibited under specific conditions dependent on temperature, humidity, and scan rate. From the data presented, we postulate that dynamic changes in the molecular structure of the ionomer at the platinum interface result in differences in ORR voltammetry for films prepared and equilibrated under different conditions. The lack of similar changes for rough, platinized electrodes has been attributed to differences in initial ionomer structure and a higher energy barrier for ionomer restructuring. Finally, these model system studies yield insight into the ionomer-catalyst interface of particular interest for polymer electrolyte fuel cells.

  11. Evaluation of integral exposure energy load on aural analyzer of miners

    NASA Technical Reports Server (NTRS)

    Kornilov, A. N.; Larantseva, Y. I.

    1981-01-01

    The individual integral exposure noise load on workers before the beginning of hearing impairment was determined for a group of 20 male miners who had worked with drilling equipment and harvesters for 8 to 20 years before the onset of the disability. Results show that the total exposure energy load of about 4 kW·h/sq m, received by miners in the examined group, resulted in occupational injury to the auditory organ (cochlear neuritis) in 75% of the cases. The equivalent energy level of noise, computed from the data on total energy load, is roughly 99 dB(A), which significantly exceeds the permissible level of 85 dB(A). There is a correlation (r = 0.77) between the integral exposure noise energy delivered to the aural analyzer and the degree of increase in the total threshold for the mean speech range.
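
    The equivalent energy level referred to above is an energy average of time-varying A-weighted levels, L_eq = 10·log10 of the duration-weighted mean of 10^(L_i/10). The sketch below uses made-up shift samples, not the study's measurements, purely to show the calculation.

    ```python
    # Energy-equivalent continuous level from time-varying levels (illustrative values only).
    import numpy as np

    levels_dBA = np.array([95.0, 102.0, 98.0, 104.0, 90.0])   # hypothetical A-weighted levels
    durations_h = np.array([2.0, 1.5, 2.0, 1.0, 1.5])          # hours spent at each level

    mean_energy = np.sum(durations_h * 10 ** (levels_dBA / 10)) / durations_h.sum()
    L_eq = 10 * np.log10(mean_energy)
    print(L_eq)   # duration-weighted energy-equivalent level, in dB(A)
    ```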

  12. Draft reference grid cells for emergency response reconnaissance developed for use by the US Environmental Protection Agency [ER.QUADS6K_EPA

    EPA Pesticide Factsheets

    Draft reference grid cells for emergency response reconnaissance developed for use by the US Environmental Protection Agency. Grid cells are based on densification of the USGS Quarterquad (1:12,000 scale or 12K) grids for the continental United States, Alaska, Hawaii and Puerto Rico and are roughly equivalent to 1:6000 scale (6K) quadrangles approximately 2 miles long on each side. Note: This file is >80MB in size. Regional subsets have been created from this national file that include a 20 mile buffer of tiles around each EPA Region. To access the regional subsets, go to http://geodata.epa.gov/OSWER/6kquads_epa.zip and select the name of the file that corresponds to your region of interest (e.g. 6kquadr1.zip is the name of the file created for EPA Region 1).

  13. Variational calculation of macrostate transition rates

    NASA Astrophysics Data System (ADS)

    Ulitsky, Alex; Shalloway, David

    1998-08-01

    We develop the macrostate variational method (MVM) for computing reaction rates of diffusive conformational transitions in multidimensional systems by a variational coarse-grained "macrostate" decomposition of the Smoluchowski equation. MVM uses multidimensional Gaussian packets to identify and focus computational effort on the "transition region," a localized, self-consistently determined region in conformational space positioned roughly between the macrostates. It also determines the "transition direction" which optimally specifies the projected potential of mean force for mean first-passage time calculations. MVM is complementary to variational transition state theory in that it can efficiently solve multidimensional problems but does not accommodate memory-friction effects. It has been tested on model 1- and 2-dimensional potentials and on the 12-dimensional conformational transition between the isoforms of a microcluster of six-atoms having only van der Waals interactions. Comparison with Brownian dynamics calculations shows that MVM obtains equivalent results at a fraction of the computational cost.

  14. Influence of carbon conductive additives on electrochemical double-layer supercapacitor parameters

    NASA Astrophysics Data System (ADS)

    Kiseleva, E. A.; Zhurilova, M. A.; Kochanova, S. A.; Shkolnikov, E. J.; Tarasenko, A. B.; Zaitseva, O. V.; Uryupina, O. V.; Valyano, G. V.

    2018-01-01

    Electrochemical double-layer capacitors (EDLCs) offer an energy storage technology that is in high demand for rapid transient processes in transport and stationary applications involving fast power fluctuations. The rough structure of activated carbon, widely used as an electrode material because of its high specific area, leads to poor electrode conductivity. There is therefore a need for a conductive additive to decrease internal resistance and to achieve high specific power and high specific energy; carbon blacks are most commonly used for this purpose. In this paper, electrodes with different conductive additives—two types of carbon blacks and single-walled carbon nanotubes—were prepared and characterized in organic-electrolyte-based EDLC cells. The electrodes are based on an original wood-derived activated carbon produced by potassium hydroxide high-temperature activation at the Joint Institute for High Temperatures RAS. Electrodes were prepared from slurry by cold rolling. For electrode characterization, cyclic voltammetry, impedance spectra analysis, equivalent series resistance measurements, and galvanostatic charge-discharge were used.
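
    As a rough illustration of the galvanostatic characterization mentioned above, equivalent series resistance is commonly taken from the instantaneous voltage step when the discharge current is applied, and capacitance from the slope of the subsequent near-linear discharge. The numbers below are hypothetical, not measurements from the paper, and other extraction conventions exist.

    ```python
    # ESR and capacitance from a single constant-current discharge (illustrative numbers).
    current = 0.5            # discharge current (A), hypothetical
    v_before_step = 2.50     # cell voltage just before the current is applied (V)
    v_after_step = 2.42      # voltage immediately after the ohmic (IR) drop (V)
    v_end = 1.25             # voltage at the end of the timed discharge (V)
    t_discharge = 60.0       # time between v_after_step and v_end (s)

    esr = (v_before_step - v_after_step) / current                  # ohms
    capacitance = current * t_discharge / (v_after_step - v_end)    # farads
    print(esr, capacitance)
    ```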

  15. Magsonic™ Carbothermal Technology Compared with the Electrolytic and Pidgeon Processes

    NASA Astrophysics Data System (ADS)

    Prentice, Leon H.; Haque, Nawshad

    A broad technology comparison of carbothermal magnesium production with present technologies has not been previously presented. In this paper a comparative analysis of CSIRO's MagSonic™ process is made with the electrolytic and Pidgeon processes. The comparison covers energy intensity (GJ/tonne Mg), labor intensity (person-hours/tonne Mg), capital intensity (USD/tonne annual Mg installed capacity), and Global Warming Potential (GWP, tonnes CO2-equivalent/tonne Mg). Carbothermal technology is advantageous on all measures except capital intensity (where it is roughly twice the capital cost of a similarly-sized Pidgeon plant). Carbothermal and electrolytic production can have comparatively low environmental impacts, with typical emissions one-sixth those of the Pidgeon process. Despite recent progress, the Pidgeon process depends upon abundant energy and labor combined with few environmental constraints. Pressure is expected to increase on environmental constraints and labor and energy costs over the coming decade. Carbothermal reduction technology appears to be competitive for future production.

  16. EG ANDROMEDAE: A NEW ORBIT AND ADDITIONAL EVIDENCE FOR A PHOTOIONIZED WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Garcia, Michael R., E-mail: skenyon@cfa.harvard.edu, E-mail: michael.r.garcia@nasa.gov

    We analyze a roughly 20 yr set of spectroscopic observations for the symbiotic binary EG And. Radial velocities derived from echelle spectra are best fit with a circular orbit having an orbital period of P = 483.3 ± 1.6 days and semi-amplitude K = 7.34 ± 0.07 km/s. Combined with previous data, these observations rule out an elliptical orbit at the 10σ level. Equivalent widths of H I Balmer emission lines and various absorption features vary in phase with the orbital period. Relative to the radius of the red giant primary, the apparent size of the H II region is consistent with a model where a hot secondary star with effective temperature T_h ≈ 75,000 K ionizes the wind from the red giant.
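
    A circular-orbit radial-velocity solution of the kind quoted above amounts to fitting a sinusoid v(t) = γ + K·sin(2π(t − t0)/P) to the measured velocities. The sketch below generates synthetic velocities (the published EG And data are not reproduced) and recovers the parameters with a nonlinear least-squares fit.

    ```python
    # Circular-orbit radial-velocity fit on synthetic data (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    def rv_circular(t, gamma, K, P, t0):
        return gamma + K * np.sin(2 * np.pi * (t - t0) / P)

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 7000, 80))                        # observation epochs (days)
    v = rv_circular(t, -95.0, 7.3, 483.0, 100.0) + rng.normal(0, 0.5, t.size)

    p0 = [-95.0, 7.0, 480.0, 80.0]                               # initial guesses
    params, cov = curve_fit(rv_circular, t, v, p0=p0)
    print(params)                  # systemic velocity, semi-amplitude K, period P, epoch t0
    print(np.sqrt(np.diag(cov)))   # 1-sigma parameter uncertainties
    ```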

  17. Near field detector for integrated surface plasmon resonance biosensor applications.

    PubMed

    Bora, Mihail; Celebi, Kemal; Zuniga, Jorge; Watson, Colin; Milaninia, Kaveh M; Baldo, Marc A

    2009-01-05

    Integrated surface plasmon resonance biosensors promise to enable compact and portable biosensing at high sensitivities. To replace the far field detector traditionally used to detect surface plasmons we integrate a near field detector below a functionalized gold film. The evanescent field of a surface plasmon at the aqueous-gold interface is converted into photocurrent by a thin film organic heterojunction diode. We demonstrate that use of the near field detector is equivalent to the traditional far field measurement of reflectivity. The sensor is stable and reversible in an aqueous environment for periods of 6 hrs. For specific binding of neutravidin, the detection limit is 4 μg/cm². The sensitivity can be improved by reducing surface roughness of the gold layers and optimization of the device design. From simulations, we predict a maximum sensitivity that is two times lower than a comparable conventional SPR biosensor.

  18. Comparison of finite source and plane wave scattering from corrugated surfaces

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1977-01-01

    The choice of a plane wave to represent incident radiation in the analysis of scatter from corrugated surfaces was examined. The physical optics solution obtained for the scattered fields due to an incident plane wave was compared with the solution obtained when the incident radiation is produced by a source of finite size at a finite distance from the surface. The two solutions are equivalent if the observer is in the far field of the scatterer and the distance from observer to scatterer is large compared to the radius of curvature at the scatter points, a condition not easily satisfied with extended scatterers such as rough surfaces. In general, the two solutions have essential differences, such as in the location of the scatter points and the dependence of the scattered fields on the surface properties. The implications of these differences for the definition of a meaningful radar cross section were examined.

  19. Microbial antimony biogeochemistry: Enzymes, regulation, and related metabolic pathways

    USGS Publications Warehouse

    Li, Jingxin; Qian Wang,; Oremland, Ronald S.; Kulp, Thomas R.; Rensing, Christopher; Wang, Gejiao

    2016-01-01

    Antimony (Sb) is a toxic metalloid that occurs widely at trace concentrations in soil, aquatic systems, and the atmosphere. Nowadays, with the development of its new industrial applications and the corresponding expansion of antimony mining activities, the phenomenon of antimony pollution has become an increasingly serious concern. In recent years, research interest in Sb has been growing and reflects a fundamental scientific concern regarding Sb in the environment. In this review, we summarize the recent research on bacterial antimony transformations, especially those regarding antimony uptake, efflux, antimonite oxidation, and antimonate reduction. We conclude that our current understanding of antimony biochemistry and biogeochemistry is roughly equivalent to where that of arsenic was some 20 years ago. This portends the possibility of future discoveries with regard to the ability of microorganisms to conserve energy for their growth from antimony redox reactions and the isolation of new species of “antimonotrophs.”

  20. Defining a successful commercial asteroid mining program

    NASA Astrophysics Data System (ADS)

    Andrews, Dana G.; Bonner, K. D.; Butterworth, A. W.; Calvert, H. R.; Dagang, B. R. H.; Dimond, K. J.; Eckenroth, L. G.; Erickson, J. M.; Gilbertson, B. A.; Gompertz, N. R.; Igbinosun, O. J.; Ip, T. J.; Khan, B. H.; Marquez, S. L.; Neilson, N. M.; Parker, C. O.; Ransom, E. H.; Reeve, B. W.; Robinson, T. L.; Rogers, M.; Schuh, P. M.; Tom, C. J.; Wall, S. E.; Watanabe, N.; Yoo, C. J.

    2015-03-01

    This paper summarizes a commercial Asteroid Mining Architecture synthesized by the Senior Space Design Class at the University of Washington in Winter/Spring Quarters of 2013. The main author was the instructor for that class. These results use design-to-cost development methods and focused infrastructure advancements to identify and characterize a workable space industrialization architecture including space transportation elements, asteroid exploration and mining equipment, and the earth orbit infrastructure needed to make it all work. Cost analysis predicts that for an initial investment in time and money equivalent to that for the US North Slope Oil Field, the yearly world supply of Platinum Group Metals could be increased by 50%, roughly 1500 t of LOX/LH2 propellant/year would be available in LEO, and very low cost solar panels could be assembled at GEO using asteroidal materials. The investment also would have a discounted net present value return on investment of 22% over twenty years.

  1. Evaluation of ERTS imagery for mapping and detection of changes of snowcover land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F.

    1973-01-01

    The percentage of snowcover area on specific drainage basins was measured from ERTS imagery by video density slicing with a repeatability of 4 percent of the snowcovered area. Data from ERTS images of the melt season snowcover in the Thunder Creek drainage basin in the North Cascades were combined with existing hydrologic and meteorologic observations to enable calculation of the time distribution of the water stored in this mountain snowpack. Similar data could be used for frequent updating of expected inflow to reservoirs. Equivalent snowline altitudes were determined from area measurements. Snowline altitudes were also determined by combining enlarged ERTS images with maps with an accuracy of about 60 m under favorable conditions. Ability to map snowcover or to determine snowline altitude depends primarily on cloud cover and vegetation and secondarily on slope, terrain roughness, sun angle, radiometric fidelity, and amount of spectral information available.

  2. The influence of ice on southern Lake Michigan coastal erosion

    USGS Publications Warehouse

    Barnes, P.W.; Kempema, E.W.; Reimnitz, E.; McCormick, M.

    1994-01-01

    Coastal ice does not protect the coast but enhances erosion by displacing severe winter wave energy from the beach to the shoreface and by entraining and transporting sediment alongshore and offshore. Three aspects of winter ice in Lake Michigan were studied over a 3-year period and found to have an important influence on coastal sediment dynamics and the coastal sediment budget: (1) the influence of coastal ice on shoreface morphology, (2) the transport of littoral sediments by ice, and (3) the formation of anchor and underwater ice as a frequent and important event entraining and transporting sediment. The nearshore ice complex contains a sediment load (0.2 - 1.2 t/m of coast) that is roughly equivalent to the average amount of sand eroded from the coastal bluffs and to the amount of sand ice- rafted offshore to the deep lake basin each year. -from Authors

  3. Independence and totalness of subspaces in phase space methods

    NASA Astrophysics Data System (ADS)

    Vourdas, A.

    2018-04-01

    The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.

  4. "Does college alcohol consumption impact employment upon graduation? Findings from a prospective study": Correction to Bamberger et al. (2017).

    PubMed

    2018-01-01

    Reports an error in "Does College Alcohol Consumption Impact Employment Upon Graduation? Findings From a Prospective Study" by Peter A. Bamberger, Jaclyn Koopmann, Mo Wang, Mary Larimer, Inbal Nahum-Shani, Irene Geisner and Samuel B. Bacharach (Journal of Applied Psychology, Advanced Online Publication, Aug 24, 2017, np). In the article, the authors incorrectly used the term "probability" instead of the term "odds" when referring to the impact of drinking in college on post-graduation employment. The abstract should note "a roughly 10% reduction in the odds...", and in the 2nd paragraph of the Discussion section, (a) "a roughly 10% lower probability" should be "a roughly 10% lower odds", and (b) "their probability of full-time employment upon graduation is roughly 6% lower than..." should be "their odds of full-time employment upon graduation is roughly 6% lower than..." All versions of this article have been corrected. (The following abstract of the original article appeared in record 2017-36105-001.) Although scholars have extensively studied the impact of academic and vocational factors on college students' employment upon graduation, we still know little as to how students' health-related behaviors influence such outcomes. Focusing on student alcohol use as a widely prevalent, health-related behavior, in the current study, we examined the employment implications of student drinking behavior. Drawing from literature examining the productivity effects of drinking and research on job search, we posited that modal quantity and frequency of alcohol consumption, as well as the frequency of heavy episodic drinking (HED), adversely impact the probability of employment upon graduation. Using data from 827 graduating seniors from 4 geographically diverse universities in the United States collected in the context of a prospective study design, we found modal alcohol consumption to have no adverse effect on the likelihood of employment upon graduation. However, we did find a significant adverse effect for the frequency of heavy drinking, with the data suggesting a roughly 10% reduction in the odds of employment upon graduation among college seniors who reported engaging in the average level of HED. The theoretical and practical implications of these findings are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
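
    The correction hinges on odds and probability not being interchangeable. A short sketch with made-up numbers shows why a 10% reduction in the odds is not a 10% reduction in the probability:

    ```python
    def odds(p):
        """Convert a probability to odds."""
        return p / (1.0 - p)

    def prob(o):
        """Convert odds back to a probability."""
        return o / (1.0 + o)

    p_baseline = 0.80                      # hypothetical baseline employment probability
    o_reduced = 0.90 * odds(p_baseline)    # a 10% reduction in the odds
    p_reduced = prob(o_reduced)

    print(f"odds fall from {odds(p_baseline):.2f} to {o_reduced:.2f}")
    print(f"probability falls from {p_baseline:.1%} to {p_reduced:.1%}")
    ```

    With these placeholder numbers the odds drop from 4.0 to 3.6, but the probability drops only from 80% to about 78%, which is exactly the distinction the correction draws.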

  5. Machining of bone: Analysis of cutting force and surface roughness by turning process.

    PubMed

    Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D

    2015-11-01

    There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study of bone machining lags behind its practice and implementation. This study views bone machining as a machining process with bovine bone as the workpiece material. The turning process, which forms the basis of the drilling process actually used in practice, was studied experimentally. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, on the machining responses, that is, the cutting force and surface roughness produced by the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was carried out at various cutting speeds (29-156 m/min), depths of cut (0.03-0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulting cutting force and surface roughness as functions of cutting speed, depth of cut, and feed were developed. Within the range of machining parameters evaluated, the developed empirical models show that the most influential machining parameter for the cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining parameter, followed by cutting speed, while depth of cut showed no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
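
    Response surface methodology in this setting amounts to fitting a second-order polynomial in the three machining parameters by least squares. The sketch below fits such a surface to fabricated placeholder data (synthetic, not the study's measurements) just to show the mechanics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20
    v = rng.uniform(29, 156, n)      # cutting speed, m/min
    f = rng.uniform(0.023, 0.11, n)  # feed, mm/rev
    d = rng.uniform(0.03, 0.37, n)   # depth of cut, mm

    # Invented "measurements" with depth of cut dominant, plus noise (illustration only).
    y = 5.0 + 80.0 * d + 30.0 * f + 0.02 * v + rng.normal(0, 0.5, n)

    # Second-order response surface: intercept, linear, interaction, and squared terms.
    X = np.column_stack([np.ones(n), v, f, d, v * f, v * d, f * d, v**2, f**2, d**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(coeffs, 4))
    ```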

  6. Application of Matrix Projection Exposure Using a Liquid Crystal Display Panel to Fabricate Thick Resist Molds

    NASA Astrophysics Data System (ADS)

    Fukasawa, Hirotoshi; Horiuchi, Toshiyuki

    2009-08-01

    The patterning characteristics of matrix projection exposure using an analog liquid crystal display (LCD) panel in place of a reticle were investigated, in particular for oblique patterns. In addition, a new method for fabricating practical thick resist molds was developed. First, an exposure system fabricated in past research was reconstructed. Changes in the illumination optics and the projection lens were the main improvements. Using fly's eye lenses, the illumination light intensity distribution was homogenized. The projection lens was changed from a common camera lens to a higher-grade telecentric lens. In addition, although the same metal halide lamp was used as an exposure light source, the central exposure wavelength was slightly shortened from 480 to 450 nm to obtain higher resist sensitivity while maintaining almost equivalent contrast between black and white. Circular and radial patterns with linewidths of approximately 6 µm were uniformly printed in all directions throughout the exposure field owing to these improvements. The patterns were smoothly printed without accompanying stepwise roughness caused by the cell matrix array. On the basis of these results, a new method of fabricating thick resist molds for electroplating was investigated. It is known that thick resist molds fabricated using the negative resist SU-8 (Micro Chem) are useful because very high aspect ratio patterns are printable and the side walls are perpendicular to the substrate surfaces. However, the most suitable exposure wavelength for SU-8 is 365 nm, and SU-8 is insensitive to light of 450 nm wavelength, which is most appropriate for LCD matrix exposure. For this reason, a novel multilayer resist process was proposed, and micromolds of SU-8 of 50 µm thickness were successfully obtained. As a result, the feasibility of fabricating complex resist molds including oblique patterns was demonstrated.

  7. Compact lumped circuit model of discharges in DC accelerator using partial element equivalent circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.

    2014-07-01

    DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in the system. The electrical discharges and their properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated at small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuit representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared with that of the FEM technique for a frequency range of 0-200 MHz. (author)
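
    PEEC ultimately delivers a network of partial resistances, inductances, and capacitances whose input impedance can be swept over frequency. As a drastically simplified stand-in (a single series R-L branch shunted by a capacitance, with invented element values rather than the accelerator model), the impedance sweep looks like this:

    ```python
    import numpy as np

    # Invented lumped values standing in for one PEEC cell; not the accelerator model.
    R, L, C = 5.0, 2e-6, 50e-12            # ohms, henries, farads

    freqs = np.linspace(1e6, 200e6, 2000)  # 1-200 MHz sweep
    w = 2 * np.pi * freqs
    z_series = R + 1j * w * L              # partial resistance + partial inductance
    z_shunt = 1.0 / (1j * w * C)           # partial capacitance to ground
    z_in = z_series * z_shunt / (z_series + z_shunt)   # parallel combination

    peak = np.argmax(np.abs(z_in))
    print(f"|Z_in| peaks near {freqs[peak] / 1e6:.1f} MHz at {abs(z_in[peak]):.0f} ohm")
    ```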

  8. 21 CFR 26.32 - Scope.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY SYSTEM AUDIT REPORTS, AND CERTAIN... bodies (CAB's) assessed to be equivalent: (1) Under the U.S. system, surveillance/postmarket and initial/preapproval inspection reports; (2) Under the U.S. system, premarket (510(k)) product evaluation reports; (3...

  9. Psychiatric Resident and Attending Diagnostic and Prescribing Practices

    ERIC Educational Resources Information Center

    Tripp, Adam C.; Schwartz, Thomas L.

    2008-01-01

    Objective: This study investigates whether two patient population groups, under resident or attending treatment, are equivalent or different in the distribution of patient characteristics, diagnoses, or pharmacotherapy. Methods: Demographic data, psychiatric diagnoses, and pharmacotherapy data were collected for 100 random patient charts of…

  10. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  11. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  12. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  13. Workplace Math I: Easing into Math.

    ERIC Educational Resources Information Center

    Wilson, Nancy; Goschen, Claire

    This basic skills learning module includes instruction in performing basic computations, using general numerical concepts such as whole numbers, fractions, decimals, averages, ratios, proportions, percentages, and equivalents in practical situations. The problems are relevant to all aspects of the printing and manufacturing industry, with emphasis…

  14. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, which is a function of the reference variability σ_R. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both the Type I and Type II error rates may inflate. To resolve this issue, we develop an exact test-based method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
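
    The Type I error inflation described above can be reproduced with a small Monte Carlo experiment: run the usual two one-sided t-tests (TOST) with the plug-in margin ±f × S_R while the true mean difference sits exactly on the true margin f × σ_R. The sketch below is only an illustration of that inflation under arbitrary settings; it is not the exact test-based procedure proposed in the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    f, n_t, n_r, sigma_r, alpha = 1.5, 10, 10, 1.0, 0.05
    n_sim, reject = 20000, 0

    for _ in range(n_sim):
        # True mean difference placed exactly on the (unknown) margin f*sigma_r,
        # so any declaration of equivalence is a Type I error.
        x_t = rng.normal(f * sigma_r, sigma_r, n_t)   # test product
        x_r = rng.normal(0.0, sigma_r, n_r)           # reference product

        margin = f * x_r.std(ddof=1)                  # plug-in margin f * S_R
        diff = x_t.mean() - x_r.mean()
        df = n_t + n_r - 2
        sp2 = ((n_t - 1) * x_t.var(ddof=1) + (n_r - 1) * x_r.var(ddof=1)) / df
        se = np.sqrt(sp2 * (1.0 / n_t + 1.0 / n_r))

        p_low = 1.0 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
        p_high = stats.t.cdf((diff - margin) / se, df)        # H0: diff >= +margin
        if max(p_low, p_high) < alpha:                        # TOST declares equivalence
            reject += 1

    print(f"empirical Type I error: {reject / n_sim:.3f} (nominal {alpha})")
    ```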

  15. Rates of Eolian Rock Abrasion in the Ice-Free Valleys, Antarctica

    NASA Astrophysics Data System (ADS)

    Hallet, B.; Malin, M. C.; Sletten, R. S.

    2016-12-01

    Eolian abrasion is a principal surface process in dry regions of Earth and Mars, and there is evidence for wind processes active on Venus and Titan. Rock abrasion also has practical significance in diverse fields ranging from preservation of cultural material (artifacts, monuments) to damage of solar panels and windshields in arid regions. Despite its scientific and practical importance, there have been only a few studies that quantitatively define rates of rock abrasion under natural conditions. Herein we report abrasion rates that have been exceptionally well characterized through a unique long-term (30+ year) field experiment in the ice-free McMurdo Dry Valleys, Antarctica. In 1983 and 1984, over 5000 rock targets of several lithologies (25.4 mm-diameter and 5 mm-thick disks of dolerite, basalt, tuff and sandstone) were installed at five heights (7, 14, 21, 35, and 70 cm) facing the four cardinal directions at 10 locations (one additional site contains fewer targets). Sequential collections of rock targets exposed to abrasion enable definition of mass loss after 1, 5, 10, 30 and 31 years of exposure; the latter were retrieved during the 2014-2015 season. The abrasion rates generally show striking consistency for each lithology at any site; the multiple targets permit definition of intrinsic differences in mass loss. The rates vary considerably from site to site owing to differences in availability of transportable sediment, wind regime, and surface roughness, and at each site owing to target orientation relative to the dominant winds and, secondarily, to height above the ground. For the hardest targets, basalt and dolerite, mass loss in 30+ years ranged from essentially zero at some sites to 1/3 of the deployed mass (2.59 g; equivalent to a rock thickness >1.8 mm) where abrasion was most active (Site 7, Central Wright Valley). The tuff targets showed the greatest mass loss, and in many cases were entirely abraded away by the end of the experiment. Current work is focused on understanding the spatial and directional variation in measured mass losses based on a wealth of information.
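
    The conversion from measured mass loss to an equivalent abraded rock thickness is simple geometry: divide the lost mass by the rock density and the face area of the 25.4 mm disk. The density below is an assumed nominal value, not one reported above, so the result only roughly reproduces the >1.8 mm figure.

    ```python
    import math

    diameter_mm = 25.4      # target disk diameter (from the abstract)
    mass_loss_g = 2.59      # mass lost by the most heavily abraded hard targets
    density_g_cm3 = 2.9     # assumed nominal basalt/dolerite density (not from the abstract)

    face_area_cm2 = math.pi * (diameter_mm / 20.0) ** 2           # radius in cm, area in cm^2
    thickness_mm = 10.0 * mass_loss_g / (density_g_cm3 * face_area_cm2)
    print(f"equivalent abraded thickness: {thickness_mm:.2f} mm")  # ~1.8 mm for this density
    ```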

  16. Numerical simulation of electroosmotic flow in rough microchannels using the lattice Poisson-Nernst-Planck methods

    NASA Astrophysics Data System (ADS)

    Kamali, Reza; Soloklou, Mohsen Nasiri; Hadidi, Hooman

    2018-05-01

    In this study, a coupled lattice Boltzmann method is applied to solve the dynamic model for an electroosmotic flow and investigate the effects of roughness in a 2-D flat microchannel. In the present model, the Poisson equation is solved for the electrical potential and the Nernst-Planck equation is solved for the ion concentration. In the analysis of electroosmotic flows, when the electric double layers fully overlap or the convective effects are not negligible, the Nernst-Planck equation must be used to find the ionic distribution throughout the microchannel. The effects of surface roughness height, roughness interval spacing and roughness surface potential on flow conditions are investigated for two different configurations of the roughness when the EDL layers fully overlap through the microchannel. The results show that in both arrangements of roughness in homogeneously charged rough channels, the flow rate decreases with increasing roughness height. A discrepancy in the mass flow rate between the two configurations appears when the roughness height is about 0.15 of the channel width; the average flow rate is higher for the asymmetric configuration, and this difference grows as the roughness height increases. In the symmetric roughness arrangement, the mass flow rate increases until the roughness interval space is almost 1.5 times the roughness width, and it decreases for larger values of the roughness interval space. For the heterogeneously charged rough channel, when the roughness surface potential ψr is less than the channel surface potential ψs, the net charge density increases with distance from the roughness surface, while in the opposite situation, when ψr exceeds ψs, the net charge density decreases from the roughness surface toward the channel centerline. Increasing the roughness surface potential induces a stronger electric driving force on the fluid, which results in larger flow velocities.
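
    As a smooth-channel point of reference for the rough-wall, overlapped-EDL results above, the classical electroosmotic velocity profile under the Debye-Hückel approximation (low zeta potential) has a closed form, u(y) = u_HS [1 - cosh(κy)/cosh(κH)]. The sketch below evaluates it for arbitrary illustrative parameters; this baseline is not the lattice Poisson-Nernst-Planck model used in the paper.

    ```python
    import numpy as np

    # Arbitrary illustrative parameters in SI units; not taken from the paper.
    eps = 7.1e-10    # permittivity of water, F/m
    mu = 1.0e-3      # dynamic viscosity, Pa*s
    zeta = -0.025    # wall zeta potential, V (small enough for Debye-Hueckel)
    E_x = 1.0e4      # applied axial electric field, V/m
    kappa = 1.0e7    # inverse Debye length, 1/m (Debye length ~100 nm)
    H = 2.0e-7       # channel half-width, m (kappa*H = 2: partially overlapped EDLs)

    y = np.linspace(-H, H, 101)                             # wall-to-wall coordinate
    psi = zeta * np.cosh(kappa * y) / np.cosh(kappa * H)    # Debye-Hueckel potential
    u_hs = -eps * zeta * E_x / mu                           # Helmholtz-Smoluchowski velocity
    u = u_hs * (1.0 - psi / zeta)                           # electroosmotic velocity profile

    print(f"plug (HS) velocity: {u_hs * 1e3:.3f} mm/s; centreline: {u[50] * 1e3:.3f} mm/s")
    ```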

  17. Noninvasive evaluation of mental stress using by a refined rough set technique based on biomedical signals.

    PubMed

    Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng

    2014-06-01

    Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that overcomes this problem. However, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm, called RS-HTGA, is proposed to determine the relationship between mental stress and biomedical signals; it combines rough set theory with a hybrid Taguchi-genetic algorithm. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%, indicating that both algorithms produced highly accurate results, but a substantial difference in discrimination existed for people with Phase 0 stress: the SVM algorithm achieved 89%, whereas the RS-HTGA achieved 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency. The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77, indicating good discrimination capability. In this study, crucial attributes in stress evaluation were successfully recognized using biomedical signals, thereby enabling the conservation of medical resources and elucidating the mapping relationship between levels of mental stress and candidate attributes. In addition, we developed a prototype system for mental stress evaluation that can be used to provide benefits in medical practice. Copyright © 2014. Published by Elsevier B.V.
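
    The performance figures quoted (sensitivity, specificity, precision, Cohen's kappa) all follow directly from a 2x2 confusion matrix. A minimal sketch with made-up counts (not the 362-case clinic dataset) is:

    ```python
    # Hypothetical confusion-matrix counts; not the clinic data from the study.
    tp, fp, fn, tn = 170, 6, 8, 178

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)

    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_chance) / (1 - p_chance)

    print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  "
          f"precision={precision:.3f}  kappa={kappa:.3f}")
    ```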

  18. Making Sense of a Negative Clinical Trial Result: A Bayesian Analysis of a Clinical Trial of Lorazepam and Diazepam for Pediatric Status Epilepticus.

    PubMed

    Chamberlain, Daniel B; Chamberlain, James M

    2017-01-01

    We demonstrate the application of a Bayesian approach to a recent negative clinical trial result. A Bayesian analysis of such a trial can provide a more useful interpretation of results and can incorporate previous evidence. This was a secondary analysis of the efficacy and safety results of the Pediatric Seizure Study, a randomized clinical trial of lorazepam versus diazepam for pediatric status epilepticus. We included the published results from the only previous prospective pediatric study of status epilepticus in a Bayesian hierarchical model, and we performed sensitivity analyses on the amount of pooling between studies. We evaluated 3 summary analyses for the results: superiority, noninferiority (margin <-10%), and practical equivalence (within ±10%). Consistent with the original study's classic analysis of study results, we did not demonstrate superiority of lorazepam over diazepam. There is a 95% probability that the true efficacy of lorazepam is in the range of 66% to 80%. For both the efficacy and safety outcomes, there was greater than 95% probability that lorazepam is noninferior to diazepam, and there was greater than 90% probability that the 2 medications are practically equivalent. The results were largely driven by the current study because of the sample sizes of our study (n=273) and the previous pediatric study (n=61). Because Bayesian analysis estimates the probability of one or more hypotheses, such an approach can provide more useful information about the meaning of the results of a negative trial outcome. In the case of pediatric status epilepticus, it is highly likely that lorazepam is noninferior and practically equivalent to diazepam. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
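
    Under a simple beta-binomial model, the three summaries reported (superiority, noninferiority within -10%, practical equivalence within ±10%) are just posterior probabilities of the difference in efficacy. The sketch below uses invented response counts under flat Beta(1, 1) priors, not the trial data, and omits the hierarchical pooling with the earlier pediatric study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Invented response counts; not the trial's data.
    n_lor, x_lor = 135, 98    # lorazepam: patients, responders
    n_dia, x_dia = 138, 100   # diazepam:  patients, responders

    draws = 200_000
    p_lor = rng.beta(1 + x_lor, 1 + n_lor - x_lor, draws)   # posterior efficacy, lorazepam
    p_dia = rng.beta(1 + x_dia, 1 + n_dia - x_dia, draws)   # posterior efficacy, diazepam
    diff = p_lor - p_dia

    print(f"P(superiority, diff > 0):                {(diff > 0).mean():.3f}")
    print(f"P(noninferiority, diff > -0.10):         {(diff > -0.10).mean():.3f}")
    print(f"P(practical equivalence, |diff| < 0.10): {(np.abs(diff) < 0.10).mean():.3f}")
    ```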

  19. a Traffic-Dependent Acoustical Grinding Criterion

    NASA Astrophysics Data System (ADS)

    DINGS, P.; VERHEIJEN, E.; KOOTWIJK-DAMMAN, C.

    2000-03-01

    On most lines of the Dutch railway network, where a substantial number of block-braked trains have rough wheels, the average wheel roughness dominates over the rail roughness. Therefore, reducing wheel roughness is the top priority in the Netherlands. However, in situations where rail roughness exceeds wheel roughness, this roughness can be lowered at acceptable cost. The high rail roughness is often due to rail corrugation, which can be removed by grinding. A method has been developed to assess periodically the rail roughness on each railway line of the network, to compare it with the average wheel roughness for that line and to determine whether a noise reduction can be achieved by grinding the rail. Roughness measurements can be carried out with an instrumented coach. The two axle-boxes of a measurement wheelset are equipped with accelerometers. Together with the train speed and the right frequency filter, the accelerometer signal is used to produce a wavelength spectrum of the rail roughness. To determine the average wheel roughness on a given line, the so-called Acoustical Timetable can be used. This database comprises train types, train intensities and train speeds for each track section in the Netherlands. An average wheel roughness spectrum is known for each type of braking system. The number of trains of each type passing over a certain track section determines the average roughness. Analysis of the data shows on which track sections the rail roughness exceeds the wheel roughness by a specified level difference. If such a track section lies in a residential area, the decision can be made to grind this piece of track to reduce the noise production locally. Using this methodology, the noise production can be kept to a minimum determined by the local average wheel roughness.
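
    Turning axle-box acceleration into a roughness wavelength spectrum amounts to double-integrating the acceleration in the frequency domain and relabelling frequency as wavelength through the train speed (wavelength = speed / frequency). The sketch below runs that chain on a synthetic signal and is only a schematic of the idea, not the instrumented coach's actual processing or filtering.

    ```python
    import numpy as np

    fs = 2000.0                 # sample rate, Hz
    v = 100.0 / 3.6             # train speed, m/s (100 km/h)
    t = np.arange(0, 10.0, 1.0 / fs)

    # Synthetic axle-box acceleration: a corrugation-like tone plus broadband noise.
    rng = np.random.default_rng(3)
    corrugation_wavelength = 0.06                 # m, a typical corrugation wavelength
    f_corr = v / corrugation_wavelength           # excitation frequency at this speed, Hz
    accel = 5.0 * np.sin(2 * np.pi * f_corr * t) + rng.normal(0, 0.5, t.size)

    # Displacement (roughness) amplitude spectrum: divide by (2*pi*f)^2 in the frequency domain.
    spec = np.fft.rfft(accel) / t.size
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    disp = np.zeros_like(freqs)
    disp[1:] = 2.0 * np.abs(spec[1:]) / (2 * np.pi * freqs[1:]) ** 2   # skip the DC bin

    wavelengths = np.full_like(freqs, np.inf)
    wavelengths[1:] = v / freqs[1:]               # relabel frequency as wavelength, m

    band = (wavelengths >= 0.01) & (wavelengths <= 0.3)   # keep an acoustically relevant band
    peak = np.flatnonzero(band)[np.argmax(disp[band])]
    print(f"dominant roughness wavelength: {wavelengths[peak] * 100:.1f} cm, "
          f"amplitude roughly {disp[peak] * 1e6:.2f} micrometres")
    ```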

  20. Negotiating for more: the multiple equivalent simultaneous offer.

    PubMed

    Heller, Richard E

    2014-02-01

    Whether one is a doctor, a professional baseball manager, or a politician, successful negotiation skills are a critical part of being a leader. Building upon prior journal articles on negotiation strategy, the author presents the concept of the multiple equivalent simultaneous offer (MESO). The concept of a MESO is straightforward: instead of making a single offer, make multiple offers that vary several terms at once, with each offer adjusting the variables so that the end result of every offer is equivalent from the perspective of the party making them. Research has found several advantages to the use of MESOs. For example, using MESOs, an offer was more likely to be accepted, and the counterparty was more likely to be satisfied with the negotiated deal. Additional benefits have been documented as well, underscoring why a prepared radiology business leader should understand the theory and practice of MESO. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
