Sample records for area equivalent method

  1. Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea

    USGS Publications Warehouse

    Attanasi, E.D.; Root, D.H.

    1988-01-01

    Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not easily accommodate situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%. © 1988 International Association for Mathematical Geology.
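
    The record above quotes a per-well decline and a cumulative forecast for Nigeria; the minimal sketch below checks how those figures relate under a simple exponential-decline assumption. It is illustrative only: the paper itself uses a mapping algorithm, not this constant-rate fit, and the variable names are invented for the example.

    ```python
    import numpy as np

    # Back-of-the-envelope check of the quoted Nigeria figures under a simple
    # exponential-decline assumption (illustrative only; not the paper's method).
    wells_future = 825        # future wildcat wells, equal to past drilling
    rate_drop    = 0.68       # per-well discovery rate declines by 68% over that drilling
    future_boe   = 8.5e9      # forecast future discoveries, barrels of oil equivalent

    k  = -np.log(1.0 - rate_drop) / wells_future   # implied decline constant per well
    r0 = future_boe * k / rate_drop                # current rate so the integral matches 8.5e9
    print(f"decline constant k = {k:.2e} per well")
    print(f"implied current discovery rate ~ {r0 / 1e6:.0f} million BOE per wildcat")
    ```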

  2. Some Examples of the Applications of the Transonic and Supersonic Area Rules to the Prediction of Wave Drag

    NASA Technical Reports Server (NTRS)

    Nelson, Robert L.; Welsh, Clement J.

    1960-01-01

    The experimental wave drags of bodies and wing-body combinations over a wide range of Mach numbers are compared with the computed drags utilizing a 24-term Fourier series application of the supersonic area rule and with the results of equivalent-body tests. The results indicate that the equivalent-body technique provides a good method for predicting the wave drag of certain wing-body combinations at and below a Mach number of 1. At Mach numbers greater than 1, the equivalent-body wave drags can be misleading. The wave drags computed using the supersonic area rule are shown to be in best agreement with the experimental results for configurations employing the thinnest wings. The wave drags for the bodies of revolution presented in this report are predicted to a greater degree of accuracy by using the frontal projections of oblique areas than by using normal areas. A rapid method of computing wing area distributions and area-distribution slopes is given in an appendix.
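
    For readers who want to experiment with area-rule drag estimates, the sketch below applies the classical von Karman slender-body wave-drag formula, D = (pi*rho*U^2/8) * sum(n * A_n^2), to a Fourier-sine expansion of dS/dx. It is a generic linearized-theory illustration under assumed flight conditions, not the 24-term procedure of the NACA report; the sample area distribution and the `wave_drag_slender_body` helper are made up for the example.

    ```python
    import numpy as np

    def wave_drag_slender_body(x, S, rho=1.225, U=340.0, n_terms=24):
        """Classical von Karman slender-body wave-drag estimate from an area
        distribution S(x): expand dS/dx in a Fourier sine series in the Glauert
        variable and evaluate D = (pi*rho*U^2/8) * sum(n * A_n^2).
        Generic linearized-theory sketch, not the NACA report's exact procedure."""
        l = x[-1] - x[0]
        theta = np.arccos(1.0 - 2.0 * (x - x[0]) / l)   # Glauert variable, 0..pi
        dSdx = np.gradient(S, x)
        n = np.arange(1, n_terms + 1)
        A = np.array([np.trapz(dSdx * np.sin(k * theta), theta) * 2.0 / np.pi for k in n])
        return np.pi * rho * U**2 / 8.0 * np.sum(n * A**2)

    # Example: a smooth pointed body, 10 m long with 0.5 m^2 maximum cross section
    x = np.linspace(0.0, 10.0, 400)
    S = 0.5 * np.sin(np.pi * x / 10.0) ** 1.5
    print(f"estimated wave drag: {wave_drag_slender_body(x, S):.1f} N")
    ```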

  3. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    USDA-ARS's Scientific Manuscript database

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  4. Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.

    PubMed

    Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph

    2018-06-01

    There are numerous configurations of double-row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compares in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeri sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double row equivalent method demonstrated increased survival as well as ultimate strength at 415 N compared with the remaining test groups, as well as contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.

  5. SEPs to finger joint input lack the N20-P20 response that is evoked by tactile inputs: contrast between cortical generators in areas 3b and 2 in humans.

    PubMed

    Desmedt, J E; Ozaki, I

    1991-01-01

    A method using a DC servo motor is described to produce brisk angular movements at finger interphalangeal joints in humans. Small passive flexions of 2 degrees elicited sizable somatosensory evoked potentials (SEPs) starting with a contralateral positive P34 parietal response thought to reflect activation of a radial equivalent dipole generator in area 2 which receives joint inputs. By contrast, electric stimulation of tactile (non-joint) inputs from the distal phalanx evoked the usual contralateral negative N20 reflecting a tangential equivalent dipole generator in area 3b. Finger joint inputs also evoked a precentral positivity equivalent to the P22 of motor area 4, and a large frontal negativity equivalent to N30. It is suggested that natural stimulation allows human SEP components to be differentiated in conjunction with distinct cortical somatotopic projections.

  6. Optimal Sample Size Determinations for the Heteroscedastic Two One-Sided Tests of Mean Equivalence: Design Schemes and Software Implementations

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2017-01-01

    Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
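
    As context for the TOST procedure discussed above, here is a minimal sketch of the heteroscedastic (Welch) two one-sided tests for two independent samples. The margins, sample data, and the `tost_welch` helper are hypothetical, and the paper's exact power and sample-size formulas are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def tost_welch(x, y, low, high, alpha=0.05):
        """Heteroscedastic TOST: two one-sided Welch t-tests that the mean
        difference lies inside the equivalence margin (low, high).
        Sketch only; see the article for exact power/sample-size formulas."""
        d = x.mean() - y.mean()
        vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
        se = np.sqrt(vx + vy)
        df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))  # Welch-Satterthwaite
        p_lower = 1.0 - stats.t.cdf((d - low) / se, df)   # H0: difference <= low
        p_upper = stats.t.cdf((d - high) / se, df)        # H0: difference >= high
        p = max(p_lower, p_upper)
        return p, p < alpha   # equivalence declared only if both one-sided tests reject

    rng = np.random.default_rng(1)
    x = rng.normal(10.0, 2.0, 40)     # hypothetical test group
    y = rng.normal(10.3, 3.0, 60)     # hypothetical reference group, unequal variance
    print(tost_welch(x, y, low=-1.0, high=1.0))
    ```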

  7. A Formal Approach to Requirements-Based Programming

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.

  8. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

    A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.

  9. Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attanasi, E.D.; Root, D.H.

    1988-10-01

    Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not easily accommodate situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%.

  10. Method for measuring dose-equivalent in a neutron flux with an unknown energy spectra and means for carrying out that method

    DOEpatents

    Distenfeld, Carl H.

    1978-01-01

    A method for measuring the dose-equivalent for exposure to an unknown and/or time-varying neutron flux which comprises simultaneously exposing a plurality of neutron detecting elements of different types to a neutron flux and combining the measured responses of the various detecting elements by means of a function, whose value is an approximate measure of the dose-equivalent, which is substantially independent of the energy spectra of the flux. Also, a personnel neutron dosimeter, which is useful in carrying out the above method, comprising a plurality of various neutron detecting elements in a single housing suitable for personnel to wear while working in a radiation area.

  11. REGULATORY METHODS PROGRAM SUPPORT FOR NAAQSS

    EPA Science Inventory

    This task supports attainment determinations of the National Ambient Air Quality Standards (NAAQS) for particulate matter (PM) in the areas of development, testing, and improvement of new and current PM Federal Reference Methods (FRMs) and Federal Equivalent Methods (FEMs). The ...

  12. Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2013-01-01

    This paper presents an approach to shape an aircraft to equivalent area based objectives using the discrete adjoint approach. Equivalent areas can be obtained either using reversed augmented Burgers equation or direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent area cost functionals are discussed and further refined using ground loudness based objectives.
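
    The conversion of an off-body signature into an equivalent-area distribution mentioned above rests on Whitham's first-order relation A_e(x) = 4 * integral_0^x F(y) * sqrt(x - y) dy, where F is the F-function obtained from the near-field dp/p. The sketch below evaluates that integral numerically for a toy F-function; it illustrates only the classical relation under linearized-theory assumptions, not NASA's adjoint-based tooling, and the sample F(y) is invented.

    ```python
    import numpy as np

    def equivalent_area_from_F(x, F):
        """Whitham's first-order relation A_e(x) = 4 * int_0^x F(y) sqrt(x - y) dy,
        evaluated by simple quadrature. F is the F-function derived from an
        off-body pressure signature; classical relation only, not the paper's
        adjoint-based machinery."""
        Ae = np.empty_like(x)
        for i, xi in enumerate(x):
            y = x[: i + 1]
            Ae[i] = 4.0 * np.trapz(F[: i + 1] * np.sqrt(xi - y), y)
        return Ae

    x = np.linspace(0.0, 30.0, 301)            # effective distance, m (toy values)
    F = 0.02 * np.sin(2.0 * np.pi * x / 30.0)  # invented F-function for illustration
    print(f"A_e at the aft end: {equivalent_area_from_F(x, F)[-1]:.3f} m^2")
    ```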

  13. Performance Prediction Relationships for AM2 Airfield Matting Developed from Full-Scale Accelerated Testing and Laboratory Experimentation

    DTIC Science & Technology

    2018-01-01

    work, the prevailing methods used to predict the performance of AM2 were based on the CBR design procedure for flexible pavements using a small number...suitable for design and evaluation frameworks currently used for airfield pavements and matting systems. DISCLAIMER: The contents of this report...methods used to develop the equivalency curves equated the mat-surfaced area to an equivalent thickness of flexible pavement using the CBR design

  14. Future change in seasonal march of snow water equivalent due to global climate change

    NASA Astrophysics Data System (ADS)

    Hara, M.; Kawase, H.; Ma, X.; Wakazuki, Y.; Fujita, M.; Kimura, F.

    2012-04-01

    The western side of Honshu Island in Japan is one of the heaviest snowfall areas in the world, although it lies at a lower latitude than other heavy-snowfall areas. Snowfall is a major water source for agricultural, industrial, and household use in Japan. Changes in the seasonal march of snow water equivalent, e.g., snowmelt timing and amount, will strongly influence socio-economic activities (e.g., Ma et al., 2011). We performed four numerical experiments, covering present and future climates and much-snow and less-snow cases, using a regional climate model. The Pseudo-Global-Warming (PGW) method (Kimura and Kitoh, 2008) is applied for the future climate simulations. NCEP/NCAR reanalysis is used for initial and boundary conditions in the present climate simulation and the PGW method. MIROC 3.2 medres output for the 2070s under the IPCC SRES A2 scenario and for the 1990s under the 20c3m scenario is used for the PGW method. In the much-snow cases, the maximum total snow water equivalent over Japan, mostly observed in early February, is 49 Gt in the present simulation and decreases to 26 Gt in the future simulation. The decreasing rate of snow water equivalent due to climate change was 49%. The main cause of the decrease in total snow water equivalent is the air temperature rise due to global climate change. The difference between present and future precipitation amounts is small.
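
    As a pointer to how Pseudo-Global-Warming forcing is typically constructed, the sketch below adds a GCM (future minus present) climatological difference to reanalysis boundary conditions. The array shapes, temperature values, and the `pgw_boundary` helper are assumptions for illustration, not the authors' actual downscaling workflow.

    ```python
    import numpy as np

    def pgw_boundary(reanalysis, gcm_future_clim, gcm_present_clim):
        """Pseudo-Global-Warming forcing: add the GCM's (future minus present)
        climatological difference to reanalysis boundary conditions.
        Minimal sketch of the PGW idea, not the full downscaling setup."""
        delta = gcm_future_clim - gcm_present_clim   # climate-change signal
        return reanalysis + delta                    # broadcasts over the time axis

    # Toy example: a few 6-hourly temperature boundary-condition slices
    T_ncep         = np.full((4, 17, 73, 144), 250.0)   # (time, level, lat, lon), K
    T_future_clim  = np.full((17, 73, 144), 261.4)      # GCM 2070s monthly mean, K (invented)
    T_present_clim = np.full((17, 73, 144), 258.2)      # GCM 1990s monthly mean, K (invented)
    print(pgw_boundary(T_ncep, T_future_clim, T_present_clim).mean())   # 253.2
    ```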

  15. Contact between the acetabulum and dome of a Kerboull-type plate influences the stress on the plate and screw.

    PubMed

    Hara, Katsutoshi; Kaku, Nobuhiro; Tabata, Tomonori; Tsumura, Hiroshi

    2015-07-01

    We used a three-dimensional finite element method to investigate the conditions behind the Kerboull-type (KT) dome. The KT plate dome was divided into five areas, and 14 models were created to examine different conditions of dome contact with the acetabulum. The maximum stress on the KT plate and screws was estimated for each model. Furthermore, to investigate the impact of the contact area with the acetabulum on the KT plate, a multiple regression analysis was conducted using the analysis results. The dome-acetabulum contact area affected the maximum equivalent stress on the KT plate; good contact with two specific areas of the vertical and horizontal beams (Areas 3 and 5) reduced the maximum equivalent stress. The maximum equivalent stress on the hook increased when the hardness of the bone representing the acetabulum varied. Thus, we confirmed the technical importance of providing a plate with a broad area of appropriate support from the bone and cement in the posterior portion of the dome and also proved the importance of supporting the area of the plate in the direction of the load at the center of the cross-plate and near the hook.

  16. The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krmar, M.; Kuzmanović, A.; Nikolić, D.

    2013-08-15

    Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen covering only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered, in which the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner. In the second model the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen that was placed inside the maze. It was also determined that the contributions to the dosage from areas that were not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. This paper also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.

  17. Equivalent complex conductivities representing the effects of T-tubules and folded surface membranes on the electrical admittance and impedance of skeletal muscles measured by external-electrode method

    NASA Astrophysics Data System (ADS)

    Sekine, Katsuhisa

    2017-12-01

    In order to represent the effects of T-tubules and folded surface membranes on the electrical admittance and impedance of skeletal muscles measured by the external-electrode method, analytical relations for the equivalent complex conductivities of hypothetical smooth surface membranes were derived. In the relations, the effects of each tubule were represented by the admittance of a straight cable. The effects of the folding of a surface membrane were represented by the increased area of surface membranes. The equivalent complex conductivities were represented as summation of these effects, and the effects of the T-tubules were different between the transversal and longitudinal directions. The validity of the equivalent complex conductivities was supported by the results of finite-difference method (FDM) calculations made using three-dimensional models in which T-tubules and folded surface membranes were represented explicitly. FDM calculations using the equivalent complex conductivities suggested that the electrically inhomogeneous structure due to the existence of muscle cells with T-tubules was sufficient for explaining the experimental results previously obtained using the external-electrode method. Results of FDM calculations in which the structural changes caused by muscle contractions were taken into account were consistent with the reported experimental results.

  18. Method for detecting water equivalent of snow using secondary cosmic gamma radiation

    DOEpatents

    Condreva, K.J.

    1997-01-14

    Determination of the water equivalent of accumulated snow by measurement of the attenuation of secondary background cosmic radiation by the snowpack. By measuring the attenuation of 3-10 MeV secondary gamma radiation it is possible to determine the water equivalent of snowpack. The apparatus is designed to operate remotely to determine the water equivalent of snow in areas which are difficult or hazardous to access during winter, accumulate the data as a function of time, and transmit, by means of an associated telemetry system, the accumulated data back to a central data collection point for analysis. The electronic circuitry is designed so that a battery pack can be used to supply power. 4 figs.
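
    A hedged sketch of the attenuation idea behind this patent: if the secondary gamma count rate follows a Beer-Lambert style law I = I0 * exp(-mu_w * SWE), the water equivalent can be recovered from the ratio of snow-covered to bare-ground counts. The effective coefficient `mu_w` below is a placeholder that a fielded system would calibrate for the 3-10 MeV band and detector geometry; the count values are invented.

    ```python
    import numpy as np

    def swe_from_gamma(I, I0, mu_w=0.03):
        """Snow water equivalent (cm of water) from attenuation of secondary
        cosmic gamma counts, assuming I = I0 * exp(-mu_w * SWE).
        mu_w is an assumed effective attenuation coefficient (1/cm of water);
        calibrate it for the actual energy band and geometry before use."""
        return np.log(I0 / I) / mu_w

    # counts with snow on the ground vs. bare-ground reference counts (invented)
    print(f"{swe_from_gamma(I=820.0, I0=1000.0):.1f} cm water equivalent")
    ```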

  19. Method for detecting water equivalent of snow using secondary cosmic gamma radiation

    DOEpatents

    Condreva, Kenneth J.

    1997-01-01

    Determination of the water equivalent of accumulated snow by measurement of the attenuation of secondary background cosmic radiation by the snowpack. By measuring the attenuation of 3-10 MeV secondary gamma radiation it is possible to determine the water equivalent of snowpack. The apparatus is designed to operate remotely to determine the water equivalent of snow in areas which are difficult or hazardous to access during winter, accumulate the data as a function of time, and transmit, by means of an associated telemetry system, the accumulated data back to a central data collection point for analysis. The electronic circuitry is designed so that a battery pack can be used to supply power.

  20. Evapotranspiration Measurement and Estimation: Weighing Lysimeter and Neutron Probe Based Methods Compared with Eddy Covariance

    NASA Astrophysics Data System (ADS)

    Evett, S. R.; Gowda, P. H.; Marek, G. W.; Alfieri, J. G.; Kustas, W. P.; Brauer, D. K.

    2014-12-01

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP) and soil core sampling techniques), and can be biased with respect to ET from the surrounding area. The area represented by flux sensing methods such as eddy covariance (EC) is typically estimated with a flux footprint/source area model. The dimension, position of, and relative contribution of upwind areas within the source area are mainly influenced by sensor height, wind speed, atmospheric stability and wind direction. Footprints for EC sensors positioned several meters above the canopy are often larger than can be economically covered by mass balance methods. Moreover, footprints move with atmospheric conditions and wind direction to cover different field areas over time while mass balance methods are static in space. Thus, EC systems typically sample a much greater field area over time compared with mass balance methods. Spatial variability of surface cover can thus complicate interpretation of flux estimates from EC systems. The most commonly used flux estimation method is EC; and EC estimates of latent heat energy (representing ET) and sensible heat fluxes combined are typically smaller than the available energy from net radiation and soil heat flux (commonly referred to as lack of energy balance closure). Reasons for this are the subject of ongoing research. We compare ET from LYS, NP and EC methods applied to field crops for three years at Bushland, Texas (35° 11' N, 102° 06' W, 1170 m elevation above MSL) to illustrate the potential problems with and comparative advantages of all three methods. In particular, we examine how networks of neutron probe access tubes can be representative of field areas large enough to be equivalent in size to EC footprints, and how the ET data from these methods can address bias and accuracy issues.
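
    Since the abstract refers to the lack of energy-balance closure in eddy-covariance data, a one-line diagnostic is sketched below: the closure ratio (H + LE) / (Rn - G). The flux values are illustrative numbers, not Bushland measurements.

    ```python
    import numpy as np

    def closure_ratio(H, LE, Rn, G):
        """Energy-balance closure ratio (H + LE) / (Rn - G) for eddy-covariance
        records; values below 1 reflect the lack of closure noted above."""
        H, LE, Rn, G = map(np.asarray, (H, LE, Rn, G))
        return (H + LE) / (Rn - G)

    # One 30-min record of fluxes in W m^-2 (illustrative numbers)
    print(closure_ratio(H=120.0, LE=300.0, Rn=520.0, G=40.0))   # 0.875
    ```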

  1. Method to determine the position-dependant metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    DOEpatents

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  2. Mixed-Fidelity Approach for Design of Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Li, Wu; Shields, Elwood; Geiselhart, Karl

    2011-01-01

    This paper documents a mixed-fidelity approach for the design of low-boom supersonic aircraft with a focus on fuselage shaping. A low-boom configuration that is based on low-fidelity analysis is used as the baseline. The fuselage shape is modified iteratively to obtain a configuration with an equivalent-area distribution derived from computational fluid dynamics analysis that attempts to match a predetermined low-boom target area distribution and also yields a low-boom ground signature. The ground signature of the final configuration is calculated by using a state-of-the-art computational-fluid-dynamics-based boom analysis method that generates accurate midfield pressure distributions for propagation to the ground with ray tracing. The ground signature that is propagated from a midfield pressure distribution has a shaped ramp front, which is similar to the ground signature that is propagated from the computational fluid dynamics equivalent-area distribution. This result supports the validity of low-boom supersonic configuration design by matching a low-boom equivalent-area target, which is easier to accomplish than matching a low-boom midfield pressure target.

  3. 29 CFR 1910.106 - Flammable and combustible liquids.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determination methods specified in this subparagraph. (15) Hotel shall mean buildings or groups of buildings... long bolts or equivalent, may be calculated provided that the opening pressure is actually measured... aboveground tanks—(a) Drainage and diked areas. The area surrounding a tank or a group of tanks shall be...

  4. Laser-induced breakdown spectroscopy for in-cylinder equivalence ratio measurements in laser-ignited natural gas engines.

    PubMed

    Joshi, Sachin; Olsen, Daniel B; Dumitrescu, Cosmin; Puzinauskas, Paulius V; Yalin, Azer P

    2009-05-01

    In this contribution we present the first demonstration of simultaneous use of laser sparks for engine ignition and laser-induced breakdown spectroscopy (LIBS) measurements of in-cylinder equivalence ratios. A 1064 nm neodymium yttrium aluminum garnet (Nd:YAG) laser beam is used with an optical spark plug to ignite a single cylinder natural gas engine. The optical emission from the combustion initiating laser spark is collected through the optical spark plug and cycle-by-cycle spectra are analyzed for H(alpha)(656 nm), O(777 nm), and N(742 nm, 744 nm, and 746 nm) neutral atomic lines. The line area ratios of H(alpha)/O(777), H(alpha)/N(746), and H(alpha)/N(tot) (where N(tot) is the sum of areas of the aforementioned N lines) are correlated with equivalence ratios measured by a wide band universal exhaust gas oxygen (UEGO) sensor. Experiments are performed for input laser energy levels of 21 mJ and 26 mJ, compression ratios of 9 and 11, and equivalence ratios between 0.6 and 0.95. The results show a linear correlation (R(2) > 0.99) of line intensity ratio with equivalence ratio, thereby suggesting an engine diagnostic method for cylinder resolved equivalence ratio measurements.
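
    A minimal sketch of the calibration step described above: fit a linear relation between a LIBS line-area ratio (e.g., H_alpha/O 777 nm) and the UEGO-measured equivalence ratio, then invert it for new cycles. The numbers below are invented for illustration and are not the paper's data.

    ```python
    import numpy as np

    # Calibrate a line-area ratio against UEGO-measured equivalence ratios (invented values)
    phi_uego = np.array([0.60, 0.70, 0.80, 0.90, 0.95])   # reference equivalence ratios
    ratio_HO = np.array([0.41, 0.48, 0.55, 0.62, 0.66])   # corresponding H_alpha/O(777) ratios

    slope, intercept = np.polyfit(ratio_HO, phi_uego, 1)  # linear calibration
    r2 = np.corrcoef(ratio_HO, phi_uego)[0, 1] ** 2
    print(f"phi = {slope:.2f} * ratio + {intercept:.2f}  (R^2 = {r2:.3f})")
    print(f"predicted phi for ratio 0.58: {slope * 0.58 + intercept:.2f}")
    ```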

  5. A Mixed-Fidelity Approach for Design of Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Li, Wu; Shields, Elwood; Geiselhart, Karl A.

    2010-01-01

    This paper documents a mixed-fidelity approach for the design of low-boom supersonic aircraft as a viable approach for designing a practical low-boom supersonic configuration. A low-boom configuration that is based on low-fidelity analysis is used as the baseline. Tail lift is included to help tailor the aft portion of the ground signature. A comparison of low- and high-fidelity analysis results demonstrates the necessity of using computational fluid dynamics (CFD) analysis in a low-boom supersonic configuration design process. The fuselage shape is modified iteratively to obtain a configuration with a CFD equivalent-area distribution that matches a predetermined low-boom target distribution. The mixed-fidelity approach can easily refine the low-fidelity low-boom baseline into a low-boom configuration with the use of CFD equivalent-area analysis. The ground signature of the final configuration is calculated by using a state-of-the-art CFD-based boom analysis method that generates accurate midfield pressure distributions for propagation to the ground with ray tracing. The ground signature that is propagated from a midfield pressure distribution has a shaped ramp front, which is similar to the ground signature that is propagated from the CFD equivalent-area distribution. This result confirms the validity of the low-boom supersonic configuration design by matching a low-boom equivalent-area target, which is easier to accomplish than matching a low-boom midfield pressure target.

  6. Comparison of Peak-area Ratios and Percentage Peak Area Derived from HPLC-evaporative Light Scattering and Refractive Index Detectors for Palm Oil and its Fractions.

    PubMed

    Ping, Bonnie Tay Yen; Aziz, Haliza Abdul; Idris, Zainab

    2018-01-01

    High-Performance Liquid Chromatography (HPLC) methods via evaporative light scattering (ELS) and refractive index (RI) detectors are used by the local palm oil industry to monitor the TAG profiles of palm oil and its fractions. The quantitation method used is based on area normalization of the TAG components and expressed as percentage area. Although not frequently used, peak-area ratios based on TAG profiles are a possible qualitative method for characterizing the TAG of palm oil and its fractions. This paper aims to compare these two detectors in terms of peak-area ratio, percentage peak area composition, and TAG elution profiles. The triacylglycerol (TAG) composition of palm oil and its fractions was analysed under similar HPLC conditions, i.e., mobile phase and column. However, different sample concentrations were used for the detectors while remaining within the linearity limits of the detectors. These concentrations also gave a good baseline-resolved separation for all the TAG components. The percentage area compositions for the TAGs of palm oil and its fractions obtained with the ELSD method differed from those of the RID. This indicates an unequal response of the TAGs of palm oil and its fractions to the ELSD, which also affects the peak-area ratios; these were found not to be equivalent to those obtained using the HPLC-RID. The ELSD method showed a better baseline separation for the TAG components, with a more stable baseline as compared with the corresponding HPLC-RID. In conclusion, the percentage area compositions and peak-area ratios for palm oil and its fractions as derived from HPLC-ELSD and RID were not equivalent due to the different responses of the TAG components to the ELSD detector. The HPLC-RID has better accuracy for percentage area composition and peak-area ratio because the TAG components respond equally to the detector.
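
    For reference, the two quantities compared in this study are simple functions of the integrated peak areas: percentage area by normalization and pairwise peak-area ratios. The sketch below computes both for a made-up set of TAG peaks; the TAG names and values are illustrative, not measured palm-oil data.

    ```python
    # Percentage peak-area composition (area normalization) and a peak-area ratio
    # from integrated TAG peak areas; names and values are illustrative only.
    peaks = {"OOO": 1250.0, "POO": 3400.0, "POP": 2900.0, "PPP": 610.0}

    total = sum(peaks.values())
    percent_area = {tag: 100.0 * area / total for tag, area in peaks.items()}
    ratio_POO_POP = peaks["POO"] / peaks["POP"]

    print({tag: round(p, 1) for tag, p in percent_area.items()})
    print(f"POO/POP peak-area ratio = {ratio_POO_POP:.2f}")
    ```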

  7. Future Change of Snow Water Equivalent over Japan

    NASA Astrophysics Data System (ADS)

    Hara, M.; Kawase, H.; Kimura, F.; Fujita, M.; Ma, X.

    2012-12-01

    The western side of Honshu Island and Hokkaido Island in Japan are among the heaviest snowfall areas in the world. Although heavy snowfall often brings disaster, snow is a major water source for agricultural, industrial, and household use in Japan. Even during the winter, the monthly mean surface air temperature often exceeds 0 °C in large parts of the heavy-snow areas along the Sea of Japan. Thus, snow cover may be seriously reduced in these areas as a result of global warming caused by increasing greenhouse gases. Changes in the seasonal march of snow water equivalent, e.g., snowmelt timing and amount, will strongly influence socio-economic activities. We performed a series of numerical experiments, including present and future climate simulations and much-snow and less-snow cases, using a regional climate model. The Pseudo-Global-Warming (PGW) method (Kimura and Kitoh, 2008) is applied for the future climate simulations. MIROC 3.2 medres output for the 2070s under the IPCC SRES A2 scenario and for the 1990s under the 20c3m scenario is used for the PGW method. The precipitation, snow depth, and surface air temperature of the hindcast simulations show good agreement with the AMeDAS station data. In the much-snow cases, the decreasing rate of maximum total snow water equivalent over Japan due to climate change was 49%. The main cause of the decrease in total snow water equivalent is the air temperature rise due to global climate change. The difference in precipitation amount between the present and future simulations is small.

  8. Measurement equivalence: a glossary for comparative population health research.

    PubMed

    Morris, Katherine Ann

    2018-03-06

    Comparative population health studies are becoming more common and are advancing solutions to crucial public health problems, but decades-old measurement equivalence issues remain without a common vocabulary to identify and address the biases that contribute to non-equivalence. This glossary defines sources of measurement non-equivalence. While drawing examples from both within-country and between-country studies, this glossary also defines methods of harmonisation and elucidates the unique opportunities in addition to the unique challenges of particular harmonisation methods. Its primary objective is to enable population health researchers to more clearly articulate their measurement assumptions and the implications of their findings for policy. It is also intended to provide scholars and policymakers across multiple areas of inquiry with tools to evaluate comparative research and thus contribute to urgent debates on how to ameliorate growing health disparities within and between countries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring the progress of wound healing accurately, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method differed statistically significantly from the transparency and photographic methods (P value <0.05), but there was no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided equivalent measurements of wound surface area, and there was no statistically significant difference between these two methods.

  10. Leveling data in geochemical mapping: scope of application, pros and cons of existing methods

    NASA Astrophysics Data System (ADS)

    Pereira, Benoît; Vandeuren, Aubry; Sonnet, Philippe

    2017-04-01

    Geochemical mapping successfully met a range of needs from mineral exploration to environmental management. In Europe and around the world numerous geochemical datasets already exist. These datasets may originate from geochemical mapping projects or from the collection of sample analyses requested by environmental protection regulatory bodies. Combining datasets can be highly beneficial for establishing geochemical maps with increased resolution and/or coverage area. However this practice requires assessing the equivalence between datasets and, if needed, applying data leveling to remove possible biases between datasets. In the literature, several procedures for assessing dataset equivalence and leveling data are proposed. Daneshfar & Cameron (1998) proposed a method for the leveling of two adjacent datasets while Pereira et al. (2016) proposed two methods for the leveling of datasets that contain records located within the same geographical area. Each discussed method requires its own set of assumptions (underlying populations of data, spatial distribution of data, etc.). Here we propose to discuss the scope of application, pros, cons and practical recommendations for each method. This work is illustrated with several case studies in Wallonia (Southern Belgium) and in Europe involving trace element geochemical datasets. References: Daneshfar, B. & Cameron, E. (1998), Leveling geochemical data between map sheets, Journal of Geochemical Exploration 63(3), 189-201. Pereira, B.; Vandeuren, A.; Govaerts, B. B. & Sonnet, P. (2016), Assessing dataset equivalence and leveling data in geochemical mapping, Journal of Geochemical Exploration 168, 36-48.

  11. Method of passive ranging from infrared image sequence based on equivalent area

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Shen, Zhenkang

    2007-11-01

    Range information between a missile and its target is important not only for missile control but also for automatic target recognition, so the study of passive ranging from infrared images has both theoretical and practical significance. Here we attempt to obtain the range between a guided missile and its target in order to help identify targets or evade a hit. Estimating the distance between missile and target is currently an active and difficult research topic. Because an infrared imaging detector cannot measure range directly, the capabilities of guidance information processing systems based on infrared images are restricted. To overcome this technical difficulty, we investigated the principles of infrared imaging and, after analysing the imaging geometry between the guided missile and the target, proposed a method of passive ranging based on equivalent area and provided analytical formulas. Validation experiments demonstrate that the presented method performs well, with relative errors as low as 10% in some circumstances.
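
    The geometric idea behind area-based passive ranging can be sketched with a pinhole-camera relation: the imaged area scales as A_img = A_target * f^2 / R^2, so R = f * sqrt(A_target / A_img). This is a generic illustration under an assumed known target frontal area, not the paper's exact formulation; all parameter values and the `passive_range` helper are hypothetical.

    ```python
    import math

    def passive_range(focal_length_m, target_area_m2, image_area_m2):
        """Pinhole-geometry range estimate: A_img = A_target * f**2 / R**2,
        hence R = f * sqrt(A_target / A_img). Generic area-based ranging
        illustration with an assumed known target frontal area."""
        return focal_length_m * math.sqrt(target_area_m2 / image_area_m2)

    # 0.1 m focal length, 12 m^2 assumed target area, 3.0e-7 m^2 area on the focal plane
    print(f"estimated range: {passive_range(0.1, 12.0, 3.0e-7):.0f} m")   # ~632 m
    ```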

  12. Measurement and interpretation of skin prick test results.

    PubMed

    van der Valk, J P M; Gerth van Wijk, R; Hoorn, E; Groenendijk, L; Groenendijk, I M; de Jong, N W

    2015-01-01

    There are several methods to read skin prick test results in type-I allergy testing. A commonly used method is to characterize the wheal size by its 'average diameter'. A more accurate method is to scan the area of the wheal to calculate the actual size. In both methods, skin prick test (SPT) results can be corrected for the histamine sensitivity of the skin by dividing the results of the allergic reaction by the histamine control. The objectives of this study are to compare different techniques of quantifying SPT results, to determine a cut-off value for a positive SPT for the histamine equivalent prick-index (HEP) area, and to study the accuracy of predicting cashew nut reactions in double-blind placebo-controlled food challenge (DBPCFC) tests with the different SPT methods. Data of 172 children with cashew nut sensitisation were used for the analysis. All patients underwent a DBPCFC with cashew nut. Per patient, the average diameter and scanned area of the wheal size were recorded. In addition, the same data for the histamine-induced wheal were collected for each patient. The accuracy in predicting the outcome of the DBPCFC using four different SPT readings (i.e. average diameter, area, HEP-index diameter, HEP-index area) was compared in a Receiver-Operating Characteristic (ROC) plot. Characterizing the wheal size by the average diameter method is inaccurate compared with the scanning method. A wheal average diameter of 3 mm is generally considered as a positive SPT cut-off value, and an equivalent HEP-index area cut-off value of 0.4 was calculated. The four SPT methods yielded comparable areas under the curve (AUC) of 0.84, 0.85, 0.83 and 0.83, respectively. The four methods showed comparable accuracy in predicting cashew nut reactions in a DBPCFC. The 'scanned area method' is theoretically more accurate in determining the wheal area than the 'average diameter method' and is recommended in academic research. A HEP-index area of 0.4 is determined as the cut-off value for a positive SPT. However, in clinical practice, the 'average diameter method' is also useful, because this method provides similar accuracy in predicting cashew nut allergic reactions in the DBPCFC. Trial number NTR3572.
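
    A small sketch of the histamine-corrected reading described above: the HEP-index (area) is the allergen wheal area divided by the histamine control wheal area, with 0.4 reported by the study as the cut-off equivalent to the conventional 3 mm mean-diameter criterion. The wheal values below are hypothetical.

    ```python
    def hep_index_area(wheal_area_mm2, histamine_area_mm2, cutoff=0.4):
        """HEP-index (area) = allergen wheal area / histamine control wheal area;
        the study above reports 0.4 as the cut-off equivalent to the conventional
        3 mm mean-diameter criterion. Input values here are hypothetical."""
        hep = wheal_area_mm2 / histamine_area_mm2
        return hep, hep >= cutoff

    print(hep_index_area(wheal_area_mm2=11.0, histamine_area_mm2=20.0))   # (0.55, True)
    ```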

  13. Sample size determination for equivalence assessment with multiple endpoints.

    PubMed

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required across the endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method, which ignores correlation, and the correlation-adjusted methods, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
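
    To illustrate why correlation between endpoints matters for sample size, the sketch below estimates by Monte Carlo the power of a parallel-design trial that must pass TOST on two correlated endpoints (e.g., log AUC and log Cmax). It is a simulation sketch under assumed normality with pooled-variance t-tests, not the exact power function derived in the article; all parameter values and the helper name are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def mc_power_tost_two_endpoints(n, delta, sigma, rho, theta, alpha=0.05,
                                    reps=5000, seed=0):
        """Monte Carlo power of a parallel-design equivalence trial that must
        pass TOST on two correlated endpoints. Simulation sketch under normality
        with pooled-variance t-tests; not the article's exact power function."""
        rng = np.random.default_rng(seed)
        s1, s2 = sigma
        cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
        tcrit = stats.t.ppf(1 - alpha, df=2 * n - 2)
        wins = 0
        for _ in range(reps):
            test = rng.multivariate_normal(delta, cov, size=n)
            ref = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            ok = True
            for k in range(2):
                d = test[:, k].mean() - ref[:, k].mean()
                sp2 = 0.5 * (test[:, k].var(ddof=1) + ref[:, k].var(ddof=1))
                se = np.sqrt(2.0 * sp2 / n)
                ok = ok and (d + theta[k]) / se > tcrit and (d - theta[k]) / se < -tcrit
            wins += ok
        return wins / reps

    # 30 subjects/arm, margins ln(1.25)=0.223 on both log-scale endpoints (hypothetical values)
    print(mc_power_tost_two_endpoints(n=30, delta=[0.05, 0.05], sigma=[0.25, 0.30],
                                      rho=0.6, theta=[0.223, 0.223]))
    ```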

  14. Evaluation of area-based collagen scoring by nonlinear microscopy in chronic hepatitis C-induced liver fibrosis

    PubMed Central

    Sevrain, David; Dubreuil, Matthieu; Dolman, Grace Elizabeth; Zaitoun, Abed; Irving, William; Guha, Indra Neil; Odin, Christophe; Le Grand, Yann

    2015-01-01

    In this paper we analyze a fibrosis scoring method based on measurement of the fibrillar collagen area from second harmonic generation (SHG) microscopy images of unstained histological slices from human liver biopsies. The study is conducted on a cohort of one hundred chronic hepatitis C patients with intermediate to strong Metavir and Ishak stages of liver fibrosis. We highlight a key parameter of our scoring method to discriminate between high and low fibrosis stages. Moreover, according to the intensity histograms of the SHG images and simple mathematical arguments, we show that our area-based method is equivalent to an intensity-based method, despite saturation of the images. Finally we propose an improvement of our scoring method using very simple image processing tools. PMID:25909005
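
    The core of an area-based collagen score can be stated in a few lines: threshold the SHG image and report the fraction of supra-threshold pixels. The sketch below does exactly that on a synthetic image; the threshold and the random stand-in image are arbitrary, and the published pipeline additionally deals with image saturation and study-specific threshold selection.

    ```python
    import numpy as np

    def collagen_area_fraction(shg_image, threshold):
        """Area-based collagen score: fraction of pixels whose SHG intensity
        exceeds a fixed threshold. Core idea only; the published pipeline also
        handles saturation and chooses the threshold per study."""
        img = np.asarray(shg_image, dtype=float)
        return float((img > threshold).mean())

    rng = np.random.default_rng(0)
    fake_slice = rng.gamma(shape=2.0, scale=30.0, size=(512, 512))  # stand-in for an SHG image
    print(f"collagen area fraction: {collagen_area_fraction(fake_slice, threshold=120.0):.3f}")
    ```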

  15. Evaluation of area-based collagen scoring by nonlinear microscopy in chronic hepatitis C-induced liver fibrosis.

    PubMed

    Sevrain, David; Dubreuil, Matthieu; Dolman, Grace Elizabeth; Zaitoun, Abed; Irving, William; Guha, Indra Neil; Odin, Christophe; Le Grand, Yann

    2015-04-01

    In this paper we analyze a fibrosis scoring method based on measurement of the fibrillar collagen area from second harmonic generation (SHG) microscopy images of unstained histological slices from human liver biopsies. The study is conducted on a cohort of one hundred chronic hepatitis C patients with intermediate to strong Metavir and Ishak stages of liver fibrosis. We highlight a key parameter of our scoring method to discriminate between high and low fibrosis stages. Moreover, according to the intensity histograms of the SHG images and simple mathematical arguments, we show that our area-based method is equivalent to an intensity-based method, despite saturation of the images. Finally we propose an improvement of our scoring method using very simple image processing tools.

  16. Inverse Design of Low-Boom Supersonic Concepts Using Reversed Equivalent-Area Targets

    NASA Technical Reports Server (NTRS)

    Li, Wu; Rallabhandi, Sriram

    2011-01-01

    A promising path for developing a low-boom configuration is a multifidelity approach that (1) starts from a low-fidelity low-boom design, (2) refines the low-fidelity design with computational fluid dynamics (CFD) equivalent-area (Ae) analysis, and (3) improves the design with sonic-boom analysis by using CFD off-body pressure distributions. The focus of this paper is on the third step of this approach, in which the design is improved with sonic-boom analysis through the use of CFD calculations. A new inverse design process for off-body pressure tailoring is formulated and demonstrated with a low-boom supersonic configuration that was developed by using the mixed-fidelity design method with CFD Ae analysis. The new inverse design process uses the reverse propagation of the pressure distribution (dp/p) from a mid-field location to a near-field location, converts the near-field dp/p into an equivalent-area distribution, generates a low-boom target for the reversed equivalent area (Ae,r) of the configuration, and modifies the configuration to minimize the differences between the configuration's Ae,r and the low-boom target. The new inverse design process is used to modify a supersonic demonstrator concept for a cruise Mach number of 1.6 and a cruise weight of 30,000 lb. The modified configuration has a fully shaped ground signature that has a perceived loudness (PLdB) value of 78.5, while the original configuration has a partially shaped aft signature with a PLdB of 82.3.

  17. A Methodological Approach to Small Area Estimation for the Behavioral Risk Factor Surveillance System

    PubMed Central

    Xu, Fang; Wallace, Robyn C.; Garvin, William; Greenlund, Kurt J.; Bartoli, William; Ford, Derek; Eke, Paul; Town, G. Machell

    2016-01-01

    Public health researchers have used a class of statistical methods to calculate prevalence estimates for small geographic areas with few direct observations. Many researchers have used Behavioral Risk Factor Surveillance System (BRFSS) data as a basis for their models. The aims of this study were to 1) describe a new BRFSS small area estimation (SAE) method and 2) investigate the internal and external validity of the BRFSS SAEs it produced. The BRFSS SAE method uses 4 data sets (the BRFSS, the American Community Survey Public Use Microdata Sample, Nielsen Claritas population totals, and the Missouri Census Geographic Equivalency File) to build a single weighted data set. Our findings indicate that internal and external validity tests were successful across many estimates. The BRFSS SAE method is one of several methods that can be used to produce reliable prevalence estimates in small geographic areas. PMID:27418213

  18. Interactive Inverse Design Optimization of Fuselage Shape for Low-Boom Supersonic Concepts

    NASA Technical Reports Server (NTRS)

    Li, Wu; Shields, Elwood; Le, Daniel

    2008-01-01

    This paper introduces a tool called BOSS (Boom Optimization using Smoothest Shape modifications). BOSS utilizes interactive inverse design optimization to develop a fuselage shape that yields a low-boom aircraft configuration. A fundamental reason for developing BOSS is the need to generate feasible low-boom conceptual designs that are appropriate for further refinement using computational fluid dynamics (CFD) based preliminary design methods. BOSS was not developed to provide a numerical solution to the inverse design problem. Instead, BOSS was intended to help designers find the right configuration among an infinite number of possible configurations that are equally good using any numerical figure of merit. BOSS uses the smoothest shape modification strategy for modifying the fuselage radius distribution at 100 or more longitudinal locations to find a smooth fuselage shape that reduces the discrepancies between the design and target equivalent area distributions over any specified range of effective distance. For any given supersonic concept (with wing, fuselage, nacelles, tails, and/or canards), a designer can examine the differences between the design and target equivalent areas, decide which part of the design equivalent area curve needs to be modified, choose a desirable rate for the reduction of the discrepancies over the specified range, and select a parameter for smoothness control of the fuselage shape. BOSS will then generate a fuselage shape based on the designer's inputs in a matter of seconds. Using BOSS, within a few hours, a designer can either generate a realistic fuselage shape that yields a supersonic configuration with a low-boom ground signature or quickly eliminate any configuration that cannot achieve low-boom characteristics with fuselage shaping alone. A conceptual design case study is documented to demonstrate how BOSS can be used to develop a low-boom supersonic concept from a low-drag supersonic concept. The paper also contains a study on how perturbations in the equivalent area distribution affect the ground signature shape and how new target area distributions for low-boom signatures can be constructed using superposition of equivalent area distributions derived from the Seebass-George-Darden (SGD) theory.

  19. 29 CFR 1910.1001 - Asbestos.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... regulated areas. Building/facility owner is the legal entity, including a lessee, which exercises control... education, training, and experience to anticipate, recognize, evaluate and develop controls for occupational... appendix A to this section, or by an equivalent method. (2) Excursion limit. The employer shall ensure that...

  20. 29 CFR 1910.1001 - Asbestos.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... regulated areas. Building/facility owner is the legal entity, including a lessee, which exercises control... education, training, and experience to anticipate, recognize, evaluate and develop controls for occupational... appendix A to this section, or by an equivalent method. (2) Excursion limit. The employer shall ensure that...

  1. An Equivalent Fracture Modeling Method

    NASA Astrophysics Data System (ADS)

    Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun

    2017-12-01

    A 3D fracture network model is built from discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height, and so on. The area of interest in the Wumishan Formation of the Renqiu buried-hill reservoir is about 57 square kilometers, and the thickness of the target strata is more than 2000 meters. Combined with the high fracture density, simulating and upscaling the discrete fracture network model of the Wumishan Formation is computationally very intensive. In order to solve this problem, a method of equivalent fracture modeling is proposed. First, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, a reservoir-level model is established; then, under the constraint of the reservoir-level model and with a fault-distance analysis model as the secondary variable, a fracture density model is established by the Sequential Gaussian Simulation method. Fracture width, height, and length are increased while fracture density is decreased, so as to keep the porosity and permeability similar after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.

  2. Stress intensity factors for long, deep surface flaws in plates under extensional fields

    NASA Technical Reports Server (NTRS)

    Harms, A. E.; Smith, C. W.

    1973-01-01

    Using a singular solution for a part circular crack, a Taylor Series Correction Method (TSCM) was verified for extracting stress intensity factors from photoelastic data. Photoelastic experiments were then conducted on plates with part circular and flat bottomed cracks for flaw depth to thickness ratios of 0.25, 0.50 and 0.75 and for equivalent flaw depth to equivalent ellipse length values ranging from 0.066 to 0.319. Experimental results agreed well with the Smith theory but indicated that the use of the "equivalent" semi-elliptical flaw results was not valid for a/2c less than 0.20. Best overall agreement for the moderate (a/t approximately 0.5) to deep flaws (a/t approximately 0.75) and a/2c greater than 0.15 was found with a semi-empirical theory, when compared on the basis of equivalent flaw depth and area.

  3. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  4. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  5. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...

  6. Metrics for Assessing the Quality of Groundwater Used for Public Supply, CA, USA: Equivalent-Population and Area.

    PubMed

    Belitz, Kenneth; Fram, Miranda S; Johnson, Tyler D

    2015-07-21

    Data from 11,000 public supply wells in 87 study areas were used to assess the quality of nearly all of the groundwater used for public supply in California. Two metrics were developed for quantifying groundwater quality: area with high concentrations (km² or proportion) and equivalent-population relying upon groundwater with high concentrations (number of people or proportion). Concentrations are considered high if they are above a human-health benchmark. When expressed as proportions, the metrics are area-weighted and population-weighted detection frequencies. On a statewide-scale, about 20% of the groundwater used for public supply has high concentrations for one or more constituents (23% by area and 18% by equivalent-population). On the basis of both area and equivalent-population, trace elements are more prevalent at high concentrations than either nitrate or organic compounds at the statewide-scale, in eight of nine hydrogeologic provinces, and in about three-quarters of the study areas. At a statewide-scale, nitrate is more prevalent than organic compounds based on area, but not on the basis of equivalent-population. The approach developed for this paper, unlike many studies, recognizes the importance of appropriately weighting information when changing scales, and is broadly applicable to other areas.
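
    The two metrics defined above reduce to weighted detection frequencies; the sketch below computes an area-weighted and an equivalent-population-weighted proportion for a handful of made-up study areas (not the California dataset).

    ```python
    import numpy as np

    # Area-weighted and equivalent-population-weighted high-concentration proportions
    # for a handful of made-up study areas (not the California dataset).
    area_km2   = np.array([1200.0, 800.0, 2500.0, 400.0])
    population = np.array([90_000, 400_000, 30_000, 150_000])   # people relying on groundwater
    high_conc  = np.array([True, False, True, False])           # any constituent above benchmark

    area_weighted_pct = 100.0 * area_km2[high_conc].sum() / area_km2.sum()
    pop_weighted_pct  = 100.0 * population[high_conc].sum() / population.sum()
    print(f"area-weighted: {area_weighted_pct:.0f}%, equivalent-population: {pop_weighted_pct:.0f}%")
    ```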

  7. 77 FR 22282 - Milk in the Northeast and Other Marketing Areas; Determination of Equivalent Price Series

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-13

    ... DEPARTMENT OF AGRICULTURE Agricultural Marketing Service [Doc. No. AMS-DA-10-0089; DA-11-01] Milk in the Northeast and Other Marketing Areas; Determination of Equivalent Price Series AGENCY: Agricultural Marketing Service, USDA. ACTION: Determination of equivalent price series. SUMMARY: It has been...

  8. 75 FR 45627 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-03

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  9. 76 FR 62402 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ... Monitoring Reference and Equivalent Methods; Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...

  10. 75 FR 51039 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... Monitoring Reference and Equivalent Methods: Designation of Two New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of two new equivalent methods for monitoring ambient air... accordance with 40 CFR Part 53, two new equivalent methods for measuring concentrations of PM 10 and sulfur...

  11. 75 FR 22126 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-27

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...

  12. 75 FR 30022 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  13. 75 FR 9894 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  14. Ultrasound instrumentation for the 7 inch Mach seven tunnel

    NASA Technical Reports Server (NTRS)

    Mazel, D. S.; Mielke, R. R.

    1985-01-01

    The use of an Apple II+ microcomputer to collect data during the operation of the 7 inch Mach Seven Tunnel is discussed. A method by which the contamination of liquid oxygen is monitored with sound speed techniques is investigated. The electrical equivalent of a transducer bonded to a high pressure fill plug is studied. The three areas are briefly explained and data gathered for each area are presented.

  15. Alcohol Advertising on Boston's Massachusetts Bay Transportation Authority Transit System: An Assessment of Youths' and Adults' Exposure

    PubMed Central

    Nyborn, Justin A.; Wukitsch, Kimberly; Nhean, Siphannay

    2009-01-01

    Objectives. We investigated the frequency with which alcohol advertisements appeared on Massachusetts Bay Transportation Authority (MBTA) transit lines in Boston, MA, and we calculated adult and youths' exposure to the ads. Methods. We measured the nature and extent of alcohol advertisements on 4 Boston transit lines on 2 separate weekdays 1 month apart in June and July of 2008. We calculated weekday ad exposure for all passengers (all ages) and for Boston Public School student passengers (aged 11–18 years). Results. Alcohol ads were viewed an estimated 1 212 960 times across all Boston-area transit passengers during an average weekday, reaching the equivalent of 42.7% of that population. Alcohol ads were viewed an estimated 18 269 times by Boston Public School student transit passengers during an average weekday, reaching the equivalent of 54.1% of that population. Conclusions. Advertisers reached the equivalent of half of all Boston Public School transit passengers aged 11 to 18 years and the equivalent of nearly half of all transit passengers in the Boston area with an alcohol advertisement each day. Because of the high exposure of underage youths to alcohol advertisements, we recommend that the MBTA prohibit alcohol advertising on the Boston transit system. PMID:19890170

  16. 47 CFR 54.101 - Supported services for rural, insular and high cost areas.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Dual tone multi-frequency signaling or its functional equivalent. “Dual tone multi-frequency” (DTMF) is a method of signaling that facilitates the transportation of signaling through the network... systems; (6) Access to operator services. “Access to operator services” is defined as access to any...

  17. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance

    PubMed Central

    2018-01-01

    ABSTRACT To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. PMID:29475868

  18. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance.

    PubMed

    Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid

    2018-05-01

    To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. Copyright © 2018 Shakeri et al.
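
    A generic two-one-sided-tests (TOST) sketch of the kind of equivalence testing described above, applied to log2 MIC values from two sample types. This is an illustration only, not the exact statistical procedure of the study; the equivalence margin delta (in log2 dilution steps) and the synthetic data are assumptions.

```python
# Welch-type TOST: declare equivalence if both one-sided p-values fall below alpha.
import numpy as np
from scipy import stats

def tost_equivalence(x, y, delta, alpha=0.05):
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    df = min(len(x), len(y)) - 1                    # conservative degrees of freedom
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha, diff

rng = np.random.default_rng(0)
log2_mic_a = rng.normal(2.0, 1.0, size=40)  # e.g., meat-sample isolates (synthetic)
log2_mic_b = rng.normal(2.2, 1.0, size=40)  # e.g., cecal-content isolates (synthetic)
equivalent, diff = tost_equivalence(log2_mic_a, log2_mic_b, delta=1.0)
print(f"mean difference = {diff:.2f} log2 steps, equivalent: {equivalent}")
```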

  19. Ecological Equivalence Assessment Methods: What Trade-Offs between Operationality, Scientific Basis and Comprehensiveness?

    PubMed

    Bezombes, Lucie; Gaucherand, Stéphanie; Kerbiriou, Christian; Reinert, Marie-Eve; Spiegelberger, Thomas

    2017-08-01

    In many countries, biodiversity compensation is required to counterbalance negative impacts of development projects on biodiversity by carrying out ecological measures, called offsets when the goal is to reach "no net loss" of biodiversity. One main issue is to ensure that offset gains are equivalent to impact-related losses. Ecological equivalence is assessed with ecological equivalence assessment methods taking into account a range of key considerations that we summarized as ecological, spatial, temporal, and uncertainty. When equivalence assessment methods take into account all considerations, we call them "comprehensive". Equivalence assessment methods should also aim to be science-based and operational, which is challenging. Many equivalence assessment methods have been developed worldwide but none is fully satisfying. In the present study, we examine 13 equivalence assessment methods in order to identify (i) their general structure and (ii) the synergies and trade-offs between equivalence assessment method characteristics related to operationality, scientific basis and comprehensiveness (called "challenges" in this paper). We evaluate each equivalence assessment method on the basis of 12 criteria describing the level of achievement of each challenge. We observe that all equivalence assessment methods share a general structure, with possible improvements in the choice of target biodiversity, the indicators used, the integration of landscape context and the multipliers reflecting time lags and uncertainties. We show that no equivalence assessment method combines all challenges perfectly. There are trade-offs between and within the challenges: operationality tends to be favored while the scientific basis is integrated heterogeneously in equivalence assessment method development. One way of improving the combination of challenges would be the use of offset-dedicated databases providing scientific feedback on previous offset measures.

  20. An equivalent domain integral for analysis of two-dimensional mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.
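
    For orientation, a commonly quoted domain form of the J-integral that EDI methods evaluate is sketched below for a crack lying along x1, assuming no body forces, crack-face tractions, or thermal strains; sign and index conventions vary between authors, and the paper's exact formulation and mode-separation procedures should be taken from the report itself.

```latex
% Domain (area) form of the J-integral, with weight function q = 1 on the inner
% contour around the crack tip and q = 0 on the outer boundary of the domain A:
J \;=\; \int_{A} \left[ \sigma_{ij}\,\frac{\partial u_j}{\partial x_1}
        \;-\; W\,\delta_{1i} \right] \frac{\partial q}{\partial x_i}\,\mathrm{d}A ,
\qquad W = \int_0^{\varepsilon_{kl}} \sigma_{ij}\,\mathrm{d}\varepsilon_{ij}
\quad \text{(strain energy density).}
```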

  1. 10 CFR 72.106 - Controlled area of an ISFSI or MRS.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... controlled area may not receive from any design basis accident the more limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose...

  2. 10 CFR 72.106 - Controlled area of an ISFSI or MRS.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... controlled area may not receive from any design basis accident the more limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose...

  3. 10 CFR 72.106 - Controlled area of an ISFSI or MRS.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... controlled area may not receive from any design basis accident the more limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose...

  4. 10 CFR 72.106 - Controlled area of an ISFSI or MRS.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... controlled area may not receive from any design basis accident the more limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose...

  5. 10 CFR 72.106 - Controlled area of an ISFSI or MRS.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... controlled area may not receive from any design basis accident the more limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose...

  6. Length and area equivalents for interpreting wildland resource maps

    Treesearch

    Elliot L. Amidon; Marilyn S. Whitfield

    1969-01-01

    Map users must refer to an appropriate scale in interpreting wildland resource maps. Length and area equivalents for nine map scales commonly used have been computed. For each scale a 1-page table consists of map-to-ground equivalents, buffer strip or road widths, and cell dimensions required for a specified acreage. The conversion factors are stored in a Fortran...
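
    The map-to-ground arithmetic underlying such tables can be sketched as below for a representative-fraction scale; the chosen scale (1:24,000) and target acreage are illustrative choices, not a reproduction of the published tables.

```python
# Map-to-ground equivalents for a representative-fraction map scale (illustrative).
def map_equivalents(scale_denominator, target_acres=40.0):
    ground_ft_per_map_inch = scale_denominator / 12.0            # ground feet per map inch
    acres_per_sq_map_inch = ground_ft_per_map_inch**2 / 43_560   # ground acres per square map inch
    # Side of a square map cell (inches) covering the target acreage:
    cell_side_in = (target_acres / acres_per_sq_map_inch) ** 0.5
    return ground_ft_per_map_inch, acres_per_sq_map_inch, cell_side_in

ft, acres, side = map_equivalents(24_000)
print(f"1 in = {ft:.0f} ft, 1 sq in = {acres:.1f} acres, 40-acre cell side = {side:.2f} in")
```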

  7. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation and vibration, and its scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, because a turntable cannot operate under such conditions. Because the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the scale-factor test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit is used to superimpose an externally generated sawtooth voltage signal on the modulation signal of the FOG and to apply the superimposed signal to the Y waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by a turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error on the equivalent angular velocity. A comparative experiment between the method proposed in this paper and turntable calibration was conducted, and the scale-factor performance test results for the same FOG obtained with the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal and no turntable is needed to produce mechanical rotation, so the method can be used to test FOG performance under environmental conditions in which a turntable cannot operate.
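
    A heavily simplified sketch of the equivalence invoked above: if a 2*pi-amplitude sawtooth phase ramp is applied at one end of the coil, the counter-propagating waves see it with a relative delay equal to the coil transit time, and equating the resulting phase difference to the Sagnac phase yields an equivalent rotation rate. This is a textbook-style serrodyne relation with illustrative parameter values, not the system model or error-correction method derived in the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def equivalent_rate(n, L, D, wavelength, saw_period):
    """Rotation rate whose Sagnac phase matches the ramp-induced phase (rad/s)."""
    tau = n * L / C                          # coil transit time, s
    ramp_rate = 2.0 * math.pi / saw_period   # phase slope of a 2*pi-amplitude sawtooth, rad/s
    delta_phi = ramp_rate * tau              # phase difference seen by the interferometer
    sagnac_scale = 2.0 * math.pi * L * D / (wavelength * C)  # rad of phase per (rad/s) of rotation
    return delta_phi / sagnac_scale          # simplifies to n*wavelength/(D*saw_period)

# Illustrative coil parameters (assumed, not from the paper):
omega = equivalent_rate(n=1.46, L=1000.0, D=0.1, wavelength=1.55e-6, saw_period=1e-3)
print(f"equivalent rate ~ {omega:.3e} rad/s")
```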

  8. Attributing Success Factors of Senior-Level Nonacademic Deans or Title Equivalent at Selected Colleges and Universities in the Greater Los Angeles Area

    ERIC Educational Resources Information Center

    Gravagne, Michael D.

    2013-01-01

    Purpose: To determine attributing success factors in the professional development of senior-level nonacademic deans or title equivalent at selected colleges and universities in the greater Los Angeles area. Methodology. An open-ended questionnaire was sent out to 17 senior-level student affairs officers (SSAOs) or title equivalent at selected…

  9. A physical optics/equivalent currents model for the RCS of trihedral corner reflectors

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polycarpou, Anastasis C.

    1993-01-01

    The scattering in the interior regions of both square and triangular trihedral corner reflectors is examined. The theoretical model presented combines geometrical and physical optics (GO and PO), used to account for reflection terms, with equivalent edge currents (EEC), used to account for first-order diffractions from the edges. First-order, second-order, and third-order reflection terms are included. Calculating the first-order reflection terms involves integrating over the entire surface of the illuminated plate. Calculating the second- and third-order reflection terms, however, is much more difficult because the illuminated area is an arbitrary polygon whose shape is dependent upon the incident angles. The method for determining the area of integration is detailed. Extensive comparisons between the high-frequency model, Finite-Difference Time-Domain (FDTD) and experimental data are used for validation of the radar cross section (RCS) of both square and triangular trihedral reflectors.

  10. Influence of beam efficiency through the patient-specific collimator on secondary neutron dose equivalent in double scattering and uniform scanning modes of proton therapy.

    PubMed

    Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K

    2010-06-01

    Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
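
    The linear dependence reported above can be summarized with a simple fit of relative neutron dose equivalent against collimator open area; the numbers below are made-up illustrations, not the measured WENDI-II data.

```python
# Fit a linear model H(A) = intercept + slope * A to relative dose-equivalent data.
import numpy as np

collimator_area_cm2 = np.array([5.0, 20.0, 45.0, 80.0, 125.0])   # illustrative
dose_equiv_rel = np.array([1.00, 0.93, 0.82, 0.67, 0.48])        # relative units, illustrative

slope, intercept = np.polyfit(collimator_area_cm2, dose_equiv_rel, deg=1)
print(f"fitted model: H(A) = {intercept:.3f} + ({slope:.4f}) * A")
```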

  11. Equivalence relations in individuals with language limitations and mental retardation.

    PubMed Central

    O'Donnell, Jennifer; Saunders, Kathryn J

    2003-01-01

    The study of equivalence relations exhibited by individuals with mental retardation and language limitations holds the promise of providing information of both theoretical and practical significance. We reviewed the equivalence literature with this population, defined in terms of subjects having moderate, severe, or profound mental retardation. The literature includes 55 such individuals, most of whom showed positive outcomes on equivalence tests. The results to date suggest that naming skills are not necessary for positive equivalence test outcomes. Thus far, however, relatively few subjects with minimal language have been studied. Moreover, we suggest that the scientific contributions of studies in this area would be enhanced with better documentation of language skills and other subject characteristics. With recent advances in laboratory procedures for establishing the baseline performances necessary for equivalence tests, this research area is poised for rapid growth. PMID:13677612

  12. Orthodontic soft-tissue parameters: a comparison of cone-beam computed tomography and the 3dMD imaging system.

    PubMed

    Metzger, Tasha E; Kula, Katherine S; Eckert, George J; Ghoneima, Ahmed A

    2013-11-01

    Orthodontists rely heavily on soft-tissue analysis to determine facial esthetics and treatment stability. The aim of this retrospective study was to determine the equivalence of soft-tissue measurements between the 3dMD imaging system (3dMD, Atlanta, Ga) and the segmented skin surface images derived from cone-beam computed tomography. Seventy preexisting 3dMD facial photographs and cone-beam computed tomography scans taken within minutes of each other for the same subjects were registered in 3 dimensions and superimposed using Vultus (3dMD) software. After reliability studies, 28 soft-tissue measurements were recorded with both imaging modalities and compared to analyze their equivalence. Intraclass correlation coefficients and Bland-Altman plots were used to assess interexaminer and intraexaminer repeatability and agreement. Summary statistics were calculated for all measurements. To demonstrate equivalence of the 2 methods, the difference needed a 95% confidence interval contained entirely within the equivalence limits defined by the repeatability results. Statistically significant differences were reported for the vermilion height, mouth width, total facial width, mouth symmetry, soft-tissue lip thickness, and eye symmetry. There are areas of nonequivalence between the 2 imaging methods; however, the differences are clinically acceptable from the orthodontic point of view. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  13. The impact of municipal solid waste treatment methods on greenhouse gas emissions in Lahore, Pakistan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batool, Syeda Adila; Chuadhry, Muhammad Nawaz

    2009-01-15

    The contribution of existing municipal solid waste management to emission of greenhouse gases and the alternative scenarios to reduce emissions were analyzed for Data Ganj Bukhsh Town (DGBT) in Lahore, Pakistan using the life cycle assessment methodology. DGBT has a population of 1,624,169 people living in 232,024 dwellings. Total waste generated is 500,000 tons per year with an average per capita rate of 0.84 kg per day. Alternative scenarios were developed and evaluated according to the environmental, economic, and social atmosphere of the study area. Solid waste management options considered include the collection and transportation of waste, collection of recyclables with single and mixed material bank container systems (SMBCS, MMBCS), material recovery facilities (MRF), composting, biogasification and landfilling. A life cycle inventory (LCI) of the six scenarios along with the baseline scenario was completed; this helped to quantify the CO2 equivalents, emitted and avoided, for energy consumption, production, fuel consumption, and methane (CH4) emissions. LCI results showed that the contribution of the baseline scenario to the global warming potential as CO2 equivalents was a maximum of 838,116 tons. The sixth scenario had a maximum reduction of GHG emissions in terms of CO2 equivalents of -33,773 tons, but the most workable scenario for the current situation in the study area is scenario 5. It saves 25% in CO2 equivalents compared to the baseline scenario.

  14. Equivalent damage: A critical assessment

    NASA Technical Reports Server (NTRS)

    Laflen, J. R.; Cook, T. S.

    1982-01-01

    Concepts in equivalent damage were evaluated to determine their applicability to the life prediction of hot path components of aircraft gas turbine engines. Equivalent damage was defined as being those effects which influence the crack initiation life-time beyond the damage that is measured in uniaxial, fully-reversed sinusoidal and isothermal experiments at low homologous temperatures. Three areas of equivalent damage were examined: mean stress, cumulative damage, and multiaxiality. For each area, a literature survey was conducted to aid in selecting the most appropriate theories. Where possible, data correlations were also used in the evaluation process. A set of criteria was developed for ranking the theories in each equivalent damage regime. These criteria considered aspects of engine utilization as well as the theoretical basis and correlative ability of each theory. In addition, consideration was given to the complex nature of the loading cycle at fatigue critical locations of hot path components; this loading includes non-proportional multiaxial stressing, combined temperature and strain fluctuations, and general creep-fatigue interactions. Through applications of selected equivalent damage theories to some suitable data sets it was found that there is insufficient data to allow specific recommendations of preferred theories for general applications. A series of experiments and areas of further investigations were identified.

  15. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10â2.5 Pt. 53...

  16. A method of inversion of satellite magnetic anomaly data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.

    1977-01-01

    A method of finding a first approximation to a crustal magnetization distribution from inversion of satellite magnetic anomaly data is described. Magnetization is expressed as a Fourier Series in a segment of spherical shell. Input to this procedure is an equivalent source representation of the observed anomaly field. Instability of the inversion occurs when high frequency noise is present in the input data, or when the series is carried to an excessively high wave number. Preliminary results are given for the United States and adjacent areas.
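
    The flavor of such an inversion can be conveyed with a toy least-squares fit of a truncated Fourier series to noisy data; this one-dimensional sketch ignores the spherical-shell geometry and the equivalent-source step of the actual method, but it shows why truncating the series at a modest wave number keeps the inversion stable.

```python
# Toy truncated-Fourier-series inversion by linear least squares (illustrative).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)                       # normalized position
truth = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
data = truth + 0.05 * rng.standard_normal(x.size)    # noisy "anomaly" values

k_max = 4                                            # truncation wave number
columns = [np.ones_like(x)]
for k in range(1, k_max + 1):
    columns += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
G = np.column_stack(columns)                         # design matrix of basis functions

coeffs, *_ = np.linalg.lstsq(G, data, rcond=None)
model = G @ coeffs
print(f"rms misfit = {np.sqrt(np.mean((model - data) ** 2)):.3f}")
```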

  17. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
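
    A minimal sketch of the weighting idea is given below: each measurement is weighted by the basin area it best represents. The weights and values are illustrative stand-ins; the statistical machinery of the correlation area method itself is described in the report.

```python
# Area-weighted areal average of measurements with differing representative areas.
def areal_average(measurements):
    """measurements: list of (value, representative_area) pairs."""
    total_area = sum(area for _, area in measurements)
    return sum(value * area for value, area in measurements) / total_area

snow_water_equiv = [
    (120.0, 35.0),   # point gauge, mm SWE, small representative area (km^2) -- illustrative
    (105.0, 140.0),  # airborne flight-line estimate -- illustrative
    (112.0, 300.0),  # satellite-derived areal estimate -- illustrative
]
print(f"basin mean SWE ~ {areal_average(snow_water_equiv):.1f} mm")
```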

  18. Assessment of doses caused by electrons in thin layers of tissue-equivalent materials, using MCNP.

    PubMed

    Heide, Bernd

    2013-10-01

    Absorbed doses caused by electron irradiation were calculated with the Monte Carlo N-Particle transport code (MCNP) for thin layers of tissue-equivalent materials. The layers were so thin that the calculation of energy deposition was on the border of the scope of MCNP. Therefore, in this article the application of three different methods of calculation of energy deposition is discussed. This was done by means of two scenarios: in the first one, electrons were emitted from the centre of a sphere of water and also recorded in that sphere; and in the second, an irradiation with the PTB Secondary Standard BSS2 was modelled, where electrons were emitted from a ⁹⁰Sr/⁹⁰Y area source and recorded inside a cuboid phantom made of tissue-equivalent material. The speed and accuracy of the different methods were of interest. While a significant difference in accuracy was visible for one method in the first scenario, the difference in accuracy of the three methods was insignificant for the second one. Considerable differences in speed were found for both scenarios. In order to demonstrate the need for calculating the dose in thin small zones, a third scenario was constructed and simulated as well. The third scenario was nearly equal to the second one, but a pike of lead was assumed to be inside the phantom in addition. A dose enhancement (caused by the pike of lead) of ∼113 % was recorded for a thin hollow cylinder at a depth of 0.007 cm, which corresponds in particular to the basal skin layer. Dose enhancements between 68 and 88 % were found for a slab with a radius of 0.09 cm for all depths. All dose enhancements were hardly noticeable for a slab with a cross-sectional area of 1 cm², which is usually applied in operational radiation protection.

  19. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
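
    The variable-splitting expansion that semilinear programming avoids can be made concrete with an L1 estimation example solved as an equivalent standard LP; the sketch below uses scipy's linprog and synthetic data, whereas the SLP code described above handles the sign-dependent costs without this expansion.

```python
# L1 (least-absolute-deviation) regression as a standard LP via variable splitting:
# each residual is written as e_plus - e_minus with e_plus, e_minus >= 0, and the
# objective minimizes their sum.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.5, 0.8]) + rng.standard_normal(n)   # synthetic data

# Decision variables: beta (p, free), e_plus (n, >= 0), e_minus (n, >= 0)
c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])            # X beta + e_plus - e_minus = y
bounds = [(None, None)] * p + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("L1 coefficient estimates:", res.x[:p])
```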

  20. Periodic equivalence ratio modulation method and apparatus for controlling combustion instability

    DOEpatents

    Richards, George A.; Janus, Michael C.; Griffith, Richard A.

    2000-01-01

    The periodic equivalence ratio modulation (PERM) method and apparatus significantly reduces and/or eliminates unstable conditions within a combustion chamber. The method involves modulating the equivalence ratio for the combustion device, such that the combustion device periodically operates outside of an identified unstable oscillation region. The equivalence ratio is modulated between preselected reference points, according to the shape of the oscillation region and operating parameters of the system. Preferably, the equivalence ratio is modulated from a first stable condition to a second stable condition, and, alternatively, the equivalence ratio is modulated from a stable condition to an unstable condition. The method is further applicable to multi-nozzle combustor designs, whereby individual nozzles are alternately modulated from stable to unstable conditions. Periodic equivalence ratio modulation (PERM) is accomplished by active control involving periodic, low frequency fuel modulation, whereby low frequency fuel pulses are injected into the main fuel delivery. Importantly, the fuel pulses are injected at a rate so as not to affect the desired time-average equivalence ratio for the combustion device.

  1. 10 CFR 60.136 - Preclosure controlled area.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The eye dose equivalent shall not exceed 0.15 Sv (15 rem), and the shallow dose...

  2. 10 CFR 60.136 - Preclosure controlled area.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The eye dose equivalent shall not exceed 0.15 Sv (15 rem), and the shallow dose...

  3. 10 CFR 60.136 - Preclosure controlled area.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The eye dose equivalent shall not exceed 0.15 Sv (15 rem), and the shallow dose...

  4. 10 CFR 60.136 - Preclosure controlled area.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The eye dose equivalent shall not exceed 0.15 Sv (15 rem), and the shallow dose...

  5. 10 CFR 60.136 - Preclosure controlled area.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... limiting of a total effective dose equivalent of 0.05 Sv (5 rem), or the sum of the deep-dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The eye dose equivalent shall not exceed 0.15 Sv (15 rem), and the shallow dose...

  6. Comparison of three techniques in measuring progressive addition lenses.

    PubMed

    Huang, Ching-Yao; Raasch, Thomas W; Yi, Allen Y; Sheedy, James E; Andre, Brett; Bullimore, Mark A

    2012-11-01

    To measure progressive addition lenses (PALs) by three techniques and to compare the differences across techniques. Five contemporary PALs (Varilux Comfort Enhanced, Varilux Physio Enhanced, Hoya Lifestyle, Shamir Autograph, and Zeiss individual) with plano distance power and a +2.00 diopters (D) add were evaluated under the condition of lateral displacement of the lens (no rotation and no tilt) using three methods. A Hartmann-Shack wavefront sensor (HSWFS) on a custom-built optical bench was used to capture and measure wavefront aberrations. A Rotlex Class Plus lens analyzer operating as a moiré interferometer was used to measure spherical and cylindrical powers. A coordinate measuring machine (CMM) was used to measure front and back surfaces of PALs and converted to desired optical properties. The data were analyzed with MATLAB programs. Contour plots of spherical equivalent power, cylindrical power, and higher-order aberrations (HOAs) in all PALs were generated to compare their differences. The differences in spherical equivalent and cylinder at distance, near, and progressive corridor areas among the HSWFS, Rotlex, and CMM methods were close to zero in all five PALs. The maximum differences are approximately 0.50 D and located below the near power zone and the edge areas of the lens when comparing the HSWFS and CMM with the Rotlex. HOAs measured both by the HSWFS and CMM were highest in the corridor area and the area surrounding the near zone in all PALs. The HOAs measured by the CMM were lower than those from the HSWFS by 0.02 to 0.04 μm. The three measurement methods are comparable for measuring spherical and cylindrical power across PALs. The non-optical method, CMM, can be used to evaluate the optical properties of a PAL by measuring front and back surface height measurements, although its estimates of HOAs are lower than those from the HSWFS.

  7. Examination of the equivalence of self-report survey-based paper-and-pencil and internet data collection methods.

    PubMed

    Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J

    2013-03-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. 40 CFR 53.11 - Cancellation of reference or equivalent method designation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Cancellation of reference or equivalent method designation. 53.11 Section 53.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General...

  9. Simulation of water-use conservation scenarios for the Mississippi Delta using an existing regional groundwater flow model

    USGS Publications Warehouse

    Barlow, Jeannie R.B.; Clark, Brian R.

    2011-01-01

    The Mississippi River alluvial plain in northwestern Mississippi (referred to as the Delta), once a floodplain to the Mississippi River covered with hardwoods and marshland, is now a highly productive agricultural region of large economic importance to Mississippi. Water for irrigation is supplied primarily by the Mississippi River Valley alluvial aquifer, and although the alluvial aquifer has a large reserve, there is evidence that the current rate of water use from the alluvial aquifer is not sustainable. Using an existing regional groundwater flow model, conservation scenarios were developed for the alluvial aquifer underlying the Delta region in northwestern Mississippi to assess where the implementation of water-use conservation efforts would have the greatest effect on future water availability-either uniformly throughout the Delta, or focused on a cone of depression in the alluvial aquifer underlying the central part of the Delta. Five scenarios were simulated with the Mississippi Embayment Regional Aquifer Study groundwater flow model: (1) a base scenario in which water use remained constant at 2007 rates throughout the entire simulation; (2) a 5-percent 'Delta-wide' conservation scenario in which water use across the Delta was decreased by 5 percent; (3) a 5-percent 'cone-equivalent' conservation scenario in which water use within the area of the cone of depression was decreased by 11 percent (a volume equivalent to the 5-percent Delta-wide conservation scenario); (4) a 25-percent Delta-wide conservation scenario in which water use across the Delta was decreased by 25 percent; and (5) a 25-percent cone-equivalent conservation scenario in which water use within the area of the cone of depression was decreased by 55 percent (a volume equivalent to the 25-percent Delta-wide conservation scenario). The Delta-wide scenarios result in greater average water-level improvements (relative to the base scenario) for the entire Delta area than the cone-equivalent scenarios; however, the cone-equivalent scenarios result in greater average water-level improvements within the area of the cone of depression because of focused conservation efforts within that area. Regardless of where conservation is located, the greatest average improvements in water level occur within the area of the cone of depression because of the corresponding large area of unsaturated aquifer material within the area of the cone of depression and the hydraulic gradient, which slopes from the periphery of the Delta towards the area of the cone of depression. Of the four conservation scenarios, the 25-percent cone-equivalent scenario resulted in the greatest increase in storage relative to the base scenario with a 32-percent improvement over the base scenario across the entire Delta and a 60-percent improvement within the area of the cone of depression. Overall, the results indicate that focusing conservation efforts within the area of the cone of depression, rather than distributing conservation efforts uniformly across the Delta, results in greater improvements in the amount of storage within the alluvial aquifer. Additionally, as the total amount of conservation increases (that is, from 5 to 25 percent), the difference in storage improvement between the Delta-wide and cone-equivalent scenarios also increases, resulting in greater gains in storage in the cone-equivalent scenario than in the Delta-wide scenario for the same amount of conservation.
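
    The volume-equivalence arithmetic behind the scenario pairs can be sketched in a few lines, assuming the share of total Delta pumping that occurs within the cone of depression is known; the share used below is an assumed illustration, not a value taken from the model documentation.

```python
# Convert a Delta-wide percentage cut into the "cone-equivalent" percentage that
# removes the same total pumping volume from within the cone of depression only.
def cone_equivalent_pct(delta_wide_pct, cone_share_of_pumping):
    return delta_wide_pct / cone_share_of_pumping

share = 0.45  # assumed fraction of Delta pumping occurring within the cone (illustrative)
for delta_pct in (5.0, 25.0):
    print(f"{delta_pct:.0f}% Delta-wide ~ {cone_equivalent_pct(delta_pct, share):.0f}% within the cone")
```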

  10. Climate-dependence of ecosystem services in a nature reserve in northern China

    PubMed Central

    Fang, Jiaohui; Song, Huali; Zhang, Yiran; Li, Yanran

    2018-01-01

    Evaluation of ecosystem services has become a hotspot in terms of research focus, but uncertainties over appropriate methods remain. Evaluation can be based on the unit price of services (services value method) or the unit price of the area (area value method). The former takes meteorological factors into account, while the latter does not. This study uses Kunyu Mountain Nature Reserve as a study site at which to test the effects of climate on the ecosystem services. Measured data and remote sensing imagery processed in a geographic information system were combined to evaluate gas regulation and soil conservation, and the influence of meteorological factors on ecosystem services. Results were used to analyze the appropriateness of the area value method. Our results show that the value of ecosystem services is significantly affected by meteorological factors, especially precipitation. Use of the area value method (which ignores the impacts of meteorological factors) could considerably impede the accuracy of ecosystem services evaluation. Results were also compared with the valuation obtained using the modified equivalent value factor (MEVF) method, which is a modified area value method that considers changes in meteorological conditions. We found that MEVF still underestimates the value of ecosystem services, although it can reflect to some extent the annual variation in meteorological factors. Our findings contribute to increasing the accuracy of evaluation of ecosystem services. PMID:29438427

  11. Climate-dependence of ecosystem services in a nature reserve in northern China.

    PubMed

    Fang, Jiaohui; Song, Huali; Zhang, Yiran; Li, Yanran; Liu, Jian

    2018-01-01

    Evaluation of ecosystem services has become a hotspot in terms of research focus, but uncertainties over appropriate methods remain. Evaluation can be based on the unit price of services (services value method) or the unit price of the area (area value method). The former takes meteorological factors into account, while the latter does not. This study uses Kunyu Mountain Nature Reserve as a study site at which to test the effects of climate on the ecosystem services. Measured data and remote sensing imagery processed in a geographic information system were combined to evaluate gas regulation and soil conservation, and the influence of meteorological factors on ecosystem services. Results were used to analyze the appropriateness of the area value method. Our results show that the value of ecosystem services is significantly affected by meteorological factors, especially precipitation. Use of the area value method (which ignores the impacts of meteorological factors) could considerably impede the accuracy of ecosystem services evaluation. Results were also compared with the valuation obtained using the modified equivalent value factor (MEVF) method, which is a modified area value method that considers changes in meteorological conditions. We found that MEVF still underestimates the value of ecosystem services, although it can reflect to some extent the annual variation in meteorological factors. Our findings contribute to increasing the accuracy of evaluation of ecosystem services.

  12. 40 CFR 53.14 - Modification of a reference or equivalent method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Modification of a reference or equivalent method. 53.14 Section 53.14 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions...

  13. 40 CFR 53.8 - Designation of reference and equivalent methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Designation of reference and equivalent methods. 53.8 Section 53.8 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8...

  14. An Empirical Orthogonal Function Reanalysis of the Northern Polar External and Induced Magnetic Field During Solar Cycle 23

    NASA Astrophysics Data System (ADS)

    Shore, R. M.; Freeman, M. P.; Gjerloev, J. W.

    2018-01-01

    We apply the method of data-interpolating empirical orthogonal functions (EOFs) to ground-based magnetic vector data from the SuperMAG archive to produce a series of month length reanalyses of the surface external and induced magnetic field (SEIMF) in 110,000 km2 equal-area bins over the entire northern polar region at 5 min cadence over solar cycle 23, from 1997.0 to 2009.0. Each EOF reanalysis also decomposes the measured SEIMF variation into a hierarchy of spatiotemporal patterns which are ordered by their contribution to the monthly magnetic field variance. We find that the leading EOF patterns can each be (subjectively) interpreted as well-known SEIMF systems or their equivalent current systems. The relationship of the equivalent currents to the true current flow is not investigated. We track the leading SEIMF or equivalent current systems of similar type by intermonthly spatial correlation and apply graph theory to (objectively) group their appearance and relative importance throughout a solar cycle, revealing seasonal and solar cycle variation. In this way, we identify the spatiotemporal patterns that maximally contribute to SEIMF variability over a solar cycle. We propose this combination of EOF and graph theory as a powerful method for objectively defining and investigating the structure and variability of the SEIMF or their equivalent ionospheric currents for use in both geomagnetism and space weather applications. It is demonstrated here on solar cycle 23 but is extendable to any epoch with sufficient data coverage.
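
    A minimal EOF decomposition can be written via the singular value decomposition of a time-by-space data matrix, with modes ordered by explained variance; the actual reanalysis uses a data-interpolating (gap-tolerant) EOF variant, so the complete synthetic matrix below is only a sketch.

```python
# EOF decomposition via SVD: rows are time samples, columns are spatial bins.
import numpy as np

rng = np.random.default_rng(3)
n_time, n_space = 500, 60
data = rng.standard_normal((n_time, n_space))
data[:, :20] += 3.0 * np.sin(np.linspace(0, 20 * np.pi, n_time))[:, None]  # a dominant mode

anomalies = data - data.mean(axis=0)               # remove the time mean in each bin
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
explained = s**2 / np.sum(s**2)

eof1_pattern = Vt[0]               # leading spatial pattern
eof1_timeseries = U[:, 0] * s[0]   # its amplitude time series
print(f"leading EOF explains {100 * explained[0]:.1f}% of the variance")
```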

  15. Calculation of Absorbed Dose in Target Tissue and Equivalent Dose in Sensitive Tissues of Patients Treated by BNCT Using MCNP4C

    NASA Astrophysics Data System (ADS)

    Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini

    Boron Neutron Capture Therapy (BNCT) is used for the treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the head of the patient) is irradiated by an optimized, suitable neutron field such as that of a research nuclear reactor. To protect healthy tissues located in the vicinity of the irradiated tissue, and in keeping with the ALARA principle, unnecessary exposure of these vital organs must be prevented. In this study, using a numerical simulation method (the MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we have used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are 30.35 Gy and 0.19 Gy, respectively. Also, the total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is equal to 14 mGy. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid and are equal to 7.35 mSv and 3.00 mSv, respectively.

  16. Calculation of reaction forces in the boiler supports using the method of equivalent stiffness of membrane wall.

    PubMed

    Sertić, Josip; Kozak, Dražan; Samardžić, Ivan

    2014-01-01

    The values of reaction forces in the boiler supports are the basis for the dimensioning of bearing steel structure of steam boiler. In this paper, the application of the method of equivalent stiffness of membrane wall is proposed for the calculation of reaction forces. The method of equalizing displacement, as the method of homogenization of membrane wall stiffness, was applied. On the example of "Milano" boiler, using the finite element method, the calculation of reactions in the supports for the real geometry discretized by the shell finite element was made. The second calculation was performed with the assumption of ideal stiffness of membrane walls and the third using the method of equivalent stiffness of membrane wall. In the third case, the membrane walls are approximated by the equivalent orthotropic plate. The approximation of membrane wall stiffness is achieved using the elasticity matrix of equivalent orthotropic plate at the level of finite element. The obtained results were compared, and the advantages of using the method of equivalent stiffness of membrane wall for the calculation of reactions in the boiler supports were emphasized.

  17. Dose Equivalents for Antipsychotic Drugs: The DDD Method.

    PubMed

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Davis, John M

    2016-07-01

    Dose equivalents of antipsychotics are an important but difficult-to-define concept, because all methods have weaknesses and strengths. We calculated dose equivalents based on defined daily doses (DDDs) presented by the World Health Organisation's Collaborative Center for Drug Statistics Methodology. Doses equivalent to 1 mg olanzapine, 1 mg risperidone, 1 mg haloperidol, and 100 mg chlorpromazine were presented and compared with the results of 3 other methods of defining dose equivalence (the "minimum effective dose method," the "classical mean dose method," and an international consensus statement). We presented dose equivalents for 57 first-generation and second-generation antipsychotic drugs, available as oral, parenteral, or depot formulations. Overall, the identified equivalent doses were comparable with those of the other methods, but there were also outliers. The major strengths of this method of defining dose equivalence are that DDDs are available for most drugs, including old antipsychotics, that they are based on a variety of sources, and that DDDs are an internationally accepted measure. The major limitation is that the information used to estimate DDDs is likely to differ between drugs. Moreover, this information is not publicly available, so it cannot be reviewed. The WHO stresses that DDDs are mainly a standardized measure of drug consumption, and their use as a measure of dose equivalence can therefore be misleading. We therefore recommend that, if alternative, more "scientific" dose equivalence methods are available for a drug, they should be preferred to DDDs. Our summary can nevertheless be a useful resource for pharmacovigilance studies. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
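
    For illustration, the DDD-based conversion reduces to simple ratios: a dose of one drug is mapped to the reference drug by assuming equal fractions of the respective DDDs are equipotent. The sketch below uses example oral DDD values of the kind published in the WHO ATC/DDD index; the exact figures should always be taken from the current index rather than from this illustration.

```python
# Illustrative oral DDD values (mg); confirm against the current WHO ATC/DDD index.
DDD_MG = {
    "olanzapine": 10.0,
    "risperidone": 5.0,
    "haloperidol": 8.0,
    "chlorpromazine": 300.0,
}

def to_reference_equivalent(drug, dose_mg, reference="olanzapine"):
    """Express dose_mg of `drug` as the equipotent dose of `reference`,
    assuming equal fractions of the respective DDDs are equipotent."""
    return dose_mg / DDD_MG[drug] * DDD_MG[reference]

# e.g. 5 mg haloperidol corresponds to 5/8 * 10 = 6.25 mg olanzapine equivalents,
# and the dose equivalent to 1 mg olanzapine is DDD(drug)/DDD(olanzapine) mg.
print(to_reference_equivalent("haloperidol", 5.0))
print(DDD_MG["haloperidol"] / DDD_MG["olanzapine"], "mg haloperidol ~ 1 mg olanzapine")
```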

  18. Evaluation of a Head-Worn Display System as an Equivalent Head-Up Display for Low Visibility Commercial Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J.; Nicholas, Stephanie N.; Williams, Steven P.; Ellis, Kyle E.; Jones, Denise R.; Bailey, Randall E.; Harrison, Stephanie J.; Barnes, James R.

    2017-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by the National Aeronautics and Space Administration (NASA) to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). One specific area of research was the use of small Head-Worn Displays (HWDs) to serve as a possible equivalent to a Head-Up Display (HUD). A simulation experiment and a flight test were conducted to evaluate whether the HWD can provide an equivalent level of performance to a HUD. For the simulation experiment, airline crews conducted simulated approach and landing, taxi, and departure operations during low visibility operations. In a follow-on flight test, highly experienced test pilots evaluated the same HWD during approach and surface operations. The results for both the simulation and flight tests showed that there were no statistical differences in the crews' performance in terms of approach, touchdown and takeoff; however, there are still technical hurdles to be overcome for complete display equivalence, including, most notably, the end-to-end latency of the HWD system.

  19. 21 CFR 610.9 - Equivalent methods and processes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Equivalent methods and processes. 610.9 Section 610.9 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS General Provisions § 610.9 Equivalent methods and processes...

  20. 21 CFR 610.9 - Equivalent methods and processes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Equivalent methods and processes. 610.9 Section 610.9 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS General Provisions § 610.9 Equivalent methods and processes...

  1. Streamflow responses to road building and harvesting: A comparison with the equivalent clearcut area procedure

    Treesearch

    John G. King

    1989-01-01

    Increases in annual streamflow and peak streamflows were determined on four small watersheds following timber harvesting and road building. The measured hydrologic changes are compared to those predicted by a methodology commonly used in the Forest Service's Northern Region, the equivalent clearcut area procedure. Increases in peak streamflows are discussed with...

  2. Determination of multiple sclerosis plaque size with diffusion-tensor MR Imaging: comparison study with healthy volunteers.

    PubMed

    Kealey, Susan M; Kim, Youngjoo; Whiting, Wythe L; Madden, David J; Provenzale, James M

    2005-08-01

    To use diffusion-tensor magnetic resonance (MR) imaging to measure involvement of normal-appearing white matter (WM) immediately adjacent to multiple sclerosis (MS) plaques and thus redefine actual plaque size on diffusion-tensor images through comparison with T2-weighted images of equivalent areas in healthy volunteers. Informed consent was not required given the retrospective nature of the study on an anonymized database. The study complied with requirements of the Health Insurance Portability and Accountability Act. Twelve patients with MS (four men, eight women; mean age, 35 years) and 14 healthy volunteers (six men, eight women; mean age, 25 years) were studied. The authors obtained fractional anisotropy (FA) values in MS plaques and in the adjacent normal-appearing WM in patients with MS and in equivalent areas in healthy volunteers. They placed regions of interest (ROIs) around the periphery of plaques and defined the total ROIs (ie, plaques plus peripheral ROIs) as abnormal if their mean FA values were at least 2 standard deviations below those of equivalent ROIs within equivalent regions in healthy volunteers. The combined area of the plaque and the peripheral ROI was compared with the area of the plaque seen on T2-weighted MR images by means of a Student paired t test (P = .05). The mean plaque size on T2-weighted images was 72 mm2 +/- 21 (standard deviation). The mean plaque FA value was 0.285 +/- 0.088 (0.447 +/- 0.069 in healthy volunteers [P < .001]; mean percentage reduction in FA in MS plaques, 37%). The mean plaque size on FA maps was 91 mm2 +/- 35, corresponding on average to 127% of the original plaque size on T2-weighted images (P = .03). A significant increase in plaque size was seen when normal-appearing WM was interrogated with diffusion-tensor MR imaging. This imaging technique may represent a more sensitive method of assessing disease burden and may have a future role in determining disease burden and activity.

  3. The use of snowcovered area in runoff forecasts

    NASA Technical Reports Server (NTRS)

    Rango, A.; Hannaford, J. F.; Hall, R. L.; Rosenzweig, M.; Brown, A. J.

    1977-01-01

    Long-term snowcovered area data from aircraft and satellite observations have proven useful in reducing seasonal runoff forecast error on the Kern river watershed. Similar use of snowcovered area on the Kings river watershed produced results that were about equivalent to methods based solely on conventional data. Snowcovered area will be most effective in reducing forecast procedural error on watersheds with: (1) a substantial amount of area within a limited elevation range; (2) an erratic precipitation and/or snowpack accumulation pattern not strongly related to elevation; and (3) poor coverage by precipitation stations or snow courses restricting adequate indexing of water supply conditions. When satellite data acquisition and delivery problems are resolved, the derived snowcover information should provide a means for enhancing operational streamflow forecasts for areas that depend primarily on snowmelt for their water supply.

  4. Digital pathology: elementary, rapid and reliable automated image analysis.

    PubMed

    Bouzin, Caroline; Saini, Monika L; Khaing, Kyi-Kyi; Ambroise, Jérôme; Marbaix, Etienne; Grégoire, Vincent; Bol, Vanesa

    2016-05-01

    Slide digitalization has brought pathology to a new era, including powerful image analysis possibilities. However, while being a powerful prognostic tool, immunostaining automated analysis on digital images is still not implemented worldwide in routine clinical practice. Digitalized biopsy sections from two independent cohorts of patients, immunostained for membrane or nuclear markers, were quantified with two automated methods. The first was based on stained cell counting through tissue segmentation, while the second relied upon stained area proportion within tissue sections. Different steps of image preparation, such as automated tissue detection, folds exclusion and scanning magnification, were also assessed and validated. Quantification of either stained cells or the stained area was found to be correlated highly for all tested markers. Both methods were also correlated with visual scoring performed by a pathologist. For an equivalent reliability, quantification of the stained area is, however, faster and easier to fine-tune and is therefore more compatible with time constraints for prognosis. This work provides an incentive for the implementation of automated immunostaining analysis with a stained area method in routine laboratory practice. © 2015 John Wiley & Sons Ltd.
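
    The stained-area approach reported here boils down to counting positively stained pixels within the detected tissue and reporting the ratio. A minimal sketch of that idea is given below with synthetic arrays; the commercial pipeline used in the study (automated tissue detection, fold exclusion, colour deconvolution) is not reproduced, and the threshold value is purely illustrative.

```python
import numpy as np

def stained_area_fraction(stain_intensity, tissue_mask, threshold=0.15):
    """Proportion of detected tissue area that is positively stained.

    stain_intensity : 2-D array of per-pixel stain intensity in [0, 1]
                      (e.g. one channel after colour deconvolution)
    tissue_mask     : boolean array, True where tissue was detected
    """
    stained = (stain_intensity >= threshold) & tissue_mask
    return stained.sum() / tissue_mask.sum()

# Synthetic 512 x 512 field with one strongly stained patch inside the tissue
rng = np.random.default_rng(1)
intensity = rng.uniform(0.0, 0.1, size=(512, 512))
intensity[100:200, 100:300] = 0.6          # stained region
tissue = np.ones((512, 512), dtype=bool)
tissue[:, :50] = False                     # background strip excluded from tissue
print(round(float(stained_area_fraction(intensity, tissue)), 3))
```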

  5. Coupled vibration of isotropic metal hollow cylinders with large geometrical dimensions

    NASA Astrophysics Data System (ADS)

    Lin, Shuyu

    2007-08-01

    In this paper, the coupled vibration of isotropic metal hollow cylinders with large geometrical dimensions is studied by using an approximate analytic method. According to this method, when the equivalent mechanical coupling coefficient, defined as the stress ratio, is introduced, the coupled vibration of a metal hollow cylinder is reduced to two equivalent one-dimensional vibrations: an equivalent longitudinal extensional vibration in the height direction of the cylinder, and an equivalent plane radial vibration in the radial direction. These two equivalent vibrations are coupled to each other by the equivalent mechanical coupling coefficient. The resonance frequency equation of metal hollow cylinders in coupled vibration is derived and the longitudinal and radial resonance frequencies are computed. For comparison, the resonance frequencies of the hollow cylinders are also computed using a numerical method. The analysis shows that the results from these two methods are in good agreement with each other.

  6. Shielding implications for secondary neutrons and photons produced within the patient during IMPT.

    PubMed

    DeMarco, J; Kupelian, P; Santhanam, A; Low, D

    2013-07-01

    Intensity modulated proton therapy (IMPT) uses a combination of computer controlled spot scanning and spot-weight optimized planning to irradiate the tumor volume uniformly. In contrast to passive scattering systems, secondary neutrons and photons produced from inelastic proton interactions within the patient represent the major source of emitted radiation during IMPT delivery. Various published studies evaluated the shielding considerations for passive scattering systems but did not directly address secondary neutron production from IMPT and the ambient dose equivalent on surrounding occupational and nonoccupational work areas. Thus, the purpose of this study was to utilize Monte Carlo simulations to evaluate the energy and angular distributions of secondary neutrons and photons following inelastic proton interactions within a tissue-equivalent phantom for incident proton spot energies between 70 and 250 MeV. Monte Carlo simulation methods were used to calculate the ambient dose equivalent of secondary neutrons and photons produced from inelastic proton interactions in a tissue-equivalent phantom. The angular distribution of emitted neutrons and photons were scored as a function of incident proton energy throughout a spherical annulus at 1, 2, 3, 4, and 5 m from the phantom center. Appropriate dose equivalent conversion factors were applied to estimate the total ambient dose equivalent from secondary neutrons and photons. A reference distance of 1 m from the center of the patient was used to evaluate the mean energy distribution of secondary neutrons and photons and the resulting ambient dose equivalent. For an incident proton spot energy of 250 MeV, the total ambient dose equivalent (3.6 × 10(-3) mSv per proton Gy) was greatest along the direction of the incident proton spot (0°-10°) with a mean secondary neutron energy of 71.3 MeV. The dose equivalent decreased by a factor of 5 in the backward direction (170°-180°) with a mean energy of 4.4 MeV. An 8 × 8 × 8 cm(3) volumetric spot distribution (5 mm FWHM spot size, 4 mm spot spacing) optimized to produce a uniform dose distribution results in an ambient dose equivalent of 4.5 × 10(-2) mSv per proton Gy in the forward direction. This work evaluated the secondary neutron and photon emission due to monoenergetic proton spots between 70 and 250 MeV, incident on a tissue equivalent phantom. Example calculations were performed to estimate concrete shield thickness based upon appropriate workload and shielding design assumptions. Although lower than traditional passive scattered proton therapy systems, the ambient dose equivalent from secondary neutrons produced by the patient during IMPT can be significant relative to occupational and nonoccupational workers in the vicinity of the treatment vault. This work demonstrates that Monte Carlo simulations are useful as an initial planning tool for studying the impact of the treatment room and maze design on surrounding occupational and nonoccupational work areas.
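
    As a rough illustration of how such H/D values feed into a shielding estimate, the sketch below scales the per-gray ambient dose equivalent by workload and inverse-square distance and converts the required attenuation into a concrete thickness via tenth-value layers. The workload, design limit and TVL are placeholders, not values from the study; a real design would use spectrum-specific TVL data and account for scatter and maze effects.

```python
import math

def required_concrete_thickness(h_per_gy_mSv, weekly_workload_Gy, distance_m,
                                design_limit_mSv_per_week, tvl_cm):
    """Point-source estimate of the concrete thickness needed to bring
    patient-generated neutron dose below a weekly design goal.

    h_per_gy_mSv : ambient dose equivalent per therapeutic Gy at 1 m
    tvl_cm       : assumed tenth-value layer of concrete for the neutron spectrum
    """
    # Unshielded weekly dose at the occupied location (inverse-square scaling)
    unshielded = h_per_gy_mSv * weekly_workload_Gy / distance_m**2
    if unshielded <= design_limit_mSv_per_week:
        return 0.0
    n_tvls = math.log10(unshielded / design_limit_mSv_per_week)
    return n_tvls * tvl_cm

# Example: 4.5e-2 mSv/Gy in the forward direction, 500 Gy/week workload,
# an occupied area 4 m away, a 0.02 mSv/week design goal and an assumed 100 cm TVL
print(round(required_concrete_thickness(4.5e-2, 500.0, 4.0, 0.02, 100.0), 1), "cm")
```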

  7. Design of three-phased SPWM based on AT89C52

    NASA Astrophysics Data System (ADS)

    Wu, Xiaorui

    2018-05-01

    According to the AT89C52 and the area equivalent principle, a three phase SPWM algorithm based on the 8 bit single chip is obtained. Through computer programming, three-phase SPWM wave generated by a single chip microcomputer is applied to the circuit of the static reactive power generator. The result shows that this method is feasible and can reduce the cost of SVG.
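
    A minimal sketch of the area-equivalence idea is shown below: within each carrier period the pulse width is chosen so that the pulse's volt-second area matches that of the reference sine over the same interval, which for a half-bridge leg gives a duty cycle of (1 + m·sin θ)/2. The timer programming of the AT89C52 itself is not reproduced; the carrier frequency, modulation index and phase values are illustrative.

```python
import math

def spwm_pulse_widths(f_ref_hz, f_carrier_hz, mod_index, phase_deg=0.0):
    """Area-equivalence SPWM: in each carrier period Tc the on-time is chosen
    so the pulse volt-seconds equal those of the reference sine over Tc,
    i.e. width_k = Tc * (1 + m * sin(theta_k)) / 2 for a half-bridge leg."""
    Tc = 1.0 / f_carrier_hz
    n = int(f_carrier_hz / f_ref_hz)        # carrier periods per fundamental
    widths = []
    for k in range(n):
        theta = 2.0 * math.pi * f_ref_hz * (k + 0.5) * Tc + math.radians(phase_deg)
        widths.append(Tc * 0.5 * (1.0 + mod_index * math.sin(theta)))
    return widths                           # on-times, one per carrier period

# Three-phase set: the same table shifted by 0, -120 and -240 degrees
phase_a = spwm_pulse_widths(50.0, 5000.0, 0.8, 0.0)
phase_b = spwm_pulse_widths(50.0, 5000.0, 0.8, -120.0)
phase_c = spwm_pulse_widths(50.0, 5000.0, 0.8, -240.0)
print(len(phase_a), "pulses per fundamental period")
```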

  8. Execution of deep dipole geoelectrical soundings in areas of geothermal interest. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patella, D.

    It is suggested that deep geoelectrical problems may be resolved by carrying out dipole soundings in the field and applying a quantitative interpretation in the Schlumberger domain. The 'transformation' of original field dipole sounding curves into equivalent Schlumberger curves is outlined for the cases of layered structures and arbitrary underground structures. Theoretical apparent resistivity curves are derived for soundings over two-dimensional structures. Following a summary of the geological features of the Travale-Radicondoli geothermal area of Italy, the dipole sounding method employed for this field study and the means of collecting and analyzing the data are outlined.

  9. Flight Test of a Head-Worn Display as an Equivalent-HUD for Terminal Operations

    NASA Technical Reports Server (NTRS)

    Shelton, K. J.; Arthur, J. J., III; Prinzel, L. J., III; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.

    2015-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate if the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on-board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance; however, operational issues were uncovered. The HWD showed significant potential as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.

  10. Flight test of a head-worn display as an equivalent-HUD for terminal operations

    NASA Astrophysics Data System (ADS)

    Shelton, K. J.; Arthur, J. J.; Prinzel, L. J.; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.

    2015-05-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate if the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on-board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance; however, operational issues were uncovered. The HWD showed significant potential as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.

  11. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266

  12. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.
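
    The equivalent continuous level that such a simulation ultimately reports follows the standard energy-averaging formula, Leq = 10·log10[(1/T)·Σ tᵢ·10^(Lᵢ/10)]. The sketch below applies it to a hypothetical activity schedule; the source levels, durations and the simple energetic summation of concurrent sources are placeholders, not outputs of the discrete-event model described in the paper.

```python
import math

def combine_sources(levels_db):
    """Energetic sum of simultaneous noise sources (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

def equivalent_continuous_level(levels_db, durations_s):
    """Leq = 10*log10( (1/T) * sum(t_i * 10^(L_i/10)) )."""
    total_t = sum(durations_s)
    energy = sum(t * 10.0 ** (L / 10.0) for L, t in zip(levels_db, durations_s))
    return 10.0 * math.log10(energy / total_t)

# Hypothetical schedule: excavator (85 dB) and truck (80 dB) together for 2 h,
# excavator alone for 1 h, then 5 h of 60 dB background.
levels = [combine_sources([85.0, 80.0]), 85.0, 60.0]
hours = [2 * 3600, 1 * 3600, 5 * 3600]
print(round(equivalent_continuous_level(levels, hours), 1), "dB Leq over 8 h")
```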

  13. Equivalent peak resolution: characterization of the extent of separation for two components based on their relative peak overlap.

    PubMed

    Dvořák, Martin; Svobodová, Jana; Dubský, Pavel; Riesová, Martina; Vigh, Gyula; Gaš, Bohuslav

    2015-03-01

    Although the classical formula of peak resolution was derived to characterize the extent of separation only for Gaussian peaks of equal areas, it is often used even when the peaks follow non-Gaussian distributions and/or have unequal areas. This practice can result in misleading information about the extent of separation in terms of the severity of peak overlap. We propose here the use of the equivalent peak resolution value, a term based on relative peak overlap, to characterize the extent of separation that had been achieved. The definition of equivalent peak resolution is not constrained either by the form(s) of the concentration distribution function(s) of the peaks (Gaussian or non-Gaussian) or the relative area of the peaks. The equivalent peak resolution value and the classically defined peak resolution value are numerically identical when the separated peaks are Gaussian and have identical areas and SDs. Using our new freeware program, Resolution Analyzer, one can calculate both the classically defined and the equivalent peak resolution values. With the help of this tool, we demonstrate here that the classical peak resolution values mischaracterize the extent of peak overlap even when the peaks are Gaussian but have different areas. We show that under ideal conditions of the separation process, the relative peak overlap value is easily accessible by fitting the overall peak profile as the sum of two Gaussian functions. The applicability of the new approach is demonstrated on real separations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
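
    The contrast between the two measures can be illustrated directly: for Gaussian peaks the classical resolution depends only on the separation and widths, while a relative-overlap measure also responds to the area ratio. The sketch below computes both for synthetic peaks; it follows the overlap idea described above rather than the paper's exact equivalent-resolution definition or the Resolution Analyzer program.

```python
import numpy as np

def gaussian(t, area, center, sigma):
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(t - center) ** 2 / (2.0 * sigma ** 2))

def classical_resolution(c1, s1, c2, s2):
    # Rs = (t2 - t1) / (2 * (sigma1 + sigma2)), i.e. baseline widths of 4 sigma
    return abs(c2 - c1) / (2.0 * (s1 + s2))

def relative_overlap(a1, c1, s1, a2, c2, s2, n=200_000):
    """Fraction of the smaller peak's area shared with the other peak,
    from numerical integration of the pointwise minimum of the two profiles."""
    lo = min(c1 - 8 * s1, c2 - 8 * s2)
    hi = max(c1 + 8 * s1, c2 + 8 * s2)
    t = np.linspace(lo, hi, n)
    overlap = np.minimum(gaussian(t, a1, c1, s1), gaussian(t, a2, c2, s2))
    return float(np.sum(overlap) * (t[1] - t[0]) / min(a1, a2))

# Same centers and widths, hence the same classical Rs, but very different overlap
print(classical_resolution(10.0, 0.10, 10.5, 0.10))            # 1.25
print(relative_overlap(1.0, 10.0, 0.10, 1.0, 10.5, 0.10))      # equal areas
print(relative_overlap(1.0, 10.0, 0.10, 20.0, 10.5, 0.10))     # 20:1 area ratio
```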

  14. Changes in ambient dose equivalent rates around roads at Kawamata town after the Fukushima accident.

    PubMed

    Kinase, Sakae; Sato, Satoshi; Sakamoto, Ryuichi; Yamamoto, Hideaki; Saito, Kimiaki

    2015-11-01

    Changes in ambient dose equivalent rates noted through vehicle-borne surveys have elucidated ecological half-lives of radioactive caesium in the environment. To confirm that the ecological half-lives are appropriate for predicting ambient dose equivalent rates within living areas, it is important to ascertain ambient dose equivalent rates on/around roads. In this study, radiation monitoring on/around roads at Kawamata town, located about 37 km northwest of the Fukushima Daiichi Nuclear Power Plant, was performed using monitoring vehicles and survey meters. It was found that the ambient dose equivalent rates around roads were higher than those on roads as of October 2012. Moreover, the ecological half-lives on roads were essentially consistent with those around roads. When doses are predicted using ecological half-lives derived on roads, the ambient dose equivalent rates obtained from vehicle-borne surveys must therefore be corrected to represent those within living areas. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Calculation of Reaction Forces in the Boiler Supports Using the Method of Equivalent Stiffness of Membrane Wall

    PubMed Central

    Sertić, Josip; Kozak, Dražan; Samardžić, Ivan

    2014-01-01

    The values of reaction forces in the boiler supports are the basis for the dimensioning of bearing steel structure of steam boiler. In this paper, the application of the method of equivalent stiffness of membrane wall is proposed for the calculation of reaction forces. The method of equalizing displacement, as the method of homogenization of membrane wall stiffness, was applied. On the example of “Milano” boiler, using the finite element method, the calculation of reactions in the supports for the real geometry discretized by the shell finite element was made. The second calculation was performed with the assumption of ideal stiffness of membrane walls and the third using the method of equivalent stiffness of membrane wall. In the third case, the membrane walls are approximated by the equivalent orthotropic plate. The approximation of membrane wall stiffness is achieved using the elasticity matrix of equivalent orthotropic plate at the level of finite element. The obtained results were compared, and the advantages of using the method of equivalent stiffness of membrane wall for the calculation of reactions in the boiler supports were emphasized. PMID:24959612

  16. Introduction of argon beam coagulation functionality to robotic procedures using the ABC D-Flex probe: equivalency to an existing laparoscopic instrument

    NASA Astrophysics Data System (ADS)

    Merchel, Renée A.; Barnes, Kelli S.; Taylor, Kenneth D.

    2015-03-01

    INTRODUCTION: The ABC® D-Flex Probe utilizes argon beam coagulation (ABC) technology to achieve hemostasis during minimally invasive surgery. A handle on the probe allows for integration with robotic surgical systems and introduces ABC to the robotic toolbox. To better understand the utility of D-Flex, this study compares the performance of the D-Flex probe to an existing ABC laparoscopic probe through ex vivo tissue analysis. METHODS: Comparisons were performed to determine the effect of four parameters: ABC device, tissue type, activation duration, and distance from tissue. Ten ABC D-Flex probes were used to create 30 burn samples for each comparison. Ex vivo bovine liver and porcine muscle were used as tissue models. The area and depth of each burn was measured using a light microscope. The resulting dimensional data was used to correlate tissue effect with each variable. RESULTS: D-Flex created burns which were smaller in surface area than the laparoscopic probe at all power levels. Additionally, D-Flex achieved thermal penetration levels equivalent to the laparoscopic probe. CONCLUSION: D-Flex implements a small 7F geometry which creates a more focused beam. When used with robotic precision, quick localized superficial hemostasis can be achieved with minimal collateral damage. Additionally, D-Flex achieved equivalent thermal penetration levels at lower power and argon flow-rate settings than the laparoscopic probe.

  17. Water demand-supply analysis in a large spatial area based on the processes of evapotranspiration and runoff

    PubMed Central

    Maruyama, Toshisuke

    2007-01-01

    To estimate the amount of evapotranspiration in a river basin, the “short period water balance method” was formulated. Then, by introducing the “complementary relationship method,” the amount of evapotranspiration was estimated seasonally, and with reasonable accuracy, for both small and large areas. Moreover, to accurately estimate river discharge in the low water season, the “weighted statistical unit hydrograph method” was proposed and a procedure for the calculation of the unit hydrograph was developed. Also, a new model, based on the “equivalent roughness method,” was successfully developed for the estimation of flood runoff from newly reclaimed farmlands. Based on the results of this research, a “composite reservoir model” was formulated to analyze the repeated use of irrigation water in large spatial areas. The application of this model to a number of watershed areas provided useful information with regard to the realities of water demand-supply systems in watersheds predominately dedicated to paddy fields, in Japan. PMID:24367144

  18. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
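
    A compact sketch of the workflow described above (krige the depths, regress the densities, multiply, and mask by snow-covered area) is given below, assuming the pykrige package is available for the ordinary-kriging step. All coordinates, measurements and the stand-in DEM and SCA grids are hypothetical; the Loch Vale data and the exact variogram model used in the study are not reproduced.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # assumes the pykrige package is installed

# Hypothetical survey points: easting/northing (m), depth (m),
# density (kg m^-3) and elevation (m); values are illustrative only.
x = np.array([0.0, 120.0, 340.0, 560.0, 800.0, 950.0])
y = np.array([0.0, 300.0, 150.0, 620.0, 400.0, 900.0])
depth = np.array([1.2, 1.6, 0.9, 2.1, 1.4, 2.4])
density = np.array([320.0, 340.0, 300.0, 380.0, 330.0, 400.0])
elevation = np.array([3200.0, 3260.0, 3180.0, 3400.0, 3290.0, 3480.0])

# 1) Krige snow depth onto a regular grid (unbiased, minimum-variance estimates)
grid_x = np.linspace(0.0, 1000.0, 50)
grid_y = np.linspace(0.0, 1000.0, 50)
ok = OrdinaryKriging(x, y, depth, variogram_model="spherical")
depth_grid, depth_variance = ok.execute("grid", grid_x, grid_y)

# 2) Model density by regression on a covariate (elevation here, as an example)
slope, intercept = np.polyfit(elevation, density, 1)
elev_grid = np.full(depth_grid.shape, elevation.mean())   # stand-in DEM
density_grid = slope * elev_grid + intercept

# 3) SWE (mm w.e.) = depth (m) * density (kg m^-3), masked by snow-covered area
sca_mask = np.ones(depth_grid.shape, dtype=bool)          # stand-in SCA map
swe_mm = np.where(sca_mask, depth_grid * density_grid, 0.0)
print(float(np.mean(swe_mm)), "mm mean SWE (illustrative)")
```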

  19. [FMRI-study of speech perception impairment in post-stroke patients with sensory aphasia].

    PubMed

    Maĭorova, L A; Martynova, O V; Fedina, O N; Petrushevskiĭ, A G

    2013-01-01

    The aim of the study was to find neurophysiological correlates of impairment at the primary stage of speech perception, namely phonemic discrimination, in patients with sensory aphasia after acute ischemic stroke in the left hemisphere, using the noninvasive method of fMRI. For this purpose we recorded the fMRI equivalent of mismatch negativity (MMN) in response to speech phonemes--the syllables "ba" and "pa"--in an oddball paradigm in 20 healthy subjects and 23 patients with post-stroke sensory aphasia. In healthy subjects, brain areas active in the MMN contrast were observed in the superior temporal and inferior frontal gyri of both the right and left hemispheres. In the patient group there was significant activation of the auditory cortex in the right hemisphere only; this activation was smaller in volume and intensity than in healthy subjects and correlated with the degree of speech preservation. Thus, recording the fMRI equivalent of MMN is a sensitive method for studying speech perception impairment.

  20. Can electronic medical images replace hard-copy film? Defining and testing the equivalence of diagnostic tests.

    PubMed

    Obuchowski, N A

    2001-10-15

    Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study is used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.
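
    As a generic illustration of the kind of test involved (not the specific population- or individual-equivalence procedures proposed in the paper), the sketch below applies two one-sided tests (TOST) to per-reader differences in a performance index such as ROC area, declaring equivalence when the difference is confidently within a pre-specified margin. The margin, reader count and data are hypothetical.

```python
import numpy as np
from scipy import stats

def tost_paired(differences, margin, alpha=0.05):
    """Two one-sided tests for paired differences (e.g. per-reader AUC on
    soft copy minus hard copy). Equivalence is declared when both one-sided
    nulls (mean <= -margin, mean >= +margin) are rejected at level alpha."""
    d = np.asarray(differences, dtype=float)
    n = d.size
    mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    p_lower = 1.0 - stats.t.cdf((mean + margin) / se, df=n - 1)  # H0: mean <= -margin
    p_upper = stats.t.cdf((mean - margin) / se, df=n - 1)        # H0: mean >= +margin
    p = max(p_lower, p_upper)
    return p, p < alpha

# Ten readers, AUC difference (digital minus film), equivalence margin of 0.05
auc_diff = [0.01, -0.02, 0.00, 0.02, -0.01, 0.01, 0.00, -0.03, 0.02, 0.01]
p_value, equivalent = tost_paired(auc_diff, margin=0.05)
print(round(p_value, 4), equivalent)
```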

  1. [Biomechanical analysis on healing process of sagittal fracture of the mandibular condyle after rigid fixation].

    PubMed

    Jing, Jie; Qu, Ai-li; Ding, Xiao-mei; Hei, Yu-na

    2015-04-01

    To analyze the biomechanical healing process after rigid fixation of sagittal fracture of the mandibular condyle (SFMC), and to provide guidelines for surgical treatment. A three-dimensional finite element model (3D-FEAM) of the mandible and condyle was established. The right condyle was simulated as an SFMC with a 0.1 mm gap running lengthwise across the condyle, and a 3D-FEAM of the rigid fixation was established. Biomechanical factors such as the stress distribution on the condylar surface, the displacement around the fracture, the stress on the plate and the stress shielding were calculated at 0, 4, 8 and 12 weeks after rigid fixation. The maximum equivalent stress of the normal condyle was located in the middle third of the condylar neck. The maximum equivalent stress at 0 weeks after fixation was 23 times that of the normal condyle; it was located at the condylar stump and on the plate near the inferior points of the fracture line, with little stress in the other areas. The maximum equivalent stress at 4, 8 and 12 weeks was approximately 6 times that of the normal condyle and was located in the same areas as at 0 weeks, again with little stress elsewhere on the condyle. The maximum total displacement and maximum total rotation increased by 0.57-0.75 mm and 0.01-0.09°, respectively, during the healing process. The maximum equivalent stress on the condylar stump at 0 weeks was 5-6 times that at 4, 8, and 12 weeks. The maximum equivalent stress, maximum total displacement and maximum total rotation of the fractured fragment did not change significantly during the healing process. The maximum equivalent stress on the plate at 0 weeks was 7-9 times that at 4, 8 and 12 weeks. The stress in the condyle and the stress shielding of the plate may explain the resorption and remodeling of the condyle during healing of SFMC. The biomechanical parameters increase markedly at 4 weeks after fixation. Elastic intermaxillary traction is necessary to decrease the total displacement and rotation of the condyle, and a liquid diet is necessary to decrease the equivalent stress within the first 4 weeks. Rehabilitation training should be used to recover TMJ function after 4 weeks, because by then the condyle and mandible are able to carry out normal functions.

  2. Calculations of a wideband metamaterial absorber using equivalent medium theory

    NASA Astrophysics Data System (ADS)

    Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling

    2016-08-01

    Metamaterial absorbers (MMAs) have drawn increasing attention in many areas because they can absorb electromagnetic (EM) waves with near-unity absorptivity. We demonstrate the design, simulation, experimental validation and calculation of a wideband MMA based on a double-square-loop (DSL) array loaded with chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption band is about 9.1 GHz, corresponding to a relative bandwidth of 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is used to calculate the absorption of the DSL MMA, and the EMT-based absorption agrees with the simulated and measured results. The EMT-based method provides a new way to analyze the mechanism of MMAs.
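
    For context, the absorptivity of such metal-backed absorbers is commonly evaluated from the scattering parameters as A(f) = 1 − |S11|² − |S21|², with |S21| ≈ 0 when a ground plane blocks transmission. The sketch below applies that relation to a synthetic reflection spectrum; it does not reproduce the EMT parameter retrieval or the measured data of the paper.

```python
import numpy as np

def absorptivity(s11, s21):
    """A(f) = 1 - |S11|^2 - |S21|^2 from complex scattering parameters."""
    return 1.0 - np.abs(s11) ** 2 - np.abs(s21) ** 2

# Synthetic sweep: a metal-backed absorber has S21 ~ 0 and a reflection dip
# near resonance (the curve below is illustrative, not measured data).
freq_ghz = np.linspace(4.0, 16.0, 121)
s11 = 0.9 - 0.85 * np.exp(-((freq_ghz - 10.0) / 4.0) ** 2)
s21 = np.zeros_like(freq_ghz)
A = absorptivity(s11, s21)
band = freq_ghz[A >= 0.5]
print(f"half-maximum absorption band: {band.min():.1f}-{band.max():.1f} GHz")
```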

  3. Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining

    NASA Astrophysics Data System (ADS)

    Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing

    2018-04-01

    Recently, big data mining methods have been applied to dynamic wind farm equivalent modeling. In this paper, their recent development is reviewed on the basis of present research, both domestic and overseas. Firstly, studies of wind speed prediction, equivalence and its distribution within the wind farm are summarized. Secondly, two typical approaches used in big data mining are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple wind turbine equivalent modeling, three aspects are emphasized: the aggregation of different wind turbine clusters, the parameters within the same cluster, and the equivalence of the collector system. Thirdly, an outlook on the future development of dynamic wind farm equivalent models is discussed.

  4. An Equivalent cross-section Framework for improving computational efficiency in Distributed Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish

    2014-05-01

    While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to obtain total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moisture from the equivalent cross-section approach is compared with in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for implementing distributed hydrological models at regional scales.

  5. Accumulation of subcutaneous fat, but not visceral fat, is a predictor of adiponectin levels in preterm infants at term-equivalent age.

    PubMed

    Nakano, Yuya; Itabashi, Kazuo; Sakurai, Motoichiro; Aizawa, Madoka; Dobashi, Kazushige; Mizuno, Katsumi

    2014-05-01

    Preterm infants have altered fat tissue development, including a higher percentage of fat mass and increased volume of visceral fat. They also have altered adiponectin levels, including a lower ratio of high-molecular-weight adiponectin (HMW-Ad) to total adiponectin (T-Ad) at term-equivalent age, compared with term infants. The objective of this study was to investigate the association between adiponectin levels and fat tissue accumulation or distribution in preterm infants at term-equivalent age. Cross-sectional clinical study. Study subjects were 53 preterm infants born at ≤34 weeks gestation with a mean birth weight of 1592 g. Serum levels of T-Ad and HMW-Ad were measured and a computed tomography (CT) scan was performed at the level of the umbilicus at term-equivalent age to analyze how fat tissue accumulation or distribution was correlated with adiponectin levels. T-Ad (r=0.315, p=0.022) and HMW-Ad levels (r=0.338, p=0.013) were positively associated with subcutaneous fat area evaluated by CT scan at term-equivalent age, but were not associated with visceral fat area in simple regression analyses. In addition, T-Ad (β=0.487, p=0.003) and HMW-Ad levels (β=0.602, p<0.001) were positively associated with subcutaneous fat tissue area, but not with visceral fat area, in multiple regression analyses as well. Subcutaneous fat accumulation contributes to increased levels of T-Ad and HMW-Ad, while visceral fat accumulation does not influence adiponectin levels in preterm infants at term-equivalent age. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. A profilometry-based dentifrice abrasion Method for V8 brushing machines. Part I: Introduction to RDA-PE.

    PubMed

    White, Donald J; Schneiderman, Eva; Colón, Ellen; St John, Samuel

    2015-01-01

    This paper describes the development and standardization of a profilometry-based method for assessment of dentifrice abrasivity called Radioactive Dentin Abrasivity - Profilometry Equivalent (RDA-PE). Human dentin substrates are mounted in acrylic blocks of precise standardized dimensions, permitting mounting and brushing in V8 brushing machines. Dentin blocks are masked to create an area of "contact brushing." Brushing is carried out in V8 brushing machines and dentifrices are tested as slurries. An abrasive standard is prepared by diluting the ISO 11609 abrasivity reference calcium pyrophosphate abrasive into carboxymethyl cellulose/glycerin, just as in the RDA method. Following brushing, masked areas are removed and profilometric analysis is carried out on the treated specimens. Assessments of average abrasion depth (contact or optical profilometry) are made. Inclusion of the standard calcium pyrophosphate abrasive permits a direct RDA-equivalent assessment of abrasion, which is characterized with profilometry as Depth(test)/Depth(control) × 100. Within the test, the maximum abrasivity standard of 250 can be created in situ simply by including a treatment group of the standard abrasive brushed with 2.5 times the number of brushing strokes. RDA-PE is enabled in large part by the availability of easy-to-use and well-standardized modern profilometers, but its use in V8 brushing machines is enabled by the unique specific conditions described herein. RDA-PE permits the evaluation of dentifrice abrasivity to dentin without the requirement of irradiated teeth and the infrastructure for handling them. In direct comparisons, the RDA-PE method provides dentifrice abrasivity assessments comparable to the industry gold-standard RDA technique.
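
    The RDA-PE arithmetic itself is a ratio of mean profilometric depths, with the in-situ 250 ceiling obtained from the reference slurry brushed with 2.5 times the strokes. A short sketch with hypothetical depth values follows; the numbers are illustrative and not drawn from the paper.

```python
def rda_pe(test_depths_um, reference_depths_um):
    """RDA-PE = mean(test abrasion depth) / mean(reference abrasion depth) * 100."""
    mean = lambda values: sum(values) / len(values)
    return mean(test_depths_um) / mean(reference_depths_um) * 100.0

# Hypothetical mean contact-area depths (micrometres) after V8 brushing
reference = [4.1, 3.8, 4.4, 4.0]          # ISO calcium pyrophosphate slurry
dentifrice = [2.6, 2.9, 2.4, 2.7]         # test product
ceiling_group = [10.2, 9.8, 10.5, 10.1]   # reference slurry, 2.5x brushing strokes

print(round(rda_pe(dentifrice, reference)))      # abrasivity of the test product
print(round(rda_pe(ceiling_group, reference)))   # should land near the 250 ceiling
```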

  7. Experimental measurement and modeling of snow accumulation and snowmelt in a mountain microcatchment

    NASA Astrophysics Data System (ADS)

    Danko, Michal; Krajčí, Pavel; Hlavčo, Jozef; Kostka, Zdeněk; Holko, Ladislav

    2016-04-01

    Fieldwork is a very useful source of data in all geosciences, and this naturally applies to snow hydrology as well. Snow accumulation and snowmelt are spatially very heterogeneous, especially in non-forested mountain environments, and direct field measurements provide the most accurate information about them. Quantifying and understanding the processes that cause these spatial differences is crucial for predicting and modelling runoff volumes in the spring snowmelt period. This study presents possibilities for detailed measurement and modeling of snow cover characteristics in a mountain experimental microcatchment located in the Western Tatra Mountains in northern Slovakia. The catchment area is 0.059 km2 and the mean altitude is 1500 m a.s.l. The measurement network consists of 27 snow poles, 3 small snow lysimeters, a discharge measurement device and a standard automatic weather station. Snow depth and snow water equivalent (SWE) were measured twice a month near the snow poles, and these measurements were used to estimate spatial differences in SWE accumulation. Snowmelt outflow was measured by the small snow lysimeters. Measurements were performed in winter 2014/2015. Snow water equivalent variability was very high for such a small area; differences between individual measuring points reached 600 mm at the time of maximum SWE. The results indicated good performance of the snow lysimeter for identifying snowmelt timing: the increase in snowmelt measured by the lysimeter coincided with the increase in discharge at the catchment outlet and with the rise of air temperature above the freezing point. The measured data were afterwards used in the distributed rainfall-runoff model MIKE-SHE. Several methods were used for the spatial distribution of precipitation and snow water equivalent. The model was able to simulate snow water equivalent and snowmelt timing at a daily step reasonably well. Simulated discharges were slightly overestimated in later spring.

  8. Measurements of neutron dose equivalent for a proton therapy center using uniform scanning proton beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Yuanshui; Liu Yaxi; Zeidan, Omar

    Purpose: Neutron exposure is of concern in proton therapy, and varies with beam delivery technique, nozzle design, and treatment conditions. Uniform scanning is an emerging treatment technique in proton therapy, but neutron exposure for this technique has not been fully studied. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, under various treatment conditions for uniform scanning beams employed at our proton therapy center. Methods: Using a wide energy neutron dose equivalent detector (SWENDI-II, ThermoScientific, MA), the authors measured H/D at 50 cm lateral to the isocenter as a function of proton range, modulation width, beam scanning area, collimated field size, and snout position. They also studied the influence of other factors on neutron dose equivalent, such as aperture material, the presence of a compensator, and measurement locations. They measured H/D for various treatment sites using patient-specific treatment parameters. Finally, they compared H/D values for various beam delivery techniques at various facilities under similar conditions. Results: H/D increased rapidly with proton range and modulation width, varying from about 0.2 mSv/Gy for a 5 cm range and 2 cm modulation width beam to 2.7 mSv/Gy for a 30 cm range and 30 cm modulation width beam when 18 × 18 cm² uniform scanning beams were used. H/D increased linearly with the beam scanning area, and decreased slowly with aperture size and snout retraction. The presence of a compensator reduced the H/D slightly compared with that without a compensator present. Aperture material and compensator material also have an influence on neutron dose equivalent, but the influence is relatively small. H/D varied from about 0.5 mSv/Gy for a brain tumor treatment to about 3.5 mSv/Gy for a pelvic case. Conclusions: This study presents H/D as a function of various treatment parameters for uniform scanning proton beams. For similar treatment conditions, the H/D value per uncollimated beam size for uniform scanning beams was slightly lower than that from a passive scattering beam and higher than that from a pencil beam scanning beam, within a factor of 2. Minimizing beam scanning area could effectively reduce neutron dose equivalent for uniform scanning beams, down to a level close to pencil beam scanning.

  9. 'Equivalence' and the translation and adaptation of health-related quality of life questionnaires.

    PubMed

    Herdman, M; Fox-Rushby, J; Badia, X

    1997-04-01

    The increasing use of health-related quality of life (HRQOL) questionnaires in multinational studies has resulted in the translation of many existing measures. Guidelines for translation have been published, and there has been some discussion of how to achieve and assess equivalence between source and target questionnaires. Our reading in this area had led us, however, to the conclusion that different types of equivalence were not clearly defined, and that a theoretical framework for equivalence was lacking. To confirm this we reviewed definitions of equivalence in the HRQOL literature on the use of generic questionnaires in multicultural settings. The literature review revealed: definitions of 19 different types of equivalence; vague or conflicting definitions, particularly in the case of conceptual equivalence; and the use of many redundant terms. We discuss these findings in the light of a framework adapted from cross-cultural psychology for describing three different orientations to cross-cultural research: absolutism, universalism and relativism. We suggest that the HRQOL field has generally adopted an absolutist approach and that this may account for some of the confusion in this area. We conclude by suggesting that there is an urgent need for a standardized terminology within the HRQOL field, by offering a standard definition of conceptual equivalence, and by suggesting that the adoption of a universalist orientation would require substantial changes to guidelines and more empirical work on the conceptualization of HRQOL in different cultures.

  10. Snow-depth and water-equivalent data for the Fairbanks area, Alaska, spring 1995

    USGS Publications Warehouse

    Plumb, E.W.; Lilly, M.R.

    1996-01-01

    Snow depths at 34 sites and snow-water equivalents at 13 sites in the Fairbanks area were monitored during the snowmelt period (March 30 to April 26) in the spring of 1995. The U.S. Geological Survey conducted this study in cooperation with the Fairbanks International Airport, the University of Alaska Fairbanks, the Alaska Department of Natural Resources-Division of Mining and Water Management, the U.S. Army, Alaska, and the U.S. Army Corps of Engineers-Alaska District. These data were collected to provide information about potential recharge of the ground- and surface-water systems during the snowmelt period in the Fairbanks area. This information is needed by companion geohydrologic studies of areas with known or suspected contaminants in the subsurface. Data-collection sites selected had open, boggy, wooded, or brushy vegetation cover and had different slope aspects. The deepest snow at any site, 27.1 inches, was recorded on April 1, 1995; the shallowest snow measured that day was 19.1 inches. The snow-water equivalents at these two sites were 5.9 inches and 4.5 inches, respectively. Snow depths and water equivalents were comparatively greater at open and bog sites than at wooded or brushy sites. Snow depths and water equivalents at all sites decreased throughout the measuring period. The decrease was more rapid at open and boggy sites than at wooded and brushy sites. Snow had completely disappeared from all sites by April 26, 1995.

  11. Spatiotemporal Interpolation of Elevation Changes Derived from Satellite Altimetry for Jakobshavn Isbrae, Greenland

    NASA Technical Reports Server (NTRS)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sorensen, L. S.; Joughin, I. R.; Davis, C. H.; Krabill, W. B.

    2012-01-01

    Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dHdt) data over the area of interest. The largest dHdt values occur over narrow, fast-flowing outlet glaciers, where data coverage of current satellite altimetry is poorest. In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dHdt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper (ATM). The four methods are ordinary kriging (OK), kriging with external drift (KED), where the spatial pattern of surface velocity is used as a proxy for that of dHdt, and their spatiotemporal equivalents (ST-OK and ST-KED).

  12. Equivalence testing using existing reference data: An example with genetically modified and conventional crops in animal feeding studies.

    PubMed

    van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin

    2017-11-01

    An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with assumedly safe reference products. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as being appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Anomalous Shocks on the Measured Near-Field Pressure Signatures of Low-Boom Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2006-01-01

    Unexpected shocks on wind-tunnel-measured pressure signatures prompted questions about design methods, pressure signature measurement techniques, and the quality of measurements in the flow fields near lifting models. Some of these unexpected shocks were the result of component integration methods. Others were attributed to the three-dimensional nature of the flow around a lifting model, to inaccuracies in the prediction of the area-ruled lift, or to wing-tip stall effects. This report discusses the low-boom model wind-tunnel data where these unexpected shocks were initially observed, the physics of the lifting wing/body model's flow field, the wind-tunnel data used to evaluate the applicability of methods for calculating equivalent areas due to lift, the performance of lift prediction codes, and tip stall effects, so that the cause of these shocks could be determined.

  14. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of the BJR and of Venselaar et al., with average relative errors of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference in PDDs between these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
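
    For orientation, the snippet below applies the classical 4A/P (area-over-perimeter) rule, the textbook approximation that maps a rectangular field to its equivalent square; it is included only as a familiar point of reference and is not necessarily the physical model proposed in the paper.

    ```python
    def equivalent_square_side(a_cm, b_cm):
        """Classical 4A/P rule: the side of the square field with the same
        area-to-perimeter ratio as an a x b rectangular field (equals 2ab/(a+b))."""
        area = a_cm * b_cm
        perimeter = 2.0 * (a_cm + b_cm)
        return 4.0 * area / perimeter

    # A 5 cm x 20 cm rectangle behaves roughly like an 8 cm x 8 cm square.
    print(round(equivalent_square_side(5, 20), 1))  # 8.0
    ```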

  15. Music Therapy: A Career in Music Therapy

    MedlinePlus

    ... combination with doctoral study in related areas. Degree Equivalent Training in Music Therapy P ersonal qualifications include ... the student completes only the coursework necessary for equivalent music therapy training without necessarily earning a second ...

  16. Reconnaissance for radioactive deposits in eastern Alaska, 1952

    USGS Publications Warehouse

    Nelson, Arthur Edward; West, Walter S.; Matzko, John J.

    1954-01-01

    Reconnaissance for radioactive deposits was conducted in selected areas of eastern Alaska during 1952. Examination of copper, silver, and molybdenum occurrences and of a reported nickel prospect in the Slana-Nabesna and Chisana districts in the eastern Alaska Range revealed a maximum radioactivity of about 0.003 percent equivalent uranium. No appreciable radioactivity anomalies were indicated by aerial and foot traverses in the area. Reconnaissance for possible lode concentrations of uranium minerals in the vicinity of reported fluoride occurrences in the Hope Creek and Miller House-Circle Hot Springs areas of the Circle quadrangle and in the Fortymile district found a maximum of 0.055 percent equivalent uranium in a float fragment of ferruginous breccia in the Hope Creek area; analysis of samples obtained in the vicinity of the other fluoride occurrences showed a maximum of only 0.005 percent equivalent uranium. No uraniferous lodes were discovered in the Koyukuk-Chandalar region, nor was the source of the monazite, previously reported in the placer concentrates from the Chandalar mining district, located. The source of the uranothorianite in the placers at Gold Bench on the South Fork of the Koyukuk River was not found during a brief reconnaissance, but a placer concentrate was obtained that contains 0.18 percent equivalent uranium. This concentrate is about ten times more radioactive than concentrates previously available from the area.

  17. The concept of equivalence and its application to the assessment of thrombolytic effects.

    PubMed

    Hampton, J R

    1997-12-01

    Very large clinical trials have become the norm in the evaluation of thrombolytic agents, and these 'megatrials' are administratively complex and expensive. It remains to be seen whether new thrombolytics will lead to further large reductions in fatality from an acute myocardial infarction, but new agents may well have advantages in areas such as safety and ease of administration, in addition to other clinical benefits (i.e. fewer cases of cardiac shock, heart failure and atrial fibrillation). The problem is how to introduce such new agents without a megatrial for each one. Endpoints other than fatality have some advantages and, in thrombolysis, angiographic studies are a necessary step in the development of new agents. However, such studies may not always correlate precisely with the results of mortality endpoint studies. Measurements of the resolution of ST segment elevation in myocardial infarction seem to provide a very useful method of assessing thrombolysis, but although such a technique can be applied to large numbers of patients, it cannot totally replace mortality endpoint trials. The 'equivalence' of two treatments is a clinical, not a statistical, concept, although statistical principles that allow equivalence to be investigated with medium-sized trials should be applied. Demonstrating equivalence in outcome between the new thrombolytic reteplase and streptokinase was the aim of the INJECT study.

  18. Distribution and variability of precipitation chemistry in the conterminous United States, January through December 1983

    USGS Publications Warehouse

    Rinella, J.F.; Miller, T.L.

    1988-01-01

    Analyses of atmospheric precipitation samples, collected during the 1983 calendar year from 109 National Trends Network sites in the United States, are presented in this report. The sites were grouped into six geographical regions based on the chemical composition of the samples. Precipitation chemistry in these regions was influenced by proximity to (1) oceans, (2) major industrial and fossil-fuel consuming areas, and (3) major agricultural and livestock areas. Frequency distributions of ionic composition, determined for 10 chemical constituents and for precipitation quantities at each site, showed wide variations in chemical concentrations and precipitation quantities from site to site. Of the 109 sites, 55 had data coverage for the year sufficient to characterize precipitation quality patterns on a nationwide basis. Except for ammonium and calcium, both of which showed largest concentrations in the agricultural midwest and plains states, the largest concentrations and loads generally were in areas that include the heavily industrialized population centers of the eastern United States. Except for hydrogen, the concentrations of all chemical ions are inversely related to precipitation depth. Precipitation quantities generally account for less than 30% of the chemical variation in precipitation samples; however, they account for 30 to 65% of the variation in calcium concentrations. In regions where precipitation has a large ionic proportion of hydrogen-ion equivalents, much of the hydrogen-ion concentration could be balanced by sulfate equivalents and partly balanced by nitrite-plus-nitrate equivalents. In regions where hydrogen-ion equivalents in precipitation were smaller, ammonium- and calcium-ion equivalents were necessary, along with the hydrogen-ion equivalents, to balance the sulfate plus nitrite-plus-nitrate equivalents. (USGS)

  19. Ancient Babylonian astronomers calculated Jupiter’s position from the area under a time-velocity graph

    NASA Astrophysics Data System (ADS)

    Ossendrijver, Mathieu

    2016-01-01

    The idea of computing a body’s displacement as an area in time-velocity space is usually traced back to 14th-century Europe. I show that in four ancient Babylonian cuneiform tablets, Jupiter’s displacement along the ecliptic is computed as the area of a trapezoidal figure obtained by drawing its daily displacement against time. This interpretation is prompted by a newly discovered tablet on which the same computation is presented in an equivalent arithmetical formulation. The tablets date from 350 to 50 BCE. The trapezoid procedures offer the first evidence for the use of geometrical methods in Babylonian mathematical astronomy, which was thus far viewed as operating exclusively with arithmetical concepts.

  20. Effect of the losses in the vocal tract on determination of the area function.

    PubMed

    Gülmezoğlu, M Bilginer; Barkana, Atalay

    2003-01-01

    In this work, the cross-sectional areas of the vocal tract are determined for the lossy and lossless cases by using the pole-zero models obtained from the electrical equivalent circuit model of the vocal tract and the system identification method. The cross-sectional areas are used to compare the lossy and lossless cases. In the lossy case, the internal losses due to wall vibration, heat conduction, air friction, and viscosity are considered; that is, the complex poles and zeros obtained from the models are used directly. In the lossless case, by contrast, only the imaginary parts of these poles and zeros are used. The vocal tract shapes obtained for the lossy case are close to the actual ones.

  1. 40 CFR Table A-1 to Subpart A of... - Summary of Applicable Requirements for Reference and Equivalent Methods for Air Monitoring of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Methods for Air Monitoring of Criteria Pollutants Pollutant Ref. or equivalent Manual or automated Applicable part 50 appendix Applicable subparts of part 53 A B C D E F SO2 Reference Manual A Equivalent Manual ✓ ✓ Automated ✓ ✓ ✓ CO Reference Automated C ✓ ✓ Equivalent Manual ✓ ✓ Automated ✓ ✓ ✓ O3...

  2. Titration of Limited Hold to Comparison in Conditional Discrimination Training and Stimulus Equivalence Testing

    ERIC Educational Resources Information Center

    Arntzen, Erik; Haugland, Silje

    2012-01-01

    Reaction time (RT), thought to be important for acquiring a full understanding of the establishment of equivalence classes, has been reported in a number of studies within the area of stimulus equivalence research. In this study, we trained 3 classes of potentially 3 members, with arbitrary stimuli in a one-to-many training structure in 5 adult…

  3. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  4. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
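
    To make the core idea concrete, the sketch below runs the standard equivalent-linearization fixed point for a Duffing-type single-degree-of-freedom oscillator driven by Gaussian white noise; the single-mode plate model and the non-Gaussian variant discussed in the abstract are more involved, and all parameter values here are illustrative.

    ```python
    import numpy as np

    def duffing_equivalent_linearization(k=1.0, c=0.1, eps=0.5, S0=0.01,
                                         tol=1e-10, max_iter=200):
        """Stationary mean-square response of x'' + c x' + k (x + eps x^3) = w(t),
        with w(t) Gaussian white noise of two-sided spectral density S0 (unit mass).

        Equivalent linearization replaces k (x + eps x^3) by k_eq x with
        k_eq = k (1 + 3 eps sigma^2), using the Gaussian closure E[x^4] = 3 sigma^4,
        together with the linear SDOF result sigma^2 = pi S0 / (c k_eq)."""
        sigma2 = np.pi * S0 / (c * k)          # start from the linear (eps = 0) value
        k_eq = k
        for _ in range(max_iter):
            k_eq = k * (1.0 + 3.0 * eps * sigma2)
            sigma2_new = np.pi * S0 / (c * k_eq)
            if abs(sigma2_new - sigma2) < tol:
                sigma2 = sigma2_new
                break
            sigma2 = sigma2_new
        return sigma2, k_eq

    sigma2, k_eq = duffing_equivalent_linearization()
    print(f"RMS response {np.sqrt(sigma2):.4f}, equivalent stiffness {k_eq:.4f}")
    ```

    The RMS response from such a calculation is what would then feed a stress-based fatigue life estimate; the abstract's comparison asks how well the Gaussian closure behind this step holds up for that purpose.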

  5. Equivalent radiation source of 3D package for electromagnetic characteristics analysis

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wei, Xingchang; Shu, Yufei

    2017-10-01

    An equivalent radiation source method is proposed to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs) in this paper. The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is proposed to extract the locations, orientations, and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Natural Science Foundation of China (No. 61274110).

  6. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface fluid pressure that exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depths of the overpressure zone are determined using the Eaton and equivalent-depth methods, based on well data and normal compaction trend analysis. The equivalent-depth method uses the effective vertical stress principle, whereas the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was applicable only to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11,804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations with the Eaton method (7,241.3 psi) and the equivalent-depth method (6,619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are interpreted as a possible overpressure zone due to a hydrocarbon generation mechanism.
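
    For readers unfamiliar with the two techniques, the sketch below shows the usual textbook forms of the Eaton (velocity-ratio) and equivalent-depth pore-pressure calculations from an overburden curve and a normal compaction trend; the gradients, exponent, and synthetic velocity log are illustrative placeholders, not the Penobscot L-30 data.

    ```python
    import numpy as np

    def eaton_pore_pressure(S, Pn, v_obs, v_norm, exponent=3.0):
        """Eaton's method: Pp = S - (S - Pn) * (v_obs / v_norm)**n, with n ~ 3 for velocity."""
        return S - (S - Pn) * (v_obs / v_norm) ** exponent

    def equivalent_depth_pore_pressure(z, S, Pn, v_obs, v_norm):
        """Equivalent-depth method at the deepest sample: find the shallower depth where
        the normal-trend velocity equals the observed velocity, equate the vertical
        effective stresses, and back out pore pressure."""
        z_eq = np.interp(v_obs[-1], v_norm, z)   # assumes v_norm increases with depth
        sigma_eq = np.interp(z_eq, z, S) - np.interp(z_eq, z, Pn)
        return S[-1] - sigma_eq

    # Illustrative 1-D profiles (psi, ft, ft/s) with undercompaction below ~10,000 ft.
    z = np.linspace(2000.0, 12000.0, 200)        # depth
    S = 1.0 * z                                  # overburden, ~1 psi/ft
    Pn = 0.465 * z                               # hydrostatic, ~0.465 psi/ft
    v_norm = 5000.0 + 0.5 * z                    # normal compaction trend
    v_obs = np.where(z < 10000.0, v_norm, v_norm - 800.0)

    print("Eaton:", eaton_pore_pressure(S[-1], Pn[-1], v_obs[-1], v_norm[-1]))
    print("Equivalent depth:", equivalent_depth_pore_pressure(z, S, Pn, v_obs, v_norm))
    ```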

  7. A Riemannian geometric mapping technique for identifying incompressible equivalents to subsonic potential flows

    NASA Astrophysics Data System (ADS)

    German, Brian Joseph

    This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. In this sense, forces owing to compressibility can be ascribed to the geometry of space in much the same way that general relativity ascribes gravitational forces to the curvature of space-time. Although the analogy with general relativity is fruitful, it is important not to overstate the similarities between compressibility and the physics of gravity, as the interest for this thesis is primarily in the mathematical framework and not physical phenomenology or epistemology. The thesis presents the philosophy and theory for the transformation method followed by a numerical method for practical solutions of equivalent incompressible flows over arbitrary closed profiles. The numerical method employs an iterative approach involving the solution of the equivalent incompressible flow with a panel method, the calculation of the metric tensor for the gauge transformation, and the solution of the curvilinear coordinate mapping to the canonical flow with a finite difference approach for the elliptic boundary value problem. This method is demonstrated for non-circulatory flow over a circular cylinder and both symmetric and lifting flows over a NACA 0012 profile. Results are validated with accepted subcritical full potential test cases available in the literature. For chord-preserving mapping boundary conditions, the results indicate that the equivalent incompressible profiles thicken with Mach number and develop a leading edge droop with increased angle of attack. Two promising areas of potential applicability of the method have been identified. 
The first is in airfoil inverse design methods leveraging incompressible flow knowledge including heuristics and empirical data for the potential field effects on viscous phenomena such as boundary layer transition and separation. The second is in aerodynamic testing using distorted similarity-scaled models.
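
    Because the method is framed as the full nonlinear generalization of the Prandtl-Glauert and Karman-Tsien rules, the short sketch below evaluates those two classical compressibility corrections for a surface pressure coefficient; it is offered only as the familiar baseline the thesis improves upon, not as any part of the Riemannian mapping itself.

    ```python
    import math

    def prandtl_glauert(cp_incompressible, mach):
        """Linear correction: Cp = Cp0 / sqrt(1 - M^2)."""
        return cp_incompressible / math.sqrt(1.0 - mach ** 2)

    def karman_tsien(cp_incompressible, mach):
        """Karman-Tsien rule, a nonlinear but still local and approximate correction."""
        beta = math.sqrt(1.0 - mach ** 2)
        return cp_incompressible / (beta + (mach ** 2 / (1.0 + beta)) * cp_incompressible / 2.0)

    cp0 = -0.5   # incompressible pressure coefficient at some airfoil station
    for M in (0.3, 0.5, 0.7):
        print(M, round(prandtl_glauert(cp0, M), 3), round(karman_tsien(cp0, M), 3))
    ```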

  8. The assessment of EUMETSAT HSAF Snow Products for mountainous areas in the eastern part of Turkey

    NASA Astrophysics Data System (ADS)

    Akyurek, Z.; Surer, S.; Beser, O.; Bolat, K.; Erturk, A. G.

    2012-04-01

    Monitoring snow parameters (e.g., snow cover area, snow water equivalent) is a challenging task. Because of its physical properties, snow strongly affects the evolution of weather on a daily basis and of climate on longer time scales. The derivation of snow products over mountainous regions is considered especially challenging; it requires periodic and precise mapping of the snow cover, yet inaccessibility and the scarcity of ground observations limit snow cover mapping in mountainous areas. Today such mapping is carried out operationally by means of optical satellite imagery and microwave radiometry. Retrieving the snow cover area from satellite images raises the problems of topographic variation within the sensor footprint and of spatial and temporal variation of snow characteristics in mountainous areas. Most global and regional operational snow products use generic algorithms for flat and mountainous areas, but the non-uniformity of snow characteristics can only be modeled with different algorithms for mountain and flat areas. In this study the early findings of the Satellite Application Facility on Hydrology (H-SAF) project, which is financially supported by EUMETSAT, are presented. Turkey takes part in the H-SAF project in product generation (e.g., snow recognition, fractional snow cover, and snow water equivalent) for mountainous regions across Europe, in calibration/validation (cal/val) of satellite-derived snow products against ground observations, and in cal/val studies with hydrological modeling in the mountainous terrain of Europe. All the snow products are operational on a daily basis. For the snow recognition product (H10) for mountainous areas, spectral thresholding methods were applied at the sub-pixel scale to MSG-SEVIRI images. The different spectral characteristics of cloud, snow, and land determined the structure of the algorithm, and these characteristics were obtained from subjective classification of known snow cover features in the MSG-SEVIRI images. The fractional snow cover (H12) algorithm is based on a sub-pixel reflectance model applied to METOP-AVHRR data. Because topography affects satellite-measured radiances over rough terrain, the sun zenith and azimuth angles, as well as the direction of observation relative to these, are taken into account in estimating target reflectances from the satellite images. The SWE product (H13) values were obtained using an assimilation process based on the Helsinki University of Technology model with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) daily brightness-temperature values. Validation studies for the three products were performed for the water years 2010 and 2011, yielding an average probability of detection of 70% for the snow recognition product, an overall accuracy of 60% for the fractional snow cover product, and an RMSE of 45 mm for the snow water equivalent product. Final versions of these three products will be presented and discussed. Key words: snow, satellite images, mountain, HSAF, snow cover, snow water equivalent
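
    The validation figures quoted above (probability of detection, overall accuracy, RMSE) can be reproduced from a hit/miss contingency table and paired SWE values along the lines of the sketch below; the arrays are placeholders, not H-SAF validation data.

    ```python
    import numpy as np

    def probability_of_detection(obs_snow, sat_snow):
        """POD = hits / (hits + misses); both arguments are boolean arrays."""
        hits = np.sum(obs_snow & sat_snow)
        misses = np.sum(obs_snow & ~sat_snow)
        return hits / (hits + misses)

    def overall_accuracy(obs_class, sat_class):
        """Fraction of samples where the satellite class matches the ground truth."""
        return np.mean(obs_class == sat_class)

    def rmse(obs, pred):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

    # Placeholder daily comparison at a handful of stations.
    obs_snow = np.array([True, True, False, True, False, True])
    sat_snow = np.array([True, False, False, True, False, True])
    print("POD:", probability_of_detection(obs_snow, sat_snow))
    print("Accuracy:", overall_accuracy(obs_snow, sat_snow))
    print("SWE RMSE (mm):", rmse([120, 80, 0, 45], [150, 60, 5, 70]))
    ```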

  9. Electrical Characterization of Semiconductor and Dielectric Materials with a Non-Damaging FastGateTM Probe

    NASA Astrophysics Data System (ADS)

    Hillard, Robert; Howland, William; Snyder, Bryan

    2002-03-01

    Determination of the electrical properties of semiconductor materials and dielectrics is highly desirable since these correlate best to final device performance. The properties of SiO2 and high-k dielectrics, such as Equivalent Oxide Thickness (EOT), Interface Trap Density (Dit), Oxide Effective Charge (Neff), Flatband Voltage Hysteresis (Delta Vfb), and Threshold Voltage (VT), and bulk properties such as carrier density profile and channel dose, are all important parameters that require monitoring during front-end processing. Conventional methods for determining these parameters involve manufacturing polysilicon or metal-gate MOS capacitors and subsequently measuring capacitance-voltage (CV) and/or current-voltage (IV) characteristics. These conventional techniques are time consuming and can introduce changes to the materials being monitored. Also, equivalent-circuit effects resulting from excessive leakage current, series resistance, and stray inductance can introduce large errors in the measured results. In this paper, a new method is discussed that provides rapid determination of these critical parameters and is robust against equivalent-circuit errors. This technique uses a small-diameter (30 micron), elastically deformed probe to form a gate for MOSCAP CV and IV measurements and can be used to measure either monitor wafers or test areas within scribe lines on product wafers. It allows for measurements of dielectrics thinner than 10 Angstroms. A detailed description and applications, such as high-k dielectrics, will be presented.
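
    One of the listed quantities, Equivalent Oxide Thickness, is conventionally obtained from the accumulation capacitance of a CV sweep; the snippet below shows that textbook conversion with a made-up capacitance value and gate area, and is not tied to the FastGate instrument itself.

    ```python
    import math

    EPS0 = 8.854e-12   # vacuum permittivity, F/m
    K_SIO2 = 3.9       # relative permittivity of SiO2

    def equivalent_oxide_thickness(c_acc_farads, gate_area_m2):
        """EOT = eps0 * k_SiO2 * A / C_acc: the SiO2 thickness that would give the
        same accumulation capacitance as the (possibly high-k) stack under test."""
        return EPS0 * K_SIO2 * gate_area_m2 / c_acc_farads

    # Hypothetical 30-micron-diameter gate with 12 pF accumulation capacitance.
    area = math.pi * (15e-6) ** 2
    print(f"EOT = {equivalent_oxide_thickness(12e-12, area) * 1e9:.2f} nm")
    ```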

  10. 10 CFR 63.111 - Performance objectives for the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...

  11. 10 CFR 63.111 - Performance objectives for the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...

  12. 10 CFR 63.111 - Performance objectives for the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...

  13. 10 CFR 63.111 - Performance objectives for the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of the deep dose equivalent and the committed dose equivalent to any individual organ or tissue (other than the lens of the eye) of 0.5 Sv (50 rem). The lens dose equivalent may not exceed 0.15 Sv (15... TEDE (hereafter referred to as “dose”) to any real member of the public located beyond the boundary of...

  14. 41 CFR 102-80.115 - Is there more than one option for establishing that an equivalent level of safety exists?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... equivalent level of safety. (c) As a third option, other technical analysis procedures, as approved by the... Equivalent Level of Safety Analysis § 102-80.115 Is there more than one option for establishing that an... areas of safety. Available safe egress times would be developed based on analysis of a number of assumed...

  15. Should non-disclosures be considered as morally equivalent to lies within the doctor–patient relationship?

    PubMed Central

    Cox, Caitriona L; Fritz, Zoe

    2016-01-01

    In modern practice, doctors who outright lie to their patients are often condemned, yet those who employ non-lying deceptions tend to be judged less critically. Some areas of non-disclosure have recently been challenged: not telling patients about resuscitation decisions; inadequately informing patients about risks of alternative procedures and withholding information about medical errors. Despite this, there remain many areas of clinical practice where non-disclosures of information are accepted, where lies about such information would not be. Using illustrative hypothetical situations, all based on common clinical practice, we explore the extent to which we should consider other deceptive practices in medicine to be morally equivalent to lying. We suggest that there is no significant moral difference between lying to a patient and intentionally withholding relevant information: non-disclosures could be subjected to Bok's ‘Test of Publicity’ to assess permissibility in the same way that lies are. The moral equivalence of lying and relevant non-disclosure is particularly compelling when the agent's motivations, and the consequences of the actions (from the patient's perspectives), are the same. We conclude that it is arbitrary to claim that there is anything inherently worse about lying to a patient to mislead them than intentionally deceiving them using other methods, such as euphemism or non-disclosure. We should question our intuition that non-lying deceptive practices in clinical practice are more permissible and should thus subject non-disclosures to the same scrutiny we afford to lies. PMID:27451425

  16. Analysis of difference between direct and geodetic mass balance measurements at South Cascade Glacier, Washington

    USGS Publications Warehouse

    Krimmel, R.M.

    1999-01-01

    Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water-equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include underestimation of the density of lost material, sinking stakes, or poorly represented areas.

  17. The Same or Not the Same: Equivalence as an Issue in Educational Research

    NASA Astrophysics Data System (ADS)

    Lewis, Scott E.; Lewis, Jennifer E.

    2005-09-01

    In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
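
    A minimal version of the two one-sided t tests (TOST) described here is sketched below, assuming pooled variances and a symmetric equivalence interval of ±delta in the units of the score; the groups and the 50-point margin are invented for illustration, not the authors' SAT/ACT analysis.

    ```python
    import numpy as np
    from scipy import stats

    def tost_equivalence(x, y, delta, alpha=0.05):
        """Two one-sided t tests for equivalence of two group means.

        Equivalence is declared at level alpha if the mean difference is shown to be
        both greater than -delta and less than +delta (pooled-variance t tests)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        diff = x.mean() - y.mean()
        sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        df = nx + ny - 2
        p_lower = 1.0 - stats.t.cdf((diff + delta) / se, df)   # H0: diff <= -delta
        p_upper = stats.t.cdf((diff - delta) / se, df)         # H0: diff >= +delta
        p = max(p_lower, p_upper)
        return p, p < alpha

    rng = np.random.default_rng(1)
    control = rng.normal(1050, 120, 80)        # SAT-like scores, invented
    experimental = rng.normal(1060, 120, 85)
    p, equivalent = tost_equivalence(control, experimental, delta=50)
    print(f"TOST p = {p:.3f}, equivalent within +/-50 points: {equivalent}")
    ```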

  18. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  19. Preliminary study of radioactive limonite localities in Colorado, Utah, and Wyoming

    USGS Publications Warehouse

    Lovering, T.G.; Beroni, E.P.

    1956-01-01

    Nine radioactive limonite localities of different types were sampled during the spring and fall of 1953 in an effort to establish criteria for differentiating limonite outcrops associated with uranium or thorium deposits from limonite outcrops not associated with such deposits. The samples were analyzed for uranium and thorium by standard chemical methods, for equivalent uranium by the radiometric method, and for a number of common metals by semiquantitative geochemical methods. Correlation coefficients were then calculated for each of the metals with respect to equivalent uranium, and to uranium where present, for all of the samples from each locality. The correlation coefficients may indicate a significant association between uranium or thorium and certain metals. The specific associations that are interpreted as significant vary considerably among the uranium localities but are more consistent for the thorium localities. Samples taken from radioactive outcrops in the vicinity of uranium or thorium deposits can be quickly analyzed by geochemical methods for various elements. Correlation coefficients can then be determined for the various elements with respect to uranium or thorium; if any significant correlations are obtained, the elements showing such correlation may be indicators of uranium or thorium. Soil samples from covered areas in the vicinity of the radioactive outcrop may then be analyzed for the indicator elements, and any resulting anomalies used as a guide for prospecting where the depth of overburden is too great to allow the use of radiation-detecting instruments. Correlation coefficients of the associated indicator elements, used in conjunction with petrographic evidence, may also be useful in interpreting the origin and paragenesis of radioactive deposits. Changes in color of limonite stains on the outcrop may also be a useful guide to ore in some areas.
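
    The screening logic described here, correlating equivalent uranium with semiquantitatively determined metals, amounts to the small calculation sketched below; the element names and sample values are invented, not the 1953 analyses.

    ```python
    import numpy as np

    # Hypothetical analyses from one locality: equivalent uranium (%) and two metals (ppm).
    eU = np.array([0.003, 0.010, 0.055, 0.004, 0.020, 0.008])
    vanadium = np.array([15, 60, 310, 25, 140, 40])
    copper = np.array([210, 190, 200, 230, 180, 220])

    corr = np.corrcoef(np.vstack([eU, vanadium, copper]))
    # Elements whose |r| against eU approaches 1 are candidate indicator elements.
    print("eU vs V :", round(corr[0, 1], 2))
    print("eU vs Cu:", round(corr[0, 2], 2))
    ```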

  20. 10 CFR 835.205 - Determination of compliance for non-uniform exposure of the skin.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 100 cm 2 or more. The non-uniform equivalent dose received during the year shall be averaged over the... irradiated is 10 cm 2 or more, but is less than 100 cm 2. The non-uniform equivalent dose (H) to the... less than 0.1 be used. (3) Area of skin irradiated is less than 10 cm 2. The non-uniform equivalent...

  1. 10 CFR 835.205 - Determination of compliance for non-uniform exposure of the skin.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 100 cm 2 or more. The non-uniform equivalent dose received during the year shall be averaged over the... irradiated is 10 cm 2 or more, but is less than 100 cm 2. The non-uniform equivalent dose (H) to the... less than 0.1 be used. (3) Area of skin irradiated is less than 10 cm 2. The non-uniform equivalent...

  2. 10 CFR 835.205 - Determination of compliance for non-uniform exposure of the skin.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 100 cm 2 or more. The non-uniform equivalent dose received during the year shall be averaged over the... irradiated is 10 cm 2 or more, but is less than 100 cm 2. The non-uniform equivalent dose (H) to the... less than 0.1 be used. (3) Area of skin irradiated is less than 10 cm 2. The non-uniform equivalent...

  3. 10 CFR 835.205 - Determination of compliance for non-uniform exposure of the skin.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 100 cm 2 or more. The non-uniform equivalent dose received during the year shall be averaged over the... irradiated is 10 cm 2 or more, but is less than 100 cm 2. The non-uniform equivalent dose (H) to the... less than 0.1 be used. (3) Area of skin irradiated is less than 10 cm 2. The non-uniform equivalent...

  4. 10 CFR 835.205 - Determination of compliance for non-uniform exposure of the skin.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 100 cm 2 or more. The non-uniform equivalent dose received during the year shall be averaged over the... irradiated is 10 cm 2 or more, but is less than 100 cm 2. The non-uniform equivalent dose (H) to the... less than 0.1 be used. (3) Area of skin irradiated is less than 10 cm 2. The non-uniform equivalent...

  5. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were determined numerically from three-dimensional electromagnetic field analysis that takes eddy currents and displacement currents into account. A volume averaged method and a standing wave method were used to derive the equivalent material constants, with water particles and aluminum particles used as the composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods, and the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods, and different electrical power is obtained for the two models. The differing electromagnetic behavior stems from how the eddy current is represented. For small electrical conductivity, such as that of water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity, such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena: the eddy current observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395
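
    In its simplest form, the 'volume averaged method' referred to here reduces to a volume-fraction-weighted average of the constituent properties; the snippet below shows only that elementary calculation, with illustrative permittivity values, and says nothing about the standing-wave extraction or the eddy-current limitation discussed in the abstract.

    ```python
    def volume_averaged_constant(values, volume_fractions):
        """Volume-fraction-weighted average of a material constant
        (e.g. relative permittivity) over the constituents of a composite."""
        if abs(sum(volume_fractions) - 1.0) > 1e-9:
            raise ValueError("volume fractions must sum to 1")
        return sum(v * f for v, f in zip(values, volume_fractions))

    # Water particles (relative permittivity ~78, illustrative) dispersed in air.
    print(volume_averaged_constant([78.0, 1.0], [0.3, 0.7]))
    ```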

  6. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame attached with a PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level significantly depends on the filling ratio of particles. According to the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
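
    As background for the 'preliminary optimal design' step, the snippet below evaluates the classical Den Hartog tuning rules for a plain TMD on an undamped primary structure; the particle-damping corrections studied in the paper sit on top of this baseline and are not modeled here, and the 2% mass ratio is only an example.

    ```python
    import math

    def den_hartog_tmd(mass_ratio):
        """Classical Den Hartog tuning for a TMD on an undamped primary structure:
        optimal frequency ratio f = 1/(1+mu) and damper damping ratio
        zeta = sqrt(3*mu / (8*(1+mu)**3))."""
        mu = mass_ratio
        f_opt = 1.0 / (1.0 + mu)
        zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
        return f_opt, zeta_opt

    f_opt, zeta_opt = den_hartog_tmd(0.02)   # 2% auxiliary mass ratio
    print(f"frequency ratio {f_opt:.3f}, damper damping ratio {zeta_opt:.3f}")
    ```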

  7. Linear and nonlinear equivalent circuit modeling of CMUTs.

    PubMed

    Lohfink, Annette; Eccardt, Peter-Christian

    2005-12-01

    Using piston radiator and plate capacitance theory, capacitive micromachined ultrasound transducer (CMUT) membrane cells can be described by one-dimensional (1-D) model parameters. This paper describes in detail a new method that derives a 1-D model for CMUT arrays from finite-element method (FEM) simulations. A few static and harmonic FEM analyses of a single CMUT membrane cell are sufficient to derive the mechanical and electrical parameters of an equivalent piston as the moving part of the cell area. For an array of parallel-driven cells, the acoustic parameters are derived as a complex mechanical fluid impedance, depending on the membrane shape form. As a main advantage, the nonlinear behavior of the CMUT can be investigated much more easily and quickly than with FEM simulations, e.g., for a design of the maximum applicable voltage depending on the input signal. The 1-D parameter model allows an easy description of the CMUT behavior in air and fluids and simplifies the investigation of wave propagation within the connecting fluid represented by FEM or transmission line matrix (TLM) models.

  8. An equivalent layer magnetization model for the United States derived from MAGSAT data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Galliher, S. C. (Principal Investigator)

    1982-01-01

    Long wavelength anomalies in the total magnetic field measured by MAGSAT over the United States and adjacent areas are inverted to an equivalent layer crustal magnetization distribution. The model is based on an equal area dipole grid at the Earth's surface. Model resolution having physical significance is about 220 km for MAGSAT data in the elevation range 300-500 km. The magnetization contours correlate well with large-scale tectonic provinces.

  9. Final report of CCQM-K136 measurement of porosity properties (specific adsorption, BET specific surface area, specific pore volume and pore diameter) of nanoporous Al2O3

    NASA Astrophysics Data System (ADS)

    Sobina, E.; Zimathis, A.; Prinz, C.; Emmerling, F.; Unger, W.; de Santis Neves, R.; Galhardo, C. E.; De Robertis, E.; Wang, H.; Mizuno, K.; Kurokawa, A.

    2016-01-01

    CCQM key comparison K-136, Measurement of porosity properties (specific adsorption, BET specific surface area, specific pore volume and pore diameter) of nanoporous Al2O3, has been performed by the Surface Analysis Working Group (SAWG) of the Consultative Committee for Amount of Substance (CCQM). The objective of this key comparison is to compare the equivalency of the National Metrology Institutes (NMIs) and Designated Institutes (DIs) for the measurement of specific adsorption, BET specific surface area, specific pore volume and pore diameter of nanoporous substances (sorbents, catalytic agents, cross-linkers, zeolites, etc.) used in advanced technology. In this key comparison, a commercial sorbent (aluminum oxide) was supplied as the sample. Five NMIs participated. All participants used a gas adsorption method, here nitrogen adsorption at 77.3 K, for analysis according to the international standards ISO 15901-2 and ISO 9277. The degrees of equivalence and their uncertainties for specific adsorption, BET specific surface area, specific pore volume and pore diameter were established. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  10. Skylight energy balance analysis procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietz, P.S.; Murdoch, J.B.; Pokoski, J.L.

    1981-10-01

    This paper provides a systematic method for calculating the total, net differential energy balance observed when sections of the roof of a building are replaced with skylights. Among the topics discussed are the effect of solar gains, dome and curb conduction heat transfers, equivalent roof area heat transfers, infiltration heat transfers, artificial lighting energy requirements, and illumination savings from skylights. The paper also provides much of the supplementary information needed to complete these energy calculations. This information appears in the form of appendices, tables, and graphs. 9 refs.

  11. 7 CFR 1001.54 - Equivalent price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE MILK IN THE NORTHEAST MARKETING AREA Order Regulating...

  12. [A Method to Measure the Velocity of Fragments of Large Equivalence Explosion Field Based on Explosion Flame Spectral Analysis].

    PubMed

    Liu, Ji; Yu, Li-xia; Zhang, Bin; Zhao Dong-e; Liij, Xiao-yan; Wang, Heng-fei

    2016-03-01

    The deflagration fireball, which lasts a long time and covers a large area during a large-equivalent explosion, makes it difficult to obtain velocity parameters of fragments in the near field. To solve this problem, this paper proposes an integrated photoelectric transceiver method that uses a laser screen as the sensing area. Analysis of the flame radiation spectra of three different types of warhead explosions shows that the intensity within the 0.3 to 1.0 μm band is relatively low. On this basis, the optical system applies the principle of determining velocity from the time taken to cross a fixed distance, together with reflector technology, and consists of a single longitudinal mode laser, cylindrical Fresnel lenses, narrow-band filters, high-speed optical sensors, and other components. The system has advantages such as an integrated transceiver and a compact structure, and the combination of the narrow-band filter with the single longitudinal mode laser effectively suppresses interference from the flame spectrum and background light. Large numbers of experiments with different warhead models and equivalents have been conducted to measure the velocity of different kinds of fragments, and waveform signals with high signal-to-noise ratio were obtained after signal de-noising and recognition using an NI data acquisition and recording system. The experimental results show that this method can accurately measure the velocity of fragments near the center of the explosion. Specifically, the minimum fragment size that can be measured is 4 mm, velocities of up to 1200 m x s(-1) can be obtained, and the capture rate is better than 95% compared with target-plate test results. In addition, the system uses cylindrical Fresnel lenses to form a rectangular light screen, which makes the light distribution uniform in the vertical direction, and the light intensity uniformity in the horizontal direction is better than 80%. Consequently, the system can preliminarily resolve the correspondence between the velocity and the sizes of prefabricated fragments.

  13. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (or domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode 1, pure mode 2, or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode 1 and two mixed-mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. When applied to a problem of an interface crack between two different materials, the EDI method showed that the mode 1 and mode 2 components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to the virtual crack closure method for bimaterial problems.

  14. Modified methods for growing 3-D skin equivalents: an update.

    PubMed

    Lamb, Rebecca; Ambler, Carrie A

    2014-01-01

    Artificial epidermis can be reconstituted in vitro by seeding primary epidermal cells (keratinocytes) onto a supportive substrate and then growing the developing skin equivalent at the air-liquid interface. In vitro skin models are widely used to study skin biology and for industrial drug and cosmetic testing. Here, we describe updated methods for growing 3-dimensional skin equivalents using de-vitalized, de-epidermalized dermis (DED) substrates including methods for DED substrate preparation, cell seeding, growth conditions, and fixation procedures.

  15. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  16. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  17. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  18. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  19. Evaluation of the pharmacokinetic equivalence and 54-week efficacy and safety of CT-P13 and innovator infliximab in Japanese patients with rheumatoid arthritis

    PubMed Central

    Takeuchi, Tsutomu; Yamanaka, Hisashi; Tanaka, Yoshiya; Sakurai, Takeo; Saito, Kazuyoshi; Ohtsubo, Hideo; Lee, Sang Joon; Nambu, Yoshihiro

    2015-01-01

    Objectives. To demonstrate the pharmacokinetic equivalence of CT-P13 and its innovator infliximab (IFX) in Japanese patients with rheumatoid arthritis (RA), and to compare the efficacy and safety of these drugs, administered for 54 weeks. Methods. In a randomized, double-blind, parallel-group, multicenter study, 3 mg/kg of CT-P13 or IFX, in combination with methotrexate (MTX) (6–16 mg/week), was administered for 54 weeks to Japanese active RA patients with an inadequate response to MTX, to demonstrate the pharmacokinetic equivalence, based on the area under the curve (AUCτ) (weeks 6–14) and Cmax (week 6) of these drugs, and to compare their efficacy and safety. Results. The CT-P13-to-IFX ratios (90% confidence intervals) of the geometric mean AUCτ and Cmax values in patients negative for antibodies to infliximab at week 14 were 111.62% (100.24–124.29%) and 104.09% (92.12–117.61%), respectively, demonstrating the pharmacokinetic equivalence of these drugs. In the full analysis set, CT-P13 and IFX showed comparable therapeutic effectiveness, as measured by the American College of Rheumatology, Disease Activity Score in 28 joints, the European League Against Rheumatism, and other efficacy criteria, at weeks 14 and 30. The incidence of adverse events was similar for these drugs. Conclusion. CT-P13 and IFX, administered at a dose of 3 mg/kg in combination with MTX to active RA patients, were pharmacokinetically equivalent and comparable in efficacy and safety. PMID:25736355
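
    The pharmacokinetic comparison reported here rests on the usual log-scale analysis of AUC and Cmax; the sketch below shows how a geometric mean ratio and its 90% confidence interval can be computed for a simple parallel-group design, with invented AUC values and none of the study's actual modeling or covariate adjustments.

    ```python
    import numpy as np
    from scipy import stats

    def geometric_mean_ratio_ci(test, reference, level=0.90):
        """Geometric mean ratio (test/reference) with a two-sided CI, computed on
        log-transformed data for two independent (parallel) groups."""
        lt = np.log(np.asarray(test, float))
        lr = np.log(np.asarray(reference, float))
        nt, nr = len(lt), len(lr)
        diff = lt.mean() - lr.mean()
        sp2 = ((nt - 1) * lt.var(ddof=1) + (nr - 1) * lr.var(ddof=1)) / (nt + nr - 2)
        se = np.sqrt(sp2 * (1.0 / nt + 1.0 / nr))
        tcrit = stats.t.ppf(0.5 + level / 2.0, nt + nr - 2)
        return np.exp(diff), np.exp(diff - tcrit * se), np.exp(diff + tcrit * se)

    rng = np.random.default_rng(3)
    auc_test = rng.lognormal(mean=4.00, sigma=0.3, size=50)   # invented AUC values
    auc_ref = rng.lognormal(mean=3.95, sigma=0.3, size=50)
    gmr, lo, hi = geometric_mean_ratio_ci(auc_test, auc_ref)
    print(f"GMR {gmr:.3f}, 90% CI ({lo:.3f}, {hi:.3f})")   # often judged against 80-125%
    ```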

  20. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  1. Towards an Automated Development Methodology for Dependable Systems with Application to Sensor Networks

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  2. Optical Distortion Evaluation in Large Area Windows using Interferometry

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Skow, Miles; Nurge, Mark A.

    2015-01-01

    It is important that imagery seen through large area windows, such as those used on space vehicles, not be substantially distorted. Many approaches are described in the literature for measuring the distortion of an optical window, but most suffer from either poor resolution or processing difficulties. In this paper a new definition of distortion is presented, allowing accurate measurement using an optical interferometer. This new definition is shown to be equivalent to the definitions provided by the military and the standards organizations. In order to determine the advantages and disadvantages of this new approach the distortion of an acrylic window is measured using three different methods; image comparison, Moiré interferometry, and phase-shifting interferometry.

  3. Experimental microbubble generation by sudden pressure drop and fluidics

    NASA Astrophysics Data System (ADS)

    Franco Gutierrez, Fernando; Figueroa Espinoza, Bernardo; Aguilar Corona, Alicia; Vargas Correa, Jesus; Solorio Diaz, Gildardo

    2014-11-01

    Mass and heat transfer, as well as chemical species transport, in bubbly flows are important in environmental and industrial applications. Microbubbles are well suited to these applications due to the large interface contact area and residence time. The objective of this investigation is to build devices to produce microbubbles using two methods: pressure differences and fluidics. Some characteristics, advantages and drawbacks of both methods are briefly discussed, as well as the characterization of the bubbly suspensions in terms of parameters such as the pressure jump and bubble equivalent diameter distribution. The authors acknowledge the support of Consejo Nacional de Ciencia y Tecnología.

  4. Silk industry and carbon footprint mitigation

    NASA Astrophysics Data System (ADS)

    Giacomin, A. M.; Garcia, J. B., Jr.; Zonatti, W. F.; Silva-Santos, M. C.; Laktim, M. C.; Baruque-Ramos, J.

    2017-10-01

    Currently there is a concern with issues related to sustainability and more conscious consumption habits. The carbon footprint measures the total amount of greenhouse gas (GHG) emissions produced directly and indirectly by human activities and is usually expressed in tonnes of carbon dioxide (CO2) equivalents. The present study takes into account data collected in the scientific literature regarding the carbon footprint, garments produced with silk fiber, and the role of mulberry as a CO2 mitigation tool. The data indicate a positive correlation between silk garments and carbon footprint mitigation when the cultivation of mulberry trees is included in the calculation: a mulberry field mitigates CO2 equivalents amounting to roughly 735 times the weight of the silk fiber produced from its cultivated area. At the same time, additional research is needed to identify and evaluate ways of communicating this positive correlation, in order to contribute to a more sustainable fashion industry.

  5. Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent in Afghanistan's Hindu Kush

    NASA Astrophysics Data System (ADS)

    Dozier, J.; Bair, N.; Calfa, A. A.; Skalka, C.; Tolle, K.; Bongard, J.

    2015-12-01

    The task is to produce spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements such as the Hindu Kush range in Afghanistan. During the snow season, we can use two measurements: (1) passive microwave estimates of SWE, which generally underestimate SWE in the mountains; (2) fractional snow-covered area from MODIS. Once the snow has melted, we can reconstruct the accumulated SWE back to the last significant snowfall by calculating the energy used in melt. The reconstructed SWE values provide a training set for predictions from the passive microwave SWE and snow-covered area. We examine several machine learning methods—regression-boosted decision trees, bagged trees, neural networks, and genetic programming—to estimate SWE. All methods work reasonably well, with R2 values greater than 0.8. Predictors built with multiple years of data reduce the bias that usually appears if we predict one year from just one other year's training set. Genetic programming tends to produce results that additionally provide physical insight. Adding precipitation estimates from the Global Precipitation Measurement mission is in progress.
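
    The reconstruction-as-training-set workflow described above lends itself to a compact sketch. The snippet below is a minimal illustration, not the authors' code: it fits a regression-boosted decision tree (one of the machine learning methods named in the abstract) to synthetic stand-ins for the passive microwave SWE, MODIS fractional snow-covered area, and elevation predictors, with "reconstructed SWE" as the training target. All arrays, feature choices, and values are hypothetical.

    ```python
    # Minimal sketch of the regression-boosted-tree idea; all data are synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical predictors: passive-microwave SWE (mm), MODIS fractional
    # snow-covered area (0-1), and elevation (m).
    pm_swe = rng.gamma(2.0, 40.0, n)
    fsca = rng.uniform(0.0, 1.0, n)
    elev = rng.uniform(1000.0, 5000.0, n)
    X = np.column_stack([pm_swe, fsca, elev])

    # Synthetic "reconstructed SWE" target, standing in for the melt-season
    # energy-balance reconstruction used as training data in the abstract.
    y = 1.8 * pm_swe * fsca + 0.02 * elev + rng.normal(0.0, 20.0, n)

    model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    model.fit(X[:4000], y[:4000])
    print("holdout R^2:", round(r2_score(y[4000:], model.predict(X[4000:])), 3))
    ```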

  6. Investigations on Torsion of the Two-Chords Single Laced Members

    NASA Astrophysics Data System (ADS)

    Lorkowski, Paweł; Gosowski, Bronisław

    2017-06-01

    The paper presents experimental and numerical studies to determine the equivalent second moment of area of the uniform torsion of two-chord steel single laced members. The members are used as poles of railway traction network gates, as well as steel columns of framed buildings. The stiffness in uniform torsion of this kind of column makes it possible to determine the critical loads of spatial stability. The experimental studies were carried out on single-span members with rotation arrested at their ends, loaded by a torque applied at the mid-span. The relationship between the angle of rotation of the considered cross-section and the torque has been determined. An appropriate numerical model was created in the ABAQUS program, based on the finite element method. Very good agreement was observed between the experimental and numerical results. The equivalent second moment of area of the uniform torsion for the analysed members has been determined by comparing the experimental and analytical results to those obtained from the differential equation of non-uniform torsion, based on Vlasov's theory. Additionally, parametric analyses of similar members subjected to uniform torsion, for a wider range of cross-sections, have been carried out by means of the SOFiSTiK program. The purpose of the latter was to determine parametric formulas for calculating the equivalent second moment of area of uniform torsion.

  7. Application of the modified chi-square ratio statistic in a stepwise procedure for cascade impactor equivalence testing.

    PubMed

    Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther

    2015-03-01

    Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best matching with those of the PQRI WG as decisions of both methods agreed in 75% of the 55 CI profile scenarios.

  8. 40 CFR 53.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...

  9. Coupling intensity between discharge and magnetic circuit in Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Yang, Xinyong; Ding, Yongjie; Yu, Daren; Zhang, Chaohai

    2017-03-01

    Coupling oscillation is a newly discovered plasma oscillation mode that utilizes the coupling between the discharge circuit and magnetic circuit, whose oscillation frequency spectrum ranges from several kilohertz to megahertz. The coupling coefficient parameter represents the intensity of coupling between the discharge and magnetic circuits. According to previous studies, the coupling coefficient is related to the material and the cross-sectional area of the magnetic coils, and the magnetic circuit of the Hall thruster. However, in our recent study on coupling oscillations, it was found that the Hall current equivalent position and radius have important effects on the coupling intensity between the discharge and magnetic circuits. This causes a difference in the coupling coefficient for different operating conditions of Hall thrusters. Through non-intrusive methods for measuring the Hall current equivalent radius and the axial position, it is found that with an increase in the discharge voltage and magnetic field intensity, the Hall current equivalent radius increases and its axial position moves towards the exit plane. Thus, both the coupling coefficient and the coupling intensity between the discharge and magnetic circuits increase. Contribution to the Topical Issue "Physics of Ion Beam Sources", edited by Holger Kersten and Horst Neumann.

  10. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... Hydrogen Peroxide Filter Extraction'' In this method, total suspended particulate matter (TSP) is collected on glass fiber filters according to 40 CFR Appendix G to part 50, EPA Reference Method for the Determination of Lead in Suspended Particulate Matter Collected From Ambient Air. The filter samples are...

  11. Development of a soft-soldering system for aluminum

    NASA Astrophysics Data System (ADS)

    Falke, W. L.; Lee, A. Y.; Neumeier, L. A.

    1983-03-01

    The method employs application of a thin nickel-copper alloy coating to the substrate, which enables tin-lead solders to wet readily and spread over the areas to be joined. The aluminum substrate is mechanically or chemically cleaned to facilitate bonding to a minute layer of zinc that is subsequently applied with an electroless zincate solution. The nickel-copper alloy (30 to 70 pct Ni) coating is then applied electrolytically over the zinc, using immersion cell or brush coating techniques. Development of acetate electrolytes has permitted deposition of the proper alloy coatings. The coated areas can then be readily joined with conventional tin-lead solders and fluxes. The joints so formed are ductile, strong, and relatively corrosion resistant, and exhibit strengths equivalent to those formed on copper and brass when the same solders and fluxes are used. The method has also been employed to soft solder magnesium alloys.

  12. Use of equivalent spheres to model the relation between radar reflectivity and optical extinction of ice cloud particles.

    PubMed

    Donovan, David Patrick; Quante, Markus; Schlimme, Ingo; Macke, Andreas

    2004-09-01

    The effect of ice crystal size and shape on the relation between radar reflectivity and optical extinction is examined. Discrete-dipole approximation calculations of 95-GHz radar reflectivity and ray-tracing calculations are applied to ice crystals of various habits and sizes. Ray tracing was used primarily to calculate optical extinction and to provide approximate information on the lidar backscatter cross section. The results of the combined calculations are compared with Mie calculations applied to collections of different types of equivalent spheres. Various equivalent sphere formulations are considered, including equivalent radar-lidar spheres, equivalent maximum dimension spheres, equivalent area spheres, equivalent volume spheres, and equivalent effective radius spheres. Marked differences are found with respect to the accuracy of the different formulations, and certain types of equivalent spheres can be used for useful prediction of both the radar reflectivity at 95 GHz and the optical extinction (but not the lidar backscatter cross section) over a wide range of particle sizes. The implications of these results for combined lidar-radar ice cloud remote sensing are discussed.

  13. Identifying a maximum tolerated contour in two-dimensional dose-finding

    PubMed Central

    Wages, Nolan A.

    2016-01-01

    The majority of Phase I methods for multi-agent trials have focused on identifying a single maximum tolerated dose combination (MTDC) among those being investigated. Some published methods in the area have been based on the notion that there is no unique MTDC, and that the set of dose combinations with acceptable toxicity forms an equivalence contour in two dimensions. Therefore, it may be of interest to find multiple MTDCs for further testing for efficacy in a Phase II setting. In this paper, we present a new dose-finding method that extends the continual reassessment method to account for the location of multiple MTDCs. Operating characteristics are demonstrated through simulation studies, and are compared to existing methodology. Some brief discussion of implementation and available software is also provided. PMID:26910586

  14. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  15. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM 2.5 or PM 10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  16. Superwetting and aptamer functionalized shrink-induced high surface area electrochemical sensors.

    PubMed

    Hauke, A; Kumar, L S Selva; Kim, M Y; Pegan, J; Khine, M; Li, H; Plaxco, K W; Heikenfeld, J

    2017-08-15

    Electrochemical sensing is moving to the forefront of point-of-care and wearable molecular sensing technologies due to the ability to miniaturize the required equipment, a critical advantage over optical methods in this field. Electrochemical sensors that employ roughness to increase their microscopic surface area offer a strategy for combating the loss in signal associated with the loss of macroscopic surface area upon miniaturization. A simple, low-cost method of creating such roughness has emerged with the development of shrink-induced high surface area electrodes. Building on this approach, we demonstrate here a greater than 12-fold enhancement in electrochemically active surface area over conventional electrodes of equivalent on-chip footprint areas. This two-fold improvement on previous performance is obtained via the creation of a superwetting surface condition facilitated by a dissolvable polymer coating. As a test bed to illustrate the utility of this approach, we further show that electrochemical aptamer-based sensors exhibit exceptional signal strength (signal-to-noise) and excellent signal gain (relative change in signal upon target binding) when deployed on these shrink electrodes. Indeed, the 330% gain observed for a kanamycin sensor is 2-fold greater than that seen on planar gold electrodes. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.

  18. [Pollution characteristics of PCBs in electronic waste dismantling areas of Zhejiang province].

    PubMed

    Wang, Xiaofeng; Lou, Xiaoming; Han, Guangen; Shen, Haitao; Ding, Gangqiang

    2011-09-01

    To study the pollution level and distribution pattern of polychlorinated biphenyls (PCBs) in the environmental media of electronic waste dismantling areas of Zhejiang province. Water, soil and PM10 were sampled in electronic waste dismantling areas. The contents, distribution characteristics and toxic equivalents (TEQs) of PCBs in the local environment were evaluated by ultra-trace detection methods. The PCB contents of water, soil and PM10 in Luqiao and Zhenhai, the more heavily polluted areas, were higher than those in Longyou, the control area. The dominant PCBs detected in the environment in Luqiao were hexa-CBs (PCB138 and PCB153), while penta-CBs were dominant in Zhenhai and Longyou. TEQs in the electronic waste recycling areas were higher than those in the control area. The TEQs of PCBs in water and soil were highest in Zhenhai, while the TEQs of PM10 were highest in Luqiao. The local environment has been polluted by PCBs emitted from electronic waste recycling. PCB pollution monitoring in electronic waste recycling areas should be strengthened to prevent PCB-induced health effects.

  19. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found to be equivalent in accuracy and repeatability (t-test, F-test). The method is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.
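
    As a minimal illustration of the calibration step described above (a straight line relating uronic acid content to the summed peak areas at 1745, 1715, and 1600 cm(-1)), the sketch below fits and applies such a line. The calibration points are hypothetical placeholders, not the published data.

    ```python
    # Hedged sketch of a linear DRIFTS calibration; the numbers are invented.
    import numpy as np

    peak_area_sum = np.array([1.2, 2.5, 3.9, 5.1, 6.6])       # summed peak areas (a.u.)
    uronic_acid_pct = np.array([4.0, 8.2, 12.5, 16.1, 21.0])  # % as polygalacturonic acid

    slope, intercept = np.polyfit(peak_area_sum, uronic_acid_pct, 1)
    r = np.corrcoef(peak_area_sum, uronic_acid_pct)[0, 1]

    def predict_uronic_acid(area_sum):
        """Uronic acid content (%) predicted from a sample's summed peak areas."""
        return slope * area_sum + intercept

    print(f"r = {r:.3f}; predicted content for area sum 4.5: {predict_uronic_acid(4.5):.1f}%")
    ```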

  20. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  1. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2016-12-23

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.
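
    A toy sketch of the surplus-maximizing dispatch idea in the two records above: with inelastic demand, maximizing global surplus reduces to minimizing production cost subject to generation and intertie limits. The two-area setup, all numbers, and the use of scipy.optimize.linprog are illustrative assumptions, not the study's interconnection model.

    ```python
    # Two-area least-cost dispatch with a single intertie (hypothetical data).
    from scipy.optimize import linprog

    cost = [20.0, 35.0, 0.0]            # $/MWh for gen A, gen B; the transfer itself is free
    demand_a, demand_b = 900.0, 600.0   # MW
    cap_a, cap_b = 1200.0, 800.0        # MW generation capacity
    tie_limit = 300.0                   # MW intertie rating

    # Variables x = [g_a, g_b, t], where t is the flow from area A to area B.
    A_eq = [[1.0, 0.0, -1.0],           # g_a - t = demand_a  (area A balance)
            [0.0, 1.0, 1.0]]            # g_b + t = demand_b  (area B balance)
    b_eq = [demand_a, demand_b]
    bounds = [(0, cap_a), (0, cap_b), (-tie_limit, tie_limit)]

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    g_a, g_b, t = res.x
    print(f"gen A {g_a:.0f} MW, gen B {g_b:.0f} MW, A->B flow {t:.0f} MW, cost {res.fun:.0f} $/h")
    ```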

  2. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  3. A new method for CT dose estimation by determining patient water equivalent diameter from localizer radiographs: Geometric transformation and calibration methods using readily available phantoms.

    PubMed

    Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R

    2018-05-10

    Water equivalent diameter (Dw) reflects a patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity to calibrate localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from the water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Acquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R2 values all greater than 0.998) under all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing LAT and PA directions with the same image filter and for the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated high accuracy of the calibration: the percentage difference between Dw from axial images and from localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for the Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
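
    A minimal sketch of the calibration and Dw calculation described above, using hypothetical phantom values. The relation Dw = 2*sqrt(Aw/pi) is the standard definition of water equivalent diameter in terms of water equivalent area; the linear Aw-versus-LPV fit mirrors the calibration reported in the abstract.

    ```python
    # Hedged sketch: calibrate Aw against localizer pixel values, then compute Dw.
    import numpy as np

    # Calibration phantoms: average localizer pixel value of a line vs. the water
    # equivalent area measured on the matching axial image (hypothetical numbers).
    lpv = np.array([120.0, 260.0, 410.0, 575.0])   # average localizer pixel value
    aw_mm2 = np.array([20e3, 45e3, 72e3, 101e3])   # water equivalent area, mm^2

    slope, intercept = np.polyfit(lpv, aw_mm2, 1)  # linear model Aw = slope*LPV + intercept

    def water_equivalent_diameter(lpv_line):
        """Estimate Dw (mm) for one localizer line from its average pixel value."""
        aw = slope * lpv_line + intercept
        return 2.0 * np.sqrt(aw / np.pi)

    print(f"Dw for LPV = 300: {water_equivalent_diameter(300.0):.1f} mm")
    ```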

  4. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be made equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are presented to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  5. 76 FR 19769 - Agency Information Collection Activities; Proposed Collection; Comment Request; Application for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-08

    ... Activities; Proposed Collection; Comment Request; Application for Reference and Equivalent Method... ID No. EPA-HQ- ORD-2005-0530, by one of the following methods: http://www.regulations.gov : Follow... instruments, or any other applicant for a reference or an equivalent method determination. Title: Application...

  6. 40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...

  7. 40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...

  8. 40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...

  9. 40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...

  10. 40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...

  11. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  12. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  13. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  14. Differentiation between primary and secondary Raynaud's phenomenon: a prospective study comparing nailfold capillaroscopy using an ophthalmoscope or stereomicroscope

    PubMed Central

    Anders, H; Sigl, T; Schattenkirchner, M

    2001-01-01

    BACKGROUND—Nailfold capillary microscopy is a routine procedure in the investigation of patients with Raynaud's phenomenon (RP). As a standard method, nailfold capillary morphology is inspected with a stereomicroscope to look for capillary abnormalities such as giant loops, avascular areas, and bushy capillaries, which have all been found to be associated with certain connective tissue diseases.
    AIM—To investigate prospectively whether nailfold capillary inspection using an ophthalmoscope is of equivalent diagnostic value to standard nailfold capillary microscopy.
    METHOD—All the fingers of 26 patients with RP were examined in a blinded fashion and compared with the final diagnosis one month later.
    RESULTS—All giant loops, large avascular areas, and bushy capillaries were identified by both methods. The correlation for moderate avascular areas and crossed capillaries was 0.93 and 0.955 respectively. The correlation for minor abnormalities that do not contribute to the differentiation between primary and secondary RP was 0.837 and 0.861 respectively. All patients were classified identically by the two methods.
    CONCLUSION—For the evaluation of patients with RP, nailfold capillary morphology can reliably be assessed with an ophthalmoscope. PMID:11247874

  15. The 2.5-dimensional equivalent sources method for directly exposed and shielded urban canyons.

    PubMed

    Hornikx, Maarten; Forssén, Jens

    2007-11-01

    When a domain in outdoor acoustics is invariant in one direction, an inverse Fourier transform can be used to transform solutions of the two-dimensional Helmholtz equation to a solution of the three-dimensional Helmholtz equation for arbitrary source and observer positions, thereby reducing the computational costs. This previously published approach [D. Duhamel, J. Sound Vib. 197, 547-571 (1996)] is called a 2.5-dimensional method and has here been extended to the urban geometry of parallel canyons, thereby using the equivalent sources method to generate the two-dimensional solutions. No atmospheric effects are considered. To keep the error arising from the transform small, two-dimensional solutions with a very fine frequency resolution are necessary due to the multiple reflections in the canyons. Using the transform, the solution for an incoherent line source can be obtained much more efficiently than by using the three-dimensional solution. It is shown that the use of a coherent line source for shielded urban canyon observer positions leads mostly to an overprediction of levels and can yield erroneous results for noise abatement schemes. Moreover, the importance of multiple facade reflections in shielded urban areas is emphasized by vehicle pass-by calculations, where cases with absorptive and diffusive surfaces have been modeled.

  16. The importance of being equivalent: Newton's two models of one-body motion

    NASA Astrophysics Data System (ADS)

    Pourciau, Bruce

    2004-05-01

    As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.

  17. Schooling and Bilingualization in a Highland Guatemalan Community.

    ERIC Educational Resources Information Center

    Richards, Julia Becker

    To examine the process of language shift (bilingualization) in an area where there is a local dialect equivalent to a "language of solidarity" and a national language equivalent to a "language of power," language interactions in the impoverished village of San Marcos in the highlands of Guatemala were examined. Although Spanish…

  18. SUPPORT FOR REFERENCE AND EQUIVALENCY PROGRAM

    EPA Science Inventory

    Federal Reference Methods (FRMs) and Federal Equivalent Methods (FEMs) form the backbone of the EPA's national monitoring strategy. They are the measurement methodologies that define attainment of a National Ambient Air Quality Standard (NAAQS). As knowledge and technology adva...

  19. The equivalent magnetizing method applied to the design of gradient coils for MRI.

    PubMed

    Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart

    2008-01-01

    This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its equivalent representation in surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. The method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns obtained using the magnetizing current method produce superior gradient coil performance compared with coils designed using conventional methods.

  20. Antipsychotic dose equivalents and dose-years: a standardized method for comparing exposure to different drugs.

    PubMed

    Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon

    2010-02-01

    A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various first- and second-generation antipsychotic medications. These charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear with R2 values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) x (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
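
    The dose-year bookkeeping above reduces to a small calculation, sketched below. Only the anchor values (100 mg/day of chlorpromazine equivalent, or 2 mg/day of haloperidol equivalent, for one year equals one dose-year) come from the abstract; the per-drug conversion table is an illustrative placeholder rather than the paper's regression-derived equivalents.

    ```python
    # Hedged sketch of converting a dosing history to cumulative dose-years.
    CPZ_EQUIVALENT_PER_MG = {      # hypothetical conversion to chlorpromazine mg per mg of drug
        "chlorpromazine": 1.0,
        "haloperidol": 50.0,       # 2 mg haloperidol ~ 100 mg chlorpromazine (abstract's anchor)
    }

    def dose_years(drug, dose_mg_per_day, years_on_dose):
        """Exposure in dose-years for one drug/dose epoch."""
        cpz_equiv_mg_per_day = dose_mg_per_day * CPZ_EQUIVALENT_PER_MG[drug]
        return (cpz_equiv_mg_per_day / 100.0) * years_on_dose

    # Lifetime exposure is the sum over all dosing epochs.
    history = [("haloperidol", 4.0, 1.5), ("chlorpromazine", 200.0, 2.0)]
    total = sum(dose_years(drug, mg, yrs) for drug, mg, yrs in history)
    print(f"cumulative exposure: {total:.1f} dose-years")   # 3.0 + 4.0 = 7.0
    ```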

  1. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variations of the magnetic fields in the stator and the rotor core. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis, taking into account the harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor to calculate its characteristics based on the equivalent circuit obtained by the proposed method. The calculated results are compared with experimental results to verify the accuracy.

  2. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  3. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  4. Footprint Contact Area and Interface Pressure Comparison Between the Knotless and Knot-Tying Transosseous-Equivalent Technique for Rotator Cuff Repair.

    PubMed

    Kim, Sung-Jae; Kim, Sung-Hwan; Moon, Hyun-Soo; Chun, Yong-Min

    2016-01-01

    To quantify and compare the footprint contact area and interface pressure on the greater tuberosity between knotless and knot-tying transosseous-equivalent (TOE) repair using pressure-sensitive film. We used 11 pairs of fresh-frozen cadaveric shoulders (22 specimens), in which rotator cuff tears were created before repair. Each pair was randomized to either conventional medial knot-tying TOE repair (group A) or medial knotless TOE repair using the modified Mason-Allen technique (group B). Pressure-sensitive film was used to quantify the pressurized contact area and interface pressure between the greater tuberosity and supraspinatus tendon. The mean pressurized contact area was 33.2 ± 2.5 mm(2) for group A and 28.4 ± 2.4 mm(2) for group B. There was a significant difference between groups (P = .005). Although the overall contact configuration of both groups was similar and showed an M shape, group A showed a greater pressurized configuration around the medial row. The mean interface pressure was 0.20 ± 0.02 MPa for group A and 0.17 ± 0.02 MPa for group B. There was a significant difference between groups (P = .001). Contrary to our hypothesis, in this time-zero study, medial knotless TOE repair using a modified Mason-Allen suture produced a significantly inferior footprint contact area and interface pressure compared with conventional medial knot-tying TOE repair. Even though we found a statistically significant difference between the 2 repair methods, it is still unknown if this statistical difference seen in our study has any clinical and radiologic significance. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  5. Advancing Research on Racial–Ethnic Health Disparities: Improving Measurement Equivalence in Studies with Diverse Samples

    PubMed Central

    Landrine, Hope; Corral, Irma

    2014-01-01

    To conduct meaningful, epidemiologic research on racial–ethnic health disparities, racial–ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial–ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial–ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race–ethnicity. Those correlates render the samples, methods, and measures incomparable across racial–ethnic groups, and diminish the ability to attribute health differences discovered to race–ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial–ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. PMID:25566524

  6. Global equivalent magnetization of the oceanic lithosphere

    NASA Astrophysics Data System (ADS)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.

    2015-11-01

    As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on the global forward modeling of seafloor spreading magnetic anomalies and their comparison with the available marine magnetic data to derive the first map of the equivalent magnetization over the World's oceans. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies that affect seafloor spreading and the resulting lithosphere. As for age, the equivalent magnetization decreases significantly during the first 10-15 Myr after crustal formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, then increases regularly between 20 and 70 Ma, reflecting variations in the field strength or source effects such as the acquisition of a secondary magnetization. As for spreading rate, the equivalent magnetization is twice as strong in areas formed at fast rates as in those formed at slow rates, with a threshold at ∼40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows the building of a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by lower magnetization, and several hotspots, marked by high magnetization.

  7. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
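
    A minimal sketch of fitting the preferred lognormal reference model to a set of area-equivalent diameters and summarizing its spread. The diameter sample below is synthetic, centered near the 27.6 nm value assigned to RM8012, and is not laboratory data.

    ```python
    # Hedged sketch: lognormal fit to synthetic area-equivalent diameters.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    diameters_nm = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)

    # Fix the location at zero so the fit has the usual two lognormal parameters.
    shape, loc, scale = stats.lognorm.fit(diameters_nm, floc=0)
    geometric_mean = scale            # exp(mu)
    geometric_sd = np.exp(shape)      # exp(sigma)
    cv = np.std(diameters_nm) / np.mean(diameters_nm)   # coefficient of variation

    print(f"geometric mean {geometric_mean:.1f} nm, geometric SD {geometric_sd:.3f}, CV {cv:.3f}")
    ```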

  8. A formal and data-based comparison of measures of motor-equivalent covariation.

    PubMed

    Verrel, Julius

    2011-09-15

    Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
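
    For readers unfamiliar with the decomposition the UCM method performs, the sketch below splits joint-angle deviations of a toy planar arm into components lying in the null space of the task Jacobian (goal-equivalent) and in its orthogonal complement. The arm geometry, noise model, and per-degree-of-freedom normalization are illustrative assumptions; the snippet does not implement the paper's specific covariation measures.

    ```python
    # Hedged sketch of a UCM variance decomposition for a 3-joint planar arm.
    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(3)
    L = np.array([0.30, 0.30, 0.25])        # segment lengths, m (hypothetical)
    theta0 = np.array([0.4, 0.8, 0.3])      # mean joint configuration, rad

    def jacobian(theta):
        """Jacobian of the planar endpoint position with respect to joint angles."""
        c = np.cumsum(theta)
        J = np.zeros((2, 3))
        for i in range(3):
            J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
            J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
        return J

    J = jacobian(theta0)
    N = null_space(J)                        # basis of the UCM (here 3x1)

    trials = theta0 + rng.normal(0.0, 0.05, size=(200, 3))   # simulated joint configurations
    dev = trials - trials.mean(axis=0)

    par = dev @ N @ N.T                      # goal-equivalent (within-UCM) component
    ort = dev - par                          # non-goal-equivalent component
    v_ucm = np.sum(par**2) / (dev.shape[0] * N.shape[1])
    v_ort = np.sum(ort**2) / (dev.shape[0] * (3 - N.shape[1]))
    print(f"V_UCM = {v_ucm:.5f}, V_ORT = {v_ort:.5f}")
    ```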

  9. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on an analysis of scatter reduction governed by the inverse square law, for obtaining the equivalent field. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field and are used extensively in computational techniques for dose determination; these approaches, however, lead to complicated and time-consuming formulas, which motivated the current study. In this work, by considering the portion of scattered radiation in the absorbed dose at the point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple expression for the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may also be used for a shielded field or an off-axis point. In addition, the equivalent field of a rectangular field can be calculated to a good approximation from the concept of scatter reduction with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
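
    The record above derives its own scatter-based formula, which is not reproduced here. For context, a minimal sketch of the classical area-to-perimeter (Sterling) approximation that the standard tables are often compared against; the example field size is arbitrary.

      def equivalent_square_side(a_cm: float, b_cm: float) -> float:
          # Side of the equivalent square field: s = 4*A/P = 2ab/(a+b)
          return 2.0 * a_cm * b_cm / (a_cm + b_cm)

      print(equivalent_square_side(5.0, 20.0))   # ~8.0 cm for a 5 cm x 20 cm field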

  10. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  11. 76 FR 44583 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

    ... confidential business information may be necessary to make a reference or equivalent method determination. The... Equivalent Method Determination (Renewal) AGENCY: Environmental Protection Agency (EPA). [[Page 44584... http://www.regulations.gov (our preferred method), by e-mail to [email protected] , or by mail to: EPA...

  12. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facility sound pollution using a forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones based on the ray-tracing algorithm. Sound pressure levels within the clustered zones are computed by two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
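
    A hedged sketch of the interpolation step only, assuming SciPy's radial basis function interpolator with a thin-plate-spline kernel; the measurement coordinates and levels are synthetic stand-ins, not the authors' data.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Measured levels (dB) on the perimeter and inside one zone (x, y in metres)
      points = np.array([[0, 0], [50, 0], [50, 50], [0, 50], [25, 25]], dtype=float)
      levels_db = np.array([68.0, 71.5, 70.2, 66.8, 74.0])

      tps = RBFInterpolator(points, levels_db, kernel="thin_plate_spline")

      # Evaluate on a regular grid to build the noise map for this zone
      xx, yy = np.meshgrid(np.linspace(0, 50, 51), np.linspace(0, 50, 51))
      grid = np.column_stack([xx.ravel(), yy.ravel()])
      noise_map = tps(grid).reshape(xx.shape)
      print(noise_map[25, 25])   # interpolated level near the interior measurement point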

  13. Study on Standard Fatigue Vehicle Load Model

    NASA Astrophysics Data System (ADS)

    Huang, H. Y.; Zhang, J. P.; Li, Y. H.

    2018-02-01

    Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model applicable to industrial areas in the middle and late stages of development was obtained using the equivalent-damage principle, Miner's linear accumulation law, the water discharge method and damage ratio theory (see the sketch below). Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability and provides a useful reference for the fatigue design of bridges in China.
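
    A minimal sketch of the damage-equivalence idea behind such a standard fatigue vehicle: an axle-load spectrum is collapsed into one equivalent load via Miner's rule with an assumed S-N exponent. The load classes, counts and exponent are illustrative, not the measured Guangdong data.

      import numpy as np

      axle_loads_kn = np.array([60.0, 80.0, 100.0, 130.0])   # observed axle-load classes
      counts = np.array([5000, 3000, 1200, 300])              # observed frequencies
      m = 3.0                                                  # assumed S-N curve exponent

      # Equivalent constant-amplitude axle load producing the same Miner damage
      p_eq = (np.sum(counts * axle_loads_kn ** m) / np.sum(counts)) ** (1.0 / m)
      print(f"equivalent axle load ~ {p_eq:.1f} kN")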

  14. Suspension system vibration analysis with regard to variable type ability to smooth road irregularities

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.

    2018-03-01

    The paper aims to analyze vibrations of a dynamic system equivalent to the suspension system, with regard to the tyre's ability to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral and numerical analysis. Introducing new data on the smoothing effect of the pneumatic tyre, which reflect changes of the contact area between the wheel and the road under suspension vibrations, makes the system non-linear and requires numerical analysis methods. Taking the variable smoothing ability of the tyre into account when calculating suspension vibrations brings calculated and experimental results closer together and improves on the assumption of a constant smoothing ability of the tyre.

  15. A Comparison of Three Methods for Measuring Distortion in Optical Windows

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Nurge, Mark A.; Skow, Miles

    2015-01-01

    It's important that imagery seen through large-area windows, such as those used on space vehicles, not be substantially distorted. Many approaches are described in the literature for measuring the distortion of an optical window, but most suffer from either poor resolution or processing difficulties. In this paper a new definition of distortion is presented, allowing accurate measurement using an optical interferometer. This new definition is shown to be equivalent to the definitions provided by the military and the standards organizations. In order to determine the advantages and disadvantages of this new approach, the distortion of an acrylic window is measured using three different methods: image comparison, moiré interferometry, and phase-shifting interferometry.

  16. Equivalent modulus method for finite element simulation of the sound absorption of anechoic coating backed with orthogonally rib-stiffened plate

    NASA Astrophysics Data System (ADS)

    Jin, Zhongkun; Yin, Yao; Liu, Bilong

    2016-03-01

    The finite element method is often used to investigate the sound absorption of an anechoic coating backed with an orthogonally rib-stiffened plate. Since the anechoic coating contains cavities, the number of grid nodes in a periodic unit cell is usually large. An equivalent modulus method is proposed to reduce the large number of nodes by calculating an equivalent homogeneous layer. Applications of the method to several models show that it can predict the sound absorption coefficient of such structures well over a wide frequency range. Based on the simulation results, the sound absorption performance of such structures and the influence of different backings on the first absorption peak are also discussed.

  17. Study of noise level at Raja Haji Fisabilillah airport in Tanjung Pinang, Riau Islands

    NASA Astrophysics Data System (ADS)

    Nofriandi, H.; Wijayanti, A.; Fachrul, M. F.

    2018-01-01

    Raja Haji Fisabilillah International Airport is a central airport located in Kampung Mekarsari, Pinang Kencana District, Tanjung Pinang City, Riau Islands Province. The aims of this study are to determine the noise level at the airport and to calculate a noise index using the WECPNL (Weighted Equivalent Continuous Perceived Noise Level) method. Following recommendations from the International Civil Aviation Organization (ICAO), measurement points were located 300 meters from and parallel to the runway, as well as 1000 meters, 2000 meters, 3000 meters and 4000 meters from the runway end. The result at point 3 was 75.30 dB(A). Based on the noise intensity results, the Boeing 737-500 produced the highest level in the area surrounding the airport, 95.24 dB(A), while the lowest level, 37.24 dB(A), was recorded at point 12. The noise contour map shows three noise zones; point 3, at 75.30 dB(A), falls within the second-level zone and complies with the required standard.
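
    A hedged sketch of a WECPNL-type calculation, following the commonly used simplified form WECPNL = Lbar + 10*log10(N) - 27, where N weights evening and night movements more heavily. The peak levels and flight counts below are invented for illustration, and the exact weighting used in the study may differ.

      import numpy as np

      peak_levels_dba = np.array([92.1, 95.2, 90.4, 88.7])   # per-movement peak levels

      # Energy-average of the daily peak levels
      l_bar = 10.0 * np.log10(np.mean(10.0 ** (peak_levels_dba / 10.0)))

      n_day, n_evening, n_night = 10, 4, 2                    # movements per period
      n_weighted = n_day + 3 * n_evening + 10 * n_night

      wecpnl = l_bar + 10.0 * np.log10(n_weighted) - 27.0
      print(f"WECPNL ~ {wecpnl:.1f}")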

  18. 47 CFR 36.156 - Interexchange Cable and Wire Facilities (C&WF)-Category 3-apportionment procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... cost per equivalent interexchange telephone circuit kilometer for all circuits in Category 3 is determined and applied to the equivalent interexchange telephone circuit kilometer counts of each of the... Interexchange Cable and Wire Facilities C&WF where feasible. All study areas shall apportion the non-directly...

  19. 47 CFR 36.156 - Interexchange Cable and Wire Facilities (C&WF)-Category 3-apportionment procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cost per equivalent interexchange telephone circuit kilometer for all circuits in Category 3 is determined and applied to the equivalent interexchange telephone circuit kilometer counts of each of the... Interexchange Cable and Wire Facilities C&WF where feasible. All study areas shall apportion the non-directly...

  20. 47 CFR 36.156 - Interexchange Cable and Wire Facilities (C&WF)-Category 3-apportionment procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... cost per equivalent interexchange telephone circuit kilometer for all circuits in Category 3 is determined and applied to the equivalent interexchange telephone circuit kilometer counts of each of the... Interexchange Cable and Wire Facilities C&WF where feasible. All study areas shall apportion the non-directly...

  1. 47 CFR 36.156 - Interexchange Cable and Wire Facilities (C&WF)-Category 3-apportionment procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... cost per equivalent interexchange telephone circuit kilometer for all circuits in Category 3 is determined and applied to the equivalent interexchange telephone circuit kilometer counts of each of the... Interexchange Cable and Wire Facilities C&WF where feasible. All study areas shall apportion the non-directly...

  2. 42 CFR 422.252 - Terminology.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... § 422.252 Terminology. Annual MA capitation rate means a county payment rate for an MA local area... to refer to the annual MA capitation rate. MA local area means a payment area consisting of county or equivalent area specified by CMS. MA monthly basic beneficiary premium means the premium amount an MA plan...

  3. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
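
    A small illustrative sketch (not the paper's algorithm) of the underlying idea: locations with large wavelet detail coefficients flag where a finite-difference grid should be refined. The signal, wavelet choice and threshold are assumptions.

      import numpy as np
      import pywt

      x = np.linspace(0, 1, 256)
      u = np.tanh(50 * (x - 0.5))                 # solution with a sharp local feature

      # Single-level Daubechies decomposition; detail coefficients sense small scales
      _, detail = pywt.dwt(u, "db4")

      threshold = 0.1 * np.max(np.abs(detail))
      refine_idx = np.where(np.abs(detail) > threshold)[0]

      # Map coarse detail indices back to approximate physical locations to refine
      refine_x = x[np.clip(refine_idx * 2, 0, x.size - 1)]
      print(f"refine near x in [{refine_x.min():.2f}, {refine_x.max():.2f}]")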

  4. Ecosystems, ecological restoration, and economics: does habitat or resource equivalency analysis mean other economic valuation methods are not needed?

    PubMed

    Shaw, W Douglass; Wlodarz, Marta

    2013-09-01

    Coastal and other area resources such as tidal wetlands, seagrasses, coral reefs, wetlands, and other ecosystems are often harmed by environmental damage that might be inflicted by human actions, or could occur from natural hazards such as hurricanes. Society may wish to restore resources to offset the harm, or receive compensation if this is not possible, but faces difficult choices among potential compensation projects. The optimal amount of restoration efforts can be determined by non-market valuation methods, service-to-service, or resource-to-resource approaches such as habitat equivalency analysis (HEA). HEA scales injured resources and lost services on a one-to-one trade-off basis. Here, we present the main differences between the HEA approach and other non-market valuation approaches. Particular focus is on the role of the social discount rate, which appears in the HEA equation and underlies calculations of the present value of future damages. We argue that while HEA involves elements of economic analysis, the assumption of a one-to-one trade-off between lost and restored services sometimes does not hold, and then other non-market economic valuation approaches may help in restoration scaling or in damage determination.
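
    A minimal sketch of the discounting step that HEA and the discussion above turn on: lost and restored service-acre-years are converted to present value with a social discount rate before being traded one-for-one. All rates, service levels and acreages are invented for illustration.

      import numpy as np

      r = 0.03                                     # assumed social discount rate
      years = np.arange(0, 31)                     # 30-year horizon, year 0 = present

      # Fraction of services lost per injured acre in each year, and injured acreage
      lost_services = np.where(years <= 10, 0.4, 0.0)
      injured_acres = 50.0
      pv_loss = np.sum(injured_acres * lost_services / (1 + r) ** years)

      # Services gained per restored acre in each year (restoration matures at year 5)
      restored_services = np.where(years >= 5, 0.8, 0.0)
      pv_gain_per_acre = np.sum(restored_services / (1 + r) ** years)

      required_restoration_acres = pv_loss / pv_gain_per_acre
      print(f"restoration acres required ~ {required_restoration_acres:.1f}")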

  5. The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System

    PubMed Central

    Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin

    2016-01-01

    Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare the technique of using a uniform linear array consisting of circular traveling-wave OAM antennas for multiplexing with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM-based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to illustrate that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of some system parameters, such as OAM state interval and element spacing, on the capacity advantage of OAM-based MIMO are also investigated. Our results reveal that OAM waves are complementary to the MIMO method. OAM wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and it can be used in massive MIMO systems as well. PMID:27146453
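
    For context, a sketch of the standard MIMO capacity expression such comparisons rely on, C = log2 det(I + (SNR/Nt) H H^H), evaluated here for a random channel matrix as a stand-in for the OAM or conventional array channels studied in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      nt = nr = 4
      snr_linear = 10.0 ** (20.0 / 10.0)           # 20 dB SNR

      # Random complex Gaussian channel matrix (placeholder for the modelled channel)
      h = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

      capacity = np.log2(np.linalg.det(np.eye(nr) + (snr_linear / nt) * h @ h.conj().T))
      print(f"capacity ~ {capacity.real:.2f} bit/s/Hz")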

  6. Standing adult human phantoms based on 10th, 50th and 90th mass and height percentiles of male and female Caucasian populations

    NASA Astrophysics Data System (ADS)

    Cassola, V. F.; Milian, F. M.; Kramer, R.; de Oliveira Lira, C. A. B.; Khoury, H. J.

    2011-07-01

    Computational anthropomorphic human phantoms are useful tools developed for the calculation of absorbed or equivalent dose to radiosensitive organs and tissues of the human body. The problem is, however, that, strictly speaking, the results can be applied only to a person who has the same anatomy as the phantom, while for a person with different body mass and/or standing height the data could be wrong. In order to improve this situation for many areas in radiological protection, this study developed 18 anthropometric standing adult human phantoms, nine models per gender, as a function of the 10th, 50th and 90th mass and height percentiles of Caucasian populations. The anthropometric target parameters for body mass, standing height and other body measures were extracted from PeopleSize, a well-known software package used in the area of ergonomics. The phantoms were developed based on the assumption of a constant body-mass index for a given mass percentile and for different heights. For a given height, increase or decrease of body mass was considered to reflect mainly the change of subcutaneous adipose tissue mass, i.e. that organ masses were not changed. Organ mass scaling as a function of height was based on information extracted from autopsy data. The methods used here were compared with those used in other studies, anatomically as well as dosimetrically. For external exposure, the results show that equivalent dose decreases with increasing body mass for organs and tissues located below the subcutaneous adipose tissue layer, such as liver, colon, stomach, etc, while for organs located at the surface, such as breasts, testes and skin, the equivalent dose increases or remains constant with increasing body mass due to weak attenuation and more scatter radiation caused by the increasing adipose tissue mass. Changes of standing height have little influence on the equivalent dose to organs and tissues from external exposure. Specific absorbed fractions (SAFs) have also been calculated with the 18 anthropometric phantoms. The results show that SAFs decrease with increasing height and increase with increasing body mass. The calculated data suggest that changes of the body mass may have a significant effect on equivalent doses, primarily for external exposure to organs and tissue located below the adipose tissue layer, while for superficial organs, for changes of height and for internal exposures the effects on equivalent dose are small to moderate.

  7. Theory and experimental verifications of the resonator Q and equivalent electrical parameters due to viscoelastic and mounting supports losses.

    PubMed

    Yong, Yook-Kong; Patel, Mihir S; Tanaka, Masako

    2010-08-01

    A novel analytical/numerical method for calculating the resonator Q and its equivalent electrical parameters due to viscoelastic, conductivity, and mounting supports losses is presented. The method presented will be quite useful for designing new resonators and reducing the time and costs of prototyping. There was also a necessity for better and more realistic modeling of the resonators because of miniaturization and the rapid advances in the frequency ranges of telecommunication. We present new 3-D finite elements models of quartz resonators with viscoelasticity, conductivity, and mounting support losses. The losses at the mounting supports were modeled by perfectly matched layers (PMLs). A previously published theory for dissipative anisotropic piezoelectric solids was formulated in a weak form for finite element (FE) applications. PMLs were placed at the base of the mounting supports to simulate the energy losses to a semi-infinite base substrate. FE simulations were carried out for free vibrations and forced vibrations of quartz tuning fork and AT-cut resonators. Results for quartz tuning fork and thickness shear AT-cut resonators were presented and compared with experimental data. Results for the resonator Q and the equivalent electrical parameters were compared with their measured values. Good equivalences were found. Results for both low- and high-Q AT-cut quartz resonators compared well with their experimental values. A method for estimating the Q directly from the frequency spectrum obtained for free vibrations was also presented. An important determinant of the quality factor Q of a quartz resonator is the loss of energy from the electrode area to the base via the mountings. The acoustical characteristics of the plate resonator are changed when the plate is mounted onto a base substrate. The base affects the frequency spectra of the plate resonator. A resonator with a high Q may not have a similarly high Q when mounted on a base. Hence, the base is an energy sink and the Q will be affected by the shape and size of this base. A lower-bound Q will be obtained if the base is a semi-infinite base because it will absorb all acoustical energies radiated from the resonator.

  8. Component Performance Investigation of J71 Type II Turbines: III - Overall Performance of J71 Type IIA Turbine

    NASA Technical Reports Server (NTRS)

    Schum, Harold J.; Davison, Elmer H.; Petrash, Donald A.

    1955-01-01

    The over-all component performance characteristics of the J71 Type IIA three-stage turbine were experimentally determined over a range of speed and over-all turbine total-pressure ratio at inlet-air conditions of 35 inches of mercury absolute and 700 deg. R. The results are compared with those obtained for the J71 Type IIF turbine, which was previously investigated, the two turbines being designed for the same engine application. Geometrically the two turbines were much alike, having the same variation of annular flow area and the same number of blades for corresponding stator and rotor rows. However, the blade throat areas downstream of the first stator of the IIA turbine were smaller than those of the IIF; and the IIA blade profiles were curve-backed, whereas those of the IIF were straight-backed. The IIA turbine passed the equivalent design weight flow and had a brake internal efficiency of 0.880 at design equivalent speed and work output. A maximum efficiency of 0.896 occurred at 130 percent of design equivalent speed and a pressure ratio of 4.0. The turbine had a wide range of efficient operation. The IIA turbine had slightly higher efficiencies than the IIF turbine at comparable operating conditions. The fact that the IIA turbine obtained the design equivalent weight flow at the design equivalent operating point was probably a result of the decrease in the blading throat areas downstream of the first stator from those of the IIF turbine, which passed 105 percent of design weight flow at the corresponding operating point. The third stator row of blades of the IIA turbine choked at the design equivalent speed and at an over-all pressure ratio of 4.2; the third rotor choked at a pressure ratio of approximately 4.9.

  9. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) the current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  10. Reference and Equivalent Methods Used to Measure National Ambient Air Quality Standards (NAAQS) Criteria Air Pollutants - Volume I

    EPA Science Inventory

    There are a number of Federal Reference Method (FRM) and Federal Equivalent Method (FEM) systems used to monitor the six criteria air pollutants (Lead [Pb], Carbon Monoxide [CO], Sulfur Dioxide [SO2], Nitrogen Dioxide [NO2], Ozone [O3], Particulate Matter [PM]) to determine if an...

  11. Comparisons of Internet-Based and Face-to-Face Learning Systems Based on "Equivalency of Experiences" According to Students' Academic Achievements and Satisfactions

    ERIC Educational Resources Information Center

    Karatas, Sercin; Simsek, Nurettin

    2009-01-01

    The purpose of this study is to determine whether "equivalent learning experiences" ensure equivalency, in the Internet-based and face-to-face interaction methods on learning results and student satisfaction. In the experimental process of this study, the effect of the Internet-based and face-to-face learning on the equivalency in…

  12. Dose Equivalents for Second-Generation Antipsychotic Drugs: The Classical Mean Dose Method

    PubMed Central

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Patel, Maxine X.; Furukawa, Toshi; Cipriani, Andrea; Geddes, John; Davis, John M.

    2015-01-01

    Background: The concept of dose equivalence is important for many purposes. The classical approach published by Davis in 1974 subsequently dominated textbooks for several decades. It was based on the assumption that the mean doses found in flexible-dose trials reflect the average optimum dose which can be used for the calculation of dose equivalence. We are the first to apply the method to second-generation antipsychotics. Methods: We searched for randomized, double-blind, flexible-dose trials in acutely ill patients with schizophrenia that examined 13 oral second-generation antipsychotics, haloperidol, and chlorpromazine (last search June 2014). We calculated the mean doses of each drug weighted by sample size and divided them by the weighted mean olanzapine dose to obtain olanzapine equivalents. Results: We included 75 studies with 16 555 participants. The doses equivalent to 1 mg/d olanzapine were: amisulpride 38.3 mg/d, aripiprazole 1.4 mg/d, asenapine 0.9 mg/d, chlorpromazine 38.9 mg/d, clozapine 30.6 mg/d, haloperidol 0.7 mg/d, quetiapine 32.3mg/d, risperidone 0.4mg/d, sertindole 1.1 mg/d, ziprasidone 7.9 mg/d, zotepine 13.2 mg/d. For iloperidone, lurasidone, and paliperidone no data were available. Conclusions: The classical mean dose method is not reliant on the limited availability of fixed-dose data at the lower end of the effective dose range, which is the major limitation of “minimum effective dose methods” and “dose-response curve methods.” In contrast, the mean doses found by the current approach may have in part depended on the dose ranges chosen for the original trials. Ultimate conclusions on dose equivalence of antipsychotics will need to be based on a review of various methods. PMID:25841041
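
    A minimal sketch of the classical mean dose calculation described above: sample-size-weighted mean doses are divided by the weighted mean olanzapine dose to give olanzapine equivalents. The trial doses and sample sizes below are invented, not the review's data.

      import numpy as np

      # (mean dose in trial, sample size) for a hypothetical drug and for olanzapine
      drug_trials = [(600.0, 120), (750.0, 80), (680.0, 150)]
      olz_trials = [(15.0, 200), (18.0, 140), (16.5, 90)]

      def weighted_mean(trials):
          # Sample-size weighted mean of the flexible-dose trial means
          doses, n = np.array(trials).T
          return np.sum(doses * n) / np.sum(n)

      olz_equivalent = weighted_mean(drug_trials) / weighted_mean(olz_trials)
      print(f"dose equivalent to 1 mg/d olanzapine ~ {olz_equivalent:.1f} mg/d")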

  13. Intra-laboratory validation of microplate methods for total phenolic content and antioxidant activity on polyphenolic extracts, and comparison with conventional spectrophotometric methods.

    PubMed

    Bobo-García, Gloria; Davidov-Pardo, Gabriel; Arroqui, Cristina; Vírseda, Paloma; Marín-Arroyo, María R; Navarro, Montserrat

    2015-01-01

    Total phenolic content (TPC) and antioxidant activity (AA) assays in microplates save resources and time, and can therefore help overcome the fact that the conventional methods are time-consuming, labour intensive and use large amounts of reagents. An intra-laboratory validation of the Folin-Ciocalteu microplate method to measure TPC and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) microplate method to measure AA was performed and compared with conventional spectrophotometric methods. To compare the TPC methods, the confidence intervals of a linear regression were used. In the range of 10-70 mg L(-1) of gallic acid equivalents (GAE), both methods were equivalent. To compare the AA methodologies, the F-test and t-test were used in a range from 220 to 320 µmol L(-1) of Trolox equivalents. Both methods had homogeneous variances, and the means were not significantly different. The limits of detection and quantification for the TPC microplate method were 0.74 and 2.24 mg L(-1) GAE, and for the DPPH method 12.07 and 36.58 µmol L(-1) of Trolox equivalents. The relative standard deviations of repeatability and reproducibility for both microplate methods were ≤ 6.1%. The accuracy ranged from 88% to 100%. The microplate and conventional methods are equivalent at the 95% confidence level. © 2014 Society of Chemical Industry.

  14. Conceptual Design of Low-Boom Aircraft with Flight Trim Requirement

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Geiselhart, Karl A.; Fenbert, James W.

    2014-01-01

    A new low-boom target generation approach is presented which allows the introduction of a trim requirement during the early conceptual design of supersonic aircraft. The formulation provides an approximation of the center of pressure for a presumed aircraft configuration with a reversed equivalent area matching a low-boom equivalent area target. The center of pressure is approximated from a surrogate lift distribution that is based on the lift component of the classical equivalent area. The assumptions of the formulation are verified to be sufficiently accurate for a supersonic aircraft of high fineness ratio through three case studies. The first two quantify and verify the accuracy and the sensitivity of the surrogate center of pressure corresponding to shape deformation of lifting components. The third verification case shows the capability of the approach to achieve a trim state while maintaining the low-boom characteristics of a previously untrimmed configuration. Finally, the new low-boom target generation approach is demonstrated through the early conceptual design of a demonstrator concept that is low-boom feasible, trimmed, and stable in cruise.

  15. Translating dosages from animal models to human clinical trials--revisiting body surface area scaling.

    PubMed

    Blanchard, Otis L; Smoliga, James M

    2015-05-01

    Body surface area (BSA) scaling has been used for prescribing individualized dosages of various drugs and has also been recommended by the U.S. Food and Drug Administration as one method for using data from animal model species to establish safe starting dosages for first-in-human clinical trials. Although BSA conversion equations have been used in certain clinical applications for decades, recent recommendations to use BSA to derive interspecies equivalents for therapeutic dosages of drugs and natural products are inappropriate. A thorough review of the literature reveals that BSA conversions are based on antiquated science and have little justification in current translational medicine compared to more advanced allometric and physiologically based pharmacokinetic modeling. Misunderstood and misinterpreted use of BSA conversions may have disastrous consequences, including underdosing leading to abandonment of potentially efficacious investigational drugs, and unexpected deadly adverse events. We aim to demonstrate that recent recommendations for BSA are not appropriate for animal-to-human dosage conversions and use pharmacokinetic data from resveratrol studies to demonstrate how confusion between the "human equivalent dose" and "pharmacologically active dose" can lead to inappropriate dose recommendations. To optimize drug development, future recommendations for interspecies scaling must be scientifically justified using physiologic, pharmacokinetic, and toxicology data rather than simple BSA conversion. © FASEB.
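
    For reference, a sketch of the body-surface-area conversion the article critiques, in which the human equivalent dose is obtained from an animal dose through the ratio of Km factors; the Km values used here are the commonly tabulated ones and should be treated as assumptions.

      def human_equivalent_dose(animal_dose_mg_per_kg: float, animal_km: float,
                                human_km: float = 37.0) -> float:
          # HED (mg/kg) = animal dose (mg/kg) * animal_Km / human_Km
          return animal_dose_mg_per_kg * animal_km / human_km

      # e.g. a 100 mg/kg dose in rats (Km assumed ~ 6)
      print(f"HED ~ {human_equivalent_dose(100.0, animal_km=6.0):.1f} mg/kg")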

  16. Dwell time considerations for large area cold plasma decontamination

    NASA Astrophysics Data System (ADS)

    Konesky, Gregory

    2009-05-01

    Atmospheric discharge cold plasmas have been shown to be effective in the reduction of pathogenic bacteria and spores and in the decontamination of simulated chemical warfare agents, without the generation of toxic or harmful by-products. Cold plasmas may also be useful in assisting cleanup of radiological "dirty bombs." For practical applications in realistic scenarios, the plasma applicator must have both a large area of coverage, and a reasonably short dwell time. However, the literature contains a wide range of reported dwell times, from a few seconds to several minutes, needed to achieve a given level of reduction. This is largely due to different experimental conditions, and especially, different methods of generating the decontaminating plasma. We consider these different approaches and attempt to draw equivalencies among them, and use this to develop requirements for a practical, field-deployable plasma decontamination system. A plasma applicator with 12 square inches area and integral high voltage, high frequency generator is described.

  17. Methodological Issues in Examining Measurement Equivalence in Patient Reported Outcomes Measures: Methods Overview to the Two-Part Series, “Measurement Equivalence of the Patient Reported Outcomes Measurement Information System® (PROMIS®) Short Forms”

    PubMed Central

    Teresi, Jeanne A.; Jones, Richard N.

    2017-01-01

    The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System® (PROMIS®). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures. PMID:28983448

  18. Methodological Issues in Examining Measurement Equivalence in Patient Reported Outcomes Measures: Methods Overview to the Two-Part Series, "Measurement Equivalence of the Patient Reported Outcomes Measurement Information System® (PROMIS®) Short Forms".

    PubMed

    Teresi, Jeanne A; Jones, Richard N

    2016-01-01

    The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System ® (PROMIS ® ). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures.

  19. Quantifying Groundwater Quality at a Regional Scale: Establishing a Foundation for Economic and Health Assessments

    NASA Astrophysics Data System (ADS)

    Belitz, K.

    2015-12-01

    What is the value of clean groundwater? Might one aquifer be considered more valuable than another? To help address these, and similar, questions, we propose that aquifers be assessed by two metrics: 1) the contaminated area of an aquifer, defined by high concentrations (km2 or proportion); and 2) equivalent-population potentially impacted by that contamination (number of people or proportion). Concentrations are considered high if they are above a human health benchmark. The two metrics provide a quantitative basis for assessment at the aquifer scale, rather than the well scale. This approach has been applied to groundwater used for public supply in California (Belitz and others, 2015). The assessment distinguishes between population (34 million, 2000 census) and equivalent-population (11 million) because public drinking water supplies can be a mix of surface water and groundwater. The assessment was conducted in 87 study areas which account for nearly 100% of the groundwater used for public supply. The area-metric, when expressed as a proportion, is useful for identifying where a particular contaminant or class of contaminants might be a cause for concern. In CA, there are 38 study areas where the area-metric ≥ 25% for one or more contaminants; in 7 of these, the area-metric ≥ 50%. Naturally-occurring trace elements, such as arsenic and uranium, are the most prevalent contaminant class in 72 study areas. Nitrate is most prevalent at high concentrations in 11 study areas, and organic compounds in 4. By the area-metric, 23% of the groundwater used for public supply in CA has high concentrations of one or more contaminants (20,000 of 89,000 km2 assessed). The population-metric, when expressed as a number of people, identifies the potential impact of groundwater contamination. There are 33 CA study areas where the population-metric exceeds 10,000 people (equivalent population multiplied by detection frequency of wells with high concentrations). The population-metric exceeds 50,000 people in 10 study areas. On a statewide basis, the population metric is 2 million people (18% of 11 million equivalent-people). The proposed assessment approach is independent of scale, allows for consistent comparison across regions, and provides a foundation for subsequent economic or health assessments.
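
    A minimal sketch of how the two proposed metrics could be computed for a single hypothetical study area, assuming the area-metric and population-metric scale with the detection frequency of wells above a benchmark; all numbers are invented.

      import numpy as np

      # Per-well indicators for one study area (1 = concentration above benchmark)
      high_concentration = np.array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1])

      assessed_area_km2 = 2500.0
      equivalent_population = 400_000            # people served by groundwater here

      detection_frequency = high_concentration.mean()
      area_metric_km2 = detection_frequency * assessed_area_km2
      population_metric = detection_frequency * equivalent_population
      print(f"area metric ~ {area_metric_km2:.0f} km2, "
            f"population metric ~ {population_metric:.0f} people")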

  20. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  1. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  2. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  3. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  4. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula based on analysis of scatter reduction due to inverse square law to obtain equivalent field. Tables are published by different agencies such as ICRU (International Commission on Radiation Units and measurements), which are based on experimental data; but there exist mathematical formulas that yield the equivalent square field of an irregular rectangular field which are used extensively in computation techniques for dose determination. These processes lead to some complicated and time-consuming formulas for which the current study was designed. In this work, considering the portion of scattered radiation in absorbed dose at a point of measurement, a numerical formula was obtained based on which a simple formula was developed to calculate equivalent square field. Using polar coordinate and inverse square law will lead to a simple formula for calculation of equivalent field. The presented method is an analytical approach based on which one can estimate the equivalent square field of a rectangular field and may be used for a shielded field or an off-axis point. Besides, one can calculate equivalent field of rectangular field with the concept of decreased scatter radiation with inverse square law with a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio which are extensively used in treatment planning. PMID:22557801

  5. Radiation characteristics and effective optical properties of dumbbell-shaped cyanobacterium Synechocystis sp.

    NASA Astrophysics Data System (ADS)

    Heng, Ri-Liang; Pilon, Laurent

    2016-05-01

    This study presents experimental measurements of the radiation characteristics of the unicellular freshwater cyanobacterium Synechocystis sp. during exponential growth in F medium. Their scattering phase function at 633 nm and their average spectral absorption and scattering cross-sections between 400 and 750 nm were measured. In addition, an inverse method was used for retrieving the spectral effective complex index of refraction of overlapping or touching bispheres and quadspheres from their absorption and scattering cross-sections. The inverse method combines a genetic algorithm and a forward model based on Lorenz-Mie theory, treating bispheres and quadspheres as projected-area and volume-equivalent coated spheres. The inverse method was successfully validated with numerically predicted average absorption and scattering cross-sections of suspensions consisting of bispheres and quadspheres with realistic size distributions, using the T-matrix method. It was able to retrieve the monomers' complex index of refraction for size parameters up to 11, relative refractive index less than 1.3, and absorption index less than 0.1. Then, the inverse method was applied to retrieve the effective spectral complex index of refraction of Synechocystis sp., approximated as randomly oriented aggregates consisting of two overlapping homogeneous spheres. Both the measured absorption cross-section and the retrieved absorption index featured peaks at 435 and 676 nm corresponding to chlorophyll a, a peak at 625 nm corresponding to phycocyanin, and a shoulder around 485 nm corresponding to carotenoids. These results can be used to optimize and control light transfer in photobioreactors. The inverse method and the equivalent coated sphere model could be applied to other optically soft particles of similar morphologies.

  6. Should non-disclosures be considered as morally equivalent to lies within the doctor-patient relationship?

    PubMed

    Cox, Caitriona L; Fritz, Zoe

    2016-10-01

    In modern practice, doctors who outright lie to their patients are often condemned, yet those who employ non-lying deceptions tend to be judged less critically. Some areas of non-disclosure have recently been challenged: not telling patients about resuscitation decisions; inadequately informing patients about risks of alternative procedures and withholding information about medical errors. Despite this, there remain many areas of clinical practice where non-disclosures of information are accepted, where lies about such information would not be. Using illustrative hypothetical situations, all based on common clinical practice, we explore the extent to which we should consider other deceptive practices in medicine to be morally equivalent to lying. We suggest that there is no significant moral difference between lying to a patient and intentionally withholding relevant information: non-disclosures could be subjected to Bok's 'Test of Publicity' to assess permissibility in the same way that lies are. The moral equivalence of lying and relevant non-disclosure is particularly compelling when the agent's motivations, and the consequences of the actions (from the patient's perspectives), are the same. We conclude that it is arbitrary to claim that there is anything inherently worse about lying to a patient to mislead them than intentionally deceiving them using other methods, such as euphemism or non-disclosure. We should question our intuition that non-lying deceptive practices in clinical practice are more permissible and should thus subject non-disclosures to the same scrutiny we afford to lies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. Analysis and synthesis of bianisotropic metasurfaces by using analytical approach based on equivalent parameters

    NASA Astrophysics Data System (ADS)

    Danaeifar, Mohammad; Granpayeh, Nosrat

    2018-03-01

    An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.

  8. Unlocking annual firn layer water equivalents from ground-penetrating radar data on an Alpine glacier

    NASA Astrophysics Data System (ADS)

    Sold, L.; Huss, M.; Eichler, A.; Schwikowski, M.; Hoelzle, M.

    2015-05-01

    The spatial representation of accumulation measurements is a major limitation for current glacier mass balance monitoring approaches. Here, we present a method for estimating annual accumulation rates on a temperate Alpine glacier based on the interpretation of internal reflection horizons (IRHs) in helicopter-borne ground-penetrating radar (GPR) data. For each individual GPR measurement, the signal travel time is combined with a simple model for firn densification and refreezing of meltwater. The model is calibrated at locations where GPR repeat measurements are available in two subsequent years and the densification can be tracked over time. Two 10.5 m long firn cores provide a reference for the density and chronology of firn layers. Thereby, IRHs correspond to density maxima, but not exclusively to former summer glacier surfaces. Along GPR profile sections from across the accumulation area we obtain the water equivalent (w.e.) of several annual firn layers. Because deeper IRHs could be tracked over shorter distances, the total length of analysed profile sections varies from 7.3 km for the uppermost accumulation layer (2011) to 0.1 km for the deepest (i.e. oldest) layer (2006). According to model results, refreezing accounts for 10% of the density increase over time and depth, and for 2% of the water equivalent. The strongest limitation to our method is the dependence on layer chronology assumptions. We show that GPR can be used not only to complement existing mass balance monitoring programmes on temperate glaciers but also to retrospectively extend newly initiated time series.
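
    A hedged sketch of the basic conversion behind such GPR-derived accumulation estimates: a two-way travel time to an internal reflection horizon is turned into a depth with an assumed firn wave speed, and the layer water equivalent follows from an assumed density profile. The travel time, wave speed and densities are placeholders, not the densification model calibrated in the study.

      import numpy as np

      twt_ns = 85.0                                   # two-way travel time to the IRH (ns)
      v_firn_m_per_ns = 0.21                          # assumed radar wave speed in firn
      depth_m = 0.5 * twt_ns * v_firn_m_per_ns        # one-way depth to the horizon

      # Simple layered density model down to that depth (kg m^-3 per 1 m layer)
      layer_thickness_m = np.full(int(np.ceil(depth_m)), 1.0)
      layer_density = np.linspace(450.0, 600.0, layer_thickness_m.size)

      water_equivalent_m = np.sum(layer_density * layer_thickness_m) / 1000.0  # m w.e.
      print(f"depth ~ {depth_m:.1f} m, water equivalent ~ {water_equivalent_m:.2f} m w.e.")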

  9. 75 FR 33534 - Milk in the Northeast and Other Marketing Areas; Final Decision on Proposed Amendments to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ... incorporates an equivalent 2.25 percent true milk protein criterion for determining if a product meets the... percent true milk protein criterion for determining if a product meets the compositional standard. The... solids and incorporates an equivalent 2.25 percent true milk protein criterion for determining whether a...

  10. Method and apparatus for Delta Kappa synthetic aperture radar measurement of ocean current

    NASA Technical Reports Server (NTRS)

    Jain, A. (Inventor)

    1985-01-01

    A synthetic aperture radar (SAR) employed for delta-k measurement of ocean current from a spacecraft without the need for a narrow beam and long observation times. The SAR signal is compressed to provide image data for different sections of the chirp bandwidth, equivalent to different frequencies, and a common area for the separate image fields is selected. The image for the selected area at each frequency is deconvolved to obtain the image signals for the different frequencies and the same area. A product of pairs of signals is formed, Fourier transformed and squared. The spectra thus obtained from different areas for the same pair of frequencies are added to provide an improved signal-to-noise ratio. The shift of the peak from the center of the spectrum is measured and compared to the expected shift due to the phase velocity of the Bragg scattering wave. Any difference is a measure of the current velocity v sub o (delta k).

  11. Equivalent water height extracted from GRACE gravity field model with robust independent component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei

    2014-08-01

    The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested in the Gaussian smoothing kernel function to reduce the observation noise. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south stripes and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas, such as the Amazon, Mekong, and Zambezi basins, have an important global effect.

  12. CIHR Candrive Cohort Comparison with Canadian Household Population Holding Valid Driver's Licenses.

    PubMed

    Gagnon, Sylvain; Marshall, Shawn; Kadulina, Yara; Stinchcombe, Arne; Bédard, Michel; Gélinas, Isabelle; Man-Son-Hing, Malcolm; Mazer, Barbara; Naglie, Gary; Porter, Michelle M; Rapoport, Mark; Tuokko, Holly; Vrkljan, Brenda

    2016-06-01

    We investigated whether convenience sampling is a suitable method to generate a sample of older drivers representative of the older Canadian driver population. Using equivalence testing, we compared a large convenience sample of older drivers (the Candrive II prospective cohort study) to a similarly aged population of older Canadian drivers. The Candrive sample consists of 928 community-dwelling older drivers from seven metropolitan areas of Canada. The population data were obtained from the Canadian Community Health Survey - Healthy Aging (CCHS-HA), which is a representative sample of older Canadians. The data for drivers aged 70 and older were extracted from the CCHS-HA database, for a total of 3,899 older Canadian drivers. The two samples were shown to be equivalent on the socio-demographic, health, and driving variables we compared, but not on driving frequency. We conclude that the convenience sampling used in the Candrive study created a fairly representative sample of Canadian older drivers, with a few exceptions.
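
    A hedged sketch of the equivalence-testing idea (two one-sided tests, TOST) used to compare a convenience sample with a population sample on a continuous variable; the simulated data, the pooled degrees of freedom and the equivalence margin are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      candrive_like = rng.normal(76.0, 5.0, 900)        # e.g. age in the convenience sample
      population_like = rng.normal(76.3, 5.5, 3900)     # e.g. age in the survey sample
      margin = 1.0                                       # equivalence margin (years)

      diff = candrive_like.mean() - population_like.mean()
      se = np.sqrt(candrive_like.var(ddof=1) / candrive_like.size
                   + population_like.var(ddof=1) / population_like.size)
      df = candrive_like.size + population_like.size - 2

      # TOST: both one-sided p-values must be small to conclude equivalence
      p_lower = stats.t.sf((diff + margin) / se, df)     # H0: diff <= -margin
      p_upper = stats.t.cdf((diff - margin) / se, df)    # H0: diff >= +margin
      print(f"TOST p-values: {p_lower:.4f}, {p_upper:.4f}")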

  13. Low-order black-box models for control system design in large power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamwa, I.; Trudel, G.; Gerin-Lajoie, L.

    1996-02-01

    The paper studies two multi-input multi-output (MIMO) procedures for the identification of low-order state-space models of power systems, by probing the network in open loop with low-energy pulses or random signals. Although such data may result from actual measurements, the development assumes simulated responses from a transient stability program, hence benefiting from the existing large base of stability models. While pulse data is processed using the eigensystem realization algorithm, the analysis of random responses is done by means of subspace identification methods. On a prototype Hydro-Quebec power system, including SVCs, DC lines, series compensation, and more than 1,100 buses, it is verified that the two approaches are equivalent only when strict requirements are imposed on the pulse length and magnitude. The 10th-order equivalent models derived by random-signal probing allow for effective tuning of decentralized power system stabilizers (PSSs) able to damp both local and very slow inter-area modes.

  14. Low-order black-box models for control system design in large power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamwa, I.; Trudel, G.; Gerin-Lajoie, L.

    1995-12-31

    The paper studies two multi-input multi-output (MIMO) procedures for the identification of low-order state-space models of power systems, by probing the network in open loop with low-energy pulses or random signals. Although such data may result from actual measurements, the development assumes simulated responses from a transient stability program, hence benefiting from the existing large base of stability models. While pulse data is processed using the eigensystem realization algorithm, the analysis of random responses is done by means of subspace identification methods. On a prototype Hydro-Quebec power system, including SVCs, DC lines, series compensation, and more than 1,100 buses, it is verified that the two approaches are equivalent only when strict requirements are imposed on the pulse length and magnitude. The 10th-order equivalent models derived by random-signal probing allow for effective tuning of decentralized power system stabilizers (PSSs) able to damp both local and very slow inter-area modes.

  15. Variation of indoor radon concentration and ambient dose equivalent rate in different outdoor and indoor environments.

    PubMed

    Stojanovska, Zdenka; Boev, Blazo; Zunic, Zora S; Ivanova, Kremena; Ristova, Mimoza; Tsenova, Martina; Ajka, Sorsa; Janevik, Emilija; Taleski, Vaso; Bossew, Peter

    2016-05-01

    The subject of this study is an investigation of the variations of indoor radon concentration and ambient dose equivalent rate in outdoor and indoor environments of 40 dwellings, 31 elementary schools and five kindergartens. The buildings are located in three municipalities of two, geologically different, areas of the Republic of Macedonia. Indoor radon concentrations were measured by nuclear track detectors, deployed in the most occupied room of the building, between June 2013 and May 2014. During the deployment campaign, indoor and outdoor ambient dose equivalent rates were measured simultaneously at the same location. The measured values varied from 22 to 990 Bq/m³ for indoor radon concentrations, from 50 to 195 nSv/h for outdoor ambient dose equivalent rates, and from 38 to 184 nSv/h for indoor ambient dose equivalent rates. The geometric mean of the ratio of indoor to outdoor ambient dose equivalent rates was 0.88, i.e. the outdoor ambient dose equivalent rates were on average higher than the indoor ambient dose equivalent rates. All measured quantities can reasonably well be described by log-normal distributions. A detailed statistical analysis of factors which influence the measured quantities is reported.
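
    A log-normal summary of such measurements is usually reported as a geometric mean and geometric standard deviation; the sketch below shows one way to compute them in Python using synthetic radon data, which are assumptions and not the survey values.

      # Minimal sketch: geometric mean, geometric standard deviation, and a
      # log-normal fit for a set of radon concentrations (synthetic values).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      radon = rng.lognormal(mean=np.log(100.0), sigma=0.6, size=76)   # Bq/m^3, synthetic

      gm = np.exp(np.mean(np.log(radon)))          # geometric mean
      gsd = np.exp(np.std(np.log(radon), ddof=1))  # geometric standard deviation
      shape, loc, scale = stats.lognorm.fit(radon, floc=0)
      print(f"GM = {gm:.1f} Bq/m^3, GSD = {gsd:.2f}, fitted sigma = {shape:.2f}")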

  16. Normative wideband reflectance, equivalent admittance at the tympanic membrane, and acoustic stapedius reflex threshold in adults

    PubMed Central

    Feeney, M. Patrick; Keefe, Douglas H.; Hunter, Lisa L.; Fitzpatrick, Denis F.; Garinis, Angela C.; Putterman, Daniel B.; McMillan, Garnett P.

    2016-01-01

    Objectives Wideband acoustic immittance (WAI) measures such as pressure reflectance, parameterized by absorbance and group delay, equivalent admittance at the tympanic membrane (TM), and acoustic stapedius reflex threshold (ASRT) describe middle-ear function across a wide frequency range, compared to traditional tests employing a single frequency. The objective of this study was to obtain normative data using these tests for a group of normal hearing adults and investigate test-retest reliability using a longitudinal design. Design A longitudinal prospective design was used to obtain normative test and retest data on clinical and WAI measures. Subjects were 13 males and 20 females (mean age = 25 y). Inclusion criteria included normal audiometry and clinical immittance. Subjects were tested on two separate visits approximately one month apart. Reflectance and equivalent admittance at the TM were measured from 0.25 to 8.0 kHz under three conditions: at ambient pressure in the ear canal and with pressure sweeps from positive to negative pressure (downswept) and negative to positive pressure (upswept). Equivalent admittance at the TM was calculated using admittance measurements at the probe tip which were adjusted using a model of sound transmission in the ear canal and acoustic estimates of ear-canal area and length. Wideband ASRTs were measured at tympanometric peak pressure (TPP) derived from the average TPP of downswept and upswept tympanograms. Descriptive statistics were obtained for all WAI responses, and wideband and clinical ASRTs were compared. Results Mean absorbance at ambient pressure and TPP demonstrated a broad band-pass pattern typical of previous studies. Test-retest differences were lower for absorbance at TPP for the downswept method compared to ambient pressure at frequencies between 1.0 and 1.26 kHz. Mean tympanometric peak-to-tail differences for absorbance were greatest around 1.0 to 2.0 kHz and similar for positive and negative tails. Mean group delay at ambient pressure and at TPP were greatest between 0.32 and 0.6 kHz at 200 to 300 μs, reduced at frequencies between 0.8 and 1.5 kHz, and increased above 1.5 kHz to around 150 μs. Mean equivalent admittance at the TM had a lower level for the ambient method than at TPP for both sweep directions below 1.2 kHz, but the difference between methods was only statistically significant for the comparison between the ambient method and TPP for the upswept tympanogram. Mean equivalent admittance phase was positive at all frequencies. Test-retest reliability of the equivalent admittance level ranged from 1 to 3 dB at frequencies below 1.0 kHz, but increased to 8 to 9 dB at higher frequencies. The mean wideband ASRT for an ipsilateral broadband noise activator was 12 dB lower than the clinical ASRT, but had poorer reliability. Conclusions Normative data for the WAI test battery revealed minor differences for results at ambient pressure compared to tympanometric methods at TPP for reflectance, group delay, and equivalent admittance level at the TM for subjects with middle-ear pressure within ±100 daPa. Test-retest reliability was better for absorbance at TPP for the downswept tympanogram compared to ambient pressure at frequencies around 1.0 kHz. Large peak-to-tail differences in absorbance combined with good reliability at frequencies between about 0.7 and 3.0 kHz suggest that this may be a sensitive frequency range for interpreting absorbance at TPP. 
The mean wideband ipsilateral ASRT was lower than the clinical ASRT, consistent with previous studies. Results are promising for the use of a wideband test battery to evaluate middle-ear function. PMID:28045835

  17. Cultural adaptation and translation of measures: an integrated method.

    PubMed

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  18. New modeling method for the dielectric relaxation of a DRAM cell capacitor

    NASA Astrophysics Data System (ADS)

    Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon

    2018-02-01

    This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed by an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify this new method. The simulation demonstrates that the storage node voltage drop resulting from DR and the reduction in the sensing voltage margin, which has a critical impact on DRAM read operation, can be accurately estimated using this new method.
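
    As a point of reference, the Havriliak-Negami relaxation function has the standard form eps*(w) = eps_inf + d_eps / (1 + (i*w*tau)^alpha)^beta; the short Python sketch below evaluates it with illustrative parameter values, which are assumptions rather than fitted DRAM capacitor data.

      # Minimal sketch of the Havriliak-Negami dielectric relaxation function.
      import numpy as np

      def havriliak_negami(omega, eps_inf, d_eps, tau, alpha, beta):
          """Complex permittivity eps*(w) = eps_inf + d_eps / (1 + (1j*w*tau)**alpha)**beta."""
          return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta

      omega = np.logspace(1, 7, 7)                       # rad/s
      eps = havriliak_negami(omega, eps_inf=3.5, d_eps=12.0, tau=1e-4, alpha=0.8, beta=0.6)
      for w, e in zip(omega, eps):
          print(f"w = {w:8.1e} rad/s  eps' = {e.real:6.2f}  eps'' = {-e.imag:6.2f}")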

  19. Mechanical properties investigation on single-wall ZrO2 nanotubes: A finite element method with equivalent Poisson's ratio for chemical bonds

    NASA Astrophysics Data System (ADS)

    Yang, Xiao; Li, Huijian; Hu, Minzheng; Liu, Zeliang; Wärnå, John; Cao, Yuying; Ahuja, Rajeev; Luo, Wei

    2018-04-01

    A method to obtain the equivalent Poisson's ratio for chemical bonds modeled as classical beams in the finite element method was proposed from experimental data. The UFF (Universal Force Field) method was employed to calculate the elastic force constants of Zr–O bonds. By applying the equivalent Poisson's ratio, the mechanical properties of single-wall ZrNTs (ZrO2 nanotubes) were investigated by finite element analysis. The Young's modulus (Y) and Poisson's ratio (ν) of ZrNTs are discussed as functions of diameter, length and chirality. The Young's modulus of single-wall ZrNTs is calculated to be between 350 and 420 GPa.

  20. Examination of the Equivalence of Self-Report Survey-Based Paper-and-Pencil and Internet Data Collection Methods

    ERIC Educational Resources Information Center

    Weigold, Arne; Weigold, Ingrid K.; Russell, Elizabeth J.

    2013-01-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as…

  1. An Empirical Method for Deriving Grade Equivalence for University Entrance Qualifications: An Application to A Levels and the International Baccalaureate

    ERIC Educational Resources Information Center

    Green, Francis; Vignoles, Anna

    2012-01-01

    We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…

  2. Transportability of Equivalence-Based Programmed Instruction: Efficacy and Efficiency in a College Classroom

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Critchfield, Thomas S.

    2011-01-01

    College students in a psychology research-methods course learned concepts related to inferential statistics and hypothesis decision making. One group received equivalence-based instruction on conditional discriminations that were expected to promote the emergence of many untaught, academically useful abilities (i.e., stimulus equivalence group). A…

  3. Mapping Children's Understanding of Mathematical Equivalence

    ERIC Educational Resources Information Center

    Taylor, Roger S.; Rittle-Johnson, Bethany; Matthews, Percival G.; McEldoon, Katherine L.

    2009-01-01

    The focus of this research is to develop an initial framework for assessing and interpreting students' level of understanding of mathematical equivalence. Although this topic has been studied for many years, there has been no systematic development or evaluation of a valid measure of equivalence knowledge. A powerful method for accomplishing this…

  4. Proton exchange membrane fuel cell model for aging predictions: Simulated equivalent active surface area loss and comparisons with durability tests

    NASA Astrophysics Data System (ADS)

    Robin, C.; Gérard, M.; Quinaud, M.; d'Arbigny, J.; Bultel, Y.

    2016-09-01

    The prediction of Proton Exchange Membrane Fuel Cell (PEMFC) lifetime is one of the major challenges to optimize both material properties and dynamic control of the fuel cell system. In this study, using a multiscale modeling approach, a mechanistic catalyst dissolution model is coupled to a dynamic PEMFC cell model to predict the performance loss of the PEMFC. Results are compared to two 2000-h experimental aging tests. More precisely, an original approach is introduced to estimate the loss of an equivalent active surface area during an aging test. Indeed, when the computed Electrochemical Catalyst Surface Area profile is fitted to the experimental measurements from cyclic voltammetry, the computed performance loss of the PEMFC is underestimated. To be able to predict the performance loss measured by polarization curves during the aging test, an equivalent active surface area is obtained by model inversion. This methodology successfully reproduces the experimental cell voltage decay over time. The model parameters are fitted from the polarization curves so that they include the global degradation. Moreover, the model captures the aging heterogeneities along the surface of the cell observed experimentally. Finally, a second 2000-h durability test in dynamic operating conditions validates the approach.

  5. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199

  6. Exposure of the surgeon's hands to radiation during hand surgery procedures.

    PubMed

    Żyluk, Andrzej; Puchalski, Piotr; Szlosser, Zbigniew; Dec, Paweł; Chrąchol, Joanna

    2014-01-01

    The objective of the study was to assess the time of exposure of the surgeon's hands to radiation and to calculate the equivalent dose absorbed during surgery of hand and wrist fractures with C-arm fluoroscope guidance. The necessary data specified by the objective of the study were acquired from operations of 287 patients with fractures of fingers, metacarpals, wrist bones and distal radius. 218 operations (78%) were percutaneous procedures and 60 (22%) were performed by the open method. Data on the time of exposure and dose of radiation were acquired from the display of the fluoroscope, where they were automatically generated. These data were assigned to the individual patient, type of fracture, method of surgery and the operating surgeon. Fixations of distal radial fractures required longer times of radiation exposure (mean 61 sec.) than fractures of the wrist/metacarpals and fingers (38 and 32 sec., respectively), which was associated with absorption of significantly higher equivalent doses. Fixations of distal radial fractures by the open method were associated with statistically significantly higher equivalent doses (0.41 mSv) than percutaneous procedures (0.3 mSv). Fixations of wrist and metacarpal bone fractures by the open method were associated with lower equivalent doses (0.34 mSv) than percutaneous procedures (0.37 mSv), but the difference was not significant. Fixations of finger fractures by the open method were associated with lower equivalent doses (0.13 mSv) than percutaneous procedures (0.24 mSv), the difference being statistically non-significant. Statistically significant differences in exposure time and equivalent doses were noted between the 4 surgeons participating in the study, but no definitive relationship was found between these parameters and the surgeons' employment time. 1. Hand surgery procedures under fluoroscopic guidance are associated with mild exposure of the surgeons' hands to radiation. 2. The equivalent dose was related to the type of fracture, operative technique and - to some degree - to the time of employment of the surgeon.

  7. A comparative appraisal of two equivalence tests for multiple standardized effects.

    PubMed

    Shieh, Gwowen

    2016-04-01

    Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. 40 CFR 53.4 - Applications for reference or equivalent method determinations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... using information such as service reports and customer complaints to eliminate potential causes of... standards of good practice and by qualified personnel. Test anomalies or irregularities shall be documented... designated as a reference or equivalent method, to ensure that all analyzers or samplers offered for sale...

  9. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics in which we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, where f × σ_R is a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
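
    To make the margin construction concrete, the sketch below implements the naive two one-sided t-test comparison against an estimated margin ±f × S_R; the factor f = 1.5, sample sizes and data are illustrative assumptions, and the authors' exact-test correction is not reproduced here.

      # Minimal sketch of a mean-equivalence test against a margin of +/- f * S_R.
      # f = 1.5, the pooled degrees of freedom, and the data are assumptions.
      import numpy as np
      from scipy import stats

      def margin_equivalence(test, ref, f=1.5, alpha=0.05):
          margin = f * np.std(ref, ddof=1)                       # estimated margin f * S_R
          diff = np.mean(test) - np.mean(ref)
          se = np.sqrt(np.var(test, ddof=1) / len(test) + np.var(ref, ddof=1) / len(ref))
          df = len(test) + len(ref) - 2                          # crude df; Welch df also possible
          t_low = (diff + margin) / se                           # H0: diff <= -margin
          t_high = (diff - margin) / se                          # H0: diff >= +margin
          p = max(1.0 - stats.t.cdf(t_low, df), stats.t.cdf(t_high, df))
          return diff, margin, p, p < alpha

      rng = np.random.default_rng(2)
      reference = rng.normal(100.0, 4.0, size=10)
      biosimilar = rng.normal(101.0, 4.0, size=10)
      print(margin_equivalence(biosimilar, reference))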

  10. A comparison of quantum limited dose and noise equivalent dose

    NASA Astrophysics Data System (ADS)

    Job, Isaias D.; Boyce, Sarah J.; Petrillo, Michael J.; Zhou, Kungang

    2016-03-01

    Quantum-limited-dose (QLD) and noise-equivalent-dose (NED) are performance metrics often used interchangeably. Although the metrics are related, they are not equivalent unless the treatment of electronic noise is carefully considered. These metrics are increasingly important to properly characterize the low-dose performance of flat panel detectors (FPDs). A system can be said to be quantum-limited when the signal-to-noise ratio (SNR) is proportional to the square root of x-ray exposure. Recent experiments utilizing three methods to determine the quantum-limited dose range yielded inconsistent results. To investigate the deviation in results, generalized analytical equations are developed to model the image processing and analysis of each method. We test the generalized expression for both radiographic and fluoroscopic detectors. The resulting analysis shows that the total noise content of the images processed by each method is inherently different based on their readout scheme. Finally, it will be shown that the NED is equivalent to the instrumentation-noise-equivalent-exposure (INEE) and furthermore that the NED is derived from the quantum-noise-only method of determining QLD. Future investigations will measure quantum-limited performance of radiographic panels with a modified readout scheme to allow for noise improvements similar to measurements performed with fluoroscopic detectors.
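
    The quantum-limited criterion can be illustrated with a simple noise model in which quantum (Poisson) noise adds to a fixed electronic noise floor; in the Python sketch below the gain and electronic-noise values are assumptions, not measured detector parameters.

      # Minimal sketch: SNR is proportional to sqrt(exposure) only where quantum
      # noise dominates electronic noise. Gain and noise values are illustrative.
      import numpy as np

      k = 50.0        # detected quanta per unit exposure (assumed)
      sigma_e = 40.0  # electronic noise, in detected-quanta equivalents (assumed)

      exposure = np.logspace(-1, 3, 9)
      signal = k * exposure
      snr = signal / np.sqrt(signal + sigma_e**2)   # Poisson quantum noise + electronic noise

      inee = sigma_e**2 / k  # exposure where quantum and electronic noise variances are equal
      for x, s in zip(exposure, snr):
          regime = "quantum-limited" if k * x > 10 * sigma_e**2 else "electronics-affected"
          print(f"X = {x:8.2f}  SNR = {s:7.2f}  ({regime})")
      print(f"instrumentation-noise-equivalent exposure ~ {inee:.1f}")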

  11. Equivalent Skin Analysis of Wing Structures Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.

    2000-01-01

    An efficient method of modeling trapezoidal built-up wing structures is developed by coupling, in an indirect way, an Equivalent Plate Analysis (EPA) with Neural Networks (NN). Assumed to behave like a Mindlin plate, the wing is solved using the Ritz method with Legendre polynomials employed as the trial functions. This analysis method can be made more efficient by avoiding most of the computational effort spent on calculating contributions to the stiffness and mass matrices from each spar and rib. This is accomplished by replacing the wing inner structure with an "equivalent" material that is combined with the skin and whose properties are simulated by neural networks. The constitutive matrix, which relates the stress vector to the strain vector, and the density of the equivalent material are obtained by enforcing mass and stiffness matrix equalities with regard to the EPA in a least-squares sense. Neural networks for the material properties are trained in terms of the design variables of the wing structure. Examples show that the present method, which can be called an Equivalent Skin Analysis (ESA) of the wing structure, is more efficient than the EPA while still giving fairly good results. The present ESA is very promising for use at the early stages of wing structure design.

  12. Midterm Stability Evaluation of Wide-area Power System by using Synchronized Phasor Measurements

    NASA Astrophysics Data System (ADS)

    Ota, Yutaka; Ukai, Hiroyuki; Nakamura, Koichi; Fujita, Hideki

    In recent years, the PMU (Phasor Measurement Unit) has received a great deal of attention as a synchronized measurement system for power systems. Synchronized phasor angles obtained by the PMU provide effective information for evaluating the stability of a bulk power system. Midterm instability phenomena tend to be more complicated, and stability analysis using synchronized phasor measurements is significant in order to keep a complicated power system stable. This paper proposes a midterm stability evaluation method for a wide-area power system using synchronized phasor measurements. By clustering and aggregating the power system into coherent groups, step-out is effectively predicted on the basis of a two-machine equivalent power system model. The midterm stability of a longitudinal power system model of the Japanese 60 Hz systems, constructed on the PSA, a hybrid-type power system simulator, is practically evaluated using the proposed method.
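
    The coherency-based aggregation step can be illustrated with center-of-inertia quantities for one coherent group; in the sketch below the inertia constants, angles and speeds are illustrative assumptions, not values from the Japanese 60 Hz model.

      # Minimal sketch: aggregating a coherent group of generators into one
      # equivalent machine via center-of-inertia (COI) quantities.
      import numpy as np

      H = np.array([3.5, 4.0, 6.2, 2.8])              # inertia constants (s), assumed
      delta = np.radians([22.0, 25.0, 20.0, 27.0])    # rotor angles (rad), assumed
      omega = np.array([1.001, 1.002, 1.000, 1.003])  # per-unit speeds, assumed

      H_eq = H.sum()                                  # equivalent inertia of the aggregate
      delta_coi = (H * delta).sum() / H_eq            # center-of-inertia angle
      omega_coi = (H * omega).sum() / H_eq            # center-of-inertia speed
      print(f"H_eq = {H_eq:.1f} s, delta_COI = {np.degrees(delta_coi):.1f} deg, "
            f"omega_COI = {omega_coi:.4f} pu")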

  13. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  14. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms

    PubMed Central

    Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies. PMID:26953796
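
    The fitting step behind the equivalent noise method can be sketched as follows: squared contrast threshold is assumed to grow linearly with external noise power, and the fitted intercept and slope give the equivalent internal noise and a gain term related to efficiency. The data and parameter values in this Python sketch are synthetic assumptions, not the authors' measurements.

      # Minimal sketch of a linear-amplifier-model fit for equivalent noise data.
      import numpy as np
      from scipy.optimize import curve_fit

      def lam(N_ext, N_eq, k):
          """Squared threshold contrast under the linear amplifier model."""
          return k * (N_ext + N_eq)

      N_ext = np.array([0.0, 1e-6, 3e-6, 1e-5, 3e-5, 1e-4])   # external noise power (assumed)
      true_Neq, true_k = 5e-6, 2.0e3                          # synthetic "truth"
      rng = np.random.default_rng(3)
      c_thr_sq = lam(N_ext, true_Neq, true_k) * rng.normal(1.0, 0.05, N_ext.size)

      (N_eq_fit, k_fit), _ = curve_fit(lam, N_ext, c_thr_sq, p0=[1e-6, 1e3])
      print(f"estimated N_eq = {N_eq_fit:.2e}, gain k = {k_fit:.1f}")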

  15. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms.

    PubMed

    Baldwin, Alex S; Baker, Daniel H; Hess, Robert F

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system's input has on its output one can estimate the variance of this internal noise. By applying this simple "linear amplifier" model to the human visual system, one can entirely explain an observer's detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system's internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.

  16. Crystallographic changes in lead zirconate titanate due to neutron irradiation

    DOE PAGES

    Henriques, Alexandra; Graham, Joseph T.; Landsberger, Sheldon; ...

    2014-11-17

    Piezoelectric and ferroelectric materials are useful as the active element in non-destructive monitoring devices for high-radiation areas. Here, crystallographic structural refinement (i.e., the Rietveld method) is used to quantify the type and extent of structural changes in PbZr0.5Ti0.5O3 after exposure to a 1 MeV equivalent neutron fluence of 1.7 × 10^15 neutrons/cm^2. The results show a measurable decrease in the occupancy of Pb and O due to irradiation, with O vacancies in the tetragonal phase being created preferentially on one of the two O sites. The results demonstrate a method by which the effects of radiation on crystallographic structure may be investigated.

  17. Periodic solutions of second-order nonlinear difference equations containing a small parameter. II - Equivalent linearization

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.

    1985-01-01

    The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.

  18. 33 CFR 151.2065 - Equivalent reporting methods for vessels other than those entering the Great Lakes or Hudson...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Zone or Canadian equivalent. For vessels required to report under § 151.2060(b)(3) of this subpart, the... methods of reporting if— (a) Such methods are at least as effective as those required by § 151.2060 of this subpart; and (b) Compliance with § 151.2060 of this subpart is economically or physically...

  19. 33 CFR 151.2065 - Equivalent reporting methods for vessels other than those entering the Great Lakes or Hudson...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Zone or Canadian equivalent. For vessels required to report under § 151.2060(b)(3) of this subpart, the... methods of reporting if— (a) Such methods are at least as effective as those required by § 151.2060 of this subpart; and (b) Compliance with § 151.2060 of this subpart is economically or physically...

  20. 33 CFR 151.2065 - Equivalent reporting methods for vessels other than those entering the Great Lakes or Hudson...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Zone or Canadian equivalent. For vessels required to report under § 151.2060(b)(3) of this subpart, the... methods of reporting if— (a) Such methods are at least as effective as those required by § 151.2060 of this subpart; and (b) Compliance with § 151.2060 of this subpart is economically or physically...

  1. Unidimensional IRT Item Parameter Estimates across Equivalent Test Forms with Confounding Specifications within Dimensions

    ERIC Educational Resources Information Center

    Matlock, Ki Lynn; Turner, Ronna

    2016-01-01

    When constructing multiple test forms, the number of items and the total test difficulty are often equivalent. Not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…

  2. Delay Discounting Rates Are Temporally Stable in an Equivalent Present Value Procedure Using Theoretical and Area under the Curve Analyses

    ERIC Educational Resources Information Center

    Harrison, Justin; McKay, Ryan

    2012-01-01

    Temporal discounting rates have become a popular dependent variable in social science research. While choice procedures are commonly employed to measure discounting rates, equivalent present value (EPV) procedures may be more sensitive to experimental manipulation. However, their use has been impeded by the absence of test-retest reliability data.…

  3. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot calculate complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can calculate the complex field-tissue interactions for the inverse design of source current density related to an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is regarded as the next target. The current density on the designated source surface is then derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  4. Cross-Ethnicity Measurement Equivalence of Family Coping for Breast Cancer Survivors

    ERIC Educational Resources Information Center

    Lim, Jung-won; Townsend, Aloen

    2012-01-01

    Objective: The current study examines the equivalence of a measure of family coping, the Family Crisis Oriented Personal Evaluation scales (F-COPES), in Chinese American and Korean American breast cancer survivors (BCS). Methods: Factor structure and cross-ethnicity equivalence of the F-COPES were tested using structural equation modeling with 157…

  5. Active microwave water equivalence

    NASA Technical Reports Server (NTRS)

    Boyne, H. S.; Ellerbruch, D. A.

    1980-01-01

    Measurements of water equivalence using an active FM-CW microwave system were conducted over the past three years at various sites in Colorado, Wyoming, and California. The measurement method is described. Measurements of water equivalence and stratigraphy are compared with ground truth. A comparison of microwave, federal sampler, and snow pillow measurements at three sites in Colorado is described.

  6. Money for health: the equivalent variation of cardiovascular diseases.

    PubMed

    Groot, Wim; Van Den Brink, Henriëtte Maassen; Plug, Erik

    2004-09-01

    This paper introduces a new method to calculate the extent to which individuals are willing to trade money for improvements in their health status. An individual welfare function of income (WFI) is applied to calculate the equivalent income variation of health impairments. We believe that this approach avoids various drawbacks of alternative willingness-to-pay methods. The WFI is used to calculate the equivalent variation of cardiovascular diseases. It is found that for a 25-year-old male the equivalent variation of a heart disease ranges from 114,000 euro to 380,000 euro depending on the welfare level. This is about 10,000 euro - 30,000 euro for an additional life year. The equivalent variation declines with age and is about the same for men and women. The estimates further vary by the discount rate chosen. The estimates of the equivalent variation are generally higher than the money spent on most heart-related medical interventions per QALY. The cost-benefit analysis shows that for most interventions the value of the health benefits exceeds the costs. Heart transplants seem to be too costly and only beneficial if patients are young.

  7. New equivalent-electrical circuit model and a practical measurement method for human body impedance.

    PubMed

    Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako

    2015-01-01

    Human body impedance analysis is an effective tool to extract electrical information from tissues in the human body. This paper presents a new measurement method for impedance using armpit electrodes and a new equivalent circuit model for the human body. The lowest impedance was measured by using an LCR meter and six electrodes including armpit electrodes. The electrical equivalent circuit model for the cell consists of a resistance R and a capacitance C, where R represents the electrical resistance of the fluid inside and outside the cell, and C represents the high-frequency conductance of the cell membrane. We propose an equivalent circuit model which consists of five parallel high-frequency-passing CR circuits. The proposed equivalent circuit represents the alpha distribution in the impedance measured at the lower frequency range, due to the ion current outside the cell, and the beta distribution at the high frequency range, due to the cell membrane and the liquid inside the cell. The values calculated using the proposed equivalent circuit model were consistent with the measured human body impedance.
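
    One possible reading of the proposed model is five parallel R-C sections whose impedances combine into the body impedance; the Python sketch below evaluates such a circuit, where the series connection of the sections and all component values are assumptions for illustration only.

      # Minimal sketch: impedance of five parallel R||C sections, assumed in series.
      import numpy as np

      R = np.array([50.0, 120.0, 300.0, 80.0, 200.0])      # ohms (assumed)
      C = np.array([10e-9, 100e-9, 1e-6, 47e-9, 470e-9])   # farads (assumed)

      def body_impedance(freq_hz):
          w = 2 * np.pi * freq_hz
          z_sections = R / (1.0 + 1j * w * R * C)           # each parallel R||C section
          return z_sections.sum()                           # sections assumed in series

      for f in [1e2, 1e3, 1e4, 1e5, 1e6]:
          z = body_impedance(f)
          print(f"f = {f:8.0f} Hz  |Z| = {abs(z):7.1f} ohm  "
                f"phase = {np.degrees(np.angle(z)):6.1f} deg")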

  8. Finite element simulation and comparison of a shear strain and equivalent strain during ECAP and asymmetric rolling

    NASA Astrophysics Data System (ADS)

    Pesin, A.; Pustovoytov, D.; Shveyova, T.; Vafin, R.

    2017-12-01

    The levels of shear strain and equivalent strain play a key role in the possibility of using the asymmetric rolling process as a method of severe plastic deformation. The strain mode (pure shear or simple shear) can strongly affect the equivalent strain and the grain refinement of the material. This paper presents the results of FEM simulations and a comparison of the equivalent strain in the aluminium alloy 5083 processed by single-pass equal channel angular pressing (simple shear), symmetric rolling (pure shear) and asymmetric rolling (simultaneous pure and simple shear). A nonlinear effect of the rolls speed ratio on the deformation characteristics during asymmetric rolling was found. An extremely high equivalent strain of up to e=4.2 was reached during single-pass asymmetric rolling. The influence of the shear strain on the level of equivalent strain is discussed. The finite element analysis of the deformation characteristics presented in this study can be used for optimization of the asymmetric rolling process as a method of severe plastic deformation.
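
    For orientation, a standard way to combine a plane-strain thickness reduction with a superimposed shear into a von Mises equivalent strain is shown below; the reduction and shear values are illustrative assumptions, not the FEM results reported in the paper.

      # Minimal sketch: von Mises equivalent strain for plane-strain compression
      # plus simple shear. Reduction and shear values are illustrative.
      import numpy as np

      def equivalent_strain(thickness_strain, shear_gamma):
          """Strain tensor (incompressible plane strain): exx = e, ezz = -e, exz = gamma/2.

          eps_eq = sqrt(2/3 * eps_ij eps_ij) = (2/sqrt(3)) * sqrt(e**2 + gamma**2 / 4)
          """
          return (2.0 / np.sqrt(3.0)) * np.sqrt(thickness_strain**2 + shear_gamma**2 / 4.0)

      e = np.log(1.0 / (1.0 - 0.5))        # true thickness strain for a 50% reduction
      for gamma in [0.0, 1.0, 2.0, 4.0]:   # increasing shear from roll-speed asymmetry
          print(f"gamma = {gamma:3.1f}  eps_eq = {equivalent_strain(e, gamma):5.2f}")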

  9. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    PubMed

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  10. Organic light emitting devices for illumination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hack, Michael; Lu, Min-Hao Michael; Weaver, Michael S

    An organic light emitting device and a method of obtaining illumination from such a device are provided. The device has a plurality of regions, each region having an organic emissive layer adapted to emit a different spectrum of light. The regions in combination emit light suitable for illumination purposes. The area of each region may be selected such that the device is more efficient than an otherwise equivalent device having regions of equal size. The regions may have an aspect ratio of at least about four. All parts of any given region may be driven at the same current.

  11. Ternary carbon composite films for supercapacitor applications

    NASA Astrophysics Data System (ADS)

    Tran, Minh-Hai; Jeong, Hae Kyung

    2017-09-01

    A simple, binder-free method of making supercapacitor electrodes is introduced, based on modification of activated carbon with graphite oxide and carbon nanotubes. The three carbon precursors of different morphologies support each other to provide outstanding electrochemical performance, such as high capacitance and high energy density. The ternary carbon composite shows six times higher specific capacitance compared to that of activated carbon itself, with high retention. The excellent electrochemical properties of the ternary composite are attributed to the high surface area of 1933 m2 g-1 and the low equivalent series resistance of 2 Ω, demonstrating that it improves the electrochemical performance for supercapacitor applications.

  12. Compressive strength of human openwedges: a selection method

    NASA Astrophysics Data System (ADS)

    Follet, H.; Gotteland, M.; Bardonnet, R.; Sfarghiu, A. M.; Peyrot, J.; Rumelhart, C.

    2004-02-01

    A series of 44 samples of bone wedges of human origin, intended for allograft openwedge osteotomy and obtained without particular precautions during hip arthroplasty, were re-examined. After chemical treatment for viral inactivation, lyophilisation and radio-sterilisation (intended to produce optimal health safety), the compressive strength, independent of age, sex and the height of the sample (or angle of cut), proved to be too widely dispersed [10-158 MPa] in the first study. We propose a method for selecting samples which takes into account their geometry (width, length, thicknesses, cortical surface area). Statistical methods (Principal Components Analysis PCA, Hierarchical Cluster Analysis, Multilinear regression) allowed final selection of 29 samples having a mean compressive strength σ_max = 103 ± 26 MPa and variation [61-158 MPa]. These results are equivalent to or greater than those of average materials currently used in openwedge osteotomy.

  13. Integrating Terrain Maps Into a Reactive Navigation Strategy

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Werger, Barry; Seraji, Homayoun

    2006-01-01

    An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.

  14. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    NASA Astrophysics Data System (ADS)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to estimate spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is, how can we establish the relationship between Reconstruction—available only after the snow goes away—and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?
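
    One minimal way to frame the question above as a supervised learning problem is sketched below: a regressor maps passive microwave SWE, optical fractional snow-covered area and a terrain feature to the reconstructed SWE target. The feature set, model choice and synthetic data are assumptions, not the authors' pipeline.

      # Minimal sketch: learn reconstructed SWE from passive microwave and optical
      # features. All features and data here are synthetic assumptions.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      n = 2000
      X = np.column_stack([
          rng.uniform(0, 300, n),      # passive microwave SWE estimate (mm)
          rng.uniform(0, 1, n),        # fractional snow-covered area from optical sensor
          rng.uniform(1500, 3500, n),  # elevation (m)
      ])
      swe_reconstructed = (0.6 * X[:, 0] + 250 * X[:, 1]
                           + 0.05 * (X[:, 2] - 1500) + rng.normal(0, 20, n))

      X_tr, X_te, y_tr, y_te = train_test_split(X, swe_reconstructed, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print(f"R^2 on held-out pixels: {model.score(X_te, y_te):.2f}")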

  15. Large-area, low-noise, high-speed, photodiode-based fluorescence detectors with fast overdrive recovery

    NASA Astrophysics Data System (ADS)

    Bickman, S.; DeMille, D.

    2005-11-01

    Two large-area, low-noise, high-speed fluorescence detectors have been built. One detector consists of a photodiode with an area of 28 mm × 28 mm and a low-noise transimpedance amplifier. This detector has an input light-equivalent spectral noise density of less than 3 pW/√Hz, can recover from a large scattered light pulse within 10 μs, and has a bandwidth of at least 900 kHz. The second detector consists of a 16-mm-diam avalanche photodiode and a low-noise transimpedance amplifier. This detector has an input light-equivalent spectral noise density of 0.08 pW/√Hz, also can recover from a large scattered light pulse within 10 μs, and has a bandwidth of 1 MHz.

  16. The Equivalence of the Radial Return and Mendelson Methods for Integrating the Classical Plasticity Equations

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2006-01-01

    The radial return and Mendelson methods for integrating the equations of classical plasticity, which appear independently in the literature, are shown to be identical. Both methods are presented in detail as are the specifics of their algorithmic implementation. Results illustrate the methods' equivalence across a range of conditions and address the question of when the methods require iteration in order for the plastic state to remain on the yield surface. FORTRAN code implementations of the radial return and Mendelson methods are provided in the appendix.
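
    For context, a compact version of the radial return update for J2 plasticity with linear isotropic hardening is sketched below in Python; the material constants and the strain state are illustrative assumptions, and the report's FORTRAN listings remain the authoritative implementations.

      # Minimal sketch of the radial return map for J2 (von Mises) plasticity
      # with linear isotropic hardening. Material values are assumed.
      import numpy as np

      E, nu = 200e9, 0.3                     # Young's modulus, Poisson ratio (assumed)
      G = E / (2 * (1 + nu))                 # shear modulus
      K = E / (3 * (1 - 2 * nu))             # bulk modulus
      sigma_y0, H = 250e6, 2e9               # initial yield stress, hardening modulus (assumed)

      def radial_return(strain, eps_p, alpha):
          """Return stress and updated plastic state for a total small-strain tensor."""
          eps_e = strain - eps_p                              # trial elastic strain
          vol = np.trace(eps_e)
          dev = eps_e - vol / 3.0 * np.eye(3)
          s_trial = 2.0 * G * dev                             # trial deviatoric stress
          s_norm = np.sqrt(np.tensordot(s_trial, s_trial))
          f_trial = s_norm - np.sqrt(2.0 / 3.0) * (sigma_y0 + H * alpha)
          if f_trial <= 0.0:                                  # elastic step
              return K * vol * np.eye(3) + s_trial, eps_p, alpha
          dgamma = f_trial / (2.0 * G + 2.0 / 3.0 * H)        # plastic multiplier
          n = s_trial / s_norm                                # radial return direction
          s = s_trial - 2.0 * G * dgamma * n                  # pull stress back to the yield surface
          eps_p_new = eps_p + dgamma * n
          alpha_new = alpha + np.sqrt(2.0 / 3.0) * dgamma
          return K * vol * np.eye(3) + s, eps_p_new, alpha_new

      strain = np.diag([2.5e-3, -0.75e-3, -0.75e-3])          # uniaxial-like strain state (assumed)
      stress, eps_p, alpha = radial_return(strain, np.zeros((3, 3)), 0.0)
      print(np.round(stress / 1e6, 1), "MPa")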

  17. Postprandial glucose and insulin responses to various tropical fruits of equivalent carbohydrate content in non-insulin-dependent diabetes mellitus.

    PubMed

    Roongpisuthipong, C; Banphotkasem, S; Komindr, S; Tanphaichitr, V

    1991-11-01

    The plasma glucose and insulin responses were determined in 10 NIDDM female patients following the ingestion of tropical fruits containing 25 g of carbohydrate. The five tropical fruits were pineapple, mango, banana, durian and rambutan. Blood was drawn at 0, 30, 60, 120 and 180 min, respectively. The results showed that the glucose-response curves to mango and banana were significantly lower than those to rambutan, durian and pineapple (P less than 0.05). Only the glucose area after mango ingestion was significantly less than the glucose areas of the other fruits (P less than 0.05). The insulin response curve and insulin area after durian ingestion were statistically greater than after ingestion of the others. We concluded that, for equivalent carbohydrate content in type 2 (NIDDM) diabetes, the glucose area after mango ingestion was lower than after rambutan, durian and pineapple ingestion, and the insulin area was lower than after durian ingestion.
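
    The glucose and insulin "areas" referred to here are areas under the response curves; the sketch below computes total and incremental areas by the trapezoidal rule at the stated sampling times. The glucose values are synthetic, and whether the authors used total or incremental areas is an assumption left open.

      # Minimal sketch: trapezoidal area under a postprandial glucose curve.
      import numpy as np

      def trapezoid_area(t, y):
          """Trapezoidal-rule area under y(t)."""
          return float(np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0))

      time = np.array([0.0, 30.0, 60.0, 120.0, 180.0])   # minutes, as in the sampling protocol
      glucose = np.array([5.4, 7.9, 8.6, 7.1, 5.9])      # mmol/L, synthetic values

      total_area = trapezoid_area(time, glucose)                                        # area above zero
      incremental_area = trapezoid_area(time, np.clip(glucose - glucose[0], 0.0, None)) # area above baseline
      print(f"total area = {total_area:.0f}, incremental area = {incremental_area:.0f} mmol*min/L")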

  18. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 Protection of Environment 6 2012-07-01 2012-07-01 false General requirements for an equivalent method determination. 53.3 Section 53.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  19. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 Protection of Environment 5 2011-07-01 2011-07-01 false General requirements for an equivalent method determination. 53.3 Section 53.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  20. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 Protection of Environment 6 2014-07-01 2014-07-01 false General requirements for an equivalent method determination. 53.3 Section 53.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  1. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 Protection of Environment 6 2013-07-01 2013-07-01 false General requirements for an equivalent method determination. 53.3 Section 53.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  2. Translation of the Marlowe-Crowne Social Desirability Scale into an Equivalent Spanish Version

    ERIC Educational Resources Information Center

    Collazo, Andres A.

    2005-01-01

    A Spanish version of the Marlowe-Crowne Social Desirability Scale (MCSDS) was developed by applying a method derived from the cross-cultural and psychometric literature. The method included five sequenced studies: (a) translation and back-translation, (b) comprehension assessment, (c) psychometric equivalence study of two mixed-language versions,…

  3. 29 CFR 1910.106 - Flammable liquids.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... by reference as specified in § 1910.6, or an equivalent test method as defined in Appendix B to... an equivalent method as defined by Appendix B to § 1910.1200—Physical Hazard Criteria, shall be used... this subparagraph. (15) Hotel shall mean buildings or groups of buildings under the same management in...

  4. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to comprehensively understand the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure, and then the integrals of the equivalent source strengths are solved by an iterative solving process and further used to calculate the instantaneous surface normal velocity. An experiment with a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method, where the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool to visualize the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and can obtain more accurate results than the method based on pressure measurements.

  5. Assessment of physician and patient (child and adult) equivalent doses during renal angiography by Monte Carlo method.

    PubMed

    Karimian, A; Nikparvar, B; Jabbari, I

    2014-11-01

    Renal angiography is one of the medical imaging methods in which patient and physician receive high equivalent doses due to the long duration of fluoroscopy. In this research, the equivalent doses of some radiosensitive tissues of the patient (adult and child) and the physician during renal angiography have been calculated by using adult and child Oak Ridge National Laboratory phantoms and the Monte Carlo method (MCNPX). The results showed that, in angiography of the right kidney in child and adult patients, the gall bladder received the highest equivalent dose, 2.32 and 0.35 mSv, respectively. For the physician, the left hand, left eye and thymus absorbed the highest doses, about 0.020 mSv. In addition, the equivalent doses of the physician's eye lens, thyroid and knees were 0.023, 0.007 and 7.9E-4 mSv, respectively. Although these values are less than the thresholds reported by ICRP 103, it should be noted that these amounts relate to a single examination. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Experimental Investigation of Premixed Turbulent Hydrocarbon/Air Bunsen Flames

    NASA Astrophysics Data System (ADS)

    Tamadonfar, Parsa

    Through the influence of turbulence, the front of a premixed turbulent flame is subjected to the motions of eddies that leads to an increase in the flame surface area, and the term flame wrinkling is commonly used to describe it. If it is assumed that the flame front would continue to burn locally unaffected by the stretch, then the total turbulent burning velocity is expected to increase proportionally to the increase in the flame surface area caused by wrinkling. When the turbulence intensity is high enough such that the stretch due to hydrodynamics and flame curvature would influence the local premixed laminar burning velocity, then the actual laminar burning velocity (that is, flamelet consumption velocity) should reflect the influence of stretch. To address this issue, obtaining the knowledge of instantaneous flame front structures, flame brush characteristics, and burning velocities of premixed turbulent flames is necessary. Two axisymmetric Bunsen-type burners were used to produce premixed turbulent flames, and three optical measurement techniques were utilized: Particle image velocimetry to measure the turbulence statistics; Rayleigh scattering method to measure the temperature fields of premixed turbulent flames, and Mie scattering method to visualize the flame front contours of premixed turbulent flames. Three hydrocarbons (methane, ethane, and propane) were used as the fuel in the experiments. The turbulence was generated using different perforated plates mounted upstream of the burner exit. A series of comprehensive parameters including the thermal flame front thickness, characteristic flame height, mean flame brush thickness, mean volume of the turbulent flame region, two-dimensional flame front curvature, local flame front angle, two-dimensional flame surface density, wrinkled flame surface area, turbulent burning velocity, mean flamelet consumption velocity, mean turbulent flame stretch factor, mean turbulent Markstein length and number, and mean fuel consumption rate were systematically evaluated from the experimental data. The normalized preheat zone and reaction zone thicknesses decreased with increasing non-dimensional turbulence intensity in ultra-lean premixed turbulent flames under a constant equivalence ratio of 0.6, whereas they increased with increasing equivalence ratios from 0.6 to 1.0 under a constant bulk flow velocity. The normalized preheat zone and reaction zone thicknesses showed no overall trend with increasing non-dimensional longitudinal integral length scale. The normalized preheat zone and reaction zone thicknesses decreased by increasing the Karlovitz number, suggesting that increasing the total stretch rate is the controlling mechanism in the reduction of flame front thickness for the experimental conditions studied in this thesis. In general, the leading edge and half-burning surface turbulent burning velocities were enhanced with increasing equivalence ratio from lean to stoichiometric mixtures, whereas they decreased with increasing equivalence ratio for rich mixtures. These velocities were enhanced with increasing total turbulence intensity. The leading edge and half-burning surface turbulent burning velocities for lean/stoichiometric mixtures were observed to be smaller than that for rich mixtures. The mean turbulent flame stretch factor displayed a dependence on the equivalence ratio and turbulence intensity. 
Results show that the mean turbulent flame stretch factors for lean/stoichiometric and rich mixtures were not equal when the unstrained premixed laminar burning velocity, non-dimensional bulk flow velocity, non-dimensional turbulence intensity, and non-dimensional longitudinal integral length scale were kept constant.

  7. Finding of No Significant Impact (FONSI) For Demolition of Buildings 113, 130, 140, 141, 256, 257, and the Boresight Tower at New Boston Air Force Station, New Hampshire

    DTIC Science & Technology

    2010-09-01

    … widespread and prolonged ice storms have occurred. Based on the data for the 9,130 km2 (3,530 mi2) area that includes the NBAFS, fewer than two tornadoes occur per year. The localized area affected by a tornado averages only 0.29 km2 (0.11 mi2; Ramsdell and Andrews 1986) (ANL 2000).

  8. Missing data handling in non-inferiority and equivalence trials: A systematic review.

    PubMed

    Rabe, Brooke A; Day, Simon; Fiero, Mallorie H; Bell, Melanie L

    2018-05-25

    Non-inferiority (NI) and equivalence clinical trials test whether a new treatment is therapeutically no worse than, or equivalent to, an existing standard of care. Missing data in clinical trials have been shown to reduce statistical power and potentially bias estimates of effect size; however, in NI and equivalence trials, they present additional issues. For instance, they may decrease sensitivity to differences between treatment groups and bias toward the alternative hypothesis of NI (or equivalence). Our primary aim was to review the extent of and methods for handling missing data (model-based methods, single imputation, multiple imputation, complete case), the analysis sets used (Intention-To-Treat, Per-Protocol, or both), and whether sensitivity analyses were used to explore departures from assumptions about the missing data. We conducted a systematic review of NI and equivalence trials published between May 2015 and April 2016 by searching the PubMed database. Articles were reviewed primarily by 2 reviewers, with 6 articles reviewed by both reviewers to establish consensus. Of 109 selected articles, 93% reported some missing data in the primary outcome. Among those, 50% reported complete case analysis, and 28% reported single imputation approaches for handling missing data. Only 32% reported conducting analyses of both intention-to-treat and per-protocol populations. Only 11% conducted any sensitivity analyses to test assumptions with respect to missing data. Missing data are common in NI and equivalence trials, and they are often handled by methods which may bias estimates and lead to incorrect conclusions. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Quantitative determination of radio-opacity: equivalence of digital and film X-ray systems.

    PubMed

    Nomoto, R; Mishima, A; Kobayashi, K; McCabe, J F; Darvell, B W; Watts, D C; Momoi, Y; Hirano, S

    2008-01-01

    To evaluate the equivalence of a digital X-ray system (DenOptix) to conventional X-ray film in terms of the measured radio-opacity of known filled-resin materials and the suitability of attenuation coefficient for radio-opacity determination. Discs of five thicknesses (0.5-2.5 mm) and step-wedges of each of three composite materials of nominal aluminum-equivalence of 50%, 200% and 450% were used. X-ray images of a set of discs (or step-wedge), an aluminum step-wedge, and a lead block were taken at 65 kV and 10 mA at a focus-film distance of 400 mm for 0.15 s and 1.6 s using an X-ray film or imaging plate. Radio-opacity was determined as equivalent aluminum thickness and attenuation coefficient. The logarithm of the individual optical density or gray scale value, corrected for background, was plotted against thickness, and the attenuation coefficient determined from the slope. The method of ISO 4049 was used for equivalent aluminum thickness. The equivalent aluminum thickness method is not suitable for materials of low radio-opacity, while the attenuation coefficient method could be used for all without difficulty. The digital system gave attenuation coefficients of greater precision than did film, but the use of automatic gain control (AGC) distorted the outcome unusably. Attenuation coefficient is a more precise and generally applicable approach to the determination of radio-opacity. The digital system was equivalent to film but with less noise. The use of AGC is inappropriate for such determinations.
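
    The attenuation-coefficient approach described above reduces to fitting a straight line to the logarithm of the background-corrected optical density (or gray value) versus specimen thickness; the slope of that line gives the effective linear attenuation coefficient. The sketch below is a minimal illustration of that fit, assuming hypothetical arrays of thicknesses and corrected gray values (not data from the study).

      import numpy as np

      # Hypothetical data: specimen thicknesses (mm) and background-corrected
      # mean gray values measured at each thickness.
      thickness_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
      gray_value = np.array([1800.0, 1150.0, 760.0, 500.0, 330.0])

      # Under a monoenergetic approximation ln(I) = ln(I0) - mu * t,
      # so the slope of the straight-line fit is -mu.
      slope, intercept = np.polyfit(thickness_mm, np.log(gray_value), 1)
      mu_per_mm = -slope
      print(f"effective linear attenuation coefficient: {mu_per_mm:.3f} per mm")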

  10. Fast spacecraft adaptive attitude tracking control through immersion and invariance design

    NASA Astrophysics Data System (ADS)

    Wen, Haowei; Yue, Xiaokui; Li, Peng; Yuan, Jianping

    2017-10-01

    This paper presents a novel non-certainty-equivalence adaptive control method for the attitude tracking control problem of spacecraft with inertia uncertainties. The proposed immersion and invariance (I&I) based adaptation law provides a more direct and flexible approach to circumvent the limitations of the basic I&I method without employing any filter signal. By virtue of the adaptation high-gain equivalence property derived from the proposed adaptive method, the closed-loop adaptive system with a low adaptation gain can recover the high adaptation gain performance of the filter-based I&I method, and the resulting control torque demands during the initial transient have been significantly reduced. A special feature of this method is that the convergence of the parameter estimation error has been observably improved by utilizing an adaptation gain matrix instead of a single adaptation gain value. Numerical simulations are presented to highlight the various benefits of the proposed method compared with the certainty-equivalence-based control method and filter-based I&I control schemes.

  11. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  12. On the equivalence of experimental B(E2) values determined by various techniques

    DOE PAGES

    Birch, M.; Pritychenko, B.; Singh, B.

    2016-06-30

    In this paper, we establish the equivalence of the various techniques for measuring B(E2) values using a statistical analysis. Data used in this work come from the recent compilation by B. Pritychenko et al. (2016). We consider only those nuclei for which the B(E2) values were measured by at least two different methods, with each method being independently performed at least twice. Our results indicate that the most prevalent methods of measuring B(E2) values are equivalent, with some weak evidence that Doppler-shift attenuation method (DSAM) measurements may differ from Coulomb excitation (CE) and nuclear resonance fluorescence (NRF) measurements. However, such evidence appears to arise from discrepant DSAM measurements of the lifetimes for 60Ni and some Sn nuclei rather than a systematic deviation in the method itself.

  13. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
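
    The abstract describes C-ESM as an l1 (sparse) formulation of the equivalent source method within the compressive sensing framework; a common way to pose such a problem is the LASSO surrogate min_q 0.5*||p - G q||^2 + lambda*||q||_1, which can be solved by iterative soft thresholding. The sketch below is a generic illustration of that idea, not the authors' implementation; the transfer matrix G, the toy source distribution, and the regularization weight are all hypothetical.

      import numpy as np

      def ista_sparse_esm(G, p, lam=0.1, n_iter=500):
          """Sketch of a sparse equivalent-source solve in LASSO form,
          min_q 0.5*||p - G q||^2 + lam*||q||_1, via iterative soft thresholding.
          G: (measurements x sources) transfer matrix, p: measured pressures."""
          q = np.zeros(G.shape[1], dtype=complex)
          step = 1.0 / np.linalg.norm(G, 2) ** 2        # 1 / Lipschitz constant
          for _ in range(n_iter):
              r = q + step * G.conj().T @ (p - G @ q)   # gradient step
              mag = np.abs(r)
              # complex soft threshold: shrink magnitudes, keep phases
              q = np.where(mag > lam * step,
                           (1 - lam * step / np.maximum(mag, 1e-30)) * r, 0)
          return q

      # Hypothetical toy problem: 20 microphones, 100 candidate sources, 3 active.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((20, 100)) + 1j * rng.standard_normal((20, 100))
      q_true = np.zeros(100, dtype=complex)
      q_true[[7, 42, 77]] = [1.0, 0.5j, -0.8]
      p = G @ q_true
      # Indices of the three largest recovered source magnitudes.
      print(np.argsort(np.abs(ista_sparse_esm(G, p)))[-3:])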

  14. Scanning system, infrared noise equivalent temperature difference: Measurement procedure

    NASA Technical Reports Server (NTRS)

    Cannon, J. B., Jr.

    1975-01-01

    A procedure is described for determining the noise equivalent difference temperature for infrared electro-optical instruments. The instrumentation required, proper measurements, and methods of calculation are included.

  15. Equivalence in Symbolic and Nonsymbolic Contexts: Benefits of Solving Problems with Manipulatives

    ERIC Educational Resources Information Center

    Sherman, Jody; Bisanz, Jeffrey

    2009-01-01

    Children's failure on equivalence problems (e.g., 5 + 4 = 7 + __) is believed to be the result of misunderstanding the equal sign and has been tested using symbolic problems (including "="). For Study 1 (N = 48), we designed a nonsymbolic method for presenting equivalence problems to determine whether Grade 2 children's difficulty is due…

  16. [The effect of composition and structure of radiological equivalent materials on radiological equivalent].

    PubMed

    Wang, Y; Lin, D; Fu, T

    1997-03-01

    The morphology of inorganic material powders before and after ultrafine crushing was observed by transmission electron microscopy, and the length and diameter of the granules were measured. Polymers and inorganic material powders, before and after ultrafine crushing, were used to prepare radiological equivalent materials. The blending compatibility of the inorganic materials with the polymer materials was observed by scanning electron microscopy. CT values of the tissue equivalent materials were measured by X-ray CT, and the distribution of the inorganic materials was examined. The compactness of the materials was determined by the water absorption method, and the elastic modulus was measured by a laser speckle interferometry method. The results showed that the inorganic material powders treated by ultrafine crushing blended well with the polymer and that the distribution of these powders in the polymer was homogeneous. The equivalent errors of the linear attenuation coefficients and CT values of the equivalent materials were small. Their elastic moduli increased by one order of magnitude, from 6.028 x 10^2 kg/cm2 to 9.753 x 10^3 kg/cm2. In addition, inorganic material powders with rod-shaped granules blended easily with the polymer. The present study provides theoretical guidance and an experimental basis for the design and synthesis of radiological equivalent materials.

  17. Design and application of quadrature compensation patterns in bulk silicon micro-gyroscopes.

    PubMed

    Ni, Yunfang; Li, Hongsheng; Huang, Libin

    2014-10-29

    This paper focuses on the detailed design issues of a particular quadrature reduction method named system stiffness matrix diagonalization, whose key technology is the design and application of quadrature compensation patterns. For bulk silicon micro-gyroscopes, a complete design and application case was presented. The compensation principle was described first. In the mechanical design, four types of basic structure units were presented to obtain the basic compensation function. A novel layout design was proposed to eliminate the additional disturbing static forces and torques. Parameter optimization was carried out to maximize the available compensation capability in a limited layout area. Two types of voltage loading methods were presented, and their influences on the sense mode dynamics were analyzed. The proposed design was applied to a dual-mass silicon micro-gyroscope developed in our laboratory. The design provides a theoretical compensation capability for a quadrature equivalent angular rate of no more than 412 °/s. In experiments, an actual quadrature equivalent angular rate of 357 °/s was compensated successfully. The actual compensation voltages were a little larger than the theoretical ones. The correctness of the design and the theoretical analyses was verified. They can be commonly used in planar linear vibratory silicon micro-gyroscopes for quadrature compensation purposes.

  18. SU-E-T-569: Neutron Shielding Calculation Using Analytical and Multi-Monte Carlo Method for Proton Therapy Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S; Shin, E H; Kim, J

    2015-06-15

    Purpose: To evaluate the shielding wall design that protects patients, staff, and members of the general public from secondary neutrons, using a simple analytic solution and the multi-Monte Carlo codes MCNPX, ANISN, and FLUKA. Methods: Analytical and multi-Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industry Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the multi-Monte Carlo codes. The neutron dose at each evaluation point was obtained as the product of the simulated value and the neutron dose coefficients introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the location of the proton beam loss. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912, and 0.943 mSv/yr, and at the entrance of the cyclotron room it is 0.465, 0.790, 0.522, and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA, and MCNP, respectively. Most of the MCNPX and FLUKA results, which used the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the shielding was adequate for areas accessible to people when the proton facility is operated.

  19. Final Report of an Expansion of a Model for Development of Proficiency/Equivalency Tests for Clinical Laboratory Personnel, July 1, 1980-June 30, 1981.

    ERIC Educational Resources Information Center

    New Jersey Coll. of Medicine and Dentistry, Newark. School of Allied Health Professions.

    A project was conducted to expand a previously developed model for developing proficiency/equivalency tests to evaluate previously acquired knowledge and skill competencies in the areas of clinical microbiology and clinical hematology. Designed for a target group consisting of on-the-job trainees, military personnel, and medical laboratory…

  20. Comparison and Contrast: The 1973 California State University and Colleges Freshman English Equivalency Examination.

    ERIC Educational Resources Information Center

    White, Edward M.

    In the late spring of 1972, the Chancellor's Office agreed to support a summer study to be undertaken by a committee of the California English Council, to investigate equivalency testing in the area of English and to recommend an appropriate program for use by the California State University and Colleges. This report is the result of that study:…

  1. 30 CFR 57.22202 - Main fans (I-A, I-B, I-C, II-A, III, V-A, and V-B mines).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mines, provided with an automatic signal device to give an alarm when the fan stops. The signal device... possible explosive forces; (2) Equipped with explosion-doors, a weak-wall, or other equivalent devices... or weak-wall shall be at least equivalent to the average cross-sectional area of the airway. (c) (1...

  2. 30 CFR 57.22202 - Main fans (I-A, I-B, I-C, II-A, III, V-A, and V-B mines).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... mines, provided with an automatic signal device to give an alarm when the fan stops. The signal device... possible explosive forces; (2) Equipped with explosion-doors, a weak-wall, or other equivalent devices... or weak-wall shall be at least equivalent to the average cross-sectional area of the airway. (c) (1...

  3. 30 CFR 57.22202 - Main fans (I-A, I-B, I-C, II-A, III, V-A, and V-B mines).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... mines, provided with an automatic signal device to give an alarm when the fan stops. The signal device... possible explosive forces; (2) Equipped with explosion-doors, a weak-wall, or other equivalent devices... or weak-wall shall be at least equivalent to the average cross-sectional area of the airway. (c) (1...

  4. 30 CFR 57.22202 - Main fans (I-A, I-B, I-C, II-A, III, V-A, and V-B mines).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mines, provided with an automatic signal device to give an alarm when the fan stops. The signal device... possible explosive forces; (2) Equipped with explosion-doors, a weak-wall, or other equivalent devices... or weak-wall shall be at least equivalent to the average cross-sectional area of the airway. (c) (1...

  5. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. It means that for FDTD simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented our code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From the results, we found that the use of a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  6. Equivalent square formula for determining the surface dose of rectangular field from 6 MV therapeutic photon beam.

    PubMed

    Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn

    2013-09-06

    The purpose of the study was to investigate the use of the equivalent square formula for determining the surface dose from a rectangular photon beam. A 6 MV therapeutic photon beam delivered from a Varian Clinac 23EX medical linear accelerator was modeled using the EGS4nrc Monte Carlo simulation package. It was then used to calculate the dose in the build-up region from both square and rectangular fields. The field patterns were defined by various settings of the X- and Y-collimator jaw ranging from 5 to 20 cm. Dose measurements were performed using a thermoluminescence dosimeter and a Markus parallel-plate ionization chamber on the four square fields (5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm2). The surface dose was acquired by extrapolating the build-up doses to the surface. An equivalent square for a rectangular field was determined using the area-to-perimeter formula, and the surface dose of the equivalent square was estimated using the square-field data. The surface dose of square field increased linearly from approximately 10% to 28% as the side of the square field increased from 5 to 20 cm. The influence of collimator exchange on the surface dose was found to be not significant. The difference in the percentage surface dose of the rectangular field compared to that of the relevant equivalent square was insignificant and can be clinically neglected. The use of the area-to-perimeter formula for an equivalent square field can provide a clinically acceptable surface dose estimation for a rectangular field from a 6 MV therapy photon beam.
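
    The area-to-perimeter rule used above maps a rectangular a x b field to a square of side s = 4A/P = 2ab/(a + b); the surface dose of the rectangle is then read from the square-field data, for example by interpolation. The following is a minimal sketch of that lookup, assuming a hypothetical square-field surface-dose table chosen only to be consistent with the roughly linear 10%-28% range quoted in the abstract.

      import numpy as np

      def equivalent_square_side(a_cm, b_cm):
          """Area-to-perimeter equivalent square: s = 4*A/P = 2ab/(a+b)."""
          return 2.0 * a_cm * b_cm / (a_cm + b_cm)

      # Hypothetical square-field surface doses (% of Dmax) spanning the
      # roughly linear 10%-28% trend reported for 5-20 cm square fields.
      square_side_cm = np.array([5.0, 10.0, 15.0, 20.0])
      surface_dose_pct = np.array([10.0, 16.0, 22.0, 28.0])

      s = equivalent_square_side(5.0, 20.0)            # 5 cm x 20 cm rectangle
      dose = np.interp(s, square_side_cm, surface_dose_pct)
      print(f"equivalent square side = {s:.1f} cm, estimated surface dose = {dose:.1f}%")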

  7. [Comparison between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection].

    PubMed

    Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong

    2006-07-01

    To compare the rapid enzyme substrate technique with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. It is suggested that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.

  8. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis: linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis, or CFA) and nonlinear (item-response-theory-based differential item functioning, or IRT-Based…

  9. Small-area snow surveys on the northern plains of North Dakota

    USGS Publications Warehouse

    Emerson, Douglas G.; Carroll, T.R.; Steppuhn, Harold

    1985-01-01

    Snow-cover data are needed for many facets of hydrology. The variation in snow cover over small areas is the focus of this study. The feasibility of using aerial surveys to obtain information on the snow water equivalent of the snow cover in order to minimize the necessity of labor intensive ground snow surveys was evaluated. A low-flying aircraft was used to measure attenuations of natural terrestrial gamma radiation by snow cover. Aerial and ground snow surveys of eight 1-mile snow courses and one 4-mile snow course were used in the evaluation, with ground snow surveys used as the base to evaluate aerial data. Each of the 1-mile snow courses consisted of a single land use and all had the same terrain type (plane). The 4-mile snow course consists of a variety of land uses and the same terrain type (plane). Using the aerial snow-survey technique, the snow water equivalent of the 1-mile snow courses was measured with three passes of the aircraft. Use of more than one pass did not improve the results. The mean absolute difference between the aerial- and ground-measured snow water equivalents for the 1-mile snow courses was 26 percent (0.77 inches). The aerial snow water equivalents determined for the 1-mile snow courses were used to estimate the variations in the snow water equivalents over the 4-mile snow course. The weighted mean absolute difference for the 4-mile snow course was 27 percent (0.8 inches). Variations in snow water equivalents could not be verified adequately by segmenting the aerial snow-survey data because of the uniformity found in the snow cover. On the 4-mile snow course, about two-thirds of the aerial snow-survey data agreed with the ground snow-survey data within the accuracy of the aerial technique (±0.5 inch of the mean snow water equivalent).

  10. Reconnaissance for radioactive materials in northeastern United States during 1952

    USGS Publications Warehouse

    McKeown, Francis A.; Klemic, Harry

    1953-01-01

    Reconnaissance for radioactive materials was made in parts of Maine, New York, New Jersey, and Pennsylvania. The primary objective was to examine the iron ore deposits and associated rocks in the Adirondack Mountains of New York and the Highlands of New Jersey. In addition, several deposits known or reported to contain radioactive minerals were examined to delimit their extent. Most of the deposits examined are not significant as possible sources of radioactive elements, and the data pertaining to them are summarized in table form. Deposits that do warrant more description than can be given in table form are: Benson Mines, St. Lawrence County, N. Y.; Rutgers mine, Clinton County, N. Y.; Mineville Mines, Essex County, N. Y.; Canfield phosphate mine, Morris County, N. J.; Mulligan quarry, Hunterdon County, N. J.; and the Chestnut Hill-Marble Mountain area, Pennsylvania and New Jersey. The Old Bed in the Mineville district is the only deposit that may be economically significant. Apatite from Old Bed ore contains as much as 4.9 percent total rare earths, 0.04 percent thorium, and 0.018 percent uranium. Magnetite ore at the Rutgers mine contains radioactive zircon and apatite. Radioactivity measurements of outcrops and dump material show that the ore contains from 0.005 to 0.010 percent equivalent uranium. One sample of lean magnetite ore contains 0.006 percent equivalent uranium. Garnet-rich zones in the Benson Mines magnetite deposit contain as much as 0.017 percent equivalent uranium. Most of the rock and ore, however, contains about 0.005 percent equivalent uranium. Available data indicate that the garnet-rich zones are enriched in radioactive allanite. A shear zone in the Kittatinny limestone of Cambrian age at the Mulligan quarry contains uraniferous material. Radioactivity anomalies elsewhere in the quarry and in adjacent fields indicate that there may be other uraniferous shear zones. Assays of samples and measurements of outcrop radioactivity indicate that the uranium content of these zones is low; samples contain from 0.008 to 0.068 percent equivalent uranium. The anomalies, however, may indicate greater concentrations of uranium below surficial leached zones. The Chestnut Hill-Marble Mountain area contains radioactivity anomalies for about 2 miles along the strike of the contact of pre-Cambrian Pickering gneiss and Franklin limestone formations. In places this contact is injected with pegmatite, which probably was the source of the radioelements. The most favorable area for further study is at Marble Mountain, where a nearly continuous anomaly extends for about 1500 feet. Samples from part of this area contain as much as 0.044 percent equivalent uranium and 0.005 percent uranium. Radioactive hematite and florencite, in which thorium may have substituted for cerium, are the only radioactive minerals observed in the Marble Mountain area.

  11. Intensification of constructed wetlands for land area reduction: a review.

    PubMed

    Ilyas, Huma; Masih, Ilyas

    2017-05-01

    The large land area requirement of constructed wetlands (CWs) is a major limitation of their application, especially in densely populated and mountainous areas. This review paper provides insights on different strategies applied for the reduction of land area, including stack design and intensification of CWs with different aeration methods. The impacts of different aeration methods on performance and land area reduction were extensively and critically evaluated for nine wetland systems under three aeration strategies, tidal flow (TF), effluent recirculation (ER), and artificial aeration (AA), applied to three types of CWs: vertical flow constructed wetland (VFCW), horizontal flow constructed wetland (HFCW), and hybrid constructed wetland (HCW). The area reduction and pollutant removal efficiency showed substantial variation among the different types of CWs and aeration strategies. The ER-VFCW had the smallest footprint, 1.1 ± 0.5 m2 PE-1 (population equivalent), followed by TF-VFCW with a footprint of 2.1 ± 1.8 m2 PE-1, while the largest footprint was that of AA-HFCW (7.8 ± 4.7 m2 PE-1). When both footprint and removal efficiency are the major indicators for the selection of wetland type, the best options for practical application could be TF-VFCW, ER-HCW, and AA-HCW. The data and results outlined in this review could be instructive for future studies and practical applications of CWs for wastewater treatment, especially in land-limited regions.

  12. Effects of damage and thermal residual stresses on the overall elastoplastic behavior of particle-reinforced metal matrix composites

    NASA Astrophysics Data System (ADS)

    Liu, Haitao

    The objective of the present study is to investigate damage mechanisms and thermal residual stresses of composites, and to establish frameworks to model particle-reinforced metal matrix composites with particle-matrix interfacial debonding, particle cracking, or thermal residual stresses. An evolutionary interfacial debonding model is proposed for composites with spheroidal particles. The construction of the equivalent stiffness is based on the fact that when debonding occurs in a certain direction, the load-transfer ability is lost in that direction. By using this equivalent method, the interfacial debonding problem can be converted into a composite problem with perfectly bonded inclusions. Considering that interfacial debonding is a progressive process in which the debonding area increases in proportion to external loading, a progressive interfacial debonding model is proposed. In this model, the relation between external loading and the debonding area is established using a normal-stress-controlled debonding criterion. Furthermore, an equivalent orthotropic stiffness tensor is constructed based on the debonding areas. This model is able to treat composites with randomly distributed spherical particles. The double-inclusion theory is recalled to model the particle cracking problems. Cracks inside particles are treated as penny-shaped particles with zero stiffness. The disturbed stress field due to the existence of a double inclusion is expressed explicitly. Finally, a thermal mismatch eigenstrain is introduced to simulate the inconsistent expansions of the matrix and the particles due to the difference in their coefficients of thermal expansion. Micromechanical stress and strain fields are calculated for the combination of applied external loads and the prescribed thermal mismatch eigenstrains. For all of the above models, ensemble-volume averaging procedures are employed to derive the effective yield function of the composites. Numerical simulations are performed to analyze the effects of various parameters, and several good agreements between the model's predictions and experimental results are obtained. It should be mentioned that all of the expressions in the frameworks are derived explicitly, and these analytical results are easy to adopt in other related investigations.

  13. Evaluation of HIFU-induced lesion region using temperature threshold and equivalent thermal dose methods

    NASA Astrophysics Data System (ADS)

    Chang, Shihui; Xue, Fanfan; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi

    2017-03-01

    Usually, numerical simulation is used to predict the acoustic field and temperature distribution of high intensity focused ultrasound (HIFU). In this paper, the simulated lesion volumes obtained with a temperature threshold (TRT) of 60 °C and an equivalent thermal dose (ETD) of 240 min were compared with experimental results obtained from in vitro animal tissue experiments. In the simulation, the calculation model was established according to the in vitro tissue experiment, and the finite-difference time-domain (FDTD) method was used to calculate the acoustic field and temperature distribution in bovine liver with the Westervelt formula and the Pennes bio-heat transfer equation; the non-linear characteristics of the ultrasound were considered. In the experiment, fresh bovine liver was exposed for 8 s, 10 s, and 12 s under different power conditions (150 W, 170 W, 190 W, 210 W), and each exposure was repeated 6 times under the same dose. After the exposures, the liver was sliced and photographed every 0.2 mm, and the area of the lesion region in every photo was calculated. Each area was then multiplied by 0.2 mm and the products were summed to approximate the volume of the lesion region. The comparison shows that the lesion volume calculated with TRT 60 °C in the simulation was much closer to the lesion volume obtained in the experiment; the volume of the region above 60 °C was larger than the experimental results, but the volume deviation did not exceed 10%. The volume of the lesion region calculated with ETD 240 min was larger than that calculated with TRT 60 °C in the simulation, and the volume deviations ranged from 4.9% to 23.7%.
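
    The two lesion criteria compared above can be stated compactly: a voxel is counted as lesioned either when its peak temperature exceeds 60 °C (TRT) or when its cumulative equivalent minutes at 43 °C (the CEM43 thermal dose) exceed 240 min (ETD). The sketch below uses the usual CEM43 convention (R = 0.5 above 43 °C, 0.25 below) and a hypothetical simulated temperature history; it is an illustration of the criteria, not the study's simulation code.

      import numpy as np

      def cem43(temps_c, dt_s):
          """Cumulative equivalent minutes at 43 C for one voxel's temperature history."""
          r = np.where(temps_c >= 43.0, 0.5, 0.25)
          return np.sum(r ** (43.0 - temps_c)) * dt_s / 60.0

      # Hypothetical temperature history (degC) at one voxel, sampled every 0.1 s:
      # heating to ~65 degC over 8 s, then partial cooling over 4 s.
      dt = 0.1
      t = np.arange(0.0, 12.0, dt)
      temps = 37.0 + 28.0 * np.clip(t / 8.0, 0.0, 1.0) - 20.0 * np.clip((t - 8.0) / 4.0, 0.0, 1.0)

      lesion_by_trt = temps.max() >= 60.0           # temperature-threshold criterion
      lesion_by_etd = cem43(temps, dt) >= 240.0     # equivalent-thermal-dose criterion
      print(lesion_by_trt, lesion_by_etd, f"CEM43 = {cem43(temps, dt):.1f} min")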

  14. Relating Satellite Gravimetry Data To Global Snow Water Equivalent Data

    NASA Astrophysics Data System (ADS)

    Baumann, Sabine

    2017-04-01

    In April 2002, the GRACE gravimetric satellites were launched. They measure Earth's gravity via a precise microwave system and thereby assess changes in Earth's mass. The main contributions to these changes originate from hydrological compartments such as surface water, groundwater, soil moisture, or snow water equivalent (SWE). The benefit of GRACE data is that they provide a directly measured signal: the data are neither calibrated with other data (as is done, e.g., in models) nor unusable under particular surface conditions (as, e.g., AMSR-E is for thick and wet snow surfaces). GRACE data show changes in total water storage (TWS) but cannot distinguish between different sources; therefore, other data, models, and methods are necessary to extract the different compartments. Because of the spatial resolution of 200,000 km2 and an accuracy of 2.5 cm w.e., GRACE is mostly compared with other global products. In this study, the hydrological model WGHM (TWS and SWE), the land surface model GLDAS (TWS and SWE), and the passive microwave sensor AMSR-E (SWE) are compared with the GRACE data. All data have to be pre-processed in the same way as the GRACE data to be comparable. A correlation analysis was performed between the different products, assuming that changes in TWS can be linked to changes in SWE if either SWE is the dominant compartment of TWS or SWE changes proportionally with TWS. To focus on the SWE product, a second correlation was performed only for the winter season. The spatial extent was focused on the large permafrost areas in North America and Russia. By this method, those areas were detected in which GRACE data can be integrated for SWE data assessment to, for example, improve the models.

  15. Unstable optical resonator loss calculations using the prony method.

    PubMed

    Siegman, A E; Miller, H Y

    1970-12-01

    The eigenvalues for all the significant low-order resonant modes of an unstable optical resonator with circular mirrors are computed using an eigenvalue method called the Prony method. A general equivalence relation is also given, by means of which one can obtain the design parameters for a single-ended unstable resonator of the type usually employed in practical lasers, from the calculated or tabulated values for an equivalent symmetric or double-ended unstable resonator.
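
    In this setting the Prony method extracts several mode eigenvalues at once from a sequence of successive round-trip (Fox-Li) field iterates: projections x_n of the iterates onto a test vector behave approximately as a sum of geometric terms sum_k c_k * gamma_k^n, and the gamma_k are recovered as roots of a linear-prediction polynomial fitted to the sequence. The sketch below is a generic numerical Prony solver (not the authors' code); the sequence x and the "mode eigenvalues" in the toy check are hypothetical.

      import numpy as np

      def prony_eigenvalues(x, n_modes):
          """Estimate n_modes geometric ratios gamma_k from samples
          x[n] ~ sum_k c_k * gamma_k**n via linear prediction (Prony's method)."""
          x = np.asarray(x, dtype=complex)
          # Fit x[n] = -(a_1 x[n-1] + ... + a_p x[n-p]) in the least-squares sense.
          rows = [x[i:i + n_modes][::-1] for i in range(len(x) - n_modes)]
          A = np.vstack(rows)
          b = x[n_modes:]
          a, *_ = np.linalg.lstsq(A, -b, rcond=None)
          # Eigenvalue estimates are the roots of z^p + a_1 z^(p-1) + ... + a_p.
          return np.roots(np.concatenate(([1.0], a)))

      # Toy check with known (hypothetical) mode eigenvalues.
      gammas = np.array([0.9 * np.exp(0.3j), 0.5 * np.exp(-1.0j), 0.2])
      n = np.arange(40)
      x = sum(c * g ** n for c, g in zip([1.0, 0.7, 0.4], gammas))
      print(np.sort_complex(prony_eigenvalues(x, 3)))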

  16. A method of Modelling and Simulating the Back-to-Back Modular Multilevel Converter HVDC Transmission System

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao

    2017-09-01

    This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. An equivalent network is used to simulate the dynamic power system. Moreover, to account for the performance of the converter station, a model of its core components provides the basis of the simulation. The proposed method is applied to an equivalent real power system.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.

    As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because traditional mechanisms for code improvement, including envelope, mechanical, and lighting, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufman 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft2 of roof area or 0.5 W/ft2 of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balance of energy measures when setting the code requirement levels. As the stringency of the code continues to increase, however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs. One alternate compliance path includes purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent with REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods. The ways in which the existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence.
The price of RECs has shown high variation by market and over time, which makes it critical for the code language of a renewable energy generation requirement to be updated frequently, or the requirement will not remain price-equivalent over time. Section 4 of the report provides a maximum-case estimate of the impact on the PV market and the REC market based on the requirement levels proposed by Kaufmann et al. If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement level of 4 W/ft2 of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
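
    For orientation, the proposed 4 W/ft2-of-roof requirement translates directly into an installed capacity and a rough annual generation figure once a roof area and a capacity factor are assumed. The back-of-the-envelope sketch below uses purely hypothetical numbers (a 20,000 ft2 roof and a 15% capacity factor), not values from the report.

      # Hypothetical example of the 4 W/ft2-of-roof-area requirement.
      roof_area_ft2 = 20_000          # assumed commercial roof area
      power_density_w_per_ft2 = 4.0   # proposed code requirement level
      capacity_factor = 0.15          # assumed site-dependent PV capacity factor

      installed_kw = roof_area_ft2 * power_density_w_per_ft2 / 1000.0
      annual_kwh = installed_kw * capacity_factor * 8760.0
      print(f"installed capacity: {installed_kw:.0f} kW, ~{annual_kwh:,.0f} kWh/yr")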

  18. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
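
    Equivalence testing of this kind is commonly operationalized as two one-sided tests (TOST): the group difference is declared practically unimportant only if it is significantly greater than -delta and significantly less than +delta for a pre-specified margin delta. The sketch below is a minimal two-sample version with hypothetical data and margin; it is not the simulation code from the study.

      import numpy as np
      from scipy import stats

      def tost_two_sample(x, y, margin):
          """Two one-sided t-tests for equivalence of means within +/- margin.
          Returns the larger of the two one-sided p-values (reject if small)."""
          nx, ny = len(x), len(y)
          diff = np.mean(x) - np.mean(y)
          sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
          se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
          df = nx + ny - 2
          p_lower = 1.0 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
          p_upper = stats.t.cdf((diff - margin) / se, df)         # H0: diff >= +margin
          return max(p_lower, p_upper)

      rng = np.random.default_rng(1)
      x = rng.normal(50.0, 10.0, 60)     # hypothetical group scores
      y = rng.normal(51.0, 10.0, 60)
      print(f"TOST p-value (margin = 5): {tost_two_sample(x, y, 5.0):.4f}")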

  19. On the Equivalence of the Summation and Transfer-Matrix Methods in Wave Propagation through Multilayers of Lossless and Lossy Media

    ERIC Educational Resources Information Center

    Pereyra, Pedro; Robledo-Martinez, Arturo

    2009-01-01

    We explicitly show that the well-known transmission and reflection amplitudes of planar slabs, obtained via an algebraic summation of Fresnel amplitudes, are completely equivalent to those obtained from transfer matrices in the scattering approach. This equivalence makes the finite periodic systems theory a powerful alternative to the cumbersome…

  20. Large-area, low-noise, high-speed, photodiode-based fluorescence detectors with fast overdrive recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bickman, S.; DeMille, D.

    2005-11-15

    Two large-area, low-noise, high-speed fluorescence detectors have been built. One detector consists of a photodiode with an area of 28 mm x 28 mm and a low-noise transimpedance amplifier. This detector has an input light-equivalent spectral noise density of less than 3 pW/√Hz, can recover from a large scattered light pulse within 10 μs, and has a bandwidth of at least 900 kHz. The second detector consists of a 16-mm-diameter avalanche photodiode and a low-noise transimpedance amplifier. This detector has an input light-equivalent spectral noise density of 0.08 pW/√Hz, can also recover from a large scattered light pulse within 10 μs, and has a bandwidth of 1 MHz.

  1. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore, the TiDG concentration was determined by subtracting the TOA concentration as measured by semi-volatile organic analysis (SVOA) from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA-subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition and that this become the primary method for quantifying the TiDG.

  2. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  3. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  4. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... manufacturing parts that meet the following criteria: 1,000 or more reinforcements or the glass equivalent of 1... resin and wet-out area(s), v. convey resin collected from drip-off pans or other devices to reservoirs...

  5. Systemic effects of H2S inhalation at human equivalent dose of pathologic halitosis on rats.

    PubMed

    Yalçın Yeler, Defne; Aydin, Murat; Gül, Mehmet; Hocaoğlu, Turgay; Özdemir, Hakan; Koraltan, Melike

    2017-10-01

    Halitosis is composed of hundreds of toxic gases, and it is still not clear whether halitosis gases self-inhaled by halitosis patients cause side effects. The aim of the study was to investigate the effect of H2S inhalation at a low concentration (the human equivalent dose of pathologic halitosis) on rats. The threshold level of pathologic halitosis perceived by humans, 250 ppb of H2S, was converted to the rat equivalent concentration (4.15 ppm). In the experimental group, 8 rats were exposed to H2S via continuous inhalation, while the control rats were not. After 50 days, blood parameters were measured, and tissue samples were obtained from the brain, kidney, and liver and examined histopathologically to determine any systemic effect. While aspartate transaminase, creatine kinase-MB, and lactate dehydrogenase levels were found to be significantly elevated, carbon dioxide and alkaline phosphatase were decreased in the experimental rats. Other blood parameters were not changed significantly. The experimental rats lost weight and became anxious. Histopathological examination of the experimental rats showed mononuclear inflammatory cell invasion in the portal areas, nuclear glycogen vacuoles in the parenchymal area, single-cell necrosis in a few foci, clear expansion of the central hepatic vein and sinusoids, hyperplasia of Kupffer cells, and potential fibrous tissue expansion in the portal areas. However, no considerable histologic damage was observed in the brain and kidney specimens. It can be concluded that H2S inhalation at the level equivalent to that produced by pathologic halitosis in humans may lead to systemic effects, particularly heart or liver damage, in rats.

  6. The equivalence of computerized and paper-and-pencil psychological instruments: implications for measures of negative affect.

    PubMed

    Schulenberg, S E; Yutrzenka, B A

    1999-05-01

    The use of computerized psychological assessment is a growing practice among contemporary mental health professionals. Many popular and frequently used paper-and-pencil instruments have been adapted into computerized versions. Although equivalence for many instruments has been evaluated and supported, this issue is far from resolved. This literature review deals with recent research findings that suggest that computer aversion negatively impacts computerized assessment, particularly as it relates to measures of negative affect. There is a dearth of equivalence studies that take into account computer aversion's potential impact on the measurement of negative affect. Recommendations are offered for future research in this area.

  7. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
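
    The procedure amounts to cutting one long record into equal, non-overlapping segments, treating them as an equivalent ensemble, and checking that averages taken across segments at a fixed within-record time do not drift with that time. The following is a compact numerical sketch of the mean-value check only (the variance tests described in the paper are analogous); the signal is hypothetical white noise, not turbulence data from the study.

      import numpy as np

      def equivalent_ensemble_means(signal, n_records):
          """Segment one long record into n_records equal sample records and
          return the ensemble mean as a function of time within a record."""
          usable = (len(signal) // n_records) * n_records
          ensemble = signal[:usable].reshape(n_records, -1)   # rows = sample records
          return ensemble.mean(axis=0)                        # average across records

      rng = np.random.default_rng(2)
      x = rng.standard_normal(200_000)                 # hypothetical stationary signal
      ens_mean = equivalent_ensemble_means(x, n_records=50)

      # For a weakly stationary process the ensemble mean should show no trend
      # with within-record time; compare its spread with the overall time average.
      print(f"time average = {x.mean():+.4f}, "
            f"ensemble-mean spread (std) = {ens_mean.std():.4f}")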

  8. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in quality control (QC). The purpose of this study was to evaluate whether the zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to those obtained with the standard method, consisting of hot-plate digestion using concentrated HCl. The commercially marketed Zn and Cu sources in Brazil consisted of oxide, carbonate, and sulfate fertilizers and by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% and 10 to 45%, respectively, for Zn and Cu. The Zn and Cu contents refer to the variation of the elements found in the different sources evaluated with the concentrated HCl method, as shown in Table 1. A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. In terms of equivalence, the 10% HCl extraction was equivalent to the standard method for Zn, and the results of the USEPA 3051a and 10% HCl methods indicated that these methods were equivalent for Cu. Therefore, these methods can be considered viable alternatives to the standard method of determination for Cu and Zn in mineral fertilizers and industrial by-products in future research for their complete validation.
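
    The protocol's first criterion is a joint test that the regression of the alternative-method results on the standard-method results has intercept 0 and slope 1. The sketch below is a generic joint F-test of that hypothesis (a general-linear-hypothesis version written in the spirit of the Graybill-modified test, not necessarily the exact statistic used in the study); the paired determinations are hypothetical.

      import numpy as np
      from scipy import stats

      def joint_intercept_slope_test(x_std, y_alt):
          """F-test of H0: intercept = 0 and slope = 1 in y_alt = b0 + b1 * x_std."""
          X = np.column_stack([np.ones_like(x_std), x_std])
          beta, *_ = np.linalg.lstsq(X, y_alt, rcond=None)
          n = len(y_alt)
          resid = y_alt - X @ beta
          s2 = resid @ resid / (n - 2)                 # residual mean square
          d = beta - np.array([0.0, 1.0])              # departure from intercept 0, slope 1
          F = d @ (X.T @ X) @ d / (2.0 * s2)
          return F, 1.0 - stats.f.cdf(F, 2, n - 2)

      # Hypothetical paired determinations (%) by the standard and alternative methods.
      rng = np.random.default_rng(3)
      zn_std = rng.uniform(15.0, 82.0, 30)
      zn_alt = zn_std + rng.normal(0.0, 1.0, 30)       # nearly identical recoveries
      print("F = %.2f, p = %.3f" % joint_intercept_slope_test(zn_std, zn_alt))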

  9. Light emitting ceramic device and method for fabricating the same

    DOEpatents

    Valentine, Paul; Edwards, Doreen D.; Walker Jr., William John; Slack, Lyle H.; Brown, Wayne Douglas; Osborne, Cathy; Norton, Michael; Begley, Richard

    2004-11-30

    A light-emitting ceramic-based panel, hereafter termed an "electroceramescent" panel, and alternative methods of fabrication for the same are claimed. The electroceramescent panel is formed on a substrate that provides mechanical support and also serves as the base electrode for the device. One or more semiconductive ceramic layers directly overlay the substrate, and electrical conductivity and ionic diffusion are controlled. Light-emitting regions overlay the semiconductive ceramic layers, and said regions consist sequentially of a ceramic insulation layer and an electroluminescent layer composed of doped phosphors or the equivalent. One or more conductive top electrode layers having optically transmissive areas overlay the light-emitting regions, and a multi-layered top barrier cover comprising one or more optically transmissive, non-combustible insulation layers overlays said top electrode regions.

  10. Measurement equivalence of the German Job Satisfaction Survey used in a multinational organization: implications of Schwartz's culture model.

    PubMed

    Liu, Cong; Borg, Ingwer; Spector, Paul E

    2004-12-01

    The authors tested measurement equivalence of the German Job Satisfaction Survey (GJSS) using structural equation modeling methodology. Employees from 18 countries and areas provided data on 5 job satisfaction facets. The effects of language and culture on measurement equivalence were examined. A cultural distance hypothesis, based on S. H. Schwartz's (1999) theory, was tested with 4 cultural groups: West Europe, English speaking, Latin America, and Far East. Findings indicated the robustness of the GJSS in terms of measurement equivalence across countries. The survey maintained high transportability across countries speaking the same language and countries sharing similar cultural backgrounds. Consistent with Schwartz's model, a cultural distance effect on scale transportability among scales used in maximally dissimilar cultures was detected. Scales used in the West Europe group showed greater equivalence to scales used in the English-speaking and Latin America groups than scales used in the Far East group. 2004 APA, all rights reserved

  11. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    PubMed

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more-compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
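
    The area-under-the-curve measure used here follows the standard normalization: delays (or odds against) and subjective values are scaled to [0, 1], and the area under the resulting discounting curve is computed by the trapezoidal rule, giving a single theory-free index per participant and condition. The sketch below uses hypothetical indifference points and adopts the common convention of prepending the zero-delay point at full value, which is an assumption rather than a detail stated in the abstract.

      import numpy as np

      def discounting_auc(delays, indiff_values, amount):
          """Normalized trapezoidal area under the discounting curve."""
          x = np.asarray(delays, dtype=float) / max(delays)     # delays scaled to [0, 1]
          y = np.asarray(indiff_values, dtype=float) / amount   # values scaled to [0, 1]
          x = np.concatenate(([0.0], x))
          y = np.concatenate(([1.0], y))                        # immediate reward at full value
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      # Hypothetical indifference points for a $50 delayed reward.
      delays_days = [1, 7, 30, 90, 180, 365]
      indifference = [48, 45, 38, 30, 24, 18]
      print(f"AUC = {discounting_auc(delays_days, indifference, 50.0):.3f}")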

  12. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
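
    The abstract does not spell out the smoother equations, so the following is only a generic sketch of the particle batch smoother idea (each prior model trajectory is weighted by its likelihood against the whole batch of fSCA observations under an assumed Gaussian error model); all array names, sizes, and error values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N prior model trajectories ("particles") of fractional
# snow-covered area over T observation times, plus the SWE trajectory each
# particle implies. In practice these would come from a snow model ensemble.
N, T = 100, 12
fsca_particles = rng.uniform(0.0, 1.0, size=(N, T))   # modeled fSCA per particle
swe_particles = rng.uniform(0.0, 1.5, size=(N, T))    # modeled SWE (m) per particle
fsca_obs = rng.uniform(0.0, 1.0, size=T)              # satellite fSCA observations
obs_err = 0.1                                         # assumed observation std dev

# Particle batch smoother step: weight each whole trajectory by its likelihood
# against the full batch of observations, then normalize the weights.
resid = fsca_particles - fsca_obs                      # shape (N, T)
log_w = -0.5 * np.sum((resid / obs_err) ** 2, axis=1)
log_w -= log_w.max()                                   # guard against underflow
w = np.exp(log_w)
w /= w.sum()

# Posterior SWE estimate is the weighted ensemble mean at each time.
swe_posterior = w @ swe_particles
print(swe_posterior.shape)   # (T,)
```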

  13. Radiographic absorptiometry method in measurement of localized alveolar bone density changes.

    PubMed

    Kuhl, E D; Nummikoski, P V

    2000-03-01

    The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.

  14. Irrigated areas of India derived using MODIS 500 m time series for the years 2001-2003

    USGS Publications Warehouse

    Dheeravath, V.; Thenkabail, P.S.; Chandrakantha, G.; Noojipady, P.; Reddy, G.P.O.; Biradar, C.M.; Gumma, M.K.; Velpuri, M.

    2010-01-01

    The overarching goal of this research was to develop methods and protocols for mapping irrigated areas using a Moderate Resolution Imaging Spectroradiometer (MODIS) 500 m time series, to generate irrigated area statistics, and to compare these with ground- and census-based statistics. The primary mega-file data-cube (MFDC), comparable to a hyper-spectral data cube, used in this study consisted of 952 bands of data in a single file that were derived from MODIS 500 m, 7-band reflectance data acquired every 8-days during 2001-2003. The methods consisted of (a) segmenting the 952-band MFDC based not only on elevation-precipitation-temperature zones but on major and minor irrigated command area boundaries obtained from India's Central Board of Irrigation and Power (CBIP), (b) developing a large ideal spectral data bank (ISDB) of irrigated areas for India, (c) adopting quantitative spectral matching techniques (SMTs) such as the spectral correlation similarity (SCS) R2-value, (d) establishing a comprehensive set of protocols for class identification and labeling, and (e) comparing the results with the National Census data of India and field-plot data gathered during this project for determining accuracies, uncertainties and errors. The study produced irrigated area maps and statistics of India at the national and the subnational (e.g., state, district) levels based on MODIS data from 2001-2003. The Total Area Available for Irrigation (TAAI) and Annualized Irrigated Areas (AIAs) were 113 and 147 million hectares (MHa), respectively. The TAAI does not consider the intensity of irrigation, and its nearest equivalent is the net irrigated areas in the Indian National Statistics. The AIA considers intensity of irrigation and is the equivalent of "irrigated potential utilized (IPU)" reported by India's Ministry of Water Resources (MoWR). The field-plot data collected during this project showed that the accuracy of TAAI classes was 88% with a 12% error of omission and 32% of error of commission. Comparisons between the AIA and IPU produced an R2-value of 0.84. However, AIA was consistently higher than IPU. The causes for differences were both in traditional approaches and remote sensing. The causes of uncertainties unique to traditional approaches were (a) inadequate accounting of minor irrigation (groundwater, small reservoirs and tanks), (b) unwillingness to share irrigated area statistics by the individual Indian states because of their stakes, (c) absence of comprehensive statistical analyses of reported data, and (d) subjectivity involved in observation-based data collection process. The causes of uncertainties unique to remote sensing approaches were (a) irrigated area fraction estimate and related sub-pixel area computations and (b) resolution of the imagery. The causes of uncertainties common in both traditional and remote sensing approaches were definitions and methodological issues. ?? 2009 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
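
    As a rough illustration of the spectral matching step, the snippet below computes a squared Pearson correlation between a target class spectrum and an ideal spectrum, which is one common reading of the spectral correlation similarity (SCS) R2-value; the synthetic time series stand in for the 8-day MODIS composites and are not real data.

```python
import numpy as np

def spectral_correlation_similarity(target_spectrum, ideal_spectrum):
    """Squared Pearson correlation (R^2) between two spectral time series."""
    t = np.asarray(target_spectrum, dtype=float)
    i = np.asarray(ideal_spectrum, dtype=float)
    r = np.corrcoef(t, i)[0, 1]
    return r ** 2

# Illustrative NDVI-like annual time series (46 eight-day composites).
time = np.linspace(0, 2 * np.pi, 46)
ideal = 0.5 + 0.3 * np.sin(time)                 # ideal irrigated-area signature
target = ideal + np.random.default_rng(1).normal(0, 0.05, 46)
print(round(spectral_correlation_similarity(target, ideal), 3))
```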

  15. Measurement Properties of DIBELS Oral Reading Fluency in Grade 2: Implications for Equating Studies

    ERIC Educational Resources Information Center

    Stoolmiller, Michael; Biancarosa, Gina; Fien, Hank

    2013-01-01

    Lack of psychometric equivalence of oral reading fluency (ORF) passages used within a grade for screening and progress monitoring has recently become an issue with calls for the use of equating methods to ensure equivalence. To investigate the nature of the nonequivalence and to guide the choice of equating method to correct for nonequivalence,…

  16. Compilation of Pilot Cognitive Ability Norms

    DTIC Science & Technology

    2011-12-01

    2.1.1 Change in Performance Method. The first method is a pretest, posttest paradigm. It is the most reliable but requires prior, premorbid...elements of the person's own performance to make conclusions regarding cognitive change. A common approach uses the effects of aging on various types of...Percentile Equivalence for IQ Scores on the MAB-II; Percentile Equivalence for Verbal Subtest

  17. 5 CFR 591.219 - How does OPM compute shelter price indexes?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... estimates in hedonic regressions (a type of multiple regression) to compute for each COLA survey area the... and rental equivalence prices and/or estimates, OPM obtains for each unit surveyed information about... survey area and the Washington, DC, area. [67 FR 22340, May 3, 2002, as amended at 69 FR 59763, Oct. 6...

  18. [Research on NIR equivalent spectral measurement].

    PubMed

    Wang, Zhi-Hong; Liu, Jie; Sun, Yu-Yang; Teng, Fei; Lin, Jun

    2013-04-01

    When the diffuse reflectance spectra of low-reflectivity samples or the transmittance spectra of low-transmissivity samples are measured with a portable near-infrared (NIR) spectrometer, spectrometer noise means that the smaller the reflectance or transmittance of the sample, the lower the SNR of its spectrum. Even after denoising, such spectra may not meet the requirements of NIR analysis. An equivalent spectral measurement method was therefore investigated. Starting from the intensity of the signal reflected or transmitted by the sample under traditional measurement conditions, the light source current of the spectrometer was increased so that the signal from the measured sample increased, while the reflected or transmitted light from the measurement reference was reduced to keep the reference signal from going over range. The equivalent spectrum of the sample was then calculated so as to be consistent with the spectrum measured by the traditional method, thereby improving the SNR of the NIR spectrum. Theoretical analysis and experiments show that if the light signal of the spectrometer is increased appropriately, according to the reflected or transmitted signal of the low-reflectivity or low-transmissivity sample, the resulting equivalent spectrum is the same as the spectrum measured by the traditional method and its SNR is improved.
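
    The sketch below is only one plausible reading of the gain/attenuation bookkeeping described above, under assumptions not stated in the abstract: the detector responds linearly, the source boost applies equally to both beams, and the reference attenuation factor is known, so the source boost cancels in the ratio.

```python
# Hypothetical "equivalent spectrum" bookkeeping (assumptions, not the paper's
# exact procedure): sample_counts ~ gain * R * I0 and
# reference_counts ~ gain * attenuation * I0, so multiplying the measured
# ratio by the attenuation factor recovers the traditional reflectance R.
import numpy as np

def equivalent_reflectance(sample_counts, reference_counts, attenuation):
    """Reflectance a traditional (unboosted, unattenuated) measurement would give."""
    return (np.asarray(sample_counts) / np.asarray(reference_counts)) * attenuation

sample = np.array([120.0, 150.0, 180.0, 160.0, 130.0])          # boosted sample counts
reference = np.array([4000.0, 4100.0, 4200.0, 4150.0, 4050.0])  # attenuated reference counts
print(equivalent_reflectance(sample, reference, attenuation=0.05))
```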

  19. Static and Vibration Analyses of General Wing Structures Using Equivalent Plate Models

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    1999-01-01

    An efficient method, using equivalent plate model, is developed for studying the static and vibration analyses of general built-up wing structures composed of skins, spars, and ribs. The model includes the transverse shear effects by treating the built-up wing as a plate following the Reissner-Mindlin theory, the so-called First-order Shear Deformation Theory (FSDT). The Ritz method is used with the Legendre polynomials being employed as the trial functions. This is in contrast to previous equivalent plate model methods which have used simple polynomials, known to be prone to numerical ill-conditioning, as the trial functions. The present developments are evaluated by comparing the results with those obtained using MSC/NASTRAN, for a set of examples. These examples are: (i) free-vibration analysis of a clamped trapezoidal plate with (a) uniform thickness, and (b) non-uniform thickness varying as an airfoil, (ii) free-vibration and static analyses (including skin stress distribution) of a general built-up wing, and (iii) free-vibration and static analyses of a swept-back box wing. The results obtained by the present equivalent plate model are in good agreement with those obtained by the finite element method.
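
    The following hypothetical snippet illustrates the numerical motivation for Legendre trial functions mentioned above: on [-1, 1] the Gram matrix of simple monomials is severely ill-conditioned, whereas the Legendre Gram matrix is diagonal. It is a conditioning demonstration only, not a reproduction of the wing model.

```python
import numpy as np

n = 12  # number of trial functions

# Gram matrix of monomials x^i on [-1, 1]: entries are the integrals of x^(i+j),
# i.e. 2/(i+j+1) when i+j is even and 0 when it is odd.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
gram_mono = np.where((i + j) % 2 == 0, 2.0 / (i + j + 1), 0.0)

# Gram matrix of Legendre polynomials P_i on [-1, 1]: diagonal with 2/(2i+1).
gram_leg = np.diag(2.0 / (2.0 * np.arange(n) + 1.0))

print(f"monomial Gram condition number: {np.linalg.cond(gram_mono):.3e}")
print(f"Legendre Gram condition number: {np.linalg.cond(gram_leg):.3e}")
```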

  20. Using indirect covariance spectra to identify artifact responses in unsymmetrical indirect covariance calculated spectra.

    PubMed

    Martin, Gary E; Hilton, Bruce D; Blinov, Kirill A; Williams, Antony J

    2008-02-01

    Several groups of authors have reported studies in the areas of indirect and unsymmetrical indirect covariance NMR processing methods. Efforts have recently focused on the use of unsymmetrical indirect covariance processing methods to combine various discrete two-dimensional NMR spectra to afford the equivalent of the much less sensitive hyphenated 2D NMR experiments, for example indirect covariance (icv)-heteronuclear single quantum coherence (HSQC)-COSY and icv-HSQC-nuclear Overhauser effect spectroscopy (NOESY). Alternatively, unsymmetrical indirect covariance processing methods can be used to combine multiple heteronuclear 2D spectra to afford icv-13C-15N HSQC-HMBC correlation spectra. We now report the use of responses contained in indirect covariance processed HSQC spectra as a means for the identification of artifacts in both indirect covariance and unsymmetrical indirect covariance processed 2D NMR spectra. Copyright (c) 2007 John Wiley & Sons, Ltd.
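
    At its core, unsymmetrical indirect covariance processing combines two 2D spectra that share a common proton dimension through a matrix product; the toy example below (random matrices standing in for an HSQC and an HMBC) shows only this shape bookkeeping, not the artifact-identification procedure reported here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative spectra sharing a common 1H dimension of 256 points:
# A: 1H-13C HSQC (256 x 128), B: 1H-15N HMBC (256 x 64).
hsqc_hc = rng.random((256, 128))
hmbc_hn = rng.random((256, 64))

# Unsymmetrical indirect covariance: combine through the shared 1H axis.
# The result is a 13C-15N correlation map of shape (128, 64).
c13_n15 = hsqc_hc.T @ hmbc_hn
print(c13_n15.shape)
```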

  1. Thermal quantum time-correlation functions from classical-like dynamics

    NASA Astrophysics Data System (ADS)

    Hele, Timothy J. H.

    2017-07-01

    Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.

  2. Studying the effect of the Semipalatinsk Test Site on radionuclide and elemental composition of water objects in the Irtysh River.

    PubMed

    Solodukhin, V; Aidarkhanov, A; Lukashenko, S; Gluchshenko, V; Poznyak, V; Lyahova, O

    2015-06-01

    The results of the field and laboratory studies of radiation and environmental state at the specific area of Irtysh River adjacent to the Semipalatinsk Test Site are provided. It was found that the radiation situation in this area is normal: equivalent dose of γ-radiation = (0.11-0.13) µSv h(-1). Determination of radionuclide composition of soil, bottom sediment and water samples was performed by the methods of instrumental γ-spectrometry, radiochemical analysis and the liquid scintillation β-spectrometry. It was found that concentrations of the studied natural and artificial radionuclides in these objects are very low; no contamination with radionuclides was detected in this segment of Irtysh River. The article provides the results of elemental composition determination for samples of soil and bottom sediment (by X-ray fluorescence method) and water samples (by inductively coupled plasma mass spectrometry method). It is shown that the content of some elements (Li, Be, B, V, Cu, Sr, Mo) in the water of Irtysh River increases downstream. The additional studies are required to explain this peculiarity. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. The neuroanatomy of general intelligence: sex matters.

    PubMed

    Haier, Richard J; Jung, Rex E; Yeo, Ronald A; Head, Kevin; Alkire, Michael T

    2005-03-01

    We examined the relationship between structural brain variation and general intelligence using voxel-based morphometric analysis of MRI data in men and women with equivalent IQ scores. Compared to men, women show more white matter and fewer gray matter areas related to intelligence. In men IQ/gray matter correlations are strongest in frontal and parietal lobes (BA 8, 9, 39, 40), whereas the strongest correlations in women are in the frontal lobe (BA10) along with Broca's area. Men and women apparently achieve similar IQ results with different brain regions, suggesting that there is no singular underlying neuroanatomical structure to general intelligence and that different types of brain designs may manifest equivalent intellectual performance.

  4. SU-F-T-408: On the Determination of Equivalent Squares for Rectangular Small MV Photon Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauer, OA; Wegener, S; Exner, F

    Purpose: It is common practice to tabulate dosimetric data like output factors, scatter factors and detector signal correction factors for a set of square fields. In order to get the data for an arbitrary field, it is mapped to an equivalent square having the same scatter as the field of interest. For rectangular fields, both tabulated data and empiric formulas exist. We tested the applicability of such rules for very small fields. Methods: Using the Monte-Carlo method (EGSnrc-doseRZ), the dose to a point at 10 cm depth in water was calculated for cylindrical impinging fluence distributions. Radii ranged from 0.5 mm to 11.5 mm with 1 mm ring thickness. Different photon energies were investigated. With these data a matrix was constructed assigning the amount of dose to the field center to each matrix element. By summing up the elements belonging to a certain field, the dose for an arbitrary point at 10 cm depth could be determined. This was done for rectangles up to 21 mm side length. Comparing the dose to square field results, equivalent squares could be assigned. The results were compared to using the geometrical mean and the 4 × area/perimeter rule. Results: For side length differences less than 2 mm, the difference between all methods was in general less than 0.2 mm. For more elongated fields, relevant differences of more than 1 mm, and up to 3 mm for the fields investigated, occurred. The mean square side length calculated from both empiric formulas fitted much better, deviating hardly more than 1 mm and only for the very elongated fields. Conclusion: For small rectangular photon fields deviating only moderately from square, both investigated empiric methods are sufficiently accurate. As the deviations often differ regarding their sign, using the mean improves the accuracy and the useable elongation range. For ratios larger than 2, Monte-Carlo generated data are recommended. SW is funded by Deutsche Forschungsgemeinschaft (SA481/10-1).
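
    Assuming the "4 × area/perimeter" rule referred to above is the familiar s = 4A/P (which reduces to 2ab/(a+b) for an a × b rectangle), the short comparison below shows how it and the geometric-mean rule diverge as a field becomes elongated; the side lengths are arbitrary examples.

```python
# Hypothetical comparison of equivalent-square rules for an a x b rectangular
# field: geometric mean side sqrt(a*b) versus the 4*area/perimeter rule,
# which for a rectangle reduces to 2ab/(a+b).
import math

def eq_square_geometric(a, b):
    return math.sqrt(a * b)

def eq_square_area_perimeter(a, b):
    return 4.0 * (a * b) / (2.0 * (a + b))   # = 2ab/(a+b)

for a, b in [(10, 12), (5, 20), (3, 21)]:    # side lengths in mm
    print(f"{a:>2} x {b:<2} mm -> geometric {eq_square_geometric(a, b):5.2f} mm, "
          f"4A/P {eq_square_area_perimeter(a, b):5.2f} mm")
```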

  5. Automated design of genetic toggle switches with predetermined bistability.

    PubMed

    Chen, Shuobing; Zhang, Haoqian; Shi, Handuo; Ji, Weiyue; Feng, Jingchen; Gong, Yan; Yang, Zhenglin; Ouyang, Qi

    2012-07-20

    Synthetic biology aims to rationally construct biological devices with required functionalities. Methods that automate the design of genetic devices without post-hoc adjustment are therefore highly desired. Here we provide a method to predictably design genetic toggle switches with predetermined bistability. To accomplish this task, a biophysical model that links ribosome binding site (RBS) DNA sequence to toggle switch bistability was first developed by integrating a stochastic model with RBS design method. Then, to parametrize the model, a library of genetic toggle switch mutants was experimentally built, followed by establishing the equivalence between RBS DNA sequences and switch bistability. To test this equivalence, RBS nucleotide sequences for different specified bistabilities were in silico designed and experimentally verified. Results show that the deciphered equivalence is highly predictive for the toggle switch design with predetermined bistability. This method can be generalized to quantitative design of other probabilistic genetic devices in synthetic biology.

  6. Synthesis by extrusion: continuous, large-scale preparation of MOFs using little or no solvent

    PubMed Central

    Crawford, Deborah; Casaban, José; Haydon, Robert; Giri, Nicola; McNally, Tony

    2015-01-01

    Grinding solid reagents under solvent-free or low-solvent conditions (mechanochemistry) is emerging as a general synthetic technique which is an alternative to conventional solvent-intensive methods. However, it is essential to find ways to scale-up this type of synthesis if its promise of cleaner manufacturing is to be realised. Here, we demonstrate the use of twin screw and single screw extruders for the continuous synthesis of various metal complexes, including Ni(salen), Ni(NCS)2(PPh3)2 as well as the commercially important metal organic frameworks (MOFs) Cu3(BTC)2 (HKUST-1), Zn(2-methylimidazolate)2 (ZIF-8, MAF-4) and Al(fumarate)(OH). Notably, Al(fumarate)(OH) has not previously been synthesised mechanochemically. Quantitative conversions occur to give products at kg h⁻¹ rates which, after activation, exhibit surface areas and pore volumes equivalent to those of materials produced by conventional solvent-based methods. Some reactions can be performed under completely solvent-free conditions, whereas others require the addition of small amounts of solvent (typically 3–4 mol equivalents). Continuous neat melt phase synthesis is also successfully demonstrated by both twin screw and single screw extrusion for ZIF-8. The latter technique provided ZIF-8 at 4 kg h⁻¹. The space time yields (STYs) for these methods of up to 144 × 10³ kg per m³ per day are orders of magnitude greater than STYs for other methods of making MOFs. Extrusion methods clearly enable scaling of mechanochemical and melt phase synthesis under solvent-free or low-solvent conditions, and may also be applied in synthesis more generally. PMID:29308131

  7. Analysis of multi-layered films. [determining dye densities by applying a regression analysis to the spectral response of the composite transparency

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Voss, A. W.

    1973-01-01

    Dye densities of multi-layered films are determined by applying a regression analysis to the spectral response of the composite transparency. The amount of dye in each layer is determined by fitting the sum of the individual dye layer densities to the measured dye densities. From this, dye content constants are calculated. Methods of calculating equivalent exposures are discussed. Equivalent exposures are a constant amount of energy over a limited band-width that will give the same dye content constants as the real incident energy. Methods of using these equivalent exposures for analysis of photographic data are presented.
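
    A minimal least-squares sketch of the kind of fit described: the composite spectral density is modeled as a linear combination of per-layer unit dye-density curves, and the fitted coefficients play the role of the dye content constants. The Gaussian dye curves and amounts below are synthetic, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 700, 61)            # nm

# Synthetic unit dye-density curves for cyan, magenta and yellow layers.
def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

dye_curves = np.column_stack([gaussian(650, 60),   # cyan
                              gaussian(540, 50),   # magenta
                              gaussian(440, 40)])  # yellow

true_amounts = np.array([0.8, 1.2, 0.5])           # "dye content constants"
measured = dye_curves @ true_amounts + rng.normal(0, 0.01, wavelengths.size)

# Regression: fit the sum of layer densities to the measured composite density.
amounts, *_ = np.linalg.lstsq(dye_curves, measured, rcond=None)
print(np.round(amounts, 3))
```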

  8. Method for assessing in-service motor efficiency and in-service motor/load efficiency

    DOEpatents

    Kueck, John D.; Otaduy, Pedro J.

    1997-01-01

    A method and apparatus for assessing the efficiency of an in-service motor. The operating characteristics of the in-service motor are remotely measured. The operating characteristics are then applied to an equivalent circuit for electrical motors. Finally, the equivalent circuit is evaluated to determine the performance characteristics of said in-service motor. Based upon the evaluation, an individual is able to determine the rotor speed, power output, efficiency, and torque of the in-service motor. Additionally, an individual is able to confirm the calculations by comparing measured values with values obtained as a result of the motor equivalent circuit evaluation.
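
    The patent abstract does not give the circuit, so the example below evaluates a standard per-phase induction-motor equivalent circuit (stator branch in series with the magnetizing branch in parallel with the slip-dependent rotor branch) purely as an illustration of how speed, torque, output power, and efficiency fall out of such an evaluation; all parameter values are textbook-style assumptions, not taken from the patent.

```python
import math

# Assumed per-phase equivalent-circuit parameters (ohms) for a small 4-pole
# induction motor; illustrative values only.
R1, X1 = 0.5, 1.5        # stator resistance and leakage reactance
R2, X2 = 0.6, 1.5        # rotor resistance and leakage reactance (referred to stator)
Xm = 40.0                # magnetizing reactance
V_phase = 230.0          # phase voltage (V)
f, poles, slip = 50.0, 4, 0.04

# Per-phase impedance: stator branch in series with (magnetizing || rotor) branch.
Z1 = complex(R1, X1)
Z2 = complex(R2 / slip, X2)
Zm = complex(0.0, Xm)
Z = Z1 + (Zm * Z2) / (Zm + Z2)

I1 = V_phase / Z                                   # stator current (complex, A)
E = V_phase - I1 * Z1                              # air-gap voltage
I2 = E / Z2                                        # rotor-branch current
P_in = 3.0 * (V_phase * I1.conjugate()).real       # input power (W)
P_airgap = 3.0 * abs(I2) ** 2 * R2 / slip          # air-gap power (W)
P_mech = (1.0 - slip) * P_airgap                   # gross mechanical power (W)

sync_speed = 120.0 * f / poles                     # synchronous speed (rpm)
rotor_speed = (1.0 - slip) * sync_speed            # rotor speed (rpm)
torque = P_mech / (rotor_speed * 2.0 * math.pi / 60.0)
efficiency = P_mech / P_in                         # ignores core, friction and stray losses

print(f"speed {rotor_speed:.0f} rpm, torque {torque:.1f} N*m, "
      f"output {P_mech / 1000:.2f} kW, efficiency {efficiency:.1%}")
```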

  9. Eddy current measurement of the thickness of top Cu film of the multilayer interconnects in the integrated circuit (IC) manufacturing process

    NASA Astrophysics Data System (ADS)

    Qu, Zilian; Meng, Yonggang; Zhao, Qian

    2015-03-01

    This paper proposes a new eddy current method, named equivalent unit method (EUM), for the thickness measurement of the top copper film of multilayer interconnects in the chemical mechanical polishing (CMP) process, which is an important step in the integrated circuit (IC) manufacturing. The influence of the underneath circuit layers on the eddy current is modeled and treated as an equivalent film thickness. By subtracting this equivalent film component, the accuracy of the thickness measurement of the top copper layer with an eddy current sensor is improved and the absolute error is 3 nm for sampler measurement.

  10. A sampling method to determine insecticide residues on surfaces and its application to food-handling establishments.

    PubMed

    Leidy, R B; Wright, C G; Dupree, H E

    1987-07-01

    Known amounts of acephate, chlorpyrifos, and diazinon were applied to Formica, unfinished plywood, stainless steel, and vinyl tile. Cotton-ball and dental wick materials were dipped in 2-propanol and "swiped" over the treated surface area two times. More acephate was found on the second swipe compared to the first from vinyl tile, similar amounts on both swipes from plywood, and less on the second swipe from Formica and stainless steel. The ratio of chlorpyrifos on Swipe 1 compared to Swipe 2 found with cotton-ball on both Formica and stainless steel surfaces was equivalent (6:1), but a considerable difference was seen when two dental wick swipes were used. Residues of diazinon removed from Formica and stainless steel were equivalent, regardless of the swiping material used. Residues of chlorpyrifos were detected by taking swipes of surfaces in two restaurants and a supermarket up to 6 mo after a prescribed application by a commercial pest control firm. The data show that measurable amounts of chlorpyrifos can be detected on surfaces not treated with the insecticide for at least 6 mo.

  11. [Transcultural adaptation of the Antifat Attitudes Test to Brazilian Portuguese].

    PubMed

    Obara, Angélica Almeida; Alvarenga, Marle Dos Santos

    2018-05-01

    Obese individuals are often blamed for their own condition and are targets of discrimination and prejudice. The scope of this study is to describe the cross-cultural adaptation to Brazilian Portuguese and the validation of the Antifat Attitudes Test - specifically developed for evaluation of negative attitudes toward the obese individual. The scale has 34 statements distributed in three subscales - Social/Character Disparagement (15 items), Physical/Romantic Unattractiveness (10 items) and Weight Control/Blame (9 items). The method involved the translation of the scale; evaluation of the conceptual, operational and item equivalence; evaluation of the semantic equivalence using the paired t test, the Pearson correlation coefficient and the intraclass correlation coefficient (ICC); internal consistency evaluation (Cronbach's alpha) and test-retest reliability (ICC); and confirmatory factor analysis - after application to 340 college students in the health field. The results showed good global internal consistency and reliability (α 0.85; ICC 0.83), and factor analysis showed that the original subscales can be kept in the adaptation; therefore, the Brazilian Portuguese version of the scale is valid and useful in studies exploring negative attitudes toward obese individuals.

  12. Public exposure due to external gamma background radiation in boundary areas of Iran.

    PubMed

    Pooya, S M Hosseini; Dashtipour, M R; Enferadi, A; Orouji, T

    2015-09-01

    A monitoring program in boundary areas of a country is an appropriate way to indicate the level of public exposure. In this research, gamma background radiation was measured using TL dosimeters at 12 boundary areas as well as in the capital city of Iran during the period 2010 to 2011. The measurements were carried out in semi-annual time intervals from January to June and July to December in each year. The maximum average dose equivalent value measured was approximately 70 μSv/month for Tehran city. Also, the average dose values obtained were less than 40 μSv/month for all the cities located at the sea level except that of high level natural radiation area of Ramsar, and more than 55 μSv/month for the higher elevation cities. The public exposure due to ambient gamma dose equivalent in Iran is within the levels reported by UNSCEAR. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A Method for Snow Reanalysis: The Sierra Nevada (USA) Example

    NASA Technical Reports Server (NTRS)

    Girotto, Manuela; Margulis, Steven; Cortes, Gonzalo; Durand, Michael

    2017-01-01

    This work presents a state-of-the-art methodology for constructing a snow water equivalent (SWE) reanalysis. The method comprises two main components: (1) a coupled land surface model and snow depletion curve model, which is used to generate an ensemble of predictions of SWE and snow cover area for a given set of (uncertain) inputs, and (2) a reanalysis step, which updates estimation variables to be consistent with the satellite-observed depletion of the fractional snow cover time series. This method was applied over the Sierra Nevada (USA) based on the assimilation of remotely sensed fractional snow-covered area data from the Landsat 5-8 record (1985-2016). The verified dataset (based on a comparison with over 9000 station years of in situ data) exhibited mean and root-mean-square errors less than 3 and 13 cm, respectively, and correlation greater than 0.95 compared with in situ SWE observations. The method (fully Bayesian), resolution (daily, 90-meter), temporal extent (31 years), and accuracy provide a unique dataset for investigating snow processes. This presentation illustrates how the reanalysis dataset was used to provide a basic accounting of the stored snowpack water in the Sierra Nevada over the last 31 years and ultimately improve real-time streamflow predictions.

  14. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
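
    For reference, a commonly quoted two-dimensional form of the equivalent domain integral (elastic material, crack along x1, no body forces or thermal strains) converts the contour J-integral into an area integral over an annular domain A weighted by a smooth function q that is 1 at the crack tip and 0 on the outer boundary; the paper's exact notation may differ.

```latex
J = \int_{A} \left( \sigma_{ij}\,\frac{\partial u_i}{\partial x_1}
    - W\,\delta_{1j} \right) \frac{\partial q}{\partial x_j}\,\mathrm{d}A,
\qquad
W = \tfrac{1}{2}\,\sigma_{ij}\,\varepsilon_{ij}
```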

  15. On the equivalence of LIST and DIIS methods for convergence acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Alejandro J.; Scuseria, Gustavo E.

    2015-04-28

    Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. Here, we demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay's DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
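
    As a hypothetical refresher on the DIIS side of this equivalence: given a history of error vectors e_i, DIIS picks coefficients c_i minimizing ||sum_i c_i e_i||^2 subject to sum_i c_i = 1, which leads to a small bordered linear system. The snippet below uses synthetic error vectors only and is not tied to any particular electronic structure code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic history of m error vectors (e.g. flattened [F, P] commutators).
m, n = 4, 50
errors = [rng.normal(size=n) * 0.1 ** k for k in range(m)]   # shrinking errors

# Bordered DIIS system:
#   [ B   -1 ] [ c      ]   [  0 ]
#   [ -1'  0 ] [ lambda ] = [ -1 ]
B = np.array([[e_i @ e_j for e_j in errors] for e_i in errors])
A = np.zeros((m + 1, m + 1))
A[:m, :m] = B
A[:m, m] = -1.0
A[m, :m] = -1.0
rhs = np.zeros(m + 1)
rhs[m] = -1.0

c = np.linalg.solve(A, rhs)[:m]       # extrapolation coefficients
print(np.round(c, 3), "sum =", round(c.sum(), 6))
```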

  16. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.

  17. Improved method for retinotopy constrained source estimation of visual evoked responses

    PubMed Central

    Hagler, Donald J.; Dale, Anders M.

    2011-01-01

    Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418

  18. Experimental study and finite element analysis based on equivalent load method for laser ultrasonic measurement of elastic constants.

    PubMed

    Zhan, Yu; Liu, Changsheng; Zhang, Fengpeng; Qiu, Zhaoguo

    2016-07-01

    The laser ultrasonic generation of Rayleigh surface waves and longitudinal waves in an elastic plate is studied by experiment and by the finite element method. In order to eliminate the measurement error and the time delay of the experimental system, linear fitting of the experimental data is applied. The finite element analysis software ABAQUS is used to simulate the propagation of the Rayleigh surface wave and longitudinal wave caused by laser excitation on the surface of a sheet metal sample. The equivalent load method is proposed and applied: the pulsed laser is represented by a surface load whose distribution in the time and space domains follows a Gaussian profile. The relationship between the physical parameters of the laser and the load is established by a correction factor. The numerical solution is in good agreement with the experimental result. Simple and effective numerical and experimental methods for laser ultrasonic measurement of the elastic constants are demonstrated. Copyright © 2016. Published by Elsevier B.V.
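
    A hypothetical sketch of the kind of spatio-temporal load profile described above: a surface load that is Gaussian in both radius and time, standing in for the laser pulse. The amplitude, spot radius, and pulse timing are placeholder values, and the correction factor linking laser parameters to load amplitude is simply folded into q0.

```python
import numpy as np

def equivalent_surface_load(r, t, q0=1.0, r0=0.3e-3, t0=50e-9, tau=10e-9):
    """Surface load (arbitrary units) Gaussian in radius r (m) and time t (s)."""
    return q0 * np.exp(-(r / r0) ** 2) * np.exp(-((t - t0) / tau) ** 2)

# Sample the load on a small space-time grid, e.g. for a finite element traction.
radii = np.linspace(0.0, 1.0e-3, 11)        # m
times = np.linspace(0.0, 200e-9, 21)        # s
load = equivalent_surface_load(radii[:, None], times[None, :])
print(load.shape, float(load.max()))
```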

  19. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

    The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
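
    Read as energy ratios, the two equivalencies described above amount to the following (a paraphrase of the abstract, not a quotation of Cheetah's internals):

```latex
\mathrm{TNT\ equivalency\ (peak\ pressure)} \approx
  \frac{E_{\mathrm{det}}(\mathrm{explosive})}{E_{\mathrm{det}}(\mathrm{TNT})},
\qquad
\mathrm{TNT\ equivalency\ (quasi\text{-}static)} \approx
  \frac{\Delta H_{\mathrm{comb}}(\mathrm{explosive})}{\Delta H_{\mathrm{comb}}(\mathrm{TNT})}
```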

  20. Optical equivalence of isotropic ensembles of ellipsoidal particles in the Rayleigh-Gans-Debye and anomalous diffraction approximations and its consequences

    NASA Astrophysics Data System (ADS)

    Paramonov, L. E.

    2012-05-01

    Light scattering by isotropic ensembles of ellipsoidal particles is considered in the Rayleigh-Gans-Debye approximation. It is proved that randomly oriented ellipsoidal particles are optically equivalent to polydisperse randomly oriented spheroidal particles and polydisperse spherical particles. Density functions of the shape and size distributions for equivalent ensembles of spheroidal and spherical particles are presented. In the anomalous diffraction approximation, equivalent ensembles of particles are shown to also have equal extinction, scattering, and absorption coefficients. Consequences of optical equivalence are considered. The results are illustrated by numerical calculations of the angular dependence of the scattering phase function using the T-matrix method and the Mie theory.

  1. EVALUATION OF A COMMUNITY INTERVENTION FOR PROMOTION OF SAFE MOTHERHOOD IN ERITREA

    PubMed Central

    Turan, Janet Molzan; Tesfagiorghis, Mekonnen; Polan, Mary Lake

    2010-01-01

    Objectives We evaluated a community-based intervention to promote safe motherhood, focusing on knowledge and behaviors that may prevent maternal mortality and birth complications. The intervention aimed to increase women’s birth preparedness, knowledge of birth danger signs, use of antenatal care (ANC) services, and delivery at a health facility. Methods Volunteers from a remote rural community in Northern Eritrea were trained to lead participatory educational sessions on safe motherhood with women and men. The evaluation used a quasi-experimental design (non-equivalent group pretest-posttest) including cross-sectional surveys with postpartum women (pretest N=466, posttest N=378) in the intervention area and in a similar remote rural comparison area. Results Women’s knowledge of birth danger signs increased significantly in the intervention area, but not in the comparison area. There was a significant increase in the proportion of women who had the recommended four or more ANC visits during pregnancy in the intervention area (from 18% to 80%, p<.001); while this proportion did not change significantly in the comparison area (from 53% to 47%, p=0.194). There was a greater increase in delivery in a health facility in the intervention area. Conclusions Participatory sessions led by community volunteers can increase safe motherhood knowledge and encourage use of essential maternity services. PMID:21323845

  2. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

    In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  3. Why were Matrix Mechanics and Wave Mechanics considered equivalent?

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger's 1926 proof, a myth. Schrödinger supposedly failed to prove isomorphism, or even a weaker equivalence ("Schrödinger-equivalence") of the mathematical structures of the two theories; developments in the early 1930s, especially the work of mathematician von Neumann provided sound proof of mathematical equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. In response, I argue that Schrödinger's proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism or a weaker mathematical equivalence. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and energy-states of Bohr's Model that was discovered and published by Schrödinger in his first and second communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics are satisfied by assigning the auxiliary role to eigenfunctions in the derivation of matrices (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of both isomorphism and Schrödinger-equivalence of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr's atom. And although the mathematical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr's complementarity and Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context, and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.

  4. USEPA PATHOGEN EQUIVALENCY COMMITTEE RETREAT

    EPA Science Inventory

    The Pathogen Equivalency Committee held its retreat from September 20-21, 2005 at Hueston Woods State Park in College Corner, Ohio. This presentation will update the PEC’s membership on emerging pathogens, analytical methods, disinfection techniques, risk analysis, preparat...

  5. Development of AC impedance methods for evaluating corroding metal surfaces and coatings

    NASA Technical Reports Server (NTRS)

    Knockemus, Ward

    1986-01-01

    In an effort to investigate metal surface corrosion and the breakdown of metal protective coatings, the AC Impedance Method was applied to zinc chromate primer coated 2219-T87 aluminum. The Model 368-1 AC Impedance Measurement System recently acquired by the MSFC Corrosion Research Branch was used to monitor changing properties of coated aluminum disks immersed in 3.5% NaCl buffered at pH 5.5 over three to four weeks. The DC polarization resistance runs were performed on the same samples. The corrosion system can be represented by an electronic analog called an equivalent circuit that consists of resistors and capacitors in specific arrangements. This equivalent circuit parallels the impedance behavior of the corrosion system during a frequency scan. Values for resistances and capacitances that can be assigned in the equivalent circuit following a least squares analysis of the data describe changes that occur on the corroding metal surface and in the protective coating. A suitable equivalent circuit was determined that predicts the correct Bode phase and magnitude for the experimental sample. The DC corrosion current density data are related to equivalent circuit element parameters.
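
    As a purely illustrative example of how such an equivalent circuit reproduces Bode magnitude and phase, the snippet below evaluates a simple Randles-type circuit (solution resistance in series with a polarization resistance in parallel with a capacitance); the actual circuit fitted in this work is not specified in the abstract, and the parameter values are invented.

```python
import numpy as np

# Assumed Randles-type parameters for a coated metal in 3.5% NaCl (illustrative).
Rs = 50.0        # solution resistance, ohm
Rp = 2.0e5       # polarization (coating/charge-transfer) resistance, ohm
Cdl = 5.0e-9     # double-layer / coating capacitance, F

freqs = np.logspace(-2, 5, 8)                   # Hz
omega = 2 * np.pi * freqs
Z = Rs + 1.0 / (1.0 / Rp + 1j * omega * Cdl)    # series Rs + (Rp || Cdl)

# Bode-style listing of impedance magnitude and phase versus frequency.
for f, z in zip(freqs, Z):
    print(f"{f:9.2e} Hz  |Z| = {abs(z):10.3e} ohm   phase = {np.degrees(np.angle(z)):7.2f} deg")
```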

  6. The corrosion mechanisms for primer coated 2219-T87 aluminum

    NASA Technical Reports Server (NTRS)

    Danford, Merlin D.; Knockemus, Ward W.

    1987-01-01

    To investigate metal surface corrosion and the breakdown of metal protective coatings, the ac Impedance Method was applied to zinc chromate primer coated 2219-T87 aluminum. The EG&GPARC Model 368 ac Impedance Measurement System, along with dc measurements with the same system using the Polarization Resistance Method, was used to monitor changing properties of coated aluminum disks immersed in 3.5 percent NaCl solutions buffered at pH 5.5 and pH 8.2 over periods of 40 days each. The corrosion system can be represented by an electronic analog called an equivalent circuit consisting of resistors and capacitors in specific arrangements. This equivalent circuit parallels the impedance behavior of the corrosion system during a frequency scan. Values for resistances and capacitances, that can be assigned in the equivalent circuit following a least squares analysis of the data, describe changes occurring on the corroding metal surface and in the protective coatings. A suitable equivalent circuit has been determined which predicts the correct Bode phase and magnitude for the experimental sample. The dc corrosion current density data are related to equivalent circuit element parameters.

  7. Optimal Sensor-Based Motion Planning for Autonomous Vehicle Teams

    DTIC Science & Technology

    2017-03-01

    calculated for non-dimensional ranges with Equation (3.26) and DU = 100 meters (shown at right) are equivalent to propagation loss calculated for 72 0 100...sensor and uniform target PDF, both choices are equivalent and the probability of non-detection equals the fraction of unsearched area. Time...feasible. Another goal is maximizing sensor performance in the presence of uncertainty. Optimal control provides a useful framework for solving these

  8. An approximate method for solution to variable moment of inertia problems

    NASA Technical Reports Server (NTRS)

    Beans, E. W.

    1981-01-01

    An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time of the equivalent problem was found to match the exact cycle time when the rotating speed is 4 times greater than the system's minimum natural frequency.

  9. Multi-Input Multi-Output Flight Control System Design for the YF-16 Using Nonlinear QFT and Pilot Compensation

    DTIC Science & Technology

    1990-12-01

    methods are implemented in MATRIXx with the programs SISOTF and MIMOTF respectively. Following the mathematical development, the application of these...intent is not to teach any of the methods, it has been written in a manner to significantly assist an individual attempting follow-on work. I would...equivalent plant models. A detailed mathematical development of the method used to develop these equivalent LTI plant models is provided. After this inner

  10. 40 CFR Table C-4 to Subpart C of... - Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM...

  11. 40 CFR Table C-4 to Subpart C of... - Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM...

  12. Ambient Dose Equivalent measured at the Instituto Nacional de Cancerología Department of Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Ávila, O.; Torres-Ulloa, C. L.; Medina, L. A.; Trujillo-Zamudio, F. E.; de Buen, I. Gamboa; Buenfil, A. E.; Brandan, M. E.

    2010-12-01

    Ambient dose equivalent values were determined in several sites at the Instituto Nacional de Cancerología, Departmento de Medicina Nuclear, using TLD-100 and TLD-900 thermoluminescent dosemeters. Additionally, ambient dose equivalent was measured at a corridor outside the hospitalization room for patients treated with 137Cs brachytherapy. Dosemeter calibration was performed at the Instituto Nacional de Investigaciones Nucleares, Laboratorio de Metrología, to known 137Cs gamma radiation air kerma. Radionuclides considered for this study are 131I, 18F, 67Ga, 99mTc, 111In, 201Tl and 137Cs, with main gamma energies between 93 and 662 keV. Dosemeters were placed during a five month period in the nuclear medicine rooms (containing gamma-cameras), injection corridor, patient waiting areas, PET/CT study room, hot lab, waste storage room and corridors next to the hospitalization rooms for patients treated with 131I and 137Cs. High dose values were found at the waste storage room, outside corridor of 137Cs brachytherapy patients and PET/CT area. Ambient dose equivalent rate obtained for the 137Cs brachytherapy corridor is equal to (18.51±0.02)×10-3 mSv/h. Sites with minimum doses are the gamma camera rooms, having ambient dose equivalent rates equal to (0.05±0.03)×10-3 mSv/h. Recommendations have been given to the Department authorities so that further actions are taken to reduce doses at high dose sites in order to comply with the ALARA principle (as low as reasonably achievable).

  13. A comparison of two neural network schemes for navigation

    NASA Technical Reports Server (NTRS)

    Munro, Paul W.

    1989-01-01

    Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques. Both techniques require a training period. Training under the back propagation (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL, using back-propagation learning, failed to learn except in a very trivial fashion equivalent to table lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back propagation techniques depend critically upon the forms of representation used, and can be sensitive to parameters in the simulations; hence the BPL technique may yet yield strong results.

  14. A comparison of two neural network schemes for navigation

    NASA Technical Reports Server (NTRS)

    Munro, Paul

    1990-01-01

    Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques. Both techniques require a training period. Training under the back propagation (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL, using back-propagation learning, failed to learn except in a very trivial fashion equivalent to table lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back propagation techniques depend critically upon the forms of representation used, and can be sensitive to parameters in the simulations; hence the BPL technique may yet yield strong results.

  15. Geoid Recovery Using Geophysical Inverse Theory Applied to Satellite to Satellite Tracking Data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    2000-01-01

    This report describes a new method for determination of the geopotential, or the equivalent geoid. It is based on Satellite-to-Satellite Tracking (SST) of two co-orbiting low earth satellites separated by a few hundred kilometers. The analysis is aimed at the GRACE Mission, though it is generally applicable to any SST data. It is proposed that the SST be viewed as a mapping mission. That is, the result will be maps of the geoid or gravity, as contrasted with determination of spherical harmonics or Fourier coefficients. A method has been developed, based on Geophysical Inverse Theory (GIT), that can provide maps at a prescribed (desired) resolution and the corresponding error map from the SST data. This computation can be done area by area avoiding simultaneous recovery of all the geopotential information. The necessary elements of potential theory, celestial mechanics, and Geophysical Inverse Theory are described, a computation architecture is described, and the results of several simulations presented. Centimeter accuracy geoids with 50 to 100 km resolution can be recovered with a 30 to 60 day mission.

  16. Establishing an NP-staffed minor emergency area.

    PubMed

    Buchanan, L; Powers, R D

    1997-04-01

    Patients with problems of high acuity need fully trained emergency physicians and nurses. Some patients with nonurgent problems can be cared for within the emergency department (ED) in a lower-cost setting designed and staffed specifically for this purpose. Staffing a fast track or minor emergency area (MEA) with nurse practitioners (NPs) is one way to satisfy the ED's care needs. One site analysis of the effectiveness of NPs indicates that patients are satisfied with their care, that nurses' interpersonal skills are better than those of physicians, that technical skills are equivalent, that patient outcomes are equivalent or superior and that NPs improve access to care. A nurse practitioner-staffed minor emergency area provides high quality care for approximately 21% of this site's adult emergency department population. Patients are triaged based on set criteria, allowing for short treatment times. The physical layout, triage criteria, and the NPs' scope of practice in the level 1 trauma center's ED are detailed.

  17. California's transition from conventional snowpack measurements to a developing remote sensing capability for water supply forecasting

    NASA Technical Reports Server (NTRS)

    Brown, A. J.; Peterson, N.

    1980-01-01

    California's Snow Survey Program and water supply forecasting procedures are described. A review is made of current activities and program direction on such matters as: the growing statewide network of automatic snow sensors; restrictions on gathering hydrometeorological data in areas designated as wilderness; the use of satellite communications, which both provides a flexible network without mountaintop repeaters and satisfies the need for unobtrusiveness in wilderness areas; and the increasing operational use of snow covered area (SCA) obtained from satellite imagery, which, combined with water equivalent from snow sensors, correlates highly with the volumes and rates of snowmelt runoff. Also examined are the advantages of remote sensing; the anticipated effects on future forecasting opportunities of a new input, a basin-wide index of water equivalent such as that obtained through microwave techniques; and the future direction and goals of the California Snow Survey Program.

  18. Two Perspectives of the 2D Unit Area Quantum Sphere and Their Equivalence

    NASA Astrophysics Data System (ADS)

    Aru, Juhan; Huang, Yichao; Sun, Xin

    2017-11-01

    2D Liouville quantum gravity (LQG) is used as a toy model for 4D quantum gravity and is the theory of world-sheet in string theory. Recently there has been growing interest in studying LQG in the realm of probability theory: David et al. (Liouville quantum gravity on the Riemann sphere. Commun Math Phys 342(3):869-907, 2016) and Duplantier et al. (Liouville quantum gravity as a mating of trees. ArXiv e-prints: arXiv:1409.7055, 2014) both provide a probabilistic perspective of the LQG on the 2D sphere. In particular, in each of them one may find a definition of the so-called unit area quantum sphere. We examine these two perspectives and prove their equivalence by showing that the respective unit area quantum spheres are the same. This is done by considering a unified limiting procedure for defining both objects.

  19. Lower Miocene stratigraphy of the Gebel Shabrawet area, north Eastern desert Egypt

    NASA Astrophysics Data System (ADS)

    Abdelghany, Osman

    2002-05-01

    The Lower Miocene carbonate/siliciclastic sequence of the Shabrawet area comprises a complex alternation of autochthonous and allogenic sediments. The sequence can be subdivided into two lithostratigraphic units. The lower unit (unit I) is equivalent to the Gharra Formation. It is mainly clastic and composed of sandstones, siltstones and shales with minor limestone intercalations. These sediments are rich in Clypeaster spp., Scutella spp., Miogypsina intermedia, Operculina complanata, and smaller foraminifera. The upper unit (unit II) was considered by previous workers as being equivalent to the Marmarica Formation. It consists mainly of non-clastic rocks, dominated by sandy and chalky limestones rich in larger foraminifera (miogypsinids and nummulitids). This unit is topped by a highly fossiliferous (Heterostegina, Operculina and Planostegina) sandy limestone. The present study places both units in the Gharra Formation and reports for the first time M. intermedia from the Miocene sequence of the Shabrawet area.

  20. Lagged Associations of Metropolitan Statistical Area- and State-Level Income Inequality with Cognitive Function: The Health and Retirement Study

    PubMed Central

    Kim, Daniel; Griffin, Beth Ann; Kabeto, Mohammed; Escarce, José; Langa, Kenneth M.; Shih, Regina A.

    2016-01-01

    Purpose Much variation in individual-level cognitive function in late life remains unexplained, with little exploration of area-level/contextual factors to date. Income inequality is a contextual factor that may plausibly influence cognitive function. Methods In a nationally-representative cohort of older Americans from the Health and Retirement Study, we examined state- and metropolitan statistical area (MSA)-level income inequality as predictors of individual-level cognitive function measured by the 27-point Telephone Interview for Cognitive Status (TICS-m) scale. We modeled latency periods of 8–20 years, and controlled for state-/metropolitan statistical area (MSA)-level and individual-level factors. Results Higher MSA-level income inequality predicted lower cognitive function 16–18 years later. Using a 16-year lag, living in a MSA in the highest income inequality quartile predicted a 0.9-point lower TICS-m score (β = -0.86; 95% CI = -1.41, -0.31), roughly equivalent to the magnitude associated with five years of aging. We observed no associations for state-level income inequality. The findings were robust to sensitivity analyses using propensity score methods. Conclusions Among older Americans, MSA-level income inequality appears to influence cognitive function nearly two decades later. Policies reducing income inequality levels within cities may help address the growing burden of declining cognitive function among older populations within the United States. PMID:27332986

  1. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

    A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined; this effective thickness has been shown to be significantly larger than the inter-planar spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.
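
    As a schematic illustration of the energy equivalence underlying the substitution (the harmonic potential terms and plate expressions below are generic textbook forms, not the specific force field or truss model used in the paper), the vibrational potential energy of the representative volume element is matched to the strain energy of the equivalent continuum, and for cylindrical bending of an equivalent plate this fixes the effective thickness and bending rigidity:

    $$
    E_{\text{molecular}} = \sum_{\text{bonds}} \tfrac{1}{2} k_r (r - r_0)^2 + \sum_{\text{angles}} \tfrac{1}{2} k_\theta (\theta - \theta_0)^2
    \;=\; \Lambda_{\text{continuum}} = \int_V \tfrac{1}{2}\,\boldsymbol{\sigma}:\boldsymbol{\varepsilon}\,\mathrm{d}V,
    \qquad
    \Lambda_{\text{plate}} = \tfrac{1}{2}\, D\, \kappa^2\, b L, \quad D = \frac{E\, t^3}{12(1-\nu^2)},
    $$

    where $\kappa$ is the imposed curvature, $b$ and $L$ are the plate width and length, $t$ is the effective thickness, $E$ is Young's modulus, and $\nu$ is Poisson's ratio.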

  2. Grades, Student Satisfaction and Retention in Online and Face-to-Face Introductory Psychology Units: A Test of Equivalency Theory.

    PubMed

    Garratt-Reed, David; Roberts, Lynne D; Heritage, Brody

    2016-01-01

    There has been a recent rapid growth in the number of psychology courses offered online through institutions of higher education. The American Psychological Association has highlighted the importance of ensuring the effectiveness of online psychology courses (Halonen et al., 2013). Despite this, there have been inconsistent findings regarding student grades, satisfaction, and retention in online psychology units. Equivalency Theory (Simonson, 1999; Simonson et al., 1999) posits that online and classroom-based learners will attain equivalent learning outcomes when equivalent learning experiences are provided. We present a study of an online introductory psychology unit designed to provide equivalent learning experiences to the pre-existing face-to-face version of the unit. Using quasi-experimental methods, academic performance, student feedback, and retention data from 866 Australian undergraduate psychology students were examined to assess whether the online unit developed to provide equivalent learning experiences produced comparable outcomes to the 'traditional' unit delivered face-to-face. Student grades did not significantly differ between modes of delivery, except for a group-work based assessment where online students performed more poorly. Student satisfaction was generally high in both modes of the unit, with group-work the key source of dissatisfaction in the online unit. The results provide partial support for Equivalency Theory. The group-work based assessment did not provide an equivalent learning experience for students in the online unit highlighting the need for further research to determine effective methods of engaging students in online group activities. Consistent with previous research, retention rates were significantly lower in the online unit, indicating the need to develop effective strategies to increase online retention rates. While this study demonstrates successes in presenting students with an equivalent learning experience, we recommend that future research investigate means of successfully facilitating collaborative group-work assessment, and to explore contributing factors to actual student retention in online units beyond that of non-equivalent learning experiences.

  3. Grades, Student Satisfaction and Retention in Online and Face-to-Face Introductory Psychology Units: A Test of Equivalency Theory

    PubMed Central

    Garratt-Reed, David; Roberts, Lynne D.; Heritage, Brody

    2016-01-01

    There has been a recent rapid growth in the number of psychology courses offered online through institutions of higher education. The American Psychological Association has highlighted the importance of ensuring the effectiveness of online psychology courses (Halonen et al., 2013). Despite this, there have been inconsistent findings regarding student grades, satisfaction, and retention in online psychology units. Equivalency Theory (Simonson, 1999; Simonson et al., 1999) posits that online and classroom-based learners will attain equivalent learning outcomes when equivalent learning experiences are provided. We present a study of an online introductory psychology unit designed to provide equivalent learning experiences to the pre-existing face-to-face version of the unit. Using quasi-experimental methods, academic performance, student feedback, and retention data from 866 Australian undergraduate psychology students were examined to assess whether the online unit developed to provide equivalent learning experiences produced comparable outcomes to the ‘traditional’ unit delivered face-to-face. Student grades did not significantly differ between modes of delivery, except for a group-work based assessment where online students performed more poorly. Student satisfaction was generally high in both modes of the unit, with group-work the key source of dissatisfaction in the online unit. The results provide partial support for Equivalency Theory. The group-work based assessment did not provide an equivalent learning experience for students in the online unit highlighting the need for further research to determine effective methods of engaging students in online group activities. Consistent with previous research, retention rates were significantly lower in the online unit, indicating the need to develop effective strategies to increase online retention rates. While this study demonstrates successes in presenting students with an equivalent learning experience, we recommend that future research investigate means of successfully facilitating collaborative group-work assessment, and to explore contributing factors to actual student retention in online units beyond that of non-equivalent learning experiences. PMID:27242587

  4. Design and experimental verification of an equivalent forebody to produce disturbances equivalent to those of a forebody with flowing inlets

    NASA Technical Reports Server (NTRS)

    Haynes, Davy A.; Miller, David S.; Klein, John R.; Louie, Check M.

    1988-01-01

    A method by which a simple equivalent faired body can be designed to replace a more complex body with flowing inlets has been demonstrated for supersonic flow. An analytically defined, geometrically simple faired inlet forebody has been designed using a linear potential code to generate flow perturbations equivalent to those produced by a much more complex forebody with inlets. An equivalent forebody wind-tunnel model was fabricated and a test was conducted in NASA Langley Research Center's Unitary Plan Wind Tunnel. The test Mach number range was 1.60 to 2.16 for angles of attack of -4 to 16 deg. Test results indicate that, for the purposes considered here, the equivalent forebody simulates the original flowfield disturbances to an acceptable degree of accuracy.

  5. A simple combined floating and anchored collagen gel for enhancing mechanical strength of culture system.

    PubMed

    Harada, Ichiro; Kim, Sung-Gon; Cho, Chong Su; Kurosawa, Hisashi; Akaike, Toshihiro

    2007-01-01

    In this study, a simple combined method consisting of floating and anchored collagen gel in a ligament or tendon equivalent culture system was used to produce the oriented fibrils in fibroblast-populated collagen matrices (FPCMs) during the remodeling and contraction of the collagen gel. Orientation of the collagen fibrils along single axis occurred over the whole area of the floating section and most of the fibroblasts were elongated and aligned along the oriented collagen fibrils, whereas no significant orientation of fibrils was observed in normally contracted FPCMs by the floating method. Higher elasticity and enhanced mechanical strength were obtained using our simple method compared with normally contracted floating FPCMs. The Young's modulus and the breaking point of the FPCMs were dependent on the initial cell densities. This simple method will be applied as a convenient bioreactor to study cellular processes of the fibroblasts in the tissues with highly oriented fibrils such as ligaments or tendons. (c) 2006 Wiley Periodicals, Inc.

  6. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is also compared. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
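
    A rough sketch of the Tikhonov-regularized inversion step shared by equivalent-source NAH formulations, with an L-curve style choice of the regularization parameter, is given below; the propagator matrix, grid sizes, and noise level are synthetic placeholders rather than the measurement configuration of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned propagator G mapping source strengths q to hologram pressures p.
n_src, n_mic = 30, 40
G = rng.standard_normal((n_mic, n_src)) @ np.diag(np.exp(-np.linspace(0, 6, n_src)))
q_true = rng.standard_normal(n_src)
p = G @ q_true + 0.01 * rng.standard_normal(n_mic)   # noisy hologram data

def tikhonov(G, p, lam):
    """Minimize ||G q - p||^2 + lam^2 ||q||^2 (standard-form Tikhonov)."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ p)

# L-curve: scan lambda, record residual norm vs. solution norm, pick the "corner".
lams = np.logspace(-6, 1, 60)
res_norm, sol_norm = [], []
for lam in lams:
    q = tikhonov(G, p, lam)
    res_norm.append(np.linalg.norm(G @ q - p))
    sol_norm.append(np.linalg.norm(q))

# Crude corner detection: point of maximum curvature of the log-log L-curve.
x, y = np.log(res_norm), np.log(sol_norm)
dx, dy = np.gradient(x), np.gradient(y)
d2x, d2y = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * d2y - dy * d2x) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
best = lams[np.argmax(curvature)]

print("chosen lambda:", best)
print("reconstruction error:", np.linalg.norm(tikhonov(G, p, best) - q_true))
```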

  7. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is also compared. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.

  8. Spin Seebeck devices using local on-chip heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Stephen M.; Fradin, Frank Y.; Hoffman, Jason

    2015-05-07

    A micro-patterned spin Seebeck device is fabricated using an on-chip heater. Current is driven through a Au heater layer electrically isolated from a bilayer consisting of Fe3O4 (insulating ferrimagnet) and a spin detector layer. It is shown that through this method it is possible to measure the longitudinal spin Seebeck effect (SSE) for small area magnetic devices, equivalent to traditional macroscopic SSE experiments. Using a lock-in detection technique, it is possible to more sensitively characterize both the SSE and the anomalous Nernst effect (ANE), as well as the inverse spin Hall effect in various spin detector materials. By using the spin detector layer as a thermometer, we can obtain a value for the temperature gradient across the device. These results are well matched to values obtained through electromagnetic/thermal modeling of the device structure and with large area spin Seebeck measurements.

  9. Spin Seebeck devices using local on-chip heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Stephen M., E-mail: swu@anl.gov; Fradin, Frank Y.; Hoffman, Jason

    2015-05-07

    A micro-patterned spin Seebeck device is fabricated using an on-chip heater. Current is driven through a Au heater layer electrically isolated from a bilayer consisting of Fe3O4 (insulating ferrimagnet) and a spin detector layer. It is shown that through this method it is possible to measure the longitudinal spin Seebeck effect (SSE) for small area magnetic devices, equivalent to traditional macroscopic SSE experiments. Using a lock-in detection technique, it is possible to more sensitively characterize both the SSE and the anomalous Nernst effect (ANE), as well as the inverse spin Hall effect in various spin detector materials. By using the spin detector layer as a thermometer, we can obtain a value for the temperature gradient across the device. These results are well matched to values obtained through electromagnetic/thermal modeling of the device structure and with large area spin Seebeck measurements.

  10. The challenges of achieving good electrical and mechanical properties when making structural supercapacitors

    NASA Astrophysics Data System (ADS)

    Ciocanel, C.; Browder, C.; Simpson, C.; Colburn, R.

    2013-04-01

    The paper presents results associated with the electro-mechanical characterization of a composite material with power storage capability, identified throughout the paper as a structural supercapacitor. The structural supercapacitor uses electrodes made of carbon fiber weave, a separator made of Celgard 3501, and a solid PEG-based polymer blend electrolyte. To be a viable structural supercapacitor, the material has to have good mechanical and power storage/electrical properties. The literature in this area is inconsistent on which electrical properties are evaluated, and how those properties are assessed. In general, measurements of capacitance or specific capacitance (i.e. capacitance per unit area or per unit volume) are made, without considering other properties such as leakage resistance and equivalent series resistance of the supercapacitor. This paper highlights the significance of these additional electrical properties, discusses the fluctuation of capacitance over time, and proposes methods to improve the stability of the material's electric properties over time.
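
    To make the role of the equivalent series resistance (ESR) and capacitance concrete, a back-of-the-envelope extraction from a constant-current discharge curve is sketched below; the cell parameters and voltage trace are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical constant-current discharge of a structural supercapacitor cell.
I = 2e-3            # discharge current, A
area = 25.0         # electrode footprint, cm^2 (illustrative)
t = np.linspace(0.0, 20.0, 201)           # s

# Invented ideal cell: 0.05 F capacitance, 40 ohm ESR, starting at 2.0 V.
C_true, esr_true, v0 = 0.05, 40.0, 2.0
v = v0 - I * esr_true - (I / C_true) * t  # instantaneous IR drop + linear decay

# ESR from the instantaneous voltage step when the current is applied.
esr_est = (v0 - v[0]) / I

# Capacitance from the slope of the (linear) discharge: C = I / |dV/dt|.
slope = np.polyfit(t, v, 1)[0]
c_est = I / abs(slope)

print(f"ESR ~ {esr_est:.1f} ohm")
print(f"C   ~ {c_est * 1000:.1f} mF  ({c_est / area * 1000:.2f} mF/cm^2)")
```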

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penna, M.L.; Duchiade, M.P.

    This study examines the relationship between air pollution, measured as concentration of suspended particulates in the atmosphere, and infant mortality due to pneumonia in the metropolitan area of Rio de Janeiro. Multiple linear regression (progressive or stepwise method) was used to analyze infant mortality due to pneumonia, diarrhea, and all causes in 1980, by geographic area, income level, and degree of contamination. While the variable proportion of families with income equivalent to more than two minimum wages was included in the regressions corresponding to the three types of infant mortality, the average contamination index had a statistically significant coefficient (b = 0.2208; t = 2.670; P = 0.0137) only in the case of mortality due to pneumonia. This would suggest a biological association, but, as in any ecological study, such conclusions should be viewed with caution. The authors believe that air quality indicators are essential to consider in studies of acute respiratory infections in developing countries.

  12. Effects of fracture surface roughness and shear displacement on geometrical and hydraulic properties of three-dimensional crossed rock fracture models

    NASA Astrophysics Data System (ADS)

    Huang, Na; Liu, Richeng; Jiang, Yujing; Li, Bo; Yu, Liyuan

    2018-03-01

    While shear-flow behavior through fractured media has so far been studied mainly at the single-fracture scale, a numerical analysis of the shear effect on the hydraulic response of a 3D crossed fracture model is presented here. The analysis was based on a series of crossed fracture models in which the effects of fracture surface roughness and shear displacement were considered. The rough fracture surfaces were generated using the modified successive random additions (SRA) algorithm. The shear displacement was applied to one fracture, while the other fracture shifted along with the upper and lower surfaces of the sheared fracture. The simulation results reveal the development and variation of preferential flow paths through the model during shear, accompanied by a change in the flow rate ratios between the two flow planes at the outlet boundary. The average contact area accounts for approximately 5-27% of the fracture planes during shear, but the actual calculated flow area is about 38-55% of the fracture planes, which is much smaller than the noncontact area. The equivalent permeability either increases or decreases as the shear displacement increases from 0 to 4 mm, depending on the aperture distribution of the intersection part between the two fractures. When the shear displacement continues to increase up to 20 mm, the equivalent permeability first increases sharply and then keeps increasing with a lower gradient. The equivalent permeability of the rough fractured model is about 26-80% of that calculated from the parallel plate model, and the equivalent permeability in the direction perpendicular to the shear direction is approximately 1.31-3.67 times larger than that in the direction parallel to the shear direction. These results provide a fundamental understanding of fluid flow through crossed fracture models under shear.
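
    A minimal 1D illustration of the successive-random-additions idea used to generate self-affine rough profiles is sketched below; the Hurst exponent, initial amplitude, and level count are arbitrary demo values, and the paper's modified SRA algorithm for full 3D fracture surfaces is more involved.

```python
import numpy as np

def successive_random_additions(levels=8, hurst=0.7, sigma0=1.0, seed=1):
    """1D self-affine profile: midpoint interpolation plus Gaussian additions
    whose standard deviation shrinks by a factor 2**(-hurst) at every level."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, sigma0, size=2)              # start with two endpoints
    sigma = sigma0
    for _ in range(levels):
        mids = 0.5 * (z[:-1] + z[1:])                # interpolate midpoints
        new = np.empty(z.size + mids.size)
        new[0::2], new[1::2] = z, mids               # interleave old points and midpoints
        sigma *= 2.0 ** (-hurst)                     # scale down the added roughness
        z = new + rng.normal(0.0, sigma, size=new.size)  # add noise to *all* points
    return z

profile = successive_random_additions()
print(profile.size, profile.std())                   # 257 points after 8 levels
```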

  13. Equivalent circuit consideration of frequency-shift-type acceleration sensor

    NASA Astrophysics Data System (ADS)

    Sasaki, Yoshifumi; Sugawara, Sumio; Kudo, Subaru

    2018-07-01

    In this paper, an electrical equivalent circuit for the piezoelectrically driven frequency-shift-type acceleration sensor model is presented, and the equivalent circuit constants, including the effect of the axial force, are clarified for the first time. The results calculated by the finite element method are compared with those measured experimentally on a trial-production one-axis sensor. The analyzed values agree closely with the measured ones, showing that the equivalent circuit representation of the sensor is useful for electrical engineers who wish to analyze the characteristics of such sensors easily.

  14. 40 CFR 35.925-13 - Sewage collection system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... design capacity equivalent to that of the existing system plus a reasonable amount for future growth. For purposes of this section, a community would include any area with substantial human habitation on October... October 18, 1972; (b) The collection system is cost-effective; (c) The population density of the area to...

  15. 40 CFR 35.925-13 - Sewage collection system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... design capacity equivalent to that of the existing system plus a reasonable amount for future growth. For purposes of this section, a community would include any area with substantial human habitation on October... October 18, 1972; (b) The collection system is cost-effective; (c) The population density of the area to...

  16. 29 CFR 1926.152 - Flammable and combustible liquids.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... access way to permit approach of fire control apparatus. (3) The storage area shall be graded in a manner... to permit approach of fire control apparatus. (5) Storage areas shall be kept free of weeds, debris... wooden storage cabinets shall be constructed in the following manner, or equivalent: The bottom, sides...

  17. 29 CFR 1926.152 - Flammable and combustible liquids.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... access way to permit approach of fire control apparatus. (3) The storage area shall be graded in a manner... to permit approach of fire control apparatus. (5) Storage areas shall be kept free of weeds, debris... wooden storage cabinets shall be constructed in the following manner, or equivalent: The bottom, sides...

  18. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
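
    A toy flat-model illustration of the equivalent point-source idea (a least-squares fit of source strengths to observed anomalies, followed by a linear transformation of the fitted source field) is sketched below; the 1/r kernel, grids, damping, and noise are placeholders that ignore the spherical geometry, physical units, and specific operators of the actual method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observation points on a plane, equivalent sources buried below them (coordinates in km).
nx = 12
xs = np.linspace(0.0, 100.0, nx)
obs = np.array([(x, y, 5.0) for x in xs for y in xs])      # observation grid
src = np.array([(x, y, -10.0) for x in xs for y in xs])    # equivalent-source grid

def kernel(p_obs, p_src):
    """Point-source potential kernel 1/r between all observation/source pairs."""
    d = p_obs[:, None, :] - p_src[None, :, :]
    return 1.0 / np.linalg.norm(d, axis=-1)

A = kernel(obs, src)

# Synthetic "anomaly" data from a couple of hidden sources, plus noise.
hidden = np.array([[30.0, 30.0, -20.0], [70.0, 60.0, -15.0]])
data = kernel(obs, hidden) @ np.array([50.0, -30.0])
data += 0.001 * rng.standard_normal(data.size)

# Damped least-squares inversion for the equivalent-source strengths m.
lam = 1e-3
m = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ data)

# Linear transformation of the fitted sources: e.g. continue the field to a higher plane.
obs_up = obs.copy()
obs_up[:, 2] = 15.0
pred_up = kernel(obs_up, src) @ m

print("fit misfit:", np.linalg.norm(A @ m - data))
print("upward-continued field range:", pred_up.min(), pred_up.max())
```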

  19. A research on snow distribution in mountainous area using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Nishihara, T.; Tanise, A.

    2015-12-01

    In snowy cold regions, the snowmelt water stored in dams in early spring meets the water demand for the summer season, so snowmelt serves as an important water resource. However, snowmelt can also cause floods. It is therefore necessary to estimate the snow water equivalent in a dam basin as accurately as possible, and for this reason the dam operation offices in Hokkaido, Japan conduct snow surveys every March to estimate the snow water equivalent in each dam basin. In these estimates, a relationship between elevation and snow water equivalent is generally applied. Above the forest line, however, snow surveys are generally conducted along ridges because of the risk of avalanches and other hazards; as a result, the snow water equivalent above the forest line is significantly underestimated. In this study, we conducted airborne laser scanning twice over the same target area (in 2012 and 2015) to measure snow depth in the high-elevation zone, including above the forest line, and analyzed the relationships between snow depth above the forest line and several terrain indicators. The target area was the Chubetsu dam basin, located in a high-elevation mountainous region of central Hokkaido, the northernmost island of Japan and a cold, snowy region. The range covered by airborne laser scanning was 10 km2, about 60% of which lay above the forest line. First, we analyzed the relationship between elevation and snow depth. Below the forest line, snow depth increased linearly with elevation; above the forest line, snow depth varied greatly. Second, we analyzed the relationship between overground-openness and snow depth above the forest line. Overground-openness is an indicator quantifying how far a target point lies above or below the surrounding surface. A simple relationship was found: snow depth decreased linearly as overground-openness increased, meaning that areas with heavy snow cover are distributed in valleys and areas with light cover are on ridges. Lastly, we compared the results of 2012 and 2015. The same snow depth characteristics described above were found, although the regression coefficients of the linear equations differed according to the weather conditions of each year.

  20. Isolator-combustor interaction in a dual-mode scramjet engine

    NASA Technical Reports Server (NTRS)

    Pratt, David T.; Heiser, William H.

    1993-01-01

    A constant-area diffuser, or 'isolator', is required in both the ramjet and scramjet operating regimes of a dual-mode engine configuration in order to prevent unstarts due to pressure feedback from the combustor. Because the nature of the combustor-isolator interaction differs between the two operational modes, attention is given here to the use of thermal versus kinetic energy coordinates for visualizing these interaction processes. The results of the analysis indicate that the isolator must accommodate severe flow separation at combustor entry, and that its entropy-generating characteristics are more severe than those of an equivalent oblique shock. A constant-area diffuser is only marginally able to contain the equivalent normal shock required for subsonic combustor entry.

  1. Estimation of the REV Size and Equivalent Permeability Coefficient of Fractured Rock Masses with an Emphasis on Comparing the Radial and Unidirectional Flow Configurations

    NASA Astrophysics Data System (ADS)

    Wang, Zhechao; Li, Wei; Bi, Liping; Qiao, Liping; Liu, Richeng; Liu, Jie

    2018-05-01

    A method to estimate the representative elementary volume (REV) size for the permeability and equivalent permeability coefficient of rock mass with a radial flow configuration was developed. The estimations of the REV size and equivalent permeability for the rock mass around an underground oil storage facility using a radial flow configuration were compared with those using a unidirectional flow configuration. The REV sizes estimated using the unidirectional flow configuration are much higher than those estimated using the radial flow configuration. The equivalent permeability coefficient estimated using the radial flow configuration is unique, while those estimated using the unidirectional flow configuration depend on the boundary conditions and flow directions. The influences of the fracture trace length, spacing and gap on the REV size and equivalent permeability coefficient were investigated. The REV size for the permeability of fractured rock mass increases with increasing the mean trace length and fracture spacing. The influence of the fracture gap length on the REV size is insignificant. The equivalent permeability coefficient decreases with the fracture spacing, while the influences of the fracture trace length and gap length are not determinate. The applicability of the proposed method to the prediction of groundwater inflow into rock caverns was verified using the measured groundwater inflow into the facility. The permeability coefficient estimated using the radial flow configuration is more similar to the representative equivalent permeability coefficient than those estimated with different boundary conditions using the unidirectional flow configuration.

  2. Evaluation of the radiation dose in the thyroid gland using different protective collars in panoramic imaging.

    PubMed

    Hafezi, Ladan; Arianezhad, S Marjan; Hosseini Pooya, Seyed Mahdi

    2018-04-25

    The value of using a thyroid shield is one of the open issues in the radiation protection of patients in dental panoramic imaging. The objective of this research is to investigate the attenuation characteristics of several models of thyroid shielding in dental panoramic examinations. The effects of five different types of lead and lead-free (Pb-equivalent) shields on the dose reduction to the thyroid gland were investigated using thermoluminescence dosemeters (TLDs) implanted in the head and neck of a Rando phantom. The results show that frontal lead and Pb-equivalent shields can reduce the thyroid dose by around 50% and 19%, respectively. It can be concluded that the effective shielding area is an important parameter in thyroid gland dose reduction. Lead frontal collars with large effective shielding areas (>~300 cm2, but not necessarily very large) are appropriate for an optimized thyroid gland dose reduction, particularly for critical patients in dental panoramic imaging. Regardless of shape and thickness, use of the Pb-equivalent shields is not justifiable in dental panoramic imaging.

  3. Using machine learning for real-time estimates of snow water equivalent in the watersheds of Afghanistan

    NASA Astrophysics Data System (ADS)

    Bair, Edward H.; Abreu Calfa, Andre; Rittger, Karl; Dozier, Jeff

    2018-05-01

    In the mountains, snowmelt often provides most of the runoff. Operational estimates use imagery from optical and passive microwave sensors, but each has its limitations. An accurate approach, which we validate in Afghanistan and the Sierra Nevada USA, reconstructs spatially distributed snow water equivalent (SWE) by calculating snowmelt backward from a remotely sensed date of disappearance. However, reconstructed SWE estimates are available only retrospectively; they do not provide a forecast. To estimate SWE throughout the snowmelt season, we consider physiographic and remotely sensed information as predictors and reconstructed SWE as the target. The period of analysis matches the AMSR-E radiometer's lifetime from 2003 to 2011, for the months of April through June. The spatial resolution of the predictions is 3.125 km, to match the resolution of a microwave brightness temperature product. Two machine learning techniques - bagged regression trees and feed-forward neural networks - produced similar mean results, with 0-14 % bias and 46-48 mm RMSE on average. Nash-Sutcliffe efficiencies averaged 0.68 for all years. Daily SWE climatology and fractional snow-covered area are the most important predictors. We conclude that these methods can accurately estimate SWE during the snow season in remote mountains, and thereby provide an independent estimate to forecast runoff and validate other methods to assess the snow resource.
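
    A hedged sketch of the bagged-regression-tree branch of the approach using scikit-learn is given below; the feature names and synthetic data are placeholders standing in for the physiographic and remotely sensed predictors and the reconstructed-SWE targets described in the abstract.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000

# Hypothetical predictors: elevation, SWE climatology, fractional snow-covered area,
# passive-microwave brightness temperature, day of year.
X = np.column_stack([
    rng.uniform(1500, 4500, n),      # elevation (m)
    rng.uniform(0, 800, n),          # daily SWE climatology (mm)
    rng.uniform(0, 1, n),            # fractional snow-covered area
    rng.uniform(180, 280, n),        # brightness temperature (K)
    rng.integers(91, 182, n),        # day of year (Apr-Jun)
])

# Synthetic "reconstructed SWE" target with noise (stand-in for the true target).
y = 0.6 * X[:, 1] * X[:, 2] + 0.02 * (X[:, 0] - 1500) * X[:, 2] + rng.normal(0, 30, n)
y = np.clip(y, 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
bias = np.mean(pred - y_te)
print(f"RMSE = {rmse:.1f} mm, bias = {bias:.1f} mm")
```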

  4. EPA Region 1 Environmentally Sensitive Areas

    EPA Pesticide Factsheets

    This coverage represents polygon equivalents of environmentally sensitive areas (ESA) in EPA Region I. ESAs were developed as part of an EPA headquarters initiative based on reviews of various regulatory and guidance documents, as well as phone interviews with federal/state/local government agencies and private organizations. ESAs include, but are not limited to, wetlands, biological resources, habitats, national parks, archaeological/historic sites, natural heritage areas, tribal lands, drinking water intakes, marinas/boat ramps, wildlife areas, etc.

  5. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
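
    A minimal illustration of the general idea of fitting a decaying, oscillating signal noniteratively with a Chebyshev series via numpy is shown below; this is a generic Chebyshev least-squares fit, not the authors' constrained discrete-Chebyshev-transform algorithm for extracting lifetime and photobleaching parameters.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic signal: decaying exponential plus a harmonic, on the Chebyshev domain [-1, 1].
t = np.linspace(-1.0, 1.0, 512)
signal = 2.0 * np.exp(-1.5 * (t + 1.0)) + 0.4 * np.cos(6.0 * np.pi * t)
noisy = signal + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Single-pass (noniterative) least-squares fit of a degree-30 Chebyshev series.
coef = C.chebfit(t, noisy, deg=30)
smooth = C.chebval(t, coef)

print("max reconstruction error:", np.abs(smooth - signal).max())
```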

  6. Low-Lift Drag of the Grumman F9F-9 Airplane as Obtained by a 1/7.5-Scale Rocket-Boosted Model and by Three 1/45.85-Scale Equivalent-Body Models between Mach Numbers of 0.8 and 1.3, TED No. NACA DE 391

    NASA Technical Reports Server (NTRS)

    Stevens, Joseph E.

    1955-01-01

    Low-lift drag data are presented herein for one 1/7.5-scale rocket-boosted model and three 1/45.85-scale equivalent-body models of the Grumman F9F-9 airplane, The data were obtained over a Reynolds number range of about 5 x 10(exp 6) to 10 x 10(exp 6) based on wing mean aerodynamic chord for the rocket model and total body length for the equivalent-body models. The rocket-boosted model showed a drag rise of about 0,037 (based on included wing area) between the subsonic level and the peak supersonic drag coefficient at the maximum Mach number of this test. The base drag coefficient measured on this model varied from a value of -0,0015 in the subsonic range to a maximum of about 0.0020 at a Mach number of 1.28, Drag coefficients for the equivalent-body models varied from about 0.125 (based on body maximum area) in the subsonic range to about 0.300 at a Mach number of 1.25. Increasing the total fineness ratio by a small amount raised the drag-rise Mach number slightly.

  7. Estimating prevalence of coronary heart disease for small areas using collateral indicators of morbidity.

    PubMed

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless prevalence estimates at small area level are important for assessing health need. An instance is provided by England where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor, that accounts for clustering in latent risk factors, and also provides a summary measure of small area CHD morbidity.
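
    A simplified numeric sketch of the Poisson-multinomial constraint mentioned above is given below: given a known prevalence total for a health authority and relative-risk weights for its wards, small-area expected counts are allocated so that they sum exactly to the higher-area total. The weights and populations are invented; the actual model estimates them via shared spatial random effects.

```python
import numpy as np

# Hypothetical health authority containing 6 wards.
ward_pop = np.array([5200, 8100, 6400, 4900, 7300, 5600])      # ward populations
rel_risk = np.array([1.30, 0.85, 1.10, 0.95, 0.70, 1.45])      # fitted relative risks
authority_total = 1250                                          # known CHD prevalence count

# Multinomial-equivalent allocation: expected ward counts proportional to pop x risk,
# constrained to sum to the known authority total.
weights = ward_pop * rel_risk
expected = authority_total * weights / weights.sum()

print(np.round(expected, 1), "sum =", expected.sum())
```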

  8. Estimating Prevalence of Coronary Heart Disease for Small Areas Using Collateral Indicators of Morbidity

    PubMed Central

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless prevalence estimates at small area level are important for assessing health need. An instance is provided by England where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor, that accounts for clustering in latent risk factors, and also provides a summary measure of small area CHD morbidity. PMID:20195439

  9. Snow Water Equivalent estimation based on satellite observation

    NASA Astrophysics Data System (ADS)

    Macchiavello, G.; Pesce, F.; Boni, G.; Gabellani, S.

    2009-09-01

    The availability of remotely sensed images and their analysis is a powerful tool for monitoring the extent and type of snow cover over territory where in situ measurements are often difficult. Information on snow is fundamental for monitoring and forecasting the available water, above all in mid-latitude regions such as the Mediterranean, where snowmelt may cause floods. Hydrological model requirements and the daily acquisitions of MODIS (Moderate Resolution Imaging Spectroradiometer) led, in previous research activities, to the development of a method to automatically map snow cover from multi-spectral images. The major hydrological parameter related to the snowpack, however, is the snow water equivalent (SWE), which represents a direct measure of the water stored in the basin. This work therefore focused on the daily estimation of SWE from MODIS images. Because of the complexity of this goal when based only on optical data, little guidance is available in the literature: no direct relation can be extracted between the spectral information in the MODIS range and SWE. A new method, respectful of the physics of snow, was therefore defined and developed. Since the snow water equivalent is the product of three factors (snow density, snow depth, and snow covered area), the proposed approach treats each of these physical quantities separately. Snow density is a function of snow age, so a new method was developed to evaluate it: a module for snow age simulation from albedo information activates an age counter, updated by new-snow information, to track snow age from zero accumulation to the end of the melting season. The height of the snowpack is retrieved by adopting a relation between vegetation and snow depth distributions, computing the snow height distribution from the relation between snow cover fraction and forest canopy density. Finally, the SWE is calculated for the snow covered areas detected by a previously developed decision tree classifier, which classifies snow cover by self-selecting rules in a statistically optimal way. The advantages introduced by this work are several. First, by applying a method suited to the data features, it is possible to obtain an automatic snow cover description with high frequency. Moreover, the modularity of the proposed approach allows the estimation of each of the three factors to be improved independently. Limitations lie in the cloud problem, which affects results by obscuring the observed territory and is mitigated by fusing temporal and spatial information; in addition, the spatial resolution of the data, satisfactory for the scale of hydrological models, mismatches the available in situ point information, causing difficulties for method validation and calibration. Nevertheless, this workflow is computationally cost-effective, robust to the radiometric noise of the original data, and provides spatially extended and frequent information.
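
    The decomposition at the heart of the approach can be written per pixel as SWE = (snow density / water density) x snow depth x snow covered fraction; a toy raster computation is sketched below with entirely invented density, depth, and snow-cover maps standing in for the three estimated factors.

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (4, 4)                                    # tiny raster for illustration

snow_density = rng.uniform(150, 450, shape)       # kg/m^3 (would come from simulated snow age)
snow_depth_m = rng.uniform(0.0, 2.5, shape)       # m (would come from the canopy-density relation)
sca_fraction = rng.uniform(0.0, 1.0, shape)       # fractional snow-covered area (from the classifier)

RHO_WATER = 1000.0                                # kg/m^3

# Per-pixel snow water equivalent in millimetres of water.
swe_mm = (snow_density / RHO_WATER) * snow_depth_m * 1000.0 * sca_fraction

print(np.round(swe_mm, 1))
print("basin mean SWE (mm):", round(float(swe_mm.mean()), 1))
```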

  10. Using CFD Surface Solutions to Shape Sonic Boom Signatures Propagated from Off-Body Pressure

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2013-01-01

    The conceptual design of a low-boom and low-drag supersonic aircraft remains a challenge despite significant progress in recent years. Inverse design using reversed equivalent area and adjoint methods has been demonstrated to be effective in shaping the ground signature propagated from computational fluid dynamics (CFD) off-body pressure distributions. However, there is still a need to reduce the computational cost in the early stages of design, both to obtain a baseline that is feasible for low-boom shaping and in the search for a robust low-boom design over the entire sonic boom footprint. The proposed design method addresses the need to reduce the computational cost of robust low-boom design by using surface pressure distributions from CFD solutions to shape sonic boom ground signatures propagated from CFD off-body pressure.

  11. Dark matter and the equivalence principle

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of the dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent tests of the equivalence principle are discussed as a basis for testing dark matter scenarios involving long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  12. The relationship between mortality caused by cardiovascular diseases and two climatic factors in densely populated areas in Norway and Ireland.

    PubMed

    Eng, H; Mercer, J B

    2000-10-01

    Seasonal variations in mortality due to cardiovascular disease have been demonstrated in many countries, with the highest levels occurring during the coldest months of the year. It has been suggested that this can be explained by cold climate. In this study, we examined the relationship between mortality and two different climatic factors in two densely populated areas (Dublin, Ireland and Oslo/Akershus, Norway). Meteorological data (mean daily air temperatures and wind speed) and registered daily mortality data for three groups of cardiovascular disease for the period 1985-1994 were obtained for the two respective areas. The daily mortality ratio for both men and women of 60 years and older was calculated from the mortality data. The wind chill temperature equivalent was calculated from the Siple and Passel formula. The seasonal variations in mortality were greater in Dublin than in Oslo/Akershus, with mortality being highest in winter. This pattern was similar to that previously shown for the two respective countries as a whole. There was a negative correlation between mortality and both air temperature and wind chill temperature equivalent for all three groups of diseases. The slopes of the linear regression lines describing the relationship between mortality and air temperature were considerably steeper for the Irish data than for the Norwegian data. However, the difference in the steepness of the linear regression lines for the relationship between mortality and wind chill temperature equivalent was considerably smaller between the two areas. This can be explained by the fact that Dublin is a much windier area than Oslo/Akershus. The results of this study demonstrate that the inclusion of two climatic factors rather than just one changes the impression of the relationship between climate and cardiovascular disease mortality.
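
    For reference, the classical Siple and Passel wind chill index, from which a wind chill equivalent temperature can then be derived, is commonly quoted as

    $$
    H = \left(10.45 + 10\sqrt{v} - v\right)\left(33 - T_a\right),
    $$

    where $H$ is the heat loss in kcal m^-2 h^-1, $v$ is the wind speed in m/s, and $T_a$ is the air temperature in degrees Celsius. The exact variant and unit conventions used in the study are not stated in the abstract, so this form is given only as the commonly cited one.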

  13. Exploration of the psychophysics of a motion displacement hyperacuity stimulus.

    PubMed

    Verdon-Roe, Gay Mary; Westcott, Mark C; Viswanathan, Ananth C; Fitzke, Frederick W; Garway-Heath, David F

    2006-11-01

    To explore the summation properties of a motion-displacement hyperacuity stimulus with respect to stimulus area and luminance, with the goal of applying the results to the development of a motion-displacement test (MDT) for the detection of early glaucoma. A computer-generated line stimulus was presented with displacements randomized between 0 and 40 minutes of arc (min arc). Displacement thresholds (50% seen) were compared for stimuli of equal area but different edge length (orthogonal to the direction of motion) at four retinal locations. Also, MDT thresholds were recorded at five values of Michelson contrast (25%-84%) for each of five line lengths (11-128 min arc) at a single nasal location (-27,3). Frequency-of-seeing (FOS) curves were generated and displacement thresholds and interquartile ranges (IQR, 25%-75% seen) determined by probit analysis. Equivalent displacement thresholds were found for stimuli of equal area but half the edge length. Elevations of thresholds and IQR were demonstrated as line length and contrast were reduced. Equivalent displacement thresholds were also found for stimuli of equivalent energy (stimulus area x [stimulus luminance - background luminance]), in accordance with Ricco's law. There was a linear relationship (slope -0.5) between log MDT threshold and log stimulus energy. Stimulus area, rather than edge length, determined displacement thresholds within the experimental conditions tested. MDT thresholds are linearly related to the square root of the total energy of the stimulus. A new law, the threshold energy-displacement (TED) law, is proposed to apply to MDT summation properties, giving the relationship T = K log E, where T is the MDT threshold, K is a constant, and E is the stimulus energy.

  14. Surface roughness effects on contact line motion with small capillary number

    NASA Astrophysics Data System (ADS)

    Yang, Feng-Chao; Chen, Xiao-Peng; Yue, Pengtao

    2018-01-01

    In this work, we investigate how surface roughness influences contact line dynamics by simulating forced wetting in a capillary tube. The tube wall is decorated with microgrooves and is intrinsically hydrophilic. A phase-field method is used to capture the fluid interface and the moving contact line. According to the numerical results, a criterion is proposed to judge whether the grooves are entirely wetted or not at vanishing capillary numbers. When the contact line moves over a train of grooves, the apparent contact angle exhibits a periodic nature, no matter whether the Cassie-Baxter or the Wenzel state is achieved. The oscillation amplitude of apparent contact angle is analyzed and found to be inversely proportional to the interface area. The contact line motion can be characterized as stick-jump-slip in the Cassie-Baxter state and stick-slip in the Wenzel state. By comparing to the contact line dynamics on smooth surfaces, equivalent microscopic contact angles and slip lengths are obtained. The equivalent slip length in the Cassie-Baxter state agrees well with the theoretical model in the literature. The equivalent contact angles are, however, much greater than the predictions of the Cassie-Baxter model and the Wenzel model for equilibrium stable states. Our results reveal that the pinning of the contact line at surface defects effectively enhances the hydrophobicity of rough surfaces, even when the surface material is intrinsically hydrophilic and the flow is under the Wenzel state.

  15. Individual preferences modulate incentive values: Evidence from functional MRI

    PubMed Central

    Koeneke, Susan; Pedroni, Andreas F; Dieckmann, Anja; Bosch, Volker; Jäncke, Lutz

    2008-01-01

    Background In most studies on human reward processing, reward intensity has been manipulated on an objective scale (e.g., varying monetary value). Everyday experience, however, teaches us that objectively equivalent rewards may differ substantially in their subjective incentive values. One factor influencing incentive value in humans is branding. The current study explores the hypothesis that individual brand preferences modulate activity in reward areas similarly to objectively measurable differences in reward intensity. Methods A wheel-of-fortune game comprising an anticipation phase and a subsequent outcome evaluation phase was implemented. Inside a 3 Tesla MRI scanner, 19 participants played for chocolate bars of three different brands that differed in subjective attractiveness. Results Parametrical analysis of the obtained fMRI data demonstrated that the level of activity in anatomically distinct neural networks was linearly associated with the subjective preference hierarchy of the brands played for. During the anticipation phases, preference-dependent neural activity has been registered in premotor areas, insular cortex, orbitofrontal cortex, and in the midbrain. During the outcome phases, neural activity in the caudate nucleus, precuneus, lingual gyrus, cerebellum, and in the pallidum was influenced by individual preference. Conclusion Our results suggest a graded effect of differently preferred brands onto the incentive value of objectively equivalent rewards. Regarding the anticipation phase, the results reflect an intensified state of wanting that facilitates action preparation when the participants play for their favorite brand. This mechanism may underlie approach behavior in real-life choice situations. PMID:19032746

  16. Hydrothermal synthesis of high surface area ZIF-8 with minimal use of TEA

    NASA Astrophysics Data System (ADS)

    Butova, V. V.; Budnyk, A. P.; Bulanova, E. A.; Lamberti, C.; Soldatov, A. V.

    2017-07-01

    In this paper we present, for the first time, a simple hydrothermal recipe for the synthesis of the ZIF-8 Metal-Organic Framework (MOF) with a large specific surface area (1340 m2/g by BET). An important feature of the method is that the product forms in aqueous medium under standard hydrothermal conditions, without DMF and without a great excess of linker, using TEA as structure-directing agent. The ZIF-8 crystal phase of the product was confirmed by XRD; this technique was also exploited to check the crystallinity and to follow the changes in the MOF structure induced by heating. TGA and temperature-dependent XRD testify to the high thermal stability of the material (470 °C in N2 and 400 °C in air). The IR spectral profile of the material provides a complete picture of the vibrations assigned to the linker and the metal center. A systematic investigation of the products obtained by increasing the TEA amount in the reacting medium from 0 to 25.5 mol equivalent Zn2+ allowed us to understand its role and to identify 2.6 mol equivalent Zn2+ as the minimum amount needed to obtain a single-phase ZIF-8 material of the high quality reported above. The stability of the material under severe basic conditions makes it a promising candidate in heterogeneous catalysis. The material has shown a high capacity for I2 uptake, making it interesting also for selective molecular adsorption.

  17. Effective implementation of the weak Galerkin finite element methods for the biharmonic equation

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-07-06

    The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This can be achieved by eliminating local unknowns to obtain a global system with significant reduction of size. In fact this reduced global system is equivalent to the Schur complements of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.
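
    As an illustration of the elimination step described above, the sketch below (not the authors' code) condenses a generic block linear system onto its "boundary" unknowns via a Schur complement; the block names and sizes are hypothetical.

```python
import numpy as np

# Minimal sketch: eliminate "interior" unknowns from a block system
#   [A_II  A_IB] [x_I]   [f_I]
#   [A_BI  A_BB] [x_B] = [f_B]
# to obtain the reduced (Schur-complement) system  S x_B = g,  with
#   S = A_BB - A_BI A_II^{-1} A_IB,   g = f_B - A_BI A_II^{-1} f_I.

rng = np.random.default_rng(0)
n_i, n_b = 6, 3                                   # interior and boundary unknown counts (toy sizes)
M = rng.normal(size=(n_i + n_b, n_i + n_b))
A = M @ M.T + (n_i + n_b) * np.eye(n_i + n_b)     # SPD matrix for a well-posed toy problem
f = rng.normal(size=n_i + n_b)

A_II, A_IB = A[:n_i, :n_i], A[:n_i, n_i:]
A_BI, A_BB = A[n_i:, :n_i], A[n_i:, n_i:]
f_I, f_B = f[:n_i], f[n_i:]

S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)     # reduced global system
g = f_B - A_BI @ np.linalg.solve(A_II, f_I)

x_B = np.linalg.solve(S, g)                       # boundary unknowns
x_I = np.linalg.solve(A_II, f_I - A_IB @ x_B)     # interior unknowns recovered locally

# The condensed solution matches the full solve:
assert np.allclose(np.concatenate([x_I, x_B]), np.linalg.solve(A, f))
```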

  18. Effective implementation of the weak Galerkin finite element methods for the biharmonic equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This can be achieved by eliminating local unknowns to obtain a global system with significant reduction of size. In fact this reduced global system is equivalent to the Schur complements of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.

  19. Evaluation of SNODAS snow depth and snow water equivalent estimates for the Colorado Rocky Mountains, USA

    USGS Publications Warehouse

    Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey

    2012-01-01

    The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km² resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km² scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R² = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R² of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.
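
    For readers who want to reproduce the kind of skill metrics quoted above (variance explained and root mean square error), the following sketch computes them for a pair of hypothetical co-located SWE arrays; the numbers are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical co-located snow-survey and SNODAS SWE values (cm); placeholders only.
swe_measured = np.array([35.0, 52.0, 48.0, 60.0, 71.0, 44.0])
swe_snodas   = np.array([30.0, 55.0, 41.0, 66.0, 65.0, 50.0])

r_squared = np.corrcoef(swe_measured, swe_snodas)[0, 1] ** 2   # variance explained
rmse = np.sqrt(np.mean((swe_measured - swe_snodas) ** 2))      # root mean square error

print(f"R^2 = {r_squared:.2f}, RMSE = {rmse:.1f} cm")
```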

  20. SU-E-T-353: Verification of Water Equivalent Thickness (WET) and Water Equivalent Spreadness (WES) of Proton Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demez, N; Lee, T; Keppel, Cynthia

    Purpose: To verify calculated water equivalent thickness (WET) and water equivalent spreadness (WES) in various tissue equivalent media for proton therapy. Methods: Water equivalent thicknesses (WET) of tissue equivalent materials have been calculated using the Bragg-Kleeman rule. Lateral spreadness and fluence reduction of proton beams were both calculated in those media using the proton loss model (PLM) algorithm. In addition, we calculated lateral spreadness ratios with respect to that in water at the same WET depth, and so the WES was defined. The WETs of those media for different proton beam energies were measured using a MLIC (Multi-Layered Ionization Chamber). Also, fluence and field sizes in those materials of various thicknesses were measured with ionization chambers and films. Results: Calculated WETs are in agreement with measured WETs within 0.5%. We found that the water equivalent spreadness (WES) is constant, and the fluence and field size measurements verify that fluence can be estimated using the concept of WES. Conclusions: Calculation of WET based on the Bragg-Kleeman rule as well as the constant WES of proton beams for tissue equivalent phantoms can be used to predict fluence and field sizes at the depths of interest in tissue equivalent media accurately for clinically available proton energies.
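
    A minimal sketch of the WET calculation via the Bragg-Kleeman rule is given below, under the common assumption that water and the material share the same range-energy exponent p; the coefficient values are placeholders, not the paper's fitted parameters.

```python
# Bragg-Kleeman rule: R(E) = alpha * E**p.  If water and the material share
# (approximately) the same exponent p, a slab of physical thickness t_m has
#   WET ≈ t_m * alpha_water / alpha_material.

def wet_bragg_kleeman(t_material_cm, alpha_water, alpha_material):
    """Water-equivalent thickness (cm) of a slab, assuming a shared exponent p."""
    return t_material_cm * alpha_water / alpha_material

alpha_water = 2.2e-3    # hypothetical Bragg-Kleeman coefficient for water (cm / MeV^p)
alpha_tissue = 2.0e-3   # hypothetical coefficient for a tissue-equivalent plastic
t_slab = 5.0            # slab thickness in cm

print(f"WET ≈ {wet_bragg_kleeman(t_slab, alpha_water, alpha_tissue):.2f} cm")
```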

  1. The Kadison–Singer Problem in mathematics and engineering

    PubMed Central

    Casazza, Peter G.; Tremain, Janet Crandell

    2006-01-01

    We will see that the famous intractable 1959 Kadison–Singer Problem in C*-algebras is equivalent to fundamental open problems in a dozen different areas of research in mathematics and engineering. This work gives all these areas common ground on which to interact, as well as explaining why each area has volumes of literature on its respective problems without a satisfactory resolution. PMID:16461465

  2. Skin integrated with perfusable vascular channels on a chip.

    PubMed

    Mori, Nobuhito; Morimoto, Yuya; Takeuchi, Shoji

    2017-02-01

    This paper describes a method for fabricating perfusable vascular channels coated with endothelial cells within a cultured skin-equivalent by fixing it to a culture device connected to an external pump and tubes. A histological analysis showed that vascular channels were constructed in the skin-equivalent, which showed a conventional dermal/epidermal morphology, and the endothelial cells formed tight junctions on the vascular channel wall. The barrier function of the skin-equivalent was also confirmed. Cell distribution analysis indicated that the vascular channels supplied nutrition to the skin-equivalent. Moreover, the feasibility of a skin-equivalent containing vascular channels as a model for studying vascular absorption was demonstrated by measuring test molecule permeation from the epidermal layer into the vascular channels. The results suggested that this skin-equivalent can be used for skin-on-a-chip applications including drug development, cosmetics testing, and studying skin biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Equivalent isotropic scattering formulation for transient short-pulse radiative transfer in anisotropic scattering planar media.

    PubMed

    Guo, Z; Kumar, S

    2000-08-20

    An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
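
    The scaling evaluated in this abstract can be illustrated with the standard similarity relations that map an anisotropically scattering medium onto an equivalent isotropic one; the sketch below uses illustrative coefficients rather than the paper's test cases.

```python
# Standard similarity (isotropic-scaling) relations for a medium with absorption
# coefficient sigma_a, scattering coefficient sigma_s, and asymmetry factor g.
# Values below are illustrative only.

def equivalent_isotropic(sigma_a, sigma_s, g):
    """Return (scaled scattering coeff., scaled extinction coeff., scaled albedo)."""
    sigma_s_star = (1.0 - g) * sigma_s           # reduced scattering coefficient
    sigma_e_star = sigma_a + sigma_s_star        # scaled extinction coefficient
    omega_star = sigma_s_star / sigma_e_star     # scaled single-scattering albedo
    return sigma_s_star, sigma_e_star, omega_star

# Forward-scattering example (g > 0) vs. backward-scattering example (g < 0):
for g in (0.9, -0.4):
    print(g, equivalent_isotropic(sigma_a=0.1, sigma_s=10.0, g=g))
```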

  4. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    PubMed

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

    Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines which, in turn, will improve the management of hypertension.

  5. A new self-shielding method based on a detailed cross-section representation in the resolved energy domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saygin, H.; Hebert, A.

    The calculation of a dilution cross section σ̄_e is the most important step in the self-shielding formalism based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections based on the formalism of Riemann integration in the resolved energy domain is proposed. This new method is compared to the generalized Stamm'ler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is obtained using reference results obtained from a fine-group slowing-down code named CESCOL. It is shown that the proposed method leads to slightly better performance than the generalized Stamm'ler approach.

  6. Improved Mirror Source Method in Roomacoustics

    NASA Astrophysics Data System (ADS)

    Mechel, F. P.

    2002-10-01

    Most authors in room acoustics qualify the mirror source method (MS-method) as the only exact method to evaluate sound fields in auditoria. But evidently nobody applies it. The reason for this discrepancy is the abundantly high numbers of needed mirror sources which are reported in the literature, although such estimations of needed numbers of mirror sources mostly are used for the justification of more or less heuristic modifications of the MS-method. The present, intentionally tutorial article accentuates the analytical foundations of the MS-method whereby the number of needed mirror sources is reduced already. Further, the task of field evaluation in three-dimensional spaces is reduced to a sequence of tasks in two-dimensional room edges. This not only allows the use of easier geometrical computations in two dimensions, but also the sound field in corner areas can be represented by a single (directional) source sitting on the corner line, so that only this "corner source" must be mirror-reflected in the further process. This procedure gives a drastic reduction of the number of needed equivalent sources. Finally, the traditional MS-method is not applicable in rooms with convex corners (the angle between the corner flanks, measured on the room side, exceeds 180°). In such cases, the MS-method is combined below with the second principle of superposition (PSP). It reduces the scattering task at convex corners to two sub-tasks between one flank and the median plane of the room wedge, i.e., always in concave corner areas where the MS-method can be applied.
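
    For orientation, the sketch below implements only the textbook first-order mirror-source construction for a rectangular room (not the corner-source or PSP extensions described above); room geometry, wall reflection factor, and frequency are assumed values.

```python
import numpy as np

# First-order mirror (image) sources for a rectangular ("shoebox") room.
Lx, Ly, Lz = 8.0, 6.0, 3.0          # room dimensions (m), illustrative
src = np.array([2.0, 1.5, 1.2])     # source position
rcv = np.array([5.0, 4.0, 1.6])     # receiver position
beta = 0.8                          # real-valued wall reflection factor (assumed)
k = 2 * np.pi * 500.0 / 343.0       # wavenumber at 500 Hz, c = 343 m/s

def mirror(point, axis, wall_coord):
    """Mirror a point across the plane {coordinate[axis] = wall_coord}."""
    p = point.copy()
    p[axis] = 2.0 * wall_coord - p[axis]
    return p

# Direct source plus the six first-order image sources (one per wall).
sources = [(src, 1.0)]
for axis, dim in zip(range(3), (Lx, Ly, Lz)):
    sources.append((mirror(src, axis, 0.0), beta))   # image in the wall at 0
    sources.append((mirror(src, axis, dim), beta))   # image in the opposite wall

# Superpose spherical waves exp(-i k r) / (4 pi r) from all (image) sources.
p_total = 0.0 + 0.0j
for pos, strength in sources:
    r = np.linalg.norm(rcv - pos)
    p_total += strength * np.exp(-1j * k * r) / (4 * np.pi * r)

print("Sound pressure (direct + first-order reflections):", p_total)
```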

  7. Determination of lead equivalent values according to IEC 61331-1:2014—Report and short guidelines for testing laboratories

    NASA Astrophysics Data System (ADS)

    Büermann, L.

    2016-09-01

    Materials used for the production of protective devices against diagnostic medical X-radiation described in the international standard IEC 61331-3 need to be specified in terms of their lead attenuation equivalent thickness according to the methods described in IEC 61331-1. In May 2014 the IEC published the second edition of these standards which contain improved methods for the determination of attenuation ratios and the corresponding lead attenuation equivalent thicknesses of lead-reduced or lead-free materials. These methods include the measurement of scattered photons behind the protective material which were hitherto neglected but are becoming more important because of the increasing use of lead-reduced or even lead-free materials. They can offer the same protective effect but are up to 20% lighter and also easier to dispose of. The new method is based on attenuation ratios measured with the so-called "inverse broad beam condition". Since the corresponding measurement procedure is new and in some respects more complex than the methods used in the past, it was regarded as being helpful to have a description of how such measurements can reliably be performed. This technical report describes in detail the attenuation ratio measurements and corresponding procedures for the lead equivalent determinations of sample materials using the method with the inverse broad beam condition as carried out at the Physikalisch-Technische Bundesanstalt (PTB). PTB still offers material testing and certification for the German responsible notified body. In addition to the description of the measurements at PTB, a short technical guide is provided for testing laboratories which intend to establish this kind of protective material certification. The guide includes technical recommendations for the testing equipment like X-ray facilities, reference lead sheets and radiation detectors; special procedures for the determination of the lead attenuation equivalent thickness; their uncertainties and the necessary contents of the test certificate.
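
    The basic lead-equivalence determination can be sketched as interpolation of the sample's measured attenuation ratio on a calibration curve obtained with reference lead sheets; the calibration points and sample value below are invented placeholders and do not reproduce the IEC 61331-1 beam qualities or the inverse-broad-beam setup.

```python
import numpy as np

# Calibration curve: attenuation ratio measured behind reference lead sheets
# (hypothetical values), and the measured ratio of a lead-free sample.
pb_thickness_mm = np.array([0.0, 0.125, 0.25, 0.35, 0.5, 0.75, 1.0])
attenuation_ratio = np.array([1.0, 0.32, 0.12, 0.07, 0.035, 0.013, 0.006])

sample_ratio = 0.05   # measured attenuation ratio of the sample (hypothetical)

# Interpolate on log(attenuation ratio), which is closer to linear in thickness.
lead_equiv_mm = np.interp(np.log(sample_ratio),
                          np.log(attenuation_ratio[::-1]),
                          pb_thickness_mm[::-1])
print(f"Lead equivalent thickness ≈ {lead_equiv_mm:.2f} mm Pb")
```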

  8. Research on the analytical method about influence of gas leakage and explosion on subway

    NASA Astrophysics Data System (ADS)

    Ji, Wendong; Yang, Ligong; Chen, Lin

    2018-05-01

    With the construction and development of city subways, the cross impact between underground rail transit and the gas pipe network is becoming more and more serious, but there is no analytical method for the impact of gas explosions on the subway. In this paper, the gas leakage is converted into an equivalent TNT charge, on the basis of which the explosive impact load is calculated. Based on the actual form that a gas explosion takes, the subsequent calculation is made more convenient by treating the explosive impact load as an equivalent uniform load within a certain range. The overlying soil of the subway station plays a protective role for the subway, significantly reducing the displacement of the subway structure during the explosion. The analysis of an actual case shows that this method can be successfully applied to the quantitative analysis of such accidents.
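
    The TNT-equivalence step can be sketched as follows; the yield factor, heat of combustion, and leaked mass are typical illustrative values, not the paper's inputs.

```python
# Convert a mass of leaked gas to an equivalent TNT charge via the heats of
# combustion/detonation and an explosion yield factor (illustrative numbers).

def tnt_equivalent_mass(m_gas_kg, dH_gas_MJ_per_kg, yield_factor,
                        e_tnt_MJ_per_kg=4.5):
    """Equivalent TNT mass (kg) for a vapour-cloud explosion of m_gas_kg of gas."""
    return yield_factor * m_gas_kg * dH_gas_MJ_per_kg / e_tnt_MJ_per_kg

m_gas = 100.0   # kg of leaked natural gas (assumed)
dH_ch4 = 50.0   # MJ/kg, approximate heat of combustion of methane
eta = 0.05      # explosion yield factor (assumed, typically a few percent)

W = tnt_equivalent_mass(m_gas, dH_ch4, eta)
print(f"Equivalent TNT charge ≈ {W:.1f} kg")   # ~55.6 kg for these assumptions

# The blast load on the structure is then evaluated from this charge, e.g. via
# the Hopkinson-Cranz scaled distance Z = R / W**(1/3) and empirical
# overpressure curves (not reproduced here).
```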

  9. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

  10. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    NASA Astrophysics Data System (ADS)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.

  11. Averaged ratio between complementary profiles for evaluating shape distortions of map projections and spherical hierarchical tessellations

    NASA Astrophysics Data System (ADS)

    Yan, Jin; Song, Xiao; Gong, Guanghong

    2016-02-01

    We describe a metric named averaged ratio between complementary profiles to represent the distortion of map projections, and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is a variable of numerical equivalence to both scale component and angular deformation component of Tissot indicatrix, and avoids the invalidation when using Tissot indicatrix and derived differential calculus for evaluating non-map-projection based tessellations where mathematical formulae do not exist (e.g., direct spherical subdivisions), (2) exhibits simplicity (neither differential nor integral calculus) and uniformity in the form of calculations, (3) requires low computational cost, while maintaining high correlation with the results of differential calculus, (4) is a quasi-invariant under rotations, and (5) reflects the distortions of map projections, distortion of spherical cells, and the associated distortions of texels. As an indicator of quantitative evaluation, we investigated typical spherical tessellation methods, some variants of tessellation methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions. The evaluation involves commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity implied that (1) Uniform Spherical Grids and its variant show good qualities in both area uniformity and shape regularity, and (2) Crusta, Unicube map, and a variant of Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.

  12. Ambient Dose Equivalent measured at the Instituto Nacional de Cancerologia Department of Nuclear Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, O.; Torres-Ulloa, C. L.; Facultad de Ciencias, Universidad Nacional Autonoma de Mexico, AP 70-542, 04510, DF

    2010-12-07

    Ambient dose equivalent values were determined in several sites at the Instituto Nacional de Cancerologia, Departmento de Medicina Nuclear, using TLD-100 and TLD-900 thermoluminescent dosemeters. Additionally, ambient dose equivalent was measured at a corridor outside the hospitalization room for patients treated with ¹³⁷Cs brachytherapy. Dosemeter calibration was performed at the Instituto Nacional de Investigaciones Nucleares, Laboratorio de Metrologia, to known ¹³⁷Cs gamma radiation air kerma. Radionuclides considered for this study are ¹³¹I, ¹⁸F, ⁶⁷Ga, ⁹⁹ᵐTc, ¹¹¹In, ²⁰¹Tl and ¹³⁷Cs, with main gamma energies between 93 and 662 keV. Dosemeters were placed during a five month period in the nuclear medicine rooms (containing gamma-cameras), injection corridor, patient waiting areas, PET/CT study room, hot lab, waste storage room and corridors next to the hospitalization rooms for patients treated with ¹³¹I and ¹³⁷Cs. High dose values were found at the waste storage room, outside corridor of ¹³⁷Cs brachytherapy patients and PET/CT area. Ambient dose equivalent rate obtained for the ¹³⁷Cs brachytherapy corridor is equal to (18.51±0.02)×10⁻³ mSv/h. Sites with minimum doses are the gamma camera rooms, having ambient dose equivalent rates equal to (0.05±0.03)×10⁻³ mSv/h. Recommendations have been given to the Department authorities so that further actions are taken to reduce doses at high dose sites in order to comply with the ALARA principle (as low as reasonably achievable).

  13. Use of a "Super-child" Approach to Assess the Vitamin A Equivalence of Moringa oleifera Leaves, Develop a Compartmental Model for Vitamin A Kinetics, and Estimate Vitamin A Total Body Stores in Young Mexican Children.

    PubMed

    Lopez-Teros, Veronica; Ford, Jennifer Lynn; Green, Michael H; Tang, Guangwen; Grusak, Michael A; Quihui-Cota, Luis; Muzhingi, Tawanda; Paz-Cassini, Mariela; Astiazaran-Garcia, Humberto

    2017-12-01

    Background: Worldwide, an estimated 250 million children <5 y old are vitamin A (VA) deficient. In Mexico, despite ongoing efforts to reduce VA deficiency, it remains an important public health problem; thus, food-based interventions that increase the availability and consumption of provitamin A-rich foods should be considered. Objective: The objectives were to assess the VA equivalence of ²H-labeled Moringa oleifera (MO) leaves and to estimate both total body stores (TBS) of VA and plasma retinol kinetics in young Mexican children. Methods: β-Carotene was intrinsically labeled by growing MO plants in a ²H₂O nutrient solution. Fifteen well-nourished children (17-35 mo old) consumed puréed MO leaves (1 mg β-carotene) and a reference dose of [¹³C₁₀]retinyl acetate (1 mg) in oil. Blood (2 samples/child) was collected 10 times (2 or 3 children each time) over 35 d. The bioefficacy of MO leaves was calculated from areas under the composite "super-child" plasma isotope response curves, and MO VA equivalence was estimated through the use of these values; a compartmental model was developed to predict VA TBS and retinol kinetics through the use of composite plasma [¹³C₁₀]retinol data. TBS were also estimated with isotope dilution. Results: The relative bioefficacy of β-carotene retinol activity equivalents from MO was 28%; VA equivalence was 3.3:1 by weight (0.56 μmol retinol:1 μmol β-carotene). Kinetics of plasma retinol indicate more rapid plasma appearance and turnover and more extensive recycling in these children than are observed in adults. Model-predicted mean TBS (823 μmol) was similar to values predicted using a retinol isotope dilution equation applied to data from 3 to 6 d after dosing (mean ± SD: 832 ± 176 μmol; n = 7). Conclusions: The super-child approach can be used to estimate population carotenoid bioefficacy and VA equivalence, VA status, and parameters of retinol metabolism from a composite data set. Our results provide initial estimates of retinol kinetics in well-nourished young children with adequate VA stores and demonstrate that MO leaves may be an important source of VA. © 2017 American Society for Nutrition.

  14. Students’ misconception on equal sign

    NASA Astrophysics Data System (ADS)

    Kusuma, N. F.; Subanti, S.; Usodo, B.

    2018-04-01

    Equivalence is a very general relation in mathematics. The focus of this article is narrowed specifically to the equal sign in the context of equations. The equal sign is a symbol of mathematical equivalence. Studies have found that many students do not have a deep understanding of equivalence. Students often misinterpret the equal sign as an operational symbol rather than a symbol of mathematical equivalence. This misinterpretation of the equal sign is labeled a misconception. It is important to discuss and resolve it immediately because it can lead to problems in students' understanding. The purpose of this research is to describe students' misconception about the meaning of the equal sign in equal matrices. A descriptive method was used in this study involving five students of a Senior High School in Boyolali who were taking an Equal Matrices course. The result of this study shows that all of the students had the misconception about the meaning of the equal sign. They interpret the equal sign as an operational symbol rather than a symbol of mathematical equivalence. Students solve the problem in only a single way, a computational method, so that they become stuck in a monotonous way of thinking and are unable to develop their creativity.

  15. Generalization of Equivalent Crystal Theory to Include Angular Dependence

    NASA Technical Reports Server (NTRS)

    Ferrante, John; Zypman, Fredy R.

    2004-01-01

    In the original Equivalent Crystal Theory, each atomic site in the real crystal is assigned an equivalent lattice constant, in general different from the ground state one. This parameter corresponds to a local compression or expansion of the lattice. The basic method considers these volumetric transformations and, in addition, introduces the possibility that the reference lattice is anisotropically distorted. These distortions, however, were introduced ad hoc. In this work, we generalize the original Equivalent Crystal Theory by systematically introducing site-dependent directional distortions of the lattice, whose corresponding distortions account for the dependence of the energy on anisotropic local density variations. This is done in the spirit of the original framework, but including a gradient term in the density. This approach is introduced to correct a deficiency in the original Equivalent Crystal Theory and other semiempirical methods in quantitatively obtaining the correct ratios of the surface energies of low index planes of cubic metals (100), (110), and (111). We develop here the basic framework, and apply it to the calculation of Fe (110) and Fe (111) surface energy formation. The results, compared with first principles calculations, show an improvement over previous semiempirical approaches.

  16. Snow water equivalent in the Alps as seen by gridded data sets, CMIP5 and CORDEX climate models

    NASA Astrophysics Data System (ADS)

    Terzago, Silvia; von Hardenberg, Jost; Palazzi, Elisa; Provenzale, Antonello

    2017-07-01

    The estimate of the current and future conditions of snow resources in mountain areas would require reliable, kilometre-resolution, regional-observation-based gridded data sets and climate models capable of properly representing snow processes and snow-climate interactions. At the moment, the development of such tools is hampered by the sparseness of station-based reference observations. In past decades passive microwave remote sensing and reanalysis products have mainly been used to infer information on the snow water equivalent distribution. However, the investigation has usually been limited to flat terrains as the reliability of these products in mountain areas is poorly characterized. This work considers the available snow water equivalent data sets from remote sensing and from reanalyses for the greater Alpine region (GAR), and explores their ability to provide a coherent view of the snow water equivalent distribution and climatology in this area. Further we analyse the simulations from the latest-generation regional and global climate models (RCMs, GCMs), participating in the Coordinated Regional Climate Downscaling Experiment over the European domain (EURO-CORDEX) and in the Fifth Coupled Model Intercomparison Project (CMIP5) respectively. We evaluate their reliability in reproducing the main drivers of snow processes - near-surface air temperature and precipitation - against the observational data set EOBS, and compare the snow water equivalent climatology with the remote sensing and reanalysis data sets previously considered. We critically discuss the model limitations in the historical period and we explore their potential in providing reliable future projections. The results of the analysis show that the time-averaged spatial distribution of snow water equivalent and the amplitude of its annual cycle are reproduced quite differently by the different remote sensing and reanalysis data sets, which in fact exhibit a large spread around the ensemble mean. We find that GCMs at spatial resolutions equal to or finer than 1.25° longitude are in closer agreement with the ensemble mean of satellite and reanalysis products in terms of root mean square error and standard deviation than lower-resolution GCMs. The set of regional climate models from the EURO-CORDEX ensemble provides estimates of snow water equivalent at 0.11° resolution that are locally much larger than those indicated by the gridded data sets, and only in a few cases are these differences smoothed out when snow water equivalent is spatially averaged over the entire Alpine domain. ERA-Interim-driven RCM simulations show an annual snow cycle that is comparable in amplitude to those provided by the reference data sets, while GCM-driven RCMs present a large positive bias. RCMs and higher-resolution GCM simulations are used to provide an estimate of the snow reduction expected by the mid-21st century (RCP 8.5 scenario) compared to the historical climatology, with the main purpose of highlighting the limits of our current knowledge and the need for developing more reliable snow simulations.

  17. Precise SAR measurements in the near-field of RF antenna systems

    NASA Astrophysics Data System (ADS)

    Hakim, Bandar M.

    Wireless devices must meet specific safety radiation limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric field probes are employed. The accuracy of such measurements depends on the precision in measuring the dielectric properties of the tissue-equivalent liquids and the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and the associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analyses have been conducted to study the measurement system sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system is used to test GSM cell-phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.

  18. Water, Ice, and Meteorological Measurements at South Cascade Glacier, Washington, Balance Years 2004 and 2005

    USGS Publications Warehouse

    Bidlake, William R.; Josberger, Edward G.; Savoca, Mark E.

    2007-01-01

    Winter snow accumulation and summer snow and ice ablation were measured at South Cascade Glacier, Washington, to estimate glacier mass-balance quantities for balance years 2004 and 2005. The North Cascade Range in the vicinity of South Cascade Glacier accumulated smaller than normal winter snowpacks during water years 2004 and 2005. Correspondingly, the balance years 2004 and 2005 maximum winter snow balances of South Cascade Glacier, 2.08 and 1.97 meters water equivalent, respectively, were smaller than the average of such balances since 1959. The 2004 glacier summer balance (-3.73 meters water equivalent) was the eleventh most negative during 1959 to 2005 and the 2005 glacier summer balance (-4.42 meters water equivalent) was the third most negative. The relatively small winter snow balances and unusually negative summer balances of 2004 and 2005 led to an overall loss of glacier mass. The 2004 and 2005 glacier net balances, -1.65 and -2.45 meters water equivalent, respectively, were the seventh and second most negative during 1953 to 2005. For both balance years, the accumulation area ratio was less than 0.05 and the equilibrium line altitude was higher than the glacier. The unusually negative 2004 and 2005 glacier net balances, combined with a negative balance previously reported for 2003, resulted in a cumulative 3-year net balance of -6.20 meters water equivalent. No equal or greater 3-year mass loss has occurred previously during the more than 4 decades of U.S. Geological Survey mass-balance measurements at South Cascade Glacier. Accompanying the glacier mass losses were retreat of the terminus and reduction of total glacier area. The terminus retreated at a rate of about 17 meters per year during balance year 2004 and 15 meters per year during balance year 2005. Glacier area near the end of balance years 2004 and 2005 was 1.82 and 1.75 square kilometers, respectively. Runoff from the basin containing the glacier and from an adjacent nonglacierized basin was gaged during all or parts of water years 2004 and 2005. Air temperature, wind speed, precipitation, and incoming solar radiation were measured at selected locations on and near the glacier.
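
    A quick arithmetic check of the balance-year bookkeeping quoted above (net balance = winter balance + summer balance) is shown below.

```python
# Net balance is the sum of the winter and summer balances
# (all values in meters water equivalent, as reported in the abstract).
balances = {
    2004: {"winter": 2.08, "summer": -3.73},
    2005: {"winter": 1.97, "summer": -4.42},
}
for year, b in balances.items():
    net = b["winter"] + b["summer"]
    print(year, f"net balance = {net:+.2f} m w.e.")   # -1.65 and -2.45, as reported
```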

  19. Effect on skin hydration of using baby wipes to clean the napkin area of newborn babies: assessor-blinded randomised controlled equivalence trial.

    PubMed

    Lavender, Tina; Furber, Christine; Campbell, Malcolm; Victor, Suresh; Roberts, Ian; Bedwell, Carol; Cork, Michael J

    2012-06-01

    Some national guidelines recommend the use of water alone for napkin cleansing. Yet, there is a readiness, amongst many parents, to use baby wipes. Evidence from randomised controlled trials, of the effect of baby wipes on newborn skin integrity is lacking. We conducted a study to examine the hypothesis that the use of a specifically formulated cleansing wipe on the napkin area of newborn infants (<1 month) has an equivalent effect on skin hydration when compared with using cotton wool and water (usual care). A prospective, assessor-blinded, randomised controlled equivalence trial was conducted during 2010. Healthy, term babies (n=280), recruited within 48 hours of birth, were randomly assigned to have their napkin area cleansed with an alcohol-free baby wipe (140 babies) or cotton wool and water (140 babies). Primary outcome was change in hydration from within 48 hours of birth to 4 weeks post-birth. Secondary outcomes comprised changes in trans-epidermal water loss, skin surface pH and erythema, presence of microbial skin contaminants/irritants at 4 weeks and napkin dermatitis reported by midwife at 4 weeks and mother during the 4 weeks. Complete hydration data were obtained for 254 (90.7 %) babies. Wipes were shown to be equivalent to water and cotton wool in terms of skin hydration (intention-to-treat analysis: wipes 65.4 (SD 12.4) vs. water 63.5 (14.2), p=0.47, 95% CI -2.5 to 4.2; per protocol analysis: wipes 64.6 (12.4) vs. water 63.6 (14.3), p=0.53, 95% CI -2.4 to 4.2). No significant differences were found in the secondary outcomes, except for maternal-reported napkin dermatitis, which was higher in the water group (p=0.025 for complete responses). Baby wipes had an equivalent effect on skin hydration when compared with cotton wool and water. We found no evidence of any adverse effects of using these wipes. These findings offer reassurance to parents who choose to use baby wipes and to health professionals who support their use. Current Controlled Trials ISRCTN86207019.
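
    The equivalence logic of such a trial can be sketched as follows: the 95% confidence interval for the between-group difference is compared with a pre-specified equivalence margin. The data are simulated from roughly the reported means and SDs, and the ±5-unit margin is an assumption, not the trial's actual margin.

```python
import numpy as np

# Simulated per-baby hydration changes (placeholders based on the reported summary stats).
rng = np.random.default_rng(1)
wipes = rng.normal(65.4, 12.4, 127)
water = rng.normal(63.5, 14.2, 127)

diff = wipes.mean() - water.mean()
se = np.sqrt(wipes.var(ddof=1) / wipes.size + water.var(ddof=1) / water.size)
ci = diff + np.array([-1.0, 1.0]) * 1.96 * se   # normal-approximation 95% CI

margin = 5.0   # assumed equivalence margin (hydration units)
equivalent = (ci[0] > -margin) and (ci[1] < margin)
print(f"difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), equivalent: {equivalent}")
```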

  20. 32 CFR 256.10 - Air installations compatible use zone noise descriptors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... available in the Office of the Assistant Secretary of Defense (Installations and Logistics)—IO, Washington... NEF, for matters of policy, noise planning and decisionmaking, areas quieter than Ldn 65 shall be considered approximately equivalent to the previously used CNR Zone 1 and to areas quieter than NEF 30. The...

  1. 32 CFR 256.10 - Air installations compatible use zone noise descriptors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... available in the Office of the Assistant Secretary of Defense (Installations and Logistics)—IO, Washington... NEF, for matters of policy, noise planning and decisionmaking, areas quieter than Ldn 65 shall be considered approximately equivalent to the previously used CNR Zone 1 and to areas quieter than NEF 30. The...

  2. Essential features of long-term changes of areas and diameters of sunspot groups in solar activity cycles 12-24

    NASA Astrophysics Data System (ADS)

    Efimenko, V. M.; Lozitsky, V. G.

    2018-06-01

    We analyze the Greenwich catalog data on the areas of sunspot groups for the last thirteen solar cycles. Various parameters of sunspots are considered, namely: average monthly smoothed areas, the maximum area for each year, and the equivalent diameters of sunspot groups. The first parameter shows the exceptional power of the 19th cycle of solar activity, which appears here more prominently than in the numbers of spots (that is, in Wolf's numbers). It was found that the yearly maximum areas of sunspot groups show a unique phenomenon: a short, high jump in the 18th cycle (in 1946-1947) that has no analogue in other cycles. We also studied the integral distributions of equivalent diameters and found the following: (a) the average value of the index of the power-law approximation is 5.4 for the last 13 cycles, and (b) there is reliable evidence of Hale's double cycle (about 44 years). Since this index reflects the dispersion of sunspot group diameters, the results obtained show that the convective zone of the Sun generates the embryos of active regions in different statistical regimes, which change with a cycle of about 44 years.

  3. Perifoveal function in patients with North Carolina macular dystrophy: the importance of accounting for fixation locus.

    PubMed

    Seiple, William; Szlyk, Janet P; Paliga, Jennifer; Rabb, Maurice F

    2006-04-01

    To quantify the extent of visual function losses in patients with North Carolina Macular Dystrophy (NCMD) and to demonstrate the importance of accounting for eccentric fixation when making comparisons with normal data. Five patients with NCMD who were from a single family were examined. Multifocal electroretinograms (mfERGs) and psychophysical assessments of acuity and luminance visual field sensitivities were measured throughout the central retina. Comparisons of responses from equivalent retinal areas were accomplished by shifting normal templates to be centered at the locus of fixation for each patient. Losses of psychophysically measured visual function in patients with NCMD extend to areas adjacent to the locations of visible lesions. The multifocal ERG amplitude was reduced only within the area of visible lesion. Multifocal ERG implicit times were delayed throughout the entire central retinal area assessed. ERG timing is a sensitive assay of retinal function, and our results indicate that NCMD has a widespread effect at the level of the mid and outer retina. The findings also demonstrated that it is necessary to account for fixation locus and to ensure that equivalent retinal areas are compared when testing patients with macular disease who have eccentric fixation.

  4. Radioactivity concentrations in soils in the Qingdao area, China.

    PubMed

    Qu, Limei; Yao, De; Cong, Pifu; Xia, Ning

    2008-10-01

    The specific activity concentrations of the radionuclides ²³⁸U, ²³²Th, and ⁴⁰K at 2300 sampling points in the Qingdao area were measured by an FD-3022 gamma-ray spectrometer. The radioactivity concentrations of ²³⁸U, ²³²Th, and ⁴⁰K ranged from 3.3 to 185.3, from 6.9 to 157.2, and from 115.8 to 7834.4 Bq kg⁻¹, respectively. The air-absorbed dose at 1 meter above ground, effective annual dose, external hazard index, and radium equivalent activity were also calculated to systematically evaluate the radiological hazards of the natural radioactivity in Qingdao. The air-absorbed dose, effective annual dose, external hazard index, and radium equivalent activity in the study area were 98.6 nGy h⁻¹, 0.12 mSv, 0.56, and 197 Bq kg⁻¹, respectively. Compared with the worldwide values, the air-absorbed dose is slightly high, but the other factors are all lower than the recommended values. The natural external exposure will not pose a significant radiological threat to the population. In conclusion, the Qingdao area is safe with regard to the radiological level and suitable for living.
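
    The indices referred to above are commonly computed with UNSCEAR-style coefficients; the sketch below shows those standard formulas with placeholder activities, treating the ²³⁸U activity as a proxy for ²²⁶Ra (an assumption, not something stated in the record).

```python
# Standard radiological indices (UNSCEAR-style coefficients); activities in Bq/kg.

def radium_equivalent(a_ra, a_th, a_k):
    return a_ra + 1.43 * a_th + 0.077 * a_k            # Bq/kg

def external_hazard_index(a_ra, a_th, a_k):
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0  # dimensionless, should be < 1

def absorbed_dose_rate(a_ra, a_th, a_k):
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k  # nGy/h, outdoor air at 1 m

a_u, a_th, a_k = 30.0, 40.0, 600.0   # illustrative soil activities (Bq/kg), 238U as Ra proxy
print("Ra_eq =", radium_equivalent(a_u, a_th, a_k), "Bq/kg")
print("H_ex  =", round(external_hazard_index(a_u, a_th, a_k), 2))
print("D     =", round(absorbed_dose_rate(a_u, a_th, a_k), 1), "nGy/h")
```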

  5. Explosive materials equivalency, test methods and evaluation

    NASA Technical Reports Server (NTRS)

    Koger, D. M.; Mcintyre, F. L.

    1980-01-01

    Attention is given to concepts of explosive equivalency of energetic materials based on specific airblast parameters. A description is provided of a wide bandwidth high accuracy instrumentation system which has been used extensively in obtaining pressure time profiles of energetic materials. The object of the considered test method is to determine the maximum output from the detonation of explosive materials in terms of airblast overpressure and positive impulse. The measured pressure and impulse values are compared with known characteristics of hemispherical TNT data to determine the equivalency of the test material in relation to TNT. An investigation shows that meaningful comparisons between various explosives and a standard reference material such as TNT should be based upon the same parameters. The tests should be conducted under the same conditions.

  6. State-space reduction and equivalence class sampling for a molecular self-assembly model.

    PubMed

    Packwood, Daniel M; Han, Patrick; Hitosugi, Taro

    2016-07-01

    Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
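
    A toy version of this workflow is sketched below: a large configuration space is collapsed onto equivalence classes and a Metropolis chain is run directly on the reduced space. The model (n-bit occupancy states grouped by particle number) is purely illustrative and is not the authors' self-assembly model.

```python
import math, random

# 2**n raw configurations, but only n+1 equivalence classes (states with the
# same number of occupied sites k).  Each class is weighted by
# (class size) x (Boltzmann factor), so sampling the reduced space H
# reproduces the statistics of the target observable.
n = 60
beta, eps = 1.0, -0.2        # inverse temperature and per-site energy (illustrative)

def log_weight(k):
    # log[ C(n, k) * exp(-beta * eps * k) ]
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1) - beta * eps * k)

random.seed(0)
k, samples = n // 2, []
for step in range(20000):
    k_new = min(n, max(0, k + random.choice((-1, 1))))   # propose a neighbouring class
    if math.log(random.random()) < log_weight(k_new) - log_weight(k):
        k = k_new                                        # Metropolis accept/reject
    samples.append(k)

print("mean occupancy over classes:", sum(samples) / len(samples))
```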

  7. A model predicting the evolution of ice particle size spectra and radiative properties of cirrus clouds. Part 2: Dependence of absorption and extinction on ice crystal morphology

    NASA Technical Reports Server (NTRS)

    Mitchell, David L.; Arnott, W. Patrick

    1994-01-01

    This study builds upon the microphysical modeling described in Part 1 by deriving formulations for the extinction and absorption coefficients in terms of the size distribution parameters predicted from the micro-physical model. The optical depth and single scatter albedo of a cirrus cloud can then be determined, which, along with the asymmetry parameter, are the input parameters needed by cloud radiation models. Through the use of anomalous diffraction theory, analytical expressions were developed describing the absorption and extinction coefficients and the single scatter albedo as functions of size distribution parameters, ice crystal shapes (or habits), wavelength, and refractive index. The extinction coefficient was formulated in terms of the projected area of the size distribution, while the absorption coefficient was formulated in terms of both the projected area and mass of the size distribution. These properties were formulated as explicit functions of ice crystal geometry and were not based on an 'effective radius.' Based on simulations of the second cirrus case study described in Part 1, absorption coefficients predicted in the near infrared for hexagonal columns and rosettes were up to 47% and 71% lower, respectively, than absorption coefficients predicted by using equivalent area spheres. This resulted in single scatter albedos in the near-infrared that were considerably greater than those predicted by the equivalent area sphere method. Reflectances in this region should therefore be underestimated using the equivalent area sphere approach. Cloud optical depth was found to depend on ice crystal habit. When the simulated cirrus cloud contained only bullet rosettes, the optical depth was 142% greater than when the cloud contained only hexagonal columns. This increase produced a doubling in cloud albedo. In the near-infrared (IR), the single scatter albedo also exhibited a significant dependence on ice crystal habit. More research is needed on the geometrical properties of ice crystals before the influence of ice crystal shape on cirrus radiative properties can be adequately understood. This study provides a way of coupling the radiative properties of absorption, extinction, and single scatter albedo to the microphysical properties of cirrus clouds. The dependence of extinction and absorption on ice crystal shape was not just due to geometrical differences between crystal types, but was also due to the effect these differences had on the evolution of ice particle size spectra. The ice particle growth model in Part 1 and the radiative properties treated here are based on analytical formulations, and thus represent a computationally efficient means of modeling the microphysical and radiative properties of cirrus clouds.
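
    The abstract's habit-specific expressions are not reproduced here, but the underlying anomalous-diffraction-theory behaviour is easiest to see in van de Hulst's classic sphere results, sketched below with illustrative parameter values.

```python
import numpy as np

# Classic ADT efficiencies for a sphere; rho is the phase-shift parameter and
# t the absorption optical path along the diameter.  Inputs are illustrative.

def q_ext_sphere(rho):
    """ADT extinction efficiency of a non-absorbing sphere, rho = 2x(n-1)."""
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def q_abs_sphere(t):
    """ADT absorption efficiency, t = 2 * alpha * a with alpha = 4*pi*kappa/lambda."""
    return 1.0 + 2.0 * np.exp(-t) / t + 2.0 * (np.exp(-t) - 1.0) / t**2

# Large-particle limits: Q_ext -> 2 (extinction ~ twice the projected area) and
# Q_abs -> 1 (absorption ~ the projected area), which is why the coefficients
# above are written in terms of projected area (and mass) of the size spectrum.
for rho, t in [(0.5, 0.05), (5.0, 0.5), (50.0, 5.0)]:
    print(f"rho={rho:5.1f}  Q_ext={q_ext_sphere(rho):.3f}   t={t:4.2f}  Q_abs={q_abs_sphere(t):.3f}")
```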

  8. ENGINEERING ECONOMIC ANALYSIS OF A PROGRAM FOR ARTIFICIAL GROUNDWATER RECHARGE.

    USGS Publications Warehouse

    Reichard, Eric G.; Bredehoeft, John D.

    1984-01-01

    This study describes and demonstrates two alternate methods for evaluating the relative costs and benefits of artificial groundwater recharge using percolation ponds. The first analysis considers the benefits to be the reduction of pumping lifts and land subsidence; the second considers benefits as the alternative costs of a comparable surface delivery system. Example computations are carried out for an existing artificial recharge program in Santa Clara Valley in California. A computer groundwater model is used to estimate both the average long term and the drought period effects of artificial recharge in the study area. Results indicate that the costs of artificial recharge are considerably smaller than the alternative costs of an equivalent surface system. Refs.

  9. The design of two sonic boom wind tunnel models from conceptual aircraft which cruise at Mach numbers of 2.0 and 3.0

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Needleman, Kathy E.

    1990-01-01

    A method for designing wind tunnel models of conceptual, low-boom, supersonic cruise aircraft is presented. Also included is a review of the procedures used to design the conceptual low-boom aircraft. In the discussion, problems unique to, and encountered during, the design of both the conceptual aircraft and the wind tunnel models are outlined. The sensitivity of the low-boom characteristics of the aircraft designs to control of the volume and lift equivalent area distributions is emphasized. Solutions to these problems are reported, especially the two which led to the design of the wind tunnel model support stings.

  10. Expert system for surveillance and diagnosis of breach fuel elements

    DOEpatents

    Gross, K.C.

    1988-01-21

    An apparatus and method are disclosed for surveillance and diagnosis of breached fuel elements in a nuclear reactor. A delayed neutron monitoring system provides output signals indicating the delayed neutron activity and age and the equivalent recoil area of a breached fuel element. Sensors are used to provide outputs indicating the status of each component of the delayed neutron monitoring system. Detectors also generate output signals indicating the reactor power level and the primary coolant flow rate of the reactor. The outputs from the detectors and sensors are interfaced with an artificial intelligence-based knowledge system which implements predetermined logic and generates output signals indicating the operability of the reactor. 2 figs.

  11. Expert system for surveillance and diagnosis of breach fuel elements

    DOEpatents

    Gross, Kenny C.

    1989-01-01

    An apparatus and method are disclosed for surveillance and diagnosis of breached fuel elements in a nuclear reactor. A delayed neutron monitoring system provides output signals indicating the delayed neutron activity and age and the equivalent recoil areas of a breached fuel element. Sensors are used to provide outputs indicating the status of each component of the delayed neutron monitoring system. Detectors also generate output signals indicating the reactor power level and the primary coolant flow rate of the reactor. The outputs from the detectors and sensors are interfaced with an artificial intelligence-based knowledge system which implements predetermined logic and generates output signals indicating the operability of the reactor.

  12. Fabrication of resonant patterns using thermal nano-imprint lithography for thin-film photovoltaic applications.

    PubMed

    Khaleque, Tanzina; Svavarsson, Halldor Gudfinnur; Magnusson, Robert

    2013-07-01

    A single-step, low-cost fabrication method to generate resonant nano-grating patterns on poly-methyl-methacrylate (PMMA; plexiglas) substrates using thermal nano-imprint lithography is reported. A guided-mode resonant structure is obtained by subsequent deposition of thin films of transparent conductive oxide and amorphous silicon on the imprinted area. Referenced to equivalent planar structures, around 25% and 45% integrated optical absorbance enhancement is observed over the 450-nm to 900-nm wavelength range in one- and two-dimensional patterned samples, respectively. The fabricated elements provided have 300-nm periods. Thermally imprinted thermoplastic substrates hold potential for low-cost fabrication of nano-patterned thin-film solar cells for efficient light management.

  13. Tracing molecular dephasing in biological tissue

    NASA Astrophysics Data System (ADS)

    Mokim, M.; Carruba, C.; Ganikhanov, F.

    2017-10-01

    We demonstrate the quantitative spectroscopic characterization and imaging of biological tissue using coherent time-domain microscopy with femtosecond resolution. We identify tissue constituents and perform dephasing time (T2) measurements of characteristic Raman-active vibrations. This is demonstrated in subcutaneous mouse fat embedded within collagen-rich areas of the dermis and the muscle connective tissue. The demonstrated equivalent spectral resolution (<0.3 cm⁻¹) is an order of magnitude better than that of commonly used frequency-domain methods for the characterization of biological media. This provides important dimensions and parameters for characterizing biological media and can become an effective tool for detecting minute changes in the bio-molecular composition and environment that are critical for molecular-level diagnosis.

  14. Empirical source noise prediction method with application to subsonic coaxial jet mixing noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Weir, D. S.

    1982-01-01

    A general empirical method, developed for source noise predictions, uses tensor splines to represent the dependence of the acoustic field on frequency and direction and Taylor's series to represent the dependence on source state parameters. The method is applied to prediction of mixing noise from subsonic circular and coaxial jets. A noise data base of 1/3-octave-band sound pressure levels (SPL's) from 540 tests was gathered from three countries: United States, United Kingdom, and France. The SPL's depend on seven variables: frequency, polar direction angle, and five source state parameters: inner and outer nozzle pressure ratios, inner and outer stream total temperatures, and nozzle area ratio. A least-squares seven-dimensional curve fit defines a table of constants which is used for the prediction method. The resulting prediction has a mean error of 0 dB and a standard deviation of 1.2 dB. The prediction method is used to search for a coaxial jet which has the greatest coaxial noise benefit as compared with an equivalent single jet. It is found that benefits of about 6 dB are possible.
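
    The fitting step can be illustrated generically with an ordinary least-squares fit of SPL to a small basis of governing variables; the synthetic data and two-term basis below merely stand in for the paper's tensor-spline/Taylor-series representation and its data from 540 tests.

```python
import numpy as np

# Synthetic SPL "measurements" depending on frequency and direction angle.
rng = np.random.default_rng(0)
freq = rng.uniform(0.1, 10.0, 200)      # nondimensional frequency (illustrative)
angle = rng.uniform(30.0, 150.0, 200)   # polar direction angle, degrees
spl_obs = 110.0 - 8.0 * np.log10(freq) - 0.05 * angle + rng.normal(0, 1.2, 200)

# Design matrix: constant, log-frequency, and angle terms (a crude stand-in
# for the spline/Taylor basis used in the paper).
X = np.column_stack([np.ones_like(freq), np.log10(freq), angle])
coeffs, *_ = np.linalg.lstsq(X, spl_obs, rcond=None)

resid = spl_obs - X @ coeffs
print("fitted coefficients:", np.round(coeffs, 2))
print("mean error %.2f dB, std dev %.2f dB" % (resid.mean(), resid.std(ddof=X.shape[1])))
```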

  15. Spectral K-edge subtraction imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Samadi, N.; Martinson, M.; Bassey, B.; Wei, Z.; Belev, G.; Chapman, D.

    2014-05-01

    We describe a spectral x-ray transmission method to provide images of independent material components of an object using a synchrotron x-ray source. The imaging system and process are similar to K-edge subtraction (KES) imaging, where two imaging energies are prepared above and below the K-absorption edge of a contrast element and a quantifiable image of the contrast element and a water equivalent image are obtained. The spectral method, termed ‘spectral-KES’, employs a continuous spectrum encompassing an absorption edge of an element within the object. The spectrum is prepared by a bent Laue monochromator with good focal and energy dispersive properties. The monochromator focuses the spectral beam at the object location; the beam then diverges onto an area detector such that one dimension in the detector is an energy axis. A least-squares method is used to interpret the transmitted spectral data with fits to measured and/or calculated absorption of the contrast and matrix material-water. The spectral-KES system is very simple to implement and is comprised of a bent Laue monochromator, a stage for sample manipulation for projection and computed tomography imaging, and a pixelated area detector. The imaging system and examples of its applications to biological imaging are presented. The system is particularly well suited for a synchrotron bend magnet beamline with white beam access.
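
    The least-squares interpretation step can be sketched as a two-material fit of the log-transmission spectrum measured across the K-edge; the attenuation curves below are crude placeholders with a step at the iodine K-edge (33.17 keV), not tabulated coefficients.

```python
import numpy as np

energies = np.linspace(31.0, 35.0, 40)                  # keV, spanning the K-edge
mu_water = 0.35 * (33.0 / energies) ** 3                # cm^2/g, toy E^-3 trend
mu_iodine = np.where(energies < 33.17,
                     6.0 * (33.0 / energies) ** 3,
                     30.0 * (33.0 / energies) ** 3)     # jump at the K-edge

# Simulate a transmission measurement through 0.02 g/cm^2 iodine + 5 g/cm^2 water.
true_iodine, true_water = 0.02, 5.0
log_T = -(mu_iodine * true_iodine + mu_water * true_water)
log_T += np.random.default_rng(0).normal(0, 0.01, energies.size)   # detector noise

# Linear least squares: -ln T(E) = mu_I(E) * a_I + mu_w(E) * a_w.
A = np.column_stack([mu_iodine, mu_water])
a_iodine, a_water = np.linalg.lstsq(A, -log_T, rcond=None)[0]
print(f"recovered iodine: {a_iodine:.4f} g/cm^2, water: {a_water:.2f} g/cm^2")
```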

  16. Life cycle assessment and economic analysis of a low concentrating photovoltaic system.

    PubMed

    De Feo, G; Forni, M; Petito, F; Renno, C

    2016-10-01

    Many new photovoltaic (PV) applications, such as concentrating PV (CPV) systems, are appearing on the market. The main characteristic of CPV systems is to concentrate sunlight on a receiver by means of optical devices and thereby decrease the solar cell area required. A low CPV (LCPV) system optimizes the PV effect, giving a large increase in generated electric power together with a decrease in active surface area. In this paper, an economic analysis and a life cycle assessment (LCA) study of a particular LCPV scheme are presented, and its environmental impacts are compared with those of a traditional PV system. The LCA study was performed with the software tool SimaPro 8.0.2, using the Ecoinvent 3.1 database. A functional unit of 1 kWh of electricity produced was chosen. Carbon Footprint, Ecological Footprint and ReCiPe 2008 were the methods used to assess the environmental impacts of the LCPV plant compared with a corresponding traditional system. All the methods demonstrated the environmental convenience of the LCPV system. The innovative system saved 16.9% of CO2 equivalent in comparison with the traditional PV plant. The saving was 17% in terms of Ecological Footprint and 15.8% with the ReCiPe method.

  17. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used as time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
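
    As context for the schemes being analyzed, the following is a minimal first-order IMEX (implicit-explicit Euler) sketch for a split ODE u' = f_I(u) + f_E(t, u): the stiff linear part is treated implicitly and the nonstiff forcing explicitly. It only illustrates the class of integrators the error estimates target; the nodally equivalent finite element construction and the adjoint-based estimate themselves are not reproduced here, and the problem data are assumptions.

```python
# Minimal IMEX (implicit-explicit) Euler sketch for u' = lam*u + f_E(t, u):
# implicit Euler on the stiff linear term, explicit Euler on the forcing.
import numpy as np

lam = -1000.0                        # stiff linear coefficient (implicit part)
f_E = lambda t, u: np.sin(t)         # nonstiff forcing (explicit part)

def imex_euler(u0, t0, t1, n):
    dt = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        # (u_new - u)/dt = lam*u_new + f_E(t, u)  ->  solve the scalar linear update
        u = (u + dt * f_E(t, u)) / (1.0 - dt * lam)
        t += dt
    return u

# crude consistency check on a quantity of interest J(u) = u(T); the paper
# instead estimates this gap with adjoint-weighted residuals and splits it
# into per-component contributions.
coarse = imex_euler(u0=1.0, t0=0.0, t1=1.0, n=100)
fine = imex_euler(u0=1.0, t0=0.0, t1=1.0, n=10000)
print(f"J coarse = {coarse:.6f}, J fine = {fine:.6f}, gap = {coarse - fine:.2e}")
```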

  18. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used as time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  19. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    DOE PAGES

    Bi, Huan-Yu; Wu, Xing-Gang; Ma, Yang; ...

    2015-06-26

    The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e–} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed (‘conformal’) series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. In addition, we show that the special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.
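
    As a schematic, next-to-leading-order illustration of the scale-setting idea (with conventions chosen for this sketch and not taken from the paper): write the running coupling as obeying da/d ln μ² = −β₀a² + …, expand an observable as ρ = r₀a(μ) + [c + β₀d(μ)]a(μ)² + …, re-expand about a new scale Q*, and choose Q* so that the β₀-dependent piece is absorbed into the running, leaving a conformal series.

```latex
% Schematic NLO scale shift in the stated convention (illustrative only):
a(\mu) = a(Q^{*}) - \beta_0 \ln\!\frac{\mu^{2}}{Q^{*2}}\, a(Q^{*})^{2} + \mathcal{O}(a^{3}),
\qquad
\rho = r_0\, a(Q^{*})
     + \Bigl[c + \beta_0\bigl(d(\mu) - r_0 \ln\tfrac{\mu^{2}}{Q^{*2}}\bigr)\Bigr] a(Q^{*})^{2} + \dots
% Requiring the beta_0 bracket to vanish fixes the scale and removes the
% scale ambiguity at this order:
\ln\frac{Q^{*2}}{\mu^{2}} = -\,\frac{d(\mu)}{r_0},
\qquad
\rho = r_0\, a(Q^{*}) + c\, a(Q^{*})^{2} + \dots
```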

  20. In-plane structuring of proton exchange membrane fuel cell cathodes: Effect of ionomer equivalent weight structuring on performance and current density distribution

    NASA Astrophysics Data System (ADS)

    Herden, Susanne; Riewald, Felix; Hirschfeld, Julian A.; Perchthaler, Markus

    2017-07-01

    Within the active area of a fuel cell, inhomogeneous operating conditions occur; state-of-the-art electrodes, however, are homogeneous over the complete active area. This study uses current density distribution measurements to analyze which ionomer equivalent weight (EW) yields the highest local current densities. With this information, a segmented cathode electrode is manufactured by decal transfer. The segmented electrode shows better performance than homogeneous electrodes, especially at high current densities. Furthermore, this segmented catalyst-coated membrane (CCM) performs optimally in both wet and dry conditions, operating conditions that both arise in automotive fuel cell applications. Thus, cathode electrodes with an optimized ionomer EW distribution might have a significant impact on future automotive fuel cell development.
