Sample records for constant comparative methodology

  1. Methodology for extracting local constants from petroleum cracking flows

    DOEpatents

    Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.

    2000-01-01

    A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
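
    A minimal sketch of steps (3)-(5), with a toy first-order series reaction standing in for the patent's coupled CFD/kinetics computation (the reaction network, "experimental" yields, and solver choices here are assumptions for illustration, not the ICRKFLO-class code):

    ```python
    # Hypothetical stand-in for step (4): adjust local kinetic constants
    # until model-predicted product yields match experimental yields.
    # A toy series reaction A -> B -> C replaces the reactor model.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.integrate import solve_ivp

    def yields(k, t_end=10.0):
        """Product yields of B and C at the exit for rate constants k = (k1, k2)."""
        k1, k2 = k
        def rhs(t, y):
            a, b, c = y
            return [-k1 * a, k1 * a - k2 * b, k2 * b]
        sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], rtol=1e-8)
        return sol.y[1:, -1]

    # "Experimental" yields from a matrix of test runs (synthetic here)
    k_true = np.array([0.30, 0.12])
    y_exp = yields(k_true)

    # Adjust constants so calculated and measured yields agree
    fit = least_squares(lambda k: yields(k) - y_exp, x0=[0.1, 0.1],
                        bounds=(0.0, np.inf))
    print("extracted constants:", fit.x)   # ~ [0.30, 0.12]
    ```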

  2. Constant and Progressive Time Delay Procedures for Teaching Children with Autism: A Literature Review

    ERIC Educational Resources Information Center

    Walker, Gabriela

    2008-01-01

    A review of 22 empirical studies examining the use of constant (CTD) and progressive (PTD) time delay procedures employed with children with autism frames an indirect analysis of the demographic, procedural, methodological, and outcome parameters of existing research. None of the previous manuscripts compared the two response prompting procedures.…

  3. Hit-Validation Methodologies for Ligands Isolated from DNA-Encoded Chemical Libraries.

    PubMed

    Zimmermann, Gunther; Li, Yizhou; Rieder, Ulrike; Mattarella, Martin; Neri, Dario; Scheuermann, Jörg

    2017-05-04

    DNA-encoded chemical libraries (DECLs) are large collections of compounds linked to DNA fragments, serving as amplifiable barcodes, which can be screened on target proteins of interest. In typical DECL selections, preferential binders are identified by high-throughput DNA sequencing, by comparing their frequency before and after the affinity capture step. Hits identified in this procedure need to be confirmed, by resynthesis and by performing affinity measurements. In this article we present new methods based on hybridization of oligonucleotide conjugates with fluorescently labeled complementary oligonucleotides; these facilitate the determination of affinity constants and kinetic dissociation constants. The experimental procedures were demonstrated with acetazolamide, a binder to carbonic anhydrase IX with a dissociation constant in the nanomolar range. The detection of binding events was compatible not only with fluorescence polarization methodologies, but also with Alphascreen technology and with microscale thermophoresis. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
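
    For illustration, a hedged sketch of the affinity-measurement step: fitting a dissociation constant to a one-site saturation binding curve of the kind a fluorescence polarization titration produces (titration points, noise level, and the K_d value are invented, not the paper's acetazolamide/carbonic anhydrase IX data):

    ```python
    # Hedged sketch: fit a dissociation constant K_d to a one-site binding
    # curve, fraction bound = [P] / ([P] + K_d). All numbers are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def frac_bound(protein_nM, kd_nM):
        return protein_nM / (protein_nM + kd_nM)

    rng = np.random.default_rng(0)
    protein = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])  # nM
    signal = frac_bound(protein, 25.0) + rng.normal(0.0, 0.02, protein.size)

    (kd_fit,), _ = curve_fit(frac_bound, protein, signal, p0=[10.0])
    print(f"K_d = {kd_fit:.1f} nM")   # nanomolar range, as for acetazolamide
    ```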

  4. Nonlinear maneuver autopilot for the F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Badgett, M. E.; Walker, R. A.

    1989-01-01

    A methodology is described for the development of flight test trajectory control laws based on singular perturbation methodology and nonlinear dynamic modeling. The control design methodology is applied to a detailed nonlinear six-degree-of-freedom simulation of the F-15, and results are presented for a level acceleration, a pushover/pullup maneuver, a zoom-and-pushover maneuver, an excess-thrust windup turn, a constant-thrust windup turn, and a constant dynamic pressure/constant load factor trajectory.

  5. Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.

    PubMed

    Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C

    2010-12-01

    Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which has ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
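
    A hedged sketch of the impedance-matching idea, with a Butterworth-Van Dyke equivalent circuit standing in for the authors' finite element model (all circuit values, the frequency window, and the starting guesses are invented):

    ```python
    # Hedged sketch (not the authors' FEM code): identify model parameters
    # by minimizing the misfit between computed and "measured" electrical
    # impedance curves near a resonance of the piezoceramic disk.
    import numpy as np
    from scipy.optimize import least_squares

    def impedance(f, c0, c1, l1, r1):
        w = 2.0 * np.pi * f
        z_motional = r1 + 1j * w * l1 + 1.0 / (1j * w * c1)
        z_static = 1.0 / (1j * w * c0)
        return 1.0 / (1.0 / z_static + 1.0 / z_motional)

    f = np.linspace(1.8e6, 2.2e6, 400)               # Hz, around resonance
    p_true = (1.0e-9, 40e-12, 160e-6, 15.0)          # C0, C1, L1, R1
    z_meas = np.abs(impedance(f, *p_true))

    def residual(p):
        return np.log(np.abs(impedance(f, *p))) - np.log(z_meas)

    # Starting guesses matter; place the initial resonance near the window.
    fit = least_squares(residual, x0=(1.2e-9, 35e-12, 150e-6, 10.0),
                        bounds=(0.0, np.inf), x_scale='jac')
    print("identified C0, C1, L1, R1:", fit.x)
    ```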

  6. Research on Internet-Supported Learning: A Review

    ERIC Educational Resources Information Center

    Bekele, Teklu Abate; Menchaca, Michael Paul

    2008-01-01

    How did the Internet affect learning in higher education? What methodological and theoretical issues characterized research on Internet-Supported Learning (ISL)? What implications existed for future research? A constant comparative, qualitative analysis of 29 studies indicated grade achievement was the prime measure of effectiveness in ISL…

  7. Application of an Artificial Neural Network to the Prediction of OH Radical Reaction Rate Constants for Evaluating Global Warming Potential.

    PubMed

    Allison, Thomas C

    2016-03-03

    Rate constants for reactions of chemical compounds with hydroxyl radical are a key quantity used in evaluating the global warming potential of a substance. Experimental determination of these rate constants is essential, but it can also be difficult and time-consuming to produce. High-level quantum chemistry predictions of the rate constant can suffer from the same issues. Therefore, it is valuable to devise estimation schemes that can give reasonable results on a variety of chemical compounds. In this article, the construction and training of an artificial neural network (ANN) for the prediction of rate constants at 298 K for reactions of hydroxyl radical with a diverse set of molecules is described. Input to the ANN consists of counts of the chemical bonds and bends present in the target molecule. The ANN is trained using 792 (•)OH reaction rate constants taken from the NIST Chemical Kinetics Database. The mean unsigned percent error (MUPE) for the training set is 12%, and the MUPE of the testing set is 51%. It is shown that the present methodology yields rate constants of reasonable accuracy for a diverse set of inputs. The results are compared to high-quality literature values and to another estimation scheme. This ANN methodology is expected to be of use in a wide range of applications for which (•)OH reaction rate constants are required. The model uses only information that can be gathered from a 2D representation of the molecule, making the present approach particularly appealing, especially for screening applications.
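
    A minimal sketch of the approach, assuming synthetic bond-count features and rate constants in place of the 792 NIST Chemical Kinetics Database records (scikit-learn's MLPRegressor stands in for the paper's ANN):

    ```python
    # Sketch (not the trained NIST model): an MLP maps counts of bond/bend
    # types in a molecule to log10 of the OH reaction rate constant at 298 K.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Hypothetical features: counts of [C-H, C-C, C=C, O-H, C-O] bonds
    X = rng.integers(0, 8, size=(792, 5)).astype(float)
    w = np.array([0.12, 0.05, 0.60, 0.20, 0.10])       # fake structure/activity
    log_k = -12.0 + X @ w + rng.normal(0.0, 0.1, 792)  # log10(k / cm^3 s^-1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    print("test R^2:", round(ann.score(X_te, y_te), 3))
    ```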

  8. Downsizings, Mergers, and Acquisitions: Perspectives of Human Resource Development Practitioners

    ERIC Educational Resources Information Center

    Shook, LaVerne; Roth, Gene

    2011-01-01

    Purpose: This paper seeks to provide perspectives of HR practitioners based on their experiences with mergers, acquisitions, and/or downsizings. Design/methodology/approach: This qualitative study utilized interviews with 13 HR practitioners. Data were analyzed using a constant comparative method. Findings: HR practitioners were not involved in…

  9. Rapid estimation of glucosinolate thermal degradation rate constants in leaves of Chinese kale and broccoli (Brassica oleracea) in two seasons.

    PubMed

    Hennig, Kristin; Verkerk, Ruud; Bonnema, Guusje; Dekker, Matthijs

    2012-08-15

    Kinetic modeling was used as a tool to quantitatively estimate glucosinolate thermal degradation rate constants. Literature shows that thermal degradation rates differ in different vegetables. Well-characterized plant material, leaves of broccoli and Chinese kale plants grown in two seasons, was used in the study. It was shown that a first-order reaction is appropriate to model glucosinolate degradation independent from the season. No difference in degradation rate constants of structurally identical glucosinolates was found between broccoli and Chinese kale leaves when grown in the same season. However, glucosinolate degradation rate constants were highly affected by the season (20-80% increase in spring compared to autumn). These results suggest that differences in glucosinolate degradation rate constants can be due to variation in environmental as well as genetic factors. Furthermore, a methodology to estimate rate constants rapidly is provided to enable the analysis of high sample numbers for future studies.
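
    A quick sketch of estimating a first-order degradation rate constant from heating-time data, the model form the paper found appropriate (the concentrations below are invented, not the paper's HPLC measurements):

    ```python
    # First-order thermal degradation: C(t) = C0 * exp(-k t).
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, c0, k):
        return c0 * np.exp(-k * t)

    t_min = np.array([0, 10, 20, 30, 45, 60.0])       # heating time (min)
    conc = np.array([10.0, 7.5, 5.4, 4.1, 2.6, 1.7])  # µmol/g dry weight

    (c0, k), cov = curve_fit(first_order, t_min, conc, p0=[10.0, 0.02])
    print(f"k = {k:.3f} 1/min ± {np.sqrt(cov[1, 1]):.3f}")
    ```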

  10. Calculation of the exchange coupling constants of copper binuclear systems based on spin-flip constricted variational density functional theory.

    PubMed

    Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2011-11-14

    We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics.

  11. Computation of reliable textural indices from multimodal brain MRI: suggestions based on a study of patients with diffuse intrinsic pontine glioma

    NASA Astrophysics Data System (ADS)

    Goya-Outi, Jessica; Orlhac, Fanny; Calmon, Raphael; Alentorn, Agusti; Nioche, Christophe; Philippe, Cathy; Puget, Stéphanie; Boddaert, Nathalie; Buvat, Irène; Grill, Jacques; Frouin, Vincent; Frouin, Frederique

    2018-05-01

    Few methodological studies regarding the robustness of widely used textural indices in MRI have been reported. In this context, this study aims to propose some rules to compute reliable textural indices from multimodal 3D brain MRI. Diagnosis and post-biopsy MR scans including T1, post-contrast T1, T2 and FLAIR images from thirty children with diffuse intrinsic pontine glioma (DIPG) were considered. The hybrid white stripe method was adapted to standardize MR intensities. Sixty textural indices were then computed for each modality in different regions of interest (ROI), including tumor and white matter (WM). Three types of intensity binning were compared: (1) constant bin width and relative bounds; (2) constant number of bins and relative bounds; (3) constant number of bins and absolute bounds. The impact of the volume of the region was also tested within the WM. First, the mean Hellinger distance between patient-based intensity distributions decreased by a factor greater than 10 in WM and greater than 2.5 in gray matter after standardization. Regarding the binning strategy, the ranking of patients was highly correlated for 188/240 features for one pair of binning strategies, but for only 20 features for a second pair and nine for the third. Furthermore, with two of the binning strategies, texture indices reflected tumor heterogeneity as assessed visually by experts. Last, 41 features presented statistically significant differences between contralateral WM regions when ROI size varied slightly across patients, and none when using ROIs of the same size. For regions with similar size, 224 features were significantly different between WM and tumor. Valuable information from texture indices can be biased by methodological choices. Recommendations are to standardize intensities in MR brain volumes, to use intensity binning with constant bin width, and to define regions with the same volumes to get reliable textural indices.

  12. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can more likely be characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
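
    A conceptual sketch of recovering a TC distribution from a step response, using non-negative least squares on a fixed grid of candidate time constants (a generic approach under stated assumptions, not the authors' algorithm; the two-compartment "lung" is synthetic):

    ```python
    # Model: FGC(t) = sum_i a_i * (1 - exp(-t / tau_i)), a_i >= 0, on a
    # fixed grid of candidate time constants; solve for amplitudes a_i.
    import numpy as np
    from scipy.optimize import nnls

    t = np.linspace(0, 10, 200)                       # s, after pressure step
    taus = np.logspace(-1, 1, 40)                     # candidate TCs, 0.1-10 s
    A = 1.0 - np.exp(-t[:, None] / taus[None, :])     # design matrix

    # Synthetic lung: two discrete compartments, tau = 0.5 s and 3 s
    signal = 0.6 * (1 - np.exp(-t / 0.5)) + 0.4 * (1 - np.exp(-t / 3.0))
    signal += np.random.default_rng(2).normal(0, 0.002, t.size)

    amplitudes, _ = nnls(A, signal)
    for tau, a in zip(taus, amplitudes):
        if a > 0.02:
            print(f"tau = {tau:.2f} s, weight {a:.2f}")
    ```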

  13. Computational study of configurational and vibrational contributions to the thermodynamics of substitutional alloys: The case of Ni3Al

    NASA Astrophysics Data System (ADS)

    Michelon, M. F.; Antonelli, A.

    2010-03-01

    We have developed a methodology to study the thermodynamics of order-disorder transformations in n-component substitutional alloys that combines nonequilibrium methods, which can efficiently compute free energies, with Monte Carlo simulations, in which configurational and vibrational degrees of freedom are simultaneously considered on an equal footing. Furthermore, with this methodology one can easily perform simulations in the canonical and in the isobaric-isothermal ensembles, which allows the investigation of the bulk volume effect. We have applied this methodology to calculate configurational and vibrational contributions to the entropy of the Ni3Al alloy as functions of temperature. The simulations show that when the volume of the system is kept constant, the vibrational entropy does not change upon transition, while constant-pressure calculations indicate that the volume increase at the order-disorder transition causes a vibrational entropy increase of 0.08 k_B/atom. This is significant when compared to the configurational entropy increase of 0.27 k_B/atom. Our calculations also indicate that the inclusion of vibrations reduces the order-disorder transition temperature by about 30% relative to that determined solely from the configurational degrees of freedom.

  14. Computation of reliable textural indices from multimodal brain MRI: suggestions based on a study of patients with diffuse intrinsic pontine glioma.

    PubMed

    Goya-Outi, Jessica; Orlhac, Fanny; Calmon, Raphael; Alentorn, Agusti; Nioche, Christophe; Philippe, Cathy; Puget, Stéphanie; Boddaert, Nathalie; Buvat, Irène; Grill, Jacques; Frouin, Vincent; Frouin, Frederique

    2018-05-10

    Few methodological studies regarding the robustness of widely used textural indices in MRI have been reported. In this context, this study aims to propose some rules to compute reliable textural indices from multimodal 3D brain MRI. Diagnosis and post-biopsy MR scans including T1, post-contrast T1, T2 and FLAIR images from thirty children with diffuse intrinsic pontine glioma (DIPG) were considered. The hybrid white stripe method was adapted to standardize MR intensities. Sixty textural indices were then computed for each modality in different regions of interest (ROI), including tumor and white matter (WM). Three types of intensity binning were compared: (1) constant bin width and relative bounds; (2) constant number of bins and relative bounds; (3) constant number of bins and absolute bounds. The impact of the volume of the region was also tested within the WM. First, the mean Hellinger distance between patient-based intensity distributions decreased by a factor greater than 10 in WM and greater than 2.5 in gray matter after standardization. Regarding the binning strategy, the ranking of patients was highly correlated for 188/240 features for one pair of binning strategies, but for only 20 features for a second pair and nine for the third. Furthermore, with two of the binning strategies, texture indices reflected tumor heterogeneity as assessed visually by experts. Last, 41 features presented statistically significant differences between contralateral WM regions when ROI size varied slightly across patients, and none when using ROIs of the same size. For regions with similar size, 224 features were significantly different between WM and tumor. Valuable information from texture indices can be biased by methodological choices. Recommendations are to standardize intensities in MR brain volumes, to use intensity binning with constant bin width, and to define regions with the same volumes to get reliable textural indices.
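
    A small sketch of the recommended discretization, constant bin width, written as an assumption-labeled stand-in rather than the authors' implementation (the bin width, lower bound, and synthetic ROI values are invented):

    ```python
    # Bin standardized MR intensities with a constant bin width before
    # computing texture matrices, per the abstract's recommendation.
    import numpy as np

    def discretize_constant_bin_width(img, bin_width=25.0, low=0.0):
        """Map intensities to integer gray levels with fixed-width bins."""
        return np.floor((img - low) / bin_width).astype(int) + 1

    roi = np.random.default_rng(1).normal(600, 80, size=(20, 20, 20))
    levels = discretize_constant_bin_width(roi)
    print(levels.min(), levels.max())   # gray-level range tracks the dynamics
    ```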

  15. Comparative Effectiveness Research in Oncology

    PubMed Central

    2013-01-01

    Although randomized controlled trials represent the gold standard for comparative effectiveness research (CER), a number of additional methods are available when randomized controlled trials are lacking or inconclusive because of the limitations of such trials. In addition to more relevant, efficient, and generalizable trials, there is a need for additional approaches utilizing rigorous methodology while fully recognizing their inherent limitations. CER is an important construct for defining and summarizing evidence on effectiveness and safety and comparing the value of competing strategies so that patients, providers, and policymakers can be offered appropriate recommendations for optimal patient care. Nevertheless, methodological as well as political and social challenges for CER remain. CER requires constant and sophisticated methodological oversight of study design and analysis similar to that required for randomized trials to reduce the potential for bias. At the same time, if appropriately conducted, CER offers an opportunity to identify the most effective and safe approach to patient care. Despite rising and unsustainable increases in health care costs, an even greater challenge to the implementation of CER arises from the social and political environment questioning the very motives and goals of CER. Oncologists and oncology professional societies are uniquely positioned to provide informed clinical and methodological expertise to steer the appropriate application of CER toward critical discussions related to health care costs, cost-effectiveness, and the comparative value of the available options for appropriate care of patients with cancer. PMID:23697601

  16. Methodologies for extracting kinetic constants for multiphase reacting flow simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S.L.; Lottes, S.A.; Golchert, B.

    1997-03-01

    Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. The ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.

  17. Chairside CAD/CAM materials. Part 1: Measurement of elastic constants and microstructural characterization.

    PubMed

    Belli, Renan; Wendler, Michael; de Ligny, Dominique; Cicconi, Maria Rita; Petschelt, Anselm; Peterlik, Herwig; Lohbauer, Ulrich

    2017-01-01

    A deeper understanding of the mechanical behavior of dental restorative materials requires an insight into the materials' elastic constants and microstructure. Here we aim to use complementary methodologies to thoroughly characterize chairside CAD/CAM materials and discuss the benefits and limitations of different analytical strategies. Eight commercial CAD/CAM materials, ranging from polycrystalline zirconia (e.max ZirCAD, Ivoclar-Vivadent), reinforced glasses (Vitablocs Mark II, VITA; Empress CAD, Ivoclar-Vivadent) and glass-ceramics (e.max CAD, Ivoclar-Vivadent; Suprinity, VITA; Celtra Duo, Dentsply) to hybrid materials (Enamic, VITA; Lava Ultimate, 3M ESPE), were selected. Elastic constants were evaluated using three methods: Resonant Ultrasound Spectroscopy (RUS), the Resonant Beam Technique (RBT) and the Ultrasonic Pulse-Echo (PE) method. The microstructures were characterized using Scanning Electron Microscopy (SEM), Energy Dispersive X-ray Spectroscopy (EDX), Raman Spectroscopy and X-ray Diffraction (XRD). Young's modulus (E), shear modulus (G), bulk modulus (B) and Poisson's ratio (ν) were obtained for each material. E and ν ranged from 10.9 GPa (Lava Ultimate) to 201.4 GPa (e.max ZirCAD) and from 0.173 (Empress CAD) to 0.47 (Lava Ultimate), respectively. RUS proved to be the most complex and reliable method, while the PE method was the easiest to perform but the least reliable. All dynamic methods showed limitations in measuring the elastic constants of materials with high damping behavior (hybrid materials). SEM images, Raman spectra and XRD patterns were made available for each material, proving to be complementary tools in the characterization of their crystal phases. Here different methodologies are compared for the measurement of elastic constants and microstructural characterization of CAD/CAM restorative materials. The elastic properties and crystal phases of eight materials are herein fully characterized. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  18. A Methodology for Surface Soil Moisture and Vegetation Optical Depth Retrieval Using the Microwave Polarization Difference Index

    NASA Technical Reports Server (NTRS)

    Owe, Manfred; deJeu, Richard; Walker, Jeffrey; Zukor, Dorothy J. (Technical Monitor)

    2001-01-01

    A methodology for retrieving surface soil moisture and vegetation optical depth from satellite microwave radiometer data is presented. The procedure is tested with historical 6.6 GHz brightness temperature observations from the Scanning Multichannel Microwave Radiometer over several test sites in Illinois. Results using only nighttime data are presented at this time, due to the greater stability of nighttime surface temperature estimation. The methodology uses a radiative transfer model to solve for surface soil moisture and vegetation optical depth simultaneously using a non-linear iterative optimization procedure. It assumes known constant values for the scattering albedo and roughness. Surface temperature is derived by a procedure using high frequency vertically polarized brightness temperatures. The methodology does not require any field observations of soil moisture or canopy biophysical properties for calibration purposes and is totally independent of wavelength. Results compare well with field observations of soil moisture and satellite-derived vegetation index data from optical sensors.

  19. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    PubMed

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for electric potential, augmented with the constraint of constant power, were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work gives the finite element modeling of constant-power RFA a firm mathematical basis and opens a pathway toward achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
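
    A toy illustration of the constant-power constraint in a lumped (0-D) stand-in, not the paper's finite element formulation: as tissue conductance changes with temperature, the voltage must be re-solved each step so the Joule heating stays at the prescribed power (all parameter values below are invented):

    ```python
    # Enforce V * I = V^2 * G(T) = P at every time step while the lumped
    # tissue temperature evolves with simple losses to the surroundings.
    import numpy as np

    P = 10.0                  # W, prescribed power
    C = 4.0                   # J/K, lumped heat capacity
    h = 0.5                   # W/K, losses (perfusion, conduction)
    T, T_amb, dt = 37.0, 37.0, 0.05

    def conductance(T):       # S; electric conductivity rises ~1.5 %/K
        return 0.02 * (1.0 + 0.015 * (T - 37.0))

    for step in range(400):
        G = conductance(T)
        V = np.sqrt(P / G)                 # power constraint: V^2 G = P
        T += dt * (P - h * (T - T_amb)) / C
    print(f"T -> {T:.1f} °C (approaches T_amb + P/h), V = {V:.1f} V")
    ```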

  20. Foundational Performance Analyses of Pressure Gain Combustion Thermodynamic Benefits for Gas Turbines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Thomas A.

    2012-01-01

    A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
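
    A compact sketch of the mass-averaging step (the blowdown shape, states, and flat-efficiency turbine map are invented stand-ins for the paper's Atkinson/Humphrey cycle model and scalable efficiency function):

    ```python
    # Evaluate ideal turbine specific work instant by instant over one PGC
    # cycle, weight by instantaneous mass flux, and average.
    import numpy as np

    gamma, cp, p_exit = 1.33, 1156.0, 1.0e5          # -, J/(kg K), Pa
    t = np.linspace(0.0, 1.0, 2000)                  # one cycle, normalized

    # Idealized constant-volume (Humphrey-like) blowdown: combustor-exit
    # pressure spikes, then decays back toward the fill pressure.
    p3 = 1.0e5 * (1.0 + 7.0 * np.exp(-t / 0.15))           # Pa
    T3 = 1200.0 * (p3 / 1.0e5) ** ((gamma - 1.0) / gamma)  # K, isentropic proxy
    mdot = np.sqrt(p3 / 1.0e5)                             # crude choked-flow proxy

    eta = 0.85                                       # flat stand-in for the map
    w = eta * cp * T3 * (1.0 - (p_exit / p3) ** ((gamma - 1.0) / gamma))

    w_avg = np.trapz(w * mdot, t) / np.trapz(mdot, t)
    print(f"mass-averaged specific work = {w_avg / 1e3:.0f} kJ/kg")
    ```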

  1. Temperature dependence of (+)-catechin pyran ring proton coupling constants as measured by NMR and modeled using GMMX search methodology

    Treesearch

    Fred L. Tobiason; Stephen S. Kelley; M. Mark Midland; Richard W. Hemingway

    1997-01-01

    The pyran ring proton coupling constants for (+)-catechin have been experimentally determined in deuterated methanol over a temperature range of 213 K to 313 K. The experimental coupling constants were simulated to within 0.04 Hz on average at a 90 percent confidence limit using the LAOCOON method. The temperature dependence of the coupling constants was reproduced from the...

  2. Fundamental Physics from Observations of White Dwarf Stars

    NASA Astrophysics Data System (ADS)

    Bainbridge, M. B.; Barstow, M. A.; Reindl, N.; Barrow, J. D.; Webb, J. K.; Hu, J.; Preval, S. P.; Holberg, J. B.; Nave, G.; Tchang-Brillet, L.; Ayres, T. R.

    2017-03-01

    Variations in fundamental constants provide an important test of theories of grand unification. Potentially, white dwarf spectra allow us to directly observe variation in fundamental constants at locations of high gravitational potential. We study hot, metal-polluted white dwarf stars, combining far-UV spectroscopic observations, atomic physics, atmospheric modelling and fundamental physics, in the search for variation in the fine structure constant. This registers as small but measurable shifts in the observed wavelengths of highly ionized Fe and Ni lines when compared to laboratory wavelengths. Measurements of these shifts were performed by Berengut et al. (2013) using high-resolution STIS spectra of G191-B2B, demonstrating the validity of the method. We have extended this work by: (a) using new (high precision) laboratory wavelengths, (b) refining the analysis methodology (incorporating robust techniques from previous studies of quasars), and (c) enlarging the sample of white dwarf spectra. A successful detection would be the first direct measurement of a gravitational field effect on a bare constant of nature. We describe our approach and present preliminary results.
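
    A back-of-envelope sketch of the underlying relation (the standard many-multiplet form, with a transition's wavenumber responding to a drifting fine structure constant as omega = omega0 + q*x, x = (alpha/alpha0)^2 - 1; all numbers are illustrative, not measured G191-B2B values):

    ```python
    # Infer delta-alpha/alpha from a measured line shift and a known
    # sensitivity coefficient q (both values invented for illustration).
    omega_lab = 60000.0      # cm^-1, laboratory wavenumber
    omega_obs = 60000.12     # cm^-1, observed in the white dwarf spectrum
    q = 3000.0               # cm^-1, sensitivity coefficient

    x = (omega_obs - omega_lab) / q          # = (alpha/alpha0)^2 - 1
    dalpha_over_alpha = x / 2.0              # to first order
    print(f"delta alpha / alpha = {dalpha_over_alpha:.2e}")
    ```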

  3. Filtrates & Residues: An Experiment on the Molar Solubility and Solubility Product of Barium Nitrate.

    ERIC Educational Resources Information Center

    Wruck, Betty; Reinstein, Jesse

    1989-01-01

    Provides a two-hour experiment using direct gravimetric methods to determine solubility constants. Provides methodology and sample results. Discusses the effect of the common ion on the solubility constant. (MVL)
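
    Worked arithmetic in the spirit of the experiment (the dissolved mass and volume are assumed, not the article's sample results):

    ```python
    # Molar solubility s of Ba(NO3)2 from the mass dissolved in a known
    # volume, then Ksp = [Ba2+][NO3-]^2 = s * (2s)^2 = 4 s^3.
    mass_g = 2.35            # g of Ba(NO3)2 dissolved in...
    volume_L = 0.100         # ...100 mL of water (assumed values)
    molar_mass = 261.34      # g/mol for Ba(NO3)2

    s = mass_g / molar_mass / volume_L      # mol/L, molar solubility
    ksp = 4 * s ** 3                        # dissociation: Ba2+ + 2 NO3-
    print(f"s = {s:.3f} M, Ksp = {ksp:.2e}")
    ```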

  4. Review of calcium methodologies.

    PubMed

    Zak, B; Epstein, E; Baginski, E S

    1975-01-01

    A review of calcium methodologies for serum is presented. The analytical systems developed over the past century are classified by type, beginning with gravimetry and extending to isotope dilution-mass spectrometry, covering all of the commonly used techniques that have evolved during that period. Screening and referee procedures are discussed along with the comparative sensitivities of atomic absorption spectrophotometry and molecular absorption spectrophotometry. A procedure involving a simple direct reaction for serum calcium using cresolphthalein complexone is recommended, in which high blanks are minimized by repressing the ionization of the color reagent through lowering the dielectric constant of the mixture with dimethylsulfoxide. Reaction characteristics, errors that can be encountered, normal ranges and an interpretative resume are included in its discussion.

  5. Use of Ground Penetrating Radar at the FAA's National Airport Pavement Test Facility

    NASA Astrophysics Data System (ADS)

    Injun, Song

    2015-04-01

    The Federal Aviation Administration (FAA) in the United States has used a ground-coupled Ground Penetrating Radar (GPR) at the National Airport Pavement Test Facility (NAPTF) since 2005. One of the primary objectives of the testing at the facility is to provide full-scale pavement response and failure information for use in airplane landing gear design and configuration studies. During the traffic testing at the facility, a GSSI GPR system was used to develop new procedures for monitoring Hot Mix Asphalt (HMA) pavement density changes that are directly related to pavement failure. After reviewing current setups for data acquisition software and procedures for identifying different pavement layers, dielectric constant and pavement thickness were selected as the dominant parameters controlling the HMA properties provided by GPR. A new methodology showing HMA density changes in terms of dielectric constant variations, called the dielectric sweep test, was developed and applied in full-scale pavement tests. The dielectric constant changes were successfully monitored with increasing airplane traffic numbers. The changes were compared to pavement performance data (permanent deformation). The measured dielectric constants based on the known HMA thicknesses were also compared with dielectric constants computed using an equation from ASTM D4748-98, Standard Test Method for Determining the Thickness of Bound Pavement Layers Using Short-Pulse Radar. Six-inch-diameter cylindrical cores were taken after construction and traffic testing to determine the HMA layer bulk specific gravity. The measured bulk specific gravity was also compared to monitor HMA density changes caused by aircraft traffic conditions. Additionally, this presentation reviews applications of the FAA's ground-coupled GPR to embedded rebar identification in concrete pavement, sewer pipes in soil, and gage identification in 3D plots.
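
    For illustration, the short-pulse radar relation behind the ASTM D4748-style calculation (the thickness, travel time, and resulting dielectric constant below are assumed values, not NAPTF data):

    ```python
    # Two-way travel time dt through an HMA layer of known thickness h
    # gives the layer dielectric constant, from h = c*dt / (2*sqrt(eps)).
    c = 0.3                   # m/ns, speed of light in free space
    h = 0.20                  # m, core-verified HMA thickness
    dt = 2.9                  # ns, two-way travel time picked from the scan

    eps = (c * dt / (2 * h)) ** 2
    print(f"dielectric constant = {eps:.1f}")   # densification raises eps
    ```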

  6. Plasma Parameters From Reentry Signal Attenuation

    DOE PAGES

    Statom, T. K.

    2018-02-27

    This study presents the application of a theoretically developed method that provides plasma parameter solution space information from measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers around the attenuation and the complex index of refraction. The methodology uses an imaginary index of refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. Then a filter is used to reject nonphysical solutions. Signal attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thickness for a specific signal attenuation can have different plasma properties.
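
    For orientation, a sketch of the textbook cold-plasma forward relation connecting these quantities (all parameter values are illustrative, and this is not the paper's matching algorithm):

    ```python
    # Complex refractive index n of a collisional plasma and the resulting
    # attenuation constant at angular frequency w: alpha = -(w/c) * Im(n).
    import numpy as np

    c = 3.0e8                                   # m/s
    f = 2.2e9                                   # Hz, telemetry-band carrier
    w = 2 * np.pi * f
    n_e = 5.0e17                                # m^-3, electron density
    nu = 5.0e9                                  # 1/s, collision frequency

    w_p2 = n_e * (1.602e-19) ** 2 / (8.854e-12 * 9.109e-31)  # plasma freq^2
    n = np.sqrt(1.0 - w_p2 / (w * (w - 1j * nu)))            # complex index
    alpha = -(w / c) * n.imag                   # attenuation constant (Np/m)
    dB_per_cm = 20 * np.log10(np.e) * alpha / 100.0
    print(f"attenuation = {dB_per_cm:.1f} dB/cm")
    ```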

  7. Plasma Parameters From Reentry Signal Attenuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Statom, T. K.

    This study presents the application of a theoretically developed method that provides plasma parameter solution space information from measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers around the attenuation and the complex index of refraction. The methodology uses an imaginary index of refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. Then a filter is used to reject nonphysical solutions. Signal attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thickness for a specific signal attenuation can have different plasma properties.

  8. A three-compartment model for micropollutants sorption in sludge: methodological approach and insights.

    PubMed

    Barret, Maialen; Patureau, Dominique; Latrille, Eric; Carrère, Hélène

    2010-01-01

    In sludge resulting from wastewater treatment, organic micropollutants sorb to particles and to dissolved/colloidal matter (DCM). Both interactions may influence their physical and biological fate throughout the wastewater treatment processes. To our knowledge, sludge has never been considered as a three-compartment matrix, in which micropollutants coexist in three states: freely dissolved, sorbed-to-particles and sorbed-to-DCM. A methodology is proposed to concomitantly determine equilibrium constants of sorption to particles (K(part)) and to DCM (K(DCM)). Polycyclic Aromatic Hydrocarbons (PAHs) were chosen as model compounds for the experiments. The logarithm of estimated equilibrium constants ranged from 3.1 to 4.3 and their usual correlation to PAH hydrophobicity was verified. Moreover, PAH affinities for particles and for DCM could be compared. Affinity for particles was found to be stronger, probably due to their physical and chemical characteristics. This work provided a useful tool to assess the freely dissolved, sorbed-to-particles and sorbed-to-DCM concentrations of contaminants, which are necessary to accurately predict their fate. Besides, guidelines to investigate the link between sorption and the fundamental concept of bioavailability were proposed. (c) 2009 Elsevier Ltd. All rights reserved.
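
    A small sketch of the three-compartment bookkeeping (the equilibrium constants and matter concentrations are invented; K(part) and K(DCM) follow the paper's notation):

    ```python
    # Split a micropollutant's total concentration into freely dissolved,
    # sorbed-to-particles and sorbed-to-DCM fractions, assuming linear
    # partitioning: C_total = C_free * (1 + K_part*TSS + K_dcm*DCM).
    K_part = 10 ** 4.0        # L/kg, sorption to particles
    K_dcm = 10 ** 3.4         # L/kg, sorption to DCM
    TSS = 0.020               # kg/L, particle concentration in sludge
    DCM = 0.002               # kg/L, dissolved/colloidal matter

    denom = 1.0 + K_part * TSS + K_dcm * DCM
    f_free = 1.0 / denom
    f_part = K_part * TSS / denom
    f_dcm = K_dcm * DCM / denom
    print(f"free {f_free:.1%}, on particles {f_part:.1%}, on DCM {f_dcm:.1%}")
    ```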

  9. Performance in physiology evaluation: possible improvement by active learning strategies.

    PubMed

    Montrezor, Luís H

    2016-12-01

    The evaluation process is complex and extremely important in the teaching/learning process. Evaluations are constantly employed in the classroom to assist students in the learning process and to help teachers improve the teaching process. The use of active methodologies encourages students to participate in the learning process, encourages interaction with their peers, and stimulates thinking about physiological mechanisms. This study examined the performance of medical students on physiology over four semesters with and without active engagement methodologies. Four activities were used: a puzzle, a board game, a debate, and a video. The results show that engaging in activities with active methodologies before a physiology cognitive monitoring test significantly improved student performance compared with not performing the activities. We integrate the use of these methodologies with classic lectures, and this integration appears to improve the teaching/learning process in the discipline of physiology and improves the integration of physiology with cardiology and neurology. In addition, students enjoy the activities and perform better on their evaluations when they use them. Copyright © 2016 The American Physiological Society.

  10. Diffusion and decay chain of radioisotopes in stagnant water in saturated porous media.

    PubMed

    Guzmán, Juan; Alvarez-Ramirez, Jose; Escarela-Pérez, Rafael; Vargas, Raúl Alejandro

    2014-09-01

    The analysis of the diffusion of radioisotopes in stagnant water in saturated porous media is important to validate the performance of barrier systems used in radioactive repositories. In this work a methodology is developed to determine the radioisotope concentration in a two-reservoir configuration: a saturated porous medium with stagnant water is surrounded by two reservoirs. The concentrations are obtained for all the radioisotopes of the decay chain using the concept of overvalued concentration. A methodology, based on the variable separation method, is proposed for the solution of the transport equation. The novelty of the proposed methodology lies in the factorization of the overvalued concentration into two factors: one that describes the diffusion without decay and another that describes the decay without diffusion. With the proposed methodology it is possible to determine the time required for the injective and diffusive concentrations in the reservoirs to become equal; this time is inversely proportional to the diffusion coefficient. In addition, the proposed methodology allows finding the time required to reach a linear, time-constant spatial distribution of the concentration in the porous medium, which is likewise inversely proportional to the diffusion coefficient. In order to validate the proposed methodology, the distributions of the radioisotope concentrations are compared with other experimental and numerical works. Copyright © 2014 Elsevier Ltd. All rights reserved.
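
    A one-line check of why the factorization works, in a minimal one-dimensional, single-species sketch (the paper treats the full decay chain):

    ```latex
    % If u solves pure diffusion, u_t = D u_{xx}, then C = u e^{-\lambda t}
    % solves diffusion with first-order decay:
    \[
    \frac{\partial C}{\partial t}
      = \frac{\partial u}{\partial t}\, e^{-\lambda t}
        - \lambda u\, e^{-\lambda t}
      = D\,\frac{\partial^{2} u}{\partial x^{2}}\, e^{-\lambda t} - \lambda C
      = D\,\frac{\partial^{2} C}{\partial x^{2}} - \lambda C .
    \]
    ```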

  11. Navigating the Road to Recovery: Assessment of the Coordination, Communication, and Financing of the Disaster Case Management Pilot in Louisiana

    DTIC Science & Technology

    2010-01-01

    analysis methodology called constant comparative analysis (Lincoln and Guba, 1985; Denzin and Lincoln, 2000). First, two RAND researchers independently… Denzin, Norman K., and Yvonna S. Lincoln, "Introduction: The Discipline and Practice of Qualitative Research," in Norman K. Denzin and Yvonna S. Lincoln, eds.… "Management Services," undated web page. As of June 3, 2010: http://www.coastandards.org/standards.php?navView=private&section_id=114

  12. High Throughput pharmacokinetic modeling using computationally predicted parameter values: dissociation constants (TDS)

    EPA Science Inventory

    Estimates of the ionization association and dissociation constant (pKa) are vital to modeling the pharmacokinetic behavior of chemicals in vivo. Methodologies for the prediction of compound sequestration in specific tissues using partition coefficients require a parameter that ch...

  13. Alternative Methods of Base Level Demand Forecasting for Economic Order Quantity Items,

    DTIC Science & Technology

    1975-12-01

    Adaptive Single Exponential Smoothing; Choosing the Smoothing Constant… methodology used in the study, an analysis of results, and a detailed summary. Chapter I, Methodology, contains a description of the data, a… Chapter IV, Detailed Summary, presents a detailed summary of the findings, lists the limitations inherent in the research methodology, and…
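
    For context, the single exponential smoothing recurrence this family of forecasting methods is built on, in generic textbook form (the demand series below is invented, not EOQ data from the study):

    ```python
    # One-step-ahead forecasts via s_t = a*x_t + (1 - a)*s_{t-1}.
    def exponential_smoothing(xs, alpha):
        """Return one-step-ahead forecasts for the series xs."""
        forecasts = [xs[0]]                  # seed with the first observation
        for x in xs[:-1]:
            forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
        return forecasts

    demand = [12, 15, 11, 14, 30, 16, 13, 12]     # monthly demand (made up)
    print(exponential_smoothing(demand, alpha=0.2))
    ```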

  14. Virtual Enterprise: Transforming Entrepreneurship Education

    ERIC Educational Resources Information Center

    Borgese, Anthony

    2011-01-01

    Entrepreneurship education is ripe for utilizing experiential learning methods. Experiential methods are best learned when there is constant immersion into the subject matter. One such transformative learning methodology is Virtual Enterprise (VE). Virtual Enterprise is a multi-faceted, experiential learning methodology disseminated by the City…

  15. Approximate furrow infiltration model for time-variable ponding depth

    USDA-ARS?s Scientific Manuscript database

    A methodology is proposed for estimating furrow infiltration under time-variable ponding depth conditions. The methodology approximates the solution to the two-dimensional Richards equation, and is a modification of a procedure that was originally proposed for computing infiltration under constant ...

  16. Applying a contemporary grounded theory methodology.

    PubMed

    Licqurish, Sharon; Seibold, Carmel

    2011-01-01

    The aim of this paper is to discuss the application of a contemporary grounded theory methodology to a research project exploring the experiences of students studying for a degree in midwifery. Grounded theory is a qualitative research approach developed by Glaser and Strauss in the 1960s, but the methodology for this study was modelled on Clarke's (2005) approach and was underpinned by a symbolic interactionist theoretical perspective, post-structuralist theories of Michel Foucault and a constructionist epistemology. The study participants were 19 midwifery students completing their final placement. Data were collected through individual in-depth interviews and participant observation, and analysed using the grounded theory analysis techniques of coding, constant comparative analysis and theoretical sampling, as well as situational maps. The analysis focused on social action and interaction and the operation of power in the students' environment. The social process in which the students were involved, as well as the actors and discourses that affected the students' competency development, were highlighted. The methodology allowed a thorough exploration of the students' experiences of achieving competency. However, some difficulties were encountered. One of the major issues related to the understanding and application of complex sociological theories that challenged positivist notions of truth and power. Furthermore, the mapping processes were complex. Despite these minor challenges, the authors recommend applying this methodology to other similar research projects.

  17. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
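
    A minimal sketch of the zone-wise fitting step, assuming synthetic relative errors in place of the OTU2/BCN datasets:

    ```python
    # Within a zone of constant-SD relative error, fit a skew-normal PDF
    # by maximum likelihood and check the fit with a KS test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    rel_error = stats.skewnorm.rvs(a=3.0, loc=-0.05, scale=0.08,
                                   size=2000, random_state=rng)

    shape, loc, scale = stats.skewnorm.fit(rel_error)   # MLE fit
    ks = stats.kstest(rel_error, 'skewnorm', args=(shape, loc, scale))
    print(f"shape={shape:.2f}, loc={loc:.3f}, scale={scale:.3f}, "
          f"KS p-value={ks.pvalue:.2f}")
    ```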

  18. Four-Component Relativistic Density-Functional Theory Calculations of Nuclear Spin-Rotation Constants: Relativistic Effects in p-Block Hydrides.

    PubMed

    Komorovsky, Stanislav; Repisky, Michal; Malkin, Elena; Demissie, Taye B; Ruud, Kenneth

    2015-08-11

    We present an implementation of the nuclear spin-rotation (SR) constants based on the relativistic four-component Dirac-Coulomb Hamiltonian. This formalism has been implemented in the framework of the Hartree-Fock and Kohn-Sham theory, allowing assessment of both pure and hybrid exchange-correlation functionals. In the density-functional theory (DFT) implementation of the response equations, a noncollinear generalized gradient approximation (GGA) has been used. The present approach enforces a restricted kinetic balance condition for the small-component basis at the integral level, leading to very efficient calculations of the property. We apply the methodology to study relativistic effects on the spin-rotation constants by performing calculations on XHn (n = 1-4) for all elements X in the p-block of the periodic table and comparing the effects of relativity on the nuclear SR tensors to that observed for the nuclear magnetic shielding tensors. Correlation effects as described by the density-functional theory are shown to be significant for the spin-rotation constants, whereas the differences between the use of GGA and hybrid density functionals are much smaller. Our calculated relativistic spin-rotation constants at the DFT level of theory are only in fair agreement with available experimental data. It is shown that the scaling of the relativistic effects for the spin-rotation constants (varying between Z(3.8) and Z(4.5)) is as strong as for the chemical shieldings but with a much smaller prefactor.

  19. Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory.

    PubMed

    Zuniga-Gutierrez, Bernardo; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso; Simon-Bastida, Patricia; Calaminici, Patrizia; Köster, Andreas M

    2015-09-14

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to that obtained by the coupled-cluster singles-and-doubles with perturbative triples (CCSD(T))-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependence of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  20. A Methodology to Determine Self-Similarity, Illustrated by Example: Transient Heat Transfer with Constant Flux

    ERIC Educational Resources Information Center

    Monroe, Charles; Newman, John

    2005-01-01

    This simple example demonstrates the physical significance of similarity solutions and the utility of dimensional and asymptotic analysis of partial differential equations. A procedure to determine the existence of similarity solutions is proposed and subsequently applied to transient constant-flux heat transfer. Short-time expressions follow from…
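
    For reference, the classical constant-flux result that the similarity analysis leads to, in textbook form (e.g., Carslaw and Jaeger); the notation assumes thermal conductivity k, diffusivity α, surface flux q0, and the similarity variable η = x/(2√(αt)):

    ```latex
    % Semi-infinite solid, constant surface flux q_0 (standard result):
    \[
    T(x,t) - T_{\infty}
      = \frac{q_{0}}{k}\left[\,2\sqrt{\frac{\alpha t}{\pi}}\;
          e^{-x^{2}/(4\alpha t)}
          - x\,\operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)\right],
    \qquad
    T(0,t) - T_{\infty} = \frac{2 q_{0}}{k}\sqrt{\frac{\alpha t}{\pi}} .
    \]
    ```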

  1. Broadband cross-polarization-based heteronuclear dipolar recoupling for structural and dynamic NMR studies of rigid and soft solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se

    2016-01-21

    Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, both in rigid and in highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experiment. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those for other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants than other state-of-the-art methods and thus is advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.

  2. Dynamic calibration of a wheelchair dynamometer.

    PubMed

    DiGiovine, C P; Cooper, R A; Boninger, M L

    2001-01-01

    The inertia and resistance of a wheelchair dynamometer must be determined in order to compare the results of one study to another, independent of the type of device used. The purpose of this study was to describe and implement a dynamic calibration test for characterizing the electro-mechanical properties of a dynamometer. The inertia, the viscous friction, the kinetic friction, the motor back-electromotive force constant, and the motor constant were calculated using three different methods. The methodology based on a dynamic calibration test along with a nonlinear regression analysis produced the best results. The coefficient of determination comparing the dynamometer model output to the measured angular velocity and torque was 0.999 for a ramp input and 0.989 for a sinusoidal input. The inertia and resistance were determined for the rollers and the wheelchair wheels. The calculation of the electro-mechanical parameters allows for the complete description of the propulsive torque produced by an individual, given only the angular velocity and acceleration. The measurement of the electro-mechanical properties of the dynamometer as well as the wheelchair/human system provides the information necessary to simulate real-world conditions.
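
    A simplified sketch of the idea (a coast-down variant under stated assumptions, not the authors' exact ramp/sinusoid protocol): with no propulsive torque, the roller obeys J dω/dt = -(bω + c), so nonlinear regression on the measured deceleration recovers the inertia and friction terms (all values below are invented):

    ```python
    # Coast-down solution: w(t) = (w0 + c/b) * exp(-b*t/J) - c/b.
    import numpy as np
    from scipy.optimize import curve_fit

    def coast_down(t, J, b, c, w0=20.0):
        return (w0 + c / b) * np.exp(-b * t / J) - c / b

    t = np.linspace(0, 30, 150)                     # s
    J_true, b_true, c_true = 2.0, 0.05, 0.4         # kg m^2, N m s, N m
    w = coast_down(t, J_true, b_true, c_true)
    w += np.random.default_rng(4).normal(0, 0.05, t.size)

    (J, b, c), _ = curve_fit(coast_down, t, w, p0=[1.0, 0.1, 0.1])
    print(f"J={J:.2f} kg m^2, b={b:.3f} N m s, c={c:.2f} N m")
    ```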

  3. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas with zinc or uranium metal to determine the D/H ratio is both tedious and time-consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.

  4. Numerical and experimental investigation of a beveled trailing-edge flow field and noise emission

    NASA Astrophysics Data System (ADS)

    van der Velden, W. C. P.; Pröbsting, S.; van Zuijlen, A. H.; de Jong, A. T.; Guan, Y.; Morris, S. C.

    2016-12-01

    Efficient tools and methodology for the prediction of trailing-edge noise experience substantial interest within the wind turbine industry. In recent years, the Lattice Boltzmann Method has received increased attention for providing such an efficient alternative for the numerical solution of complex flow problems. Based on the fully explicit, transient, compressible solution of the Lattice Boltzmann Equation in combination with a Ffowcs-Williams and Hawking aeroacoustic analogy, an estimation of the acoustic radiation in the far field is obtained. To validate this methodology for the prediction of trailing-edge noise, the flow around a flat plate with an asymmetric 25° beveled trailing edge and obtuse corner in a low Mach number flow is analyzed. Flow field dynamics are compared to data obtained experimentally from Particle Image Velocimetry and Hot Wire Anemometry, and compare favorably in terms of mean velocity field and turbulent fluctuations. Moreover, the characteristics of the unsteady surface pressure, which are closely related to the acoustic emission, show good agreement between simulation and experiment. Finally, the prediction of the radiated sound is compared to the results obtained from acoustic phased array measurements in combination with a beamforming methodology. Vortex shedding results in a strong narrowband component centered at a constant Strouhal number in the acoustic spectrum. At higher frequency, a good agreement between simulation and experiment for the broadband noise component is obtained and a typical cardioid-like directivity is recovered.

  5. Using the salutogenic approach to unravel informal caregivers' resources to health: theory and methodology.

    PubMed

    Wennerberg, Mia M T; Lundgren, Solveig M; Danielson, Ella

    2012-01-01

    This article describes the theoretical foundation and methodology used in a study intended to increase knowledge concerning informal caregivers' resources to health (in salutogenesis; General Resistance Resources, GRRs). A detailed description of how the approach derived from salutogenic theory was used and how it permeated the entire study, from design to findings, is provided. How participation in the study was experienced is discussed and methodological improvements and implications suggested. Using an explorative, mixed method design, data was collected through salutogenically guided interviews with 32 Swedish caregivers to older adults. A constant comparative method of analysis was used to identify caregiver-GRRs, content analysis was further used to describe how participation was experienced. The methodology unraveled GRRs caregivers used to obtain positive experiences of caregiving, but also hindrances for such usage contributing to negative experiences. Mixed data made it possible to venture beyond actual findings to derive a synthesis describing the experienced, communal context of the population reliant on these GRRs; Caregivinghood. Participating in the salutogenic data-collection was found to be a reflective, mainly positive, empowering and enlightening experience. The methodology was advantageous, even if time-consuming, as it in one study unravelled caregiver-GRRs and hindrances for their usage on individual, communal and contextual levels. It is suggested that the ability to describe Caregivinghood may be essential when developing health-promoting strategies for caregivers at individual, municipal and national levels. The methodology makes such a description possible and suggested methodological improvements may enhance its usability and adaptability to other populations.

  6. The Modern Measurement Technology And Checking Of Shaft Parameters

    NASA Astrophysics Data System (ADS)

    Tichá, Šárka; Botek, Jan

    2015-12-01

    This paper focuses on rationalizing the checking of shaft parameters in companies engaged in the production of components for electric motors, wind turbines and vacuum systems. Customers are constantly increasing their requirements for overall product quality, i.e. the quality of machining, dimensional and shape accuracy, and the overall cleanliness of the delivered products. The aim of this paper is to introduce the use of modern measurement technology for inspecting these components and to compare the results with the existing control methodology. The main objective of this rationalization is to eliminate mistakes and shortcomings of current inspection methods.

  7. A computational study of photo-induced electron transfer rate constants in subphthalocyanine/C60 organic photovoltaic materials via Fermi's golden rule

    NASA Astrophysics Data System (ADS)

    Lee, Myeong H.; Dunietz, Barry D.; Geva, Eitan

    2014-03-01

    We present a methodology to obtain the photo-induced electron transfer rate constant in organic photovoltaic (OPV) materials within the framework of Fermi's golden rule, using inputs obtained from first-principles electronic structure calculation. Within this approach, the nuclear vibrational modes are treated quantum-mechanically and a short-time approximation is avoided in contrast to the classical Marcus theory where these modes are treated classically within the high-temperature and short-time limits. We demonstrate our methodology on boron-subphthalocyanine-chloride/C60 OPV system to determine the rate constants of electron transfer and electron recombination processes upon photo-excitation. We consider two representative donor/acceptor interface configurations to investigate the effect of interface configuration on the charge transfer characteristics of OPV materials. In addition, we determine the time scale of excited states population by employing a master equation after obtaining the rate constants for all accessible electronic transitions. This work is pursued as part of the Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the US Department of Energy Office of Science, Office of Basic Energy Sciences under 390 Award No. DE-SC0000957.
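
    For orientation, the golden-rule rate and the classical Marcus limit that the abstract contrasts it with take the standard textbook forms below (generic notation, not necessarily the paper's):

        k = \frac{2\pi}{\hbar}\,|V_{DA}|^{2}\,\mathrm{FCWD}(\Delta E),
        \qquad
        k_{\mathrm{Marcus}} = \frac{2\pi}{\hbar}\,
        \frac{|V_{DA}|^{2}}{\sqrt{4\pi\lambda k_{B}T}}
        \exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]

    Here V_DA is the donor-acceptor coupling and FCWD the Franck-Condon weighted density of states; treating the nuclear modes quantum-mechanically inside the FCWD is what avoids the classical high-temperature and short-time limits that yield the Marcus expression.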

  8. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as an accelerated testing methodology in constant stress-rate ('dynamic fatigue') testing for two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing for glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, SiC whisker-reinforced silicon nitride composite and 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism associated with such considerable strength increase or decrease.

  9. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as an accelerated testing methodology in constant stress-rate (dynamic fatigue) testing for two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing for glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, SiC whisker-reinforced silicon nitride composite and 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism associated with such considerable strength increase or decrease.

  10. Is there a necessity for individual blood water corrections when conductivity-based access blood flow measurements are made?

    PubMed

    Huang, Shih-Han S; Heidenheim, Paul A; Gallo, Kerri; Jayakumar, Saumya; Lindsay, Robert M

    2011-01-01

    Access blood water flow rate (Qaw) can be measured during hemodialysis using an online effective ionic dialysance (EID) methodology. Fresenius employ this methodology in their 2008K dialysis machine. The machine computer converts Qaw to an access blood flow rate (Fresenius Qa) using a generic blood water constant (BWC). We wished to validate this BWC. 18 patients had Fresenius Qa measurements using the EID and these were compared with a 'gold standard' ultrasound dilution methodology (Transonic Qa). Qa values were also obtained by removing the BWC from Fresenius Qa values to obtain the Qaw and recorrecting it with individualized patient factors using hematocrit and total protein values (HctTp Qa). The measurements were repeated after 1 h. There were no significant differences between Fresenius and Transonic, nor between HctTp and Transonic Qa values (p > 0.17). There were strong correlations between both sets of values (r > 0.856; p < 0.001). There was a significant correlation between the pairs of Transonic Qa values (r = 0.823; p < 0.007), but not for Fresenius Qa pairs (r = 0.573; p > 0.07). It was surmised that the BWC was not valid post-dialysis. The generic BWC is comparable to individualized blood water correction factors when Qa measures are made early in dialysis and prior to ultrafiltration treatment. Copyright © 2011 S. Karger AG, Basel.

  11. Effect of the tether on the Mg(II), Ca(II), Cu(II) and Fe(III) stability constants and pM values of chelating agents related to EDDHA.

    PubMed

    Sierra, Miguel A; Gómez-Gallego, Mar; Alcázar, Roberto; Lucena, Juan J; Yunta, Felipe; García-Marco, Sonia

    2004-11-07

    The effects of the length and structure of the tether on the chelating ability of EDDHA-like chelates have not been established. In this work, PDDHA (propylenediamine-N,N'-bis(o-hydroxyphenyl)acetic acid), BDDHA (butylenediamine-N,N'-bis(o-hydroxyphenyl)acetic acid) and XDDHA (p-xylylenediamine-N,N'-bis(o-hydroxyphenyl)acetic acid) have been obtained and their chemical behaviour has been studied and compared with that of EDDHA following our methodology. The purity of the chelating agents, and their protonation, Ca(II), Mg(II), Fe(III) and Cu(II) stability constants and pM values have been determined. The stability constants and pM values indicate that EDDHA forms the most stable chelates followed by PDDHA. However, the differences among the pFe values are small when a nutrient solution is used, and in these conditions the XDDHA/Fe(III) chelate is the most stable. The results obtained in this work indicate that all the chelating agents studied can be used as iron chlorosis correctors and they can be applied to soil/plant systems.
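
    For reference, pM values compare chelating agents through the free metal ion concentration remaining at equilibrium under fixed conditions; a commonly used convention (not necessarily this paper's exact one) is

        \mathrm{pM} = -\log_{10}[\mathrm{M}^{n+}]_{\mathrm{free}}
        \quad\text{at, e.g.,}\quad
        C_{\mathrm{ligand}} = 10^{-5}\,\mathrm{M},\;
        C_{\mathrm{metal}} = 10^{-6}\,\mathrm{M}

    so a larger pM means less free metal and hence a more effective chelator under those conditions.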

  12. Re-evaluation of constant versus varied punishers using empirically derived consequences.

    PubMed

    Toole, Lisa M; DeLeon, Iser G; Kahng, SungWoo; Ruffin, Geri E; Pletcher, Carrie A; Bowman, Lynn G

    2004-01-01

    Charlop, Burgio, Iwata, and Ivancic [J. Appl. Behav. Anal. 21 (1988) 89] demonstrated that varied punishment procedures produced greater or more consistent reductions of problem behavior than a constant punishment procedure. More recently, Fisher and colleagues [Res. Dev. Disabil. 15 (1994) 133; J. Appl. Behav. Anal. 27 (1994) 447] developed a systematic methodology for predicting the efficacy of various punishment procedures. Their procedure identified reinforcers and punishers (termed "empirically derived consequences" or EDC) that, when used in combination, reduced the destructive behavior of individuals with developmental disabilities who displayed automatically maintained destructive behavior. The current investigation combines these two lines of research by comparing the effects of constant versus varied punishers on the self-injury of two individuals with developmental disabilities. The punishing stimuli were selected via the procedures described by Fisher et al. and were predicted to be at varying levels of effectiveness. The varied presentation of punishers resulted in enhanced suppressive effects over the constant presentation of a punisher for one of two individuals, but only in comparison to a single stimulus predicted to be minimally effective. Even then, the differences were small. These results suggest that the additive effects of varied punishment are negligible if clinicians use stimuli predicted to be effective and are discussed in terms of the conditions under which stimulus variation could potentially enhance the effects of punishers.

  13. Chelating agents related to ethylenediamine bis(2-hydroxyphenyl)acetic acid (EDDHA): synthesis, characterization, and equilibrium studies of the free ligands and their Mg2+, Ca2+, Cu2+, and Fe3+ chelates.

    PubMed

    Yunta, Felipe; García-Marco, Sonia; Lucena, Juan J; Gómez-Gallego, Mar; Alcázar, Roberto; Sierra, Miguel A

    2003-08-25

    Iron chelates such as ethylenediamine-N,N'-bis(2-hydroxyphenyl)acetic acid (EDDHA) and their analogues are the most efficient soil fertilizers to treat iron chlorosis in plants growing in calcareous soils. EDDHA, EDDH4MA (ethylenediamine-N,N'-bis(2-hydroxy-4-methylphenyl)acetic acid), and EDDCHA (ethylenediamine-N,N'-bis(2-hydroxy-5-carboxyphenyl)acetic acid) are allowed by the European directive, but also EDDHSA (ethylenediamine-N,N'-bis(2-hydroxy-5-sulfonylphenyl)acetic acid) and EDDH5MA (ethylenediamine-N,N'-bis(2-hydroxy-5-methylphenyl)acetic acid) are present in several commercial iron chelates. In this study, these chelating agents as well as p,p-EDDHA (ethylenediamine-N,N'-bis(4-hydroxyphenyl)acetic acid) and EDDMtxA (ethylenediamine-N,N'-bis(2-methoxyphenyl)acetic acid) have been obtained following a new synthetic pathway. Their chemical behavior has been studied to predict the effect of the substituents in the benzene ring on their efficacy as iron fertilizers for soils above pH 7. The purity of the chelating agents has been determined using a novel methodology through spectrophotometric titration at 480 nm with Fe(3+) as titrant to evaluate the inorganic impurities. The protonation constants were determined by both spectrophotometric and potentiometric methods, and Ca(2+) and Mg(2+) stability constants were determined from potentiometric titrations. To establish the Fe(3+) and Cu(2+) stability constants, a new spectrophotometric method has been developed, and the results were compared with those reported in the literature for EDDHA and EDDHMA and their meso- and rac-isomers. pM values have also been determined to provide a comparable basis to establish the relative chelating ability of these ligands. The purity obtained for the ligands is higher than 87% in all cases and is comparable with that obtained by (1)H NMR. No significant differences have been found among ligands when their protonation and stability constants were compared. As expected, no Fe(3+) complexation was observed for p,p-EDDHA and EDDMtxA. The presence of sulfonate groups in EDDHSA produces an increase in acidity that affects their protonation and stability constants, although the pFe values suggest that EDDHSA could also be effective to correct iron chlorosis in plants.

  14. Identification of both copy number variation-type and constant-type core elements in a large segmental duplication region of the mouse genome

    PubMed Central

    2013-01-01

    Background Copy number variation (CNV), an important source of diversity in genomic structure, is frequently found in clusters called CNV regions (CNVRs). CNVRs are strongly associated with segmental duplications (SDs), but the composition of these complex repetitive structures remains unclear. Results We conducted self-comparative-plot analysis of all mouse chromosomes using the high-speed and large-scale-homology search algorithm SHEAP. For eight chromosomes, we identified various types of large SD as tartan-checked patterns within the self-comparative plots. A complex arrangement of diagonal split lines in the self-comparative-plots indicated the presence of large homologous repetitive sequences. We focused on one SD on chromosome 13 (SD13M), and developed SHEPHERD, a stepwise ab initio method, to extract longer repetitive elements and to characterize repetitive structures in this region. Analysis using SHEPHERD showed the existence of 60 core elements, which were expected to be the basic units that form SDs within the repetitive structure of SD13M. The demonstration that sequences homologous to the core elements (>70% homology) covered approximately 90% of the SD13M region indicated that our method can characterize the repetitive structure of SD13M effectively. Core elements were composed largely of fragmented repeats of a previously identified type, such as long interspersed nuclear elements (LINEs), together with partial genic regions. Comparative genome hybridization array analysis showed that whereas 42 core elements were components of CNVR that varied among mouse strains, 8 did not vary among strains (constant type), and the status of the others could not be determined. The CNV-type core elements contained significantly larger proportions of long terminal repeat (LTR) types of retrotransposon than the constant-type core elements, which had no CNV. The higher divergence rates observed in the CNV-type core elements than in the constant type indicate that the CNV-type core elements have a longer evolutionary history than constant-type core elements in SD13M. Conclusions Our methodology for the identification of repetitive core sequences simplifies characterization of the structures of large SDs and detailed analysis of CNV. The results of detailed structural and quantitative analyses in this study might help to elucidate the biological role of one of the SDs on chromosome 13. PMID:23834397

  15. Distance Metric Tracking

    DTIC Science & Technology

    2016-03-02

    Metric learning seeks to learn a metric that encourages data points marked as similar to be close and data points marked as different to be far: similar pairs should lie within some closeness constant and dissimilar pairs be more distant than some larger constant. Online and non-linear extensions to the ITML methodology are ... is obtained, instead of solving an objective function formed from the entire dataset. Many online learning methods have regret guarantees, that is ...
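
    A toy illustration of the pairwise-constraint idea (a sketch in the spirit of ITML, not the ITML algorithm itself; all names and thresholds are made up):

        import numpy as np

        def learn_metric(X, similar, dissimilar, u=1.0, l=4.0, lr=0.01, iters=200):
            """Toy Mahalanobis metric learning from pairwise constraints."""
            d = X.shape[1]
            M = np.eye(d)
            for _ in range(iters):
                G = np.zeros((d, d))
                for i, j in similar:                  # pull similar pairs within u
                    v = (X[i] - X[j])[:, None]
                    if (v.T @ M @ v).item() > u:
                        G += v @ v.T
                for i, j in dissimilar:               # push dissimilar pairs beyond l
                    v = (X[i] - X[j])[:, None]
                    if (v.T @ M @ v).item() < l:
                        G -= v @ v.T
                M -= lr * G
                w, V = np.linalg.eigh(M)              # project back onto the PSD cone
                M = (V * np.clip(w, 1e-8, None)) @ V.T
            return M

        rng = np.random.default_rng(0)
        X = rng.standard_normal((20, 3))
        M = learn_metric(X, similar=[(0, 1), (2, 3)], dissimilar=[(0, 4), (1, 5)])

    An online variant would update M one constraint at a time as data arrive, which is where the regret guarantees mentioned in the snippet come in.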

  16. Accommodation and vergence response gains to different near cues characterize specific esotropias

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    Aim: To describe preliminary findings of how the profile of the use of blur, disparity and proximal cues varies between non-strabismic groups and those with different types of esotropia. Design: Case control study. Methodology: A remote haploscopic photorefractor measured simultaneous convergence and accommodation to a range of targets containing all combinations of binocular disparity, blur and proximal (looming) cues. 13 constant esotropes, 16 fully accommodative esotropes, and 8 convergence excess esotropes were compared with age- and refractive-error-matched controls, and 27 young adult emmetropic controls. All wore full refractive correction if not emmetropic. Response AC/A and CA/C ratios were also assessed. Results: Cue use differed between the groups. Even esotropes with constant suppression and no binocular vision (BV) responded to disparity cues. The constant esotropes with weak BV showed trends for more stable responses and better vergence and accommodation than those without any BV. The accommodative esotropes made less use of disparity cues to drive accommodation (p=0.04) and more use of blur to drive vergence (p=0.008) than controls. All esotropic groups failed to show the strong bias for better responses to disparity cues found in the controls, with convergence excess esotropes favoring blur cues. AC/A and CA/C ratios existed in an inverse relationship in the different groups. Accommodative lag of >1.0D at 33 cm was common (46%) in the pooled esotropia groups compared with 11% in typical children (p=0.05). Conclusion: Esotropic children use near cues differently from matched non-esotropic children in ways characteristic of their deviations. Relatively higher weighting for blur cues was found in accommodative esotropia compared with matched controls. PMID:23978142

  17. Rapid quantitative chemical mapping of surfaces with sub-2 nm resolution

    NASA Astrophysics Data System (ADS)

    Lai, Chia-Yun; Perri, Saverio; Santos, Sergio; Garcia, Ricardo; Chiesa, Matteo

    2016-05-01

    We present a theory that exploits four observables in bimodal atomic force microscopy to produce maps of the Hamaker constant H. The quantitative H maps may be employed by the broader community to directly interpret the high resolution of standard bimodal AFM images as chemical maps while simultaneously quantifying chemistry in the non-contact regime. We further provide a simple methodology to optimize a range of operational parameters for which H is in the closest agreement with the Lifshitz theory in order to (1) simplify data acquisition and (2) generalize the methodology to any set of cantilever-sample systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00496b

  18. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
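
    A minimal sketch of one such lattice-constant calculation script, assuming the ASE package with its built-in EMT potential standing in for the classical interatomic potentials evaluated in the study:

        import numpy as np
        from ase.build import bulk
        from ase.calculators.emt import EMT
        from ase.eos import EquationOfState

        atoms = bulk('Cu', 'fcc', a=3.6)      # primitive fcc cell, one atom
        atoms.calc = EMT()

        volumes, energies = [], []
        cell0 = atoms.get_cell()
        for s in np.linspace(0.95, 1.05, 11):          # scan +/-5% around the guess
            atoms.set_cell(cell0 * s, scale_atoms=True)
            volumes.append(atoms.get_volume())
            energies.append(atoms.get_potential_energy())

        eos = EquationOfState(volumes, energies)
        v0, e0, B = eos.fit()              # equilibrium volume, energy, bulk modulus (eV/A^3)
        a0 = (4.0 * v0) ** (1.0 / 3.0)     # conventional fcc lattice constant, Angstrom

    Sensitivity to the strain range, number of sample points, and equation-of-state form is precisely the kind of method-dependent variability the framework is designed to expose.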

  19. Methodology based on genetic heuristics for in-vivo characterizing the patient-specific biomechanical behavior of the breast tissues

    PubMed Central

    Lago, M. A.; Rúperez, M. J.; Martínez-Martínez, F.; Martínez-Sanchis, S.; Bakic, P. R.; Monserrat, C.

    2015-01-01

    This paper presents a novel methodology to estimate in vivo the elastic constants of a constitutive model proposed to characterize the mechanical behavior of the breast tissues. An iterative search algorithm based on genetic heuristics was constructed to estimate these parameters in vivo using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients was used for the evaluation of the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was of around 95%, showing the good performance of the proposed methodology. This methodology can be easily extended to characterize the real biomechanical behavior of the breast tissues, which represents a significant novelty in the field of the simulation of breast behavior for applications such as surgical planning, surgical guidance or cancer diagnosis. This reveals the impact and relevance of the presented work. PMID:27103760

  20. Methodology based on genetic heuristics for in-vivo characterizing the patient-specific biomechanical behavior of the breast tissues.

    PubMed

    Lago, M A; Rúperez, M J; Martínez-Martínez, F; Martínez-Sanchis, S; Bakic, P R; Monserrat, C

    2015-11-30

    This paper presents a novel methodology to estimate in vivo the elastic constants of a constitutive model proposed to characterize the mechanical behavior of the breast tissues. An iterative search algorithm based on genetic heuristics was constructed to estimate these parameters in vivo using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients was used for the evaluation of the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was of around 95%, showing the good performance of the proposed methodology. This methodology can be easily extended to characterize the real biomechanical behavior of the breast tissues, which represents a significant novelty in the field of the simulation of breast behavior for applications such as surgical planning, surgical guidance or cancer diagnosis. This reveals the impact and relevance of the presented work.
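
    A toy sketch of the genetic-heuristic loop: candidate elastic parameters are evolved so that a simulated deformation best overlaps a reference image (Dice coefficient); the stub below stands in for the finite-element simulation, and all names and bounds are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_deformation(params):
            """Stub standing in for the FE simulation: binary mask of an ellipse
            whose semi-axes play the role of the 'elastic' parameters."""
            x = np.linspace(-1.0, 1.0, 64)
            X, Y = np.meshgrid(x, x)
            return (X / params[0]) ** 2 + (Y / params[1]) ** 2 < 1.0

        def dice(a, b):
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def evolve(reference, low, high, pop=30, gens=40, mut=0.1):
            low, high = np.asarray(low), np.asarray(high)
            P = rng.uniform(low, high, size=(pop, low.size))
            for _ in range(gens):
                scores = np.array([dice(simulate_deformation(p), reference) for p in P])
                elite = P[np.argsort(scores)[-pop // 2:]]        # keep the best half
                parents = elite[rng.integers(0, len(elite), (pop, 2))]
                P = parents.mean(axis=1)                         # blend crossover
                P += mut * (high - low) * rng.standard_normal(P.shape)
                P = np.clip(P, low, high)
            scores = np.array([dice(simulate_deformation(p), reference) for p in P])
            return P[np.argmax(scores)]

        reference = simulate_deformation([0.5, 0.7])             # "observed" deformation
        best = evolve(reference, low=[0.2, 0.2], high=[0.9, 0.9])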

  1. Pulmonary capillary pressure in pulmonary hypertension.

    PubMed

    Souza, Rogerio; Amato, Marcelo Britto Passos; Demarzo, Sergio Eduardo; Deheinzelin, Daniel; Barbas, Carmen Silvia Valente; Schettino, Guilherme Paula Pinto; Carvalho, Carlos Roberto Ribeiro

    2005-04-01

    Pulmonary capillary pressure (PCP), together with the time constants of the various vascular compartments, define the dynamics of the pulmonary vascular system. Our objective in the present study was to estimate PCPs and time constants of the vascular system in patients with idiopathic pulmonary arterial hypertension (IPAH), and compare them with these measures in patients with acute respiratory distress syndrome (ARDS). We conducted the study in two groups of patients with pulmonary hypertension: 12 patients with IPAH and 11 with ARDS. Four methods were used to estimate the PCP based on monoexponential and biexponential fitting of pulmonary artery pressure decay curves. PCPs in the IPAH group were considerably greater than those in the ARDS group. The PCPs measured using the four methods also differed significantly, suggesting that each method measures the pressure at a different site in the pulmonary circulation. The time constant for the slow component of the biexponential fit in the IPAH group was significantly longer than that in the ARDS group. The PCP in IPAH patients is greater than normal but methodological limitations related to the occlusion technique may limit interpretation of these data in isolation. Different disease processes may result in different times for arterial emptying, with resulting implications for the methods available for estimating PCP.
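
    A minimal sketch of the curve-fitting step, assuming a synthetic occlusion-decay trace and a generic biexponential model (parameter names are illustrative, not the paper's notation):

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, A, tau_fast, B, tau_slow, P_inf):
            return A * np.exp(-t / tau_fast) + B * np.exp(-t / tau_slow) + P_inf

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 3.0, 300)                 # s after balloon occlusion
        p = biexp(t, 12.0, 0.15, 8.0, 1.2, 15.0) + 0.2 * rng.standard_normal(t.size)
        popt, _ = curve_fit(biexp, t, p, p0=[10.0, 0.1, 5.0, 1.0, 10.0])
        A, tau_fast, B, tau_slow, P_inf = popt
        pcp_estimate = B + P_inf   # slow component extrapolated to occlusion time

    The fitted slow time constant is the quantity that differed between the IPAH and ARDS groups, and extrapolating the slow component back to the moment of occlusion is one way a PCP estimate can be read off the fit.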

  2. A modified Poisson-Boltzmann equation applied to protein adsorption.

    PubMed

    Gama, Marlon de Souza; Santos, Mirella Simões; Lima, Eduardo Rocha de Almeida; Tavares, Frederico Wanderley; Barreto, Amaro Gomes Barreto

    2018-01-05

    Ion-exchange chromatography has been widely used as a standard process in the purification and analysis of proteins, based on the electrostatic interaction between the protein and the stationary phase. Over the years, several approaches have been used to improve the thermodynamic description of colloidal particle-surface interaction systems; however, there are still many gaps, specifically in describing the behavior of protein adsorption. Here, we present an improved methodology for predicting the adsorption equilibrium constant by solving the modified Poisson-Boltzmann (PB) equation in bispherical coordinates. By including dispersion interactions between ions and protein, and between ions and surface, the modified PB equation used can describe the Hofmeister effects. We solve the modified Poisson-Boltzmann equation to calculate the protein-surface potential of mean force, treated as a spherical colloid-plate system, as a function of process variables. From the potential of mean force, the Henry constants of adsorption, for different proteins and surfaces, are calculated as a function of pH, salt concentration, salt type, and temperature. The obtained Henry constants are compared with experimental data for several isotherms, showing excellent agreement. We have also performed a sensitivity analysis to verify the behavior of different kinds of salts and the Hofmeister effects. Copyright © 2017 Elsevier B.V. All rights reserved.
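
    In generic form (standard notation, not necessarily the authors'), the modification amounts to adding an ion-specific dispersion potential U_i to the Boltzmann factor, with the Henry constant then following from the potential of mean force W(h):

        \nabla^{2}\psi(\mathbf{r}) = -\frac{e}{\varepsilon_{0}\varepsilon_{r}}
        \sum_{i} z_{i}\, c_{i}^{\infty}
        \exp\!\left[-\frac{z_{i}e\,\psi(\mathbf{r}) + U_{i}(\mathbf{r})}{k_{B}T}\right],
        \qquad
        K_{H} = \int_{0}^{\infty}\!\left[e^{-W(h)/k_{B}T} - 1\right]\mathrm{d}h

    Because U_i differs from one salt to another, the theory picks up the salt-type (Hofmeister) dependence that the unmodified PB equation misses.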

  3. Reversed headspace analysis for characterization, identification, and analysis of solid and liquid matrices: Part I.

    PubMed

    Markelov, M; Bershevits, O

    2006-03-01

    This paper offers a methodology for experimentally simple reversed headspace (RHS) analysis for measuring matrix effects and their use for the identification and characterization of condensed matrices such as pharmaceuticals, polymers, chromatographic packings, etc., applicable to both quality-control monitoring and research and development. In RHS methods, the matrix is spiked and equilibrated with a mixture of volatile chemicals containing various functional groups (molecular sensor array, or MSA, mixture). Headspace chromatograms of the same spikes of a sample and an empty vial are compared. Examination of basic headspace theory shows that matrix-specific constants (M), rather than partition coefficients (K), can be calculated from the headspace chromatograms, with M=(K-1)xbeta, where beta is the degree of matrix volume change during equilibration. Matrix-specific constants can be plotted against any property of the chemicals (polarity, dielectric constant, solubility parameter, vapor pressure, etc.) or simply against a set of consecutive numbers, each representing a chemical in the MSA. This plot is, in a sense, a molecular affinity spectrum (MAS) specific for a given matrix at a given temperature and is independent of the instrument. Changes in the MAS that correspond to chemicals with a particular functional group give insight into the type of differences between matrices and may quantitatively define them.
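
    A small numerical sketch of the M calculation (peak areas and the K estimate are purely illustrative, not the paper's working):

        import numpy as np

        # Hypothetical peak areas for three MSA probe compounds: the same spike
        # measured above an empty vial and above the matrix-containing vial.
        A_empty  = np.array([1000.0, 950.0, 400.0])
        A_sample = np.array([ 700.0, 300.0, 380.0])
        beta = 0.9        # degree of matrix volume change during equilibration

        K = A_empty / A_sample    # simplified proxy for the partition coefficient
        M = (K - 1.0) * beta      # matrix-specific constants, M = (K-1) x beta
        # Plotting M against a probe property (polarity, solubility parameter, ...)
        # or against the probe index gives the MAS described above.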

  4. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  5. An evaluation of benthic macroinvertebrate biomass methodology : Part 1. Laboratory analytical methods.

    PubMed

    Mason, W T; Lewis, P A; Weber, C I

    1983-03-01

    Evaluation of analytical methods employed for wet weight (live or preserved samples) of benthic macroinvertebrates reveals that centrifugation at 140 x gravity for one minute yields constant biomass estimates. Less relative centrifugal force increases the chance of incomplete removal of body moisture and results in weighing error, while greater force may rupture fragile macroinvertebrates, such as mayflies. Duration of specimen exposure in ethanol, formalin, and formol (formalin-ethanol combinations) causes significant body weight loss within 48 hr; formalin and formol cause less body weight loss than ethanol. However, as all preservatives tested cause body weight loss, preservation time of samples collected for comparative purposes should be treated uniformly. Dry weight estimates of macroinvertebrates are not significantly affected by the kind of preservative or duration of exposure. Constant dry weights are attained by oven drying at 103 °C for a minimum of four hours or vacuum oven drying (15 inches of mercury pressure) at 103 °C for a minimum of one hour. Although requiring more preparation time than oven drying and irreversibly changing specimen body shape, freeze drying (10 microns pressure, -55 °C, 24 hr) provides constant dry weights and is advantageous for long-term sample storage by minimizing curatorial attention.

  6. Spatially resolved quantitative mapping of thermomechanical properties and phase transition temperatures using scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V; Nikiforov, Maxim P

    2013-07-09

    An approach for the thermomechanical characterization of phase transitions in polymeric materials (polyethyleneterephthalate) by band excitation acoustic force microscopy is developed. This methodology allows the independent measurement of resonance frequency, Q factor, and oscillation amplitude of a tip-surface contact area as a function of tip temperature, from which the thermal evolution of tip-surface spring constant and mechanical dissipation can be extracted. A heating protocol maintained a constant tip-surface contact area and constant contact force, thereby allowing for reproducible measurements and quantitative extraction of material properties including temperature dependence of indentation-based elastic and loss moduli.

  7. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    PubMed

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.

  8. Scanning Electrochemical Microscopy in Neuroscience

    NASA Astrophysics Data System (ADS)

    Schulte, Albert; Nebel, Michaela; Schuhmann, Wolfgang

    2010-07-01

    This article reviews recent work involving the application of scanning electrochemical microscopy (SECM) to the study of individual cultured living cells, with an emphasis on topographical and functional imaging of neuronal and secretory cells of the nervous and endocrine system. The basic principles of biological SECM and associated negative amperometric-feedback and generator/collector-mode SECM imaging are discussed, and successful use of the methodology for screening soft and fragile membranous objects is outlined. The drawbacks of the constant-height mode of probe movement and the benefits of the constant-distance mode of SECM operation are described. Finally, representative examples of constant-height and constant-distance mode SECM on a variety of live cells are highlighted to demonstrate the current status of single-cell SECM in general and of SECM in neuroscience in particular.

  9. Structural health monitoring methodology for aircraft condition-based maintenance

    NASA Astrophysics Data System (ADS)

    Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre

    2001-06-01

    Reducing maintenance costs while keeping a constant level of safety is a major issue for Air Forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. For this purpose, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and assess its severity with enough accuracy to allow low-cost corrective actions. The present paper describes a SHMS based on acoustic emission technology. This choice was driven by its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology which relies on the generation of artificial acoustic emission events on the structure and an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set that the system relies on. With this methodology, the anisotropy of composite structures is taken into account, thus avoiding the major cause of errors in classical localization methods. Moreover, it is adaptive to different structures as it does not rely on any particular model but on measured data. The acquired data are processed and the event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples showed an accuracy on the order of 1 cm.

  10. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials

    DOE PAGES

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; ...

    2017-01-31

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds.

  11. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials

    PubMed Central

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; Liu, Miao; Winston, Donald; Chen, Wei; Graf, Tanja; Schladt, Thomas D.; Persson, Kristin A.; Prinz, Fritz B.

    2017-01-01

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds. PMID:28140408

  12. Piloted Evaluation of an Integrated Methodology for Propulsion and Airframe Control Design

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.; Garg, Sanjay; Mattern, Duane L.; Ranaudo, Richard J.; Odonoghue, Dennis P.

    1994-01-01

    An integrated methodology for propulsion and airframe control has been developed and evaluated for a Short Take-Off Vertical Landing (STOVL) aircraft using a fixed-base flight simulator at NASA Lewis Research Center. For this evaluation the flight simulator is configured for transition flight using a STOVL aircraft model, a full nonlinear turbofan engine model, a simulated cockpit and displays, and pilot effectors. The paper provides a brief description of the simulation models, the flight simulation environment, the displays and symbology, the integrated control design, and the piloted tasks used for control design evaluation. In the simulation, the pilots successfully completed typical transition-phase tasks such as combined constant deceleration with flight path tracking, and constant acceleration wave-off maneuvers. The pilots' comments on the integrated system performance and the display symbology are discussed and analyzed to identify potential areas of improvement.

  13. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials.

    PubMed

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; Liu, Miao; Winston, Donald; Chen, Wei; Graf, Tanja; Schladt, Thomas D; Persson, Kristin A; Prinz, Fritz B

    2017-01-31

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds.
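
    As a sketch of how such database entries translate into optical properties, the electronic part of a dielectric tensor gives the principal refractive indices via n = sqrt(eps); the tensor below is hypothetical:

        import numpy as np

        # Hypothetical electronic (optical) dielectric tensor for a uniaxial crystal.
        eps_electronic = np.array([[5.2, 0.0, 0.0],
                                   [0.0, 5.2, 0.0],
                                   [0.0, 0.0, 6.8]])

        eigvals = np.linalg.eigvalsh(eps_electronic)   # principal dielectric constants
        n_principal = np.sqrt(eigvals)                 # refractive indices along the axes
        n_average = np.sqrt(eigvals.mean())            # crude isotropic average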

  14. An efficient and accurate framework for calculating lattice thermal conductivity of solids: AFLOW—AAPL Automatic Anharmonic Phonon Library

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano

    2017-10-01

    One of the most accurate approaches for calculating lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and lack of automation in the frameworks using this methodology, which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, it solves the Boltzmann transport equation to obtain κ, and it allows fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, and for materials characterized by strong electron localization and correlation. Combining AAPL with the pseudo-hybrid functional ACBN0 makes it possible to improve accuracy without increasing computational requirements.
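
    In its simplest single-mode relaxation-time form, the lattice thermal conductivity obtained from such a BTE solution reads (standard notation, not necessarily AAPL's internal conventions):

        \kappa_{\alpha\beta} = \frac{1}{N\,V}\sum_{\lambda}
        C_{\lambda}\, v_{\lambda}^{\alpha}\, v_{\lambda}^{\beta}\, \tau_{\lambda}

    where the sum runs over phonon modes λ, C_λ is the mode heat capacity, v_λ the group velocity, and τ_λ the lifetime; it is the lifetimes that require the third-order anharmonic force constants.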

  15. An irregular lattice method for elastic wave propagation

    NASA Astrophysics Data System (ADS)

    O'Brien, Gareth S.; Bean, Christopher J.

    2011-12-01

    Lattice methods are a class of numerical schemes which represent a medium as a connection of interacting nodes or particles. In the case of modelling seismic wave propagation, the interaction term is determined from Hooke's Law including a bond-bending term. This approach has been shown to model isotropic seismic wave propagation in an elastic or viscoelastic medium by selecting the appropriate underlying lattice structure. To predetermine the material constants, this methodology has been restricted to regular grids, hexagonal or square in 2-D or cubic in 3-D. Here, we present a method for isotropic elastic wave propagation in which this lattice restriction is removed. The methodology is outlined and a relationship between the elastic material properties and an irregular lattice geometry is derived. The numerical method is compared with an analytical solution for wave propagation in an infinite homogeneous body, along with a numerical solution for a layered elastic medium. The dispersion properties of this method are derived from a plane wave analysis, showing the scheme is more dispersive than a regular lattice method. Therefore, the computational costs of using an irregular lattice are higher. However, by removing the regular lattice structure the anisotropic nature of fracture propagation in such methods can be removed.

  16. Measurement of monomolecular binding constants of neutral phenols into the beta-cyclodextrin by continuous frontal analysis in capillary and microchip electrophoresis via a competitive assay.

    PubMed

    Le Saux, Thomas; Hisamoto, Hideaki; Terabe, Shigeru

    2006-02-03

    Measurement of binding constants by chip electrophoresis is a promising technique for the high-throughput screening of non-covalent interactions. Among the different electrophoretic methods available that yield the binding parameters, continuous frontal analysis is the most appropriate for transposition from capillary electrophoresis (CE) to microchip electrophoresis. Implementation of this methodology in a microchip was exemplified by the measurement of inclusion constants of 2-naphthalenesulfonate and neutral phenols (phenol, 4-chlorophenol and 4-nitrophenol) into beta-cyclodextrin by competitive assays. The issue of competitor choice is discussed in relation to its appropriateness for proper monitoring of the interaction.
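
    A back-of-the-envelope sketch of extracting a 1:1 inclusion constant from a frontal-analysis plateau (all concentrations hypothetical; the free-guest value would come from plateau-height calibration):

        # Hypothetical plateau data for one guest/cyclodextrin pair.
        G_total  = 100e-6    # total guest (e.g. phenol), mol/L
        CD_total = 200e-6    # total beta-cyclodextrin, mol/L
        G_free   = 55e-6     # free guest measured at the plateau, mol/L

        complexed = G_total - G_free
        CD_free   = CD_total - complexed
        K = complexed / (G_free * CD_free)   # 1/M, assuming 1:1 stoichiometry
        print(f"K = {K:.0f} M^-1")           # ~5300 M^-1 for these made-up numbers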

  17. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie Corinne Scheidt

    1994-01-01

    This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
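
    One common published form of such a randomized multifactor strength-degradation equation (shown generically; the exact PROMISS formulation may differ in its effect terms) is

        \frac{S}{S_{0}} = \prod_{i=1}^{n}
        \left(\frac{A_{i,u} - A_{i}}{A_{i,u} - A_{i,0}}\right)^{a_{i}}

    where, for each effect i, A_i is its current value, A_{i,u} its ultimate value, A_{i,0} a reference value, and a_i an empirical exponent; treating these quantities as random variables and sampling them is what produces the cumulative distribution functions of lifetime strength described above.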

  18. Assessing the groundwater recharge under various irrigation schemes in Central Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Lin, Zih-Ciao; Tsai, Cheng-Bin

    2014-05-01

    Flooded paddy fields can be considered a major source of groundwater recharge in Central Taiwan. The risk to rice production has increased notably due to climate change in this area. To respond to agricultural water shortages caused by climate change without affecting future rice yields, the application of water-saving irrigation is a practical solution. The System of Rice Intensification (SRI) was developed as a set of insights and practices used in growing irrigated rice. Based on the water-saving irrigation practice of SRI, the impacts of the new methodology on the reduction of groundwater recharge were assessed in Central Taiwan. The three-dimensional finite element groundwater model (FEMWATER), with variable boundary condition analog functions, was applied to simulate groundwater recharge under different irrigation schemes. According to local climatic and environmental characteristics associated with the SRI methodology, the change in infiltration rate was evaluated and compared with traditional irrigation schemes, including continuous irrigation and a rotational irrigation scheme. The simulation results showed that the average infiltration rate in the rice growing season decreased when applying the SRI methodology, and the total groundwater recharge amount of SRI with a 5-day irrigation interval was reduced by 12% and 9% compared with continuous irrigation (6 cm constant ponding water depth) and the rotational scheme (5-day irrigation interval with 6 cm initial ponding water depth), respectively. The results could be used as a basis for planning long-term adaptive water resource management strategies to climate change in Central Taiwan. Keywords: SRI, Irrigation schemes, Groundwater recharge, Infiltration

  19. Analysis of Freight Transport Strategies and Methodologies [summary

    DOT National Transportation Integrated Search

    2017-12-01

    Transportation planners constantly examine traffic flows to see if current roadway layouts are serving traffic needs. For freight hauling, this presents one issue on the open road, but a much different issue as these large vehicles approach their des...

  20. A novel methodological approach for the analysis of host-ligand interactions.

    PubMed

    Strat, Daniela; Missailidis, Sotiris; Drake, Alex F

    2007-02-02

    Traditional analysis of drug-binding data relies upon the Scatchard formalism. These methods involve fitting a linear equation whose intercept and gradient relate to physical properties such as the binding constant, cooperativity coefficients and number of binding sites. However, the existence of different binding modes with different binding constants makes the implementation of these models difficult. This article describes a novel approach to modeling host-ligand interactions using a derived analytical function describing the observed signal. The benefit of this method is that physically significant parameters, that is, binding constants and numbers of binding sites, are automatically derived by use of a minimisation routine. This methodology was utilised to analyse the interactions between a novel antitumour agent and DNA. An optical spectroscopy study confirms that the pentacyclic acridine derivative (DH208) binds to nucleic acids. Two binding modes can be identified: a stronger one that involves intercalation and a weaker one that involves oriented outer-sphere binding. In both cases the plane of the bound acridine ring is parallel to the nucleic acid bases, orthogonal to the phosphate backbone. Ultraviolet (UV) and circular dichroism (CD) data were fitted using the proposed model. The binding constants and the number of binding sites derived from the model remained consistent across the different techniques used. The different wavelengths at which the measurements were made maintained the coherence of the results.
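
    A minimal sketch of the direct-fitting idea, assuming a generic n-independent-site binding model and synthetic titration data (scipy performs the minimisation; nothing here is the paper's actual derived function):

        import numpy as np
        from scipy.optimize import curve_fit

        def signal(L_total, K, n, s_bound, P_total=5e-6):
            """Observed signal from ligand bound to n independent 1:1 sites on a
            host at fixed total concentration P_total (generic model)."""
            sites = n * P_total
            b = sites + L_total + 1.0 / K
            bound = 0.5 * (b - np.sqrt(b**2 - 4.0 * sites * L_total))
            return s_bound * bound

        rng = np.random.default_rng(2)
        L = np.linspace(1e-7, 5e-5, 25)             # titration points, mol/L
        obs = signal(L, 2e5, 2.0, 1e4) * (1 + 0.02 * rng.standard_normal(L.size))
        (K_fit, n_fit, s_fit), _ = curve_fit(signal, L, obs, p0=[1e5, 1.5, 5e3])

    Fitting K and n directly from the full signal curve avoids the linearization step of the Scatchard plot, which is what makes multiple binding modes tractable.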

  1. Eyewitness performance in cognitive and structured interviews.

    PubMed

    Memon, A; Wark, L; Holley, A; Bull, R; Koehnken, G

    1997-09-01

    This paper addresses two methodological and theoretical questions relating to the Cognitive Interview (CI), which previous research has found to increase witness recall in interviews. (1) What are the effects of the CI mnemonic techniques when communication techniques are held constant? (2) How do trained interviewers compare with untrained interviewers? In this study, witnesses (college students) viewed a short film clip of a shooting and were questioned by interviewers (research assistants) trained in conducting the CI or a Structured Interview (SI)--similar to the CI except for the "cognitive" components--or by untrained interviewers (UI). The CI and SI groups recalled significantly more correct information compared to the UI group. However, they also reported more errors and confabulated details. Theoretical and practical implications of the results are discussed in terms of precisely identifying the CI facilitatory effects and consequent good practice in the forensic setting.

  2. Economic optimization of operations for hybrid energy systems under variable markets

    DOE PAGES

    Chen, Jen; Garcia, Humberto E.

    2016-05-21

    We propose hybrid energy systems (HES) as an important element for enabling increased penetration of clean energy. Our paper investigates the operational flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to an alternative energy output while participating in the ancillary service market. Economic advantages of the operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
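
    As a hedged toy of the economics-driven dispatch idea (not the authors' multi-environment platform), a single storage element can be scheduled against a known price series as a small linear program; all numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# hourly prices ($/MWh) and renewable generation (MWh) -- hypothetical
price = np.array([20.0, 15.0, 30.0, 80.0, 60.0, 25.0])
gen   = np.array([5.0, 6.0, 4.0, 2.0, 3.0, 5.0])
T, cap, rate = len(price), 8.0, 3.0   # horizon, storage capacity, power limit

# variables: charge c_t and discharge d_t for each hour (2T in total);
# maximizing sum_t price_t*(gen_t - c_t + d_t) is the same as
# minimizing sum_t price_t*(c_t - d_t)
c_obj = np.concatenate([price, -price])

# state of charge: 0 <= cumulative(c - d) <= cap at every hour
L_tri = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([L_tri, -L_tri]),   # SOC <= cap
                  np.hstack([-L_tri, L_tri])])  # SOC >= 0
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, rate)] * (2 * T), method="highs")
c_t, d_t = res.x[:T], res.x[T:]
print("revenue with storage :", price @ (gen - c_t + d_t))
print("revenue, constant ops:", price @ gen)
```

    Comparing the two printed revenues mirrors the paper's comparison of flexible against constant operations, albeit in a drastically simplified setting.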

  3. Economic optimization of operations for hybrid energy systems under variable markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jen; Garcia, Humberto E.

    We propose hybrid energy systems (HES) as an important element for enabling increased penetration of clean energy. Our paper investigates the operational flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to an alternative energy output while participating in the ancillary service market. Economic advantages of the operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  4. Fatigue Life Methodology for Tapered Hybrid Composite Flexbeams

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.; Schaff, Jeffery R.

    2006-01-01

    Nonlinear-tapered flexbeam specimens from a full-size composite helicopter rotor hub flexbeam were tested under combined constant axial tension and cyclic bending loads. Two different graphite/glass hybrid configurations tested under cyclic loading failed by delamination in the tapered region. A 2-D finite element model was developed which closely approximated the flexbeam geometry, boundary conditions, and loading. The analysis results from two geometrically nonlinear finite element codes, ANSYS and ABAQUS, are presented and compared. Strain energy release rates (G) associated with simulated delamination growth in the flexbeams are presented from both codes. These results compare well with each other and suggest that the initial delamination growth from the tip of the ply-drop toward the thick region of the flexbeam is strongly mode II. The peak calculated G values were used with material characterization data to calculate fatigue life curves for comparison with test data. A curve relating maximum surface strain to number of loading cycles at delamination onset compared well with the test results.
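
    The last step (turning peak G values into a life curve) can be sketched under the common power-law assumption for delamination onset, G = c·N^b, fitted in log-log space; the data below are hypothetical placeholders, not the paper's characterization data.

```python
import numpy as np

# delamination-onset data: cycles N vs. max strain energy release rate
# G (J/m^2) -- hypothetical placeholder values
N = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
G = np.array([310.0, 240.0, 190.0, 150.0, 120.0])

# fit G = c * N**b on logarithmic axes
b, log_c = np.polyfit(np.log(N), np.log(G), 1)
c = np.exp(log_c)

# invert the fit to predict onset life at the peak G computed by the FE model
G_peak = 170.0
N_onset = (G_peak / c) ** (1.0 / b)
print(f"G = {c:.0f} * N^{b:.3f}; onset life at G = {G_peak} J/m^2: {N_onset:.3g} cycles")
```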

  5. In search for an optimal methodology to calculate the valence electron affinities of temporary anions.

    PubMed

    Puiatti, Marcelo; Vera, D Mariano A; Pierini, Adriana B

    2009-10-28

    Recently, we have proposed an approach for finding the valence anion ground state, based on the stabilization exerted by a polar solvent; the methodology used standard DFT methods and relatively inexpensive basis sets and yielded correct electron affinity (EA) values by gradually decreasing the dielectric constant of the medium. In order to address the overall performance of the new methodology, to find the best conditions for stabilizing the valence state, and to evaluate its scope and limitations, we gathered a pool of 60 molecules, 25 of them bearing the conventional valence state as the ground anion and 35 for which the lowest anion state found holds the extra electron in a diffuse orbital around the molecule (non-valence state). The results obtained by testing this representative set suggest a very good performance for most species having an experimental EA less negative than -3.0 eV; the correlation at the B3LYP/6-311+G(2df,p) level being y = 1.01x + 0.06, with a correlation index of 0.985. As an alternative, the time-dependent DFT (TD-DFT) approach was also tested with both the B3LYP and PBE0 functionals. The methodology we proposed shows comparable or better accuracy with respect to TD-DFT, although the TD-DFT approach with the PBE0 functional is suggested as a suitable estimate for species with the most negative EAs (ca. -2.5 to -3.5 eV), for which stabilization strategies can hardly reach the valence state. As an application, the EAs of a pool of 8 compounds of key biological interest, which remained unknown or unclear, were predicted using the new methodology.

  6. Seeded Fault Bearing Experiments: Methodology and Data Acquisition

    DTIC Science & Technology

    2011-06-01

    ...electronics piezoelectric (IEPE) transducer. Constant current biased transducers require AC coupling for the output signal. The ICP-Type Signal... the outer race. Abbreviations: I/O, input/output; IEPE, integral electronics piezoelectric; LCD, liquid crystal display; P&D, Prognostics and Diagnostics; RMS, root...

  7. Interdisciplinary team processes within an in-home service delivery organization.

    PubMed

    Gantert, Thomas W; McWilliam, Carol L

    2004-01-01

    Interdisciplinary teamwork is particularly difficult to achieve in the community context where geographical separateness and solo practices impede face to face contact and collaborative practice. Understanding the processes that occur within interdisciplinary teams is imperative, since client outcomes are influenced by interdisciplinary teamwork. The purpose of this exploratory study was to describe the processes that occur within interdisciplinary teams that deliver in-home care. Applying grounded theory methodology, the researcher conducted unstructured in-depth interviews with a purposeful sample of healthcare providers and used constant comparative analysis to elicit the findings. Findings revealed three key team processes: networking, navigating, and aligning. The descriptions afford several insights that are applicable to in-home healthcare agencies attempting to achieve effective interdisciplinary team functioning.

  8. Voltage Support Study of Smart PV Inverters on a High-Photovoltaic Penetration Utility Distribution Feeder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Pratt, Annabelle; Bialek, Tom

    2016-11-21

    This paper reports on tools and methodologies developed to study the impact of adding rooftop photovoltaic (PV) systems, with and without the ability to provide voltage support, on the voltage profile of distribution feeders. Simulation results are provided from a study of a specific utility feeder. The simulation model of the utility distribution feeder was built in OpenDSS and verified by comparing the simulated voltages to field measurements. First, we set all PV systems to operate at unity power factor and analyzed the impact on feeder voltages. Then we conducted multiple simulations with voltage support activated for all the smart PV inverters. These included different constant power factor settings and volt/VAR controls.

  9. Integrating the older/special needs adoptive child into the family.

    PubMed

    Clark, Pamela; Thigpen, Sally; Yates, Amy Moeller

    2006-04-01

    This qualitative, grounded theory study investigated 11 families who reported having successfully integrated into their family unit at least one older/special needs adoptee. The theory that emerged through the constant comparative methodology consisted of two categories (Decision to Adopt and Adjustment) and a core category (Developing a Sense of Family). The two categories and core category comprised a process that was informed by the Family Narrative Paradigm and culminated in the successful integration of the child or children into the existing family unit. Parental perceptions that appeared to facilitate this process included: (a) finding strengths in the children overlooked by previous caregivers, (b) viewing behavior in context, (c) reframing negative behavior, and (d) attributing improvement in behavior to parenting efforts.

  10. The social determinants of substance abuse in African American baby boomers: effects of family, media images, and environment.

    PubMed

    Pope, Robert C; Wallhagen, Margaret; Davis, Harvey

    2010-07-01

    Grounded theory methodology was used to explore the social processes involved in the use of illicit drugs in older African Americans as an underpinning to the development of approaches to nursing care and treatment. Interviews were conducted with six older African American substance users who were currently in drug treatment programs. Responses to the questions were recorded, transcribed, and analyzed using constant comparative methods. Three core themes emerged: (a) family, (b) media images, and (c) environment. The core issues of substance abuse, such as the environment and larger societal forces, cannot be addressed by one discipline and mandate that clinicians move to an interdisciplinary approach to achieve a plan of care for this growing population.

  11. Creation and implementation of an effective physician compensation methodology for a nonprofit medical foundation.

    PubMed

    Ferch, A W

    2000-01-01

    The foundation has determined that the adjusted gross billing methodology is a viable method to be considered for a nonprofit medical foundation in compensating physicians. The foundation continues to experiment with the margin formula and is exploring other potential formulas, but believes that, with certain modifications, the percentage of adjusted gross billing methodology can be effective and useful because of its simplicity, ease of administration, and motivational effect on the physicians. The primary improvement needed in the model would be the ability to adjust the formula frequently for individual practice variations. Modifications will continue to be made as circumstances change, but the basic principles will remain constant.

  12. Plethora or paucity: a systematic search and bibliometric study of the application and design of qualitative methods in nursing research 2008-2010.

    PubMed

    Ball, Elaine; McLoughlin, Moira; Darvill, Angela

    2011-04-01

    Qualitative methodology has increased in application and acceptability in all research disciplines. In nursing, it is appropriate that a plethora of qualitative methods can be found, as nurses pose real-world questions about clinical, cultural and ethical issues of patient care (Johnson, 2007; Long and Johnson, 2007), yet the methods nurses readily use in pursuit of answers remain under intense scrutiny. One of the problems with qualitative methodology for nursing research is its place in the hierarchy of evidence (HOE); another is its comparison to the positivist constructs of what constitutes good research and the measurement of qualitative research against this. In order to position and strengthen its evidence base, nursing may well seek to distance itself from a qualitative perspective and utilise methods at the top of the HOE; yet given the relation of qualitative methods to nursing, this would constrain rather than broaden the profession in its search for answers and an evidence base. The comparison between qualitative and quantitative can be both mutually exclusive and rhetorical; by shifting the comparison, this study takes a more reflexive position and critically appraises qualitative methods against the standards set by qualitative researchers. By comparing the design and application of qualitative methods in nursing over a two-year period, the study examined how qualitative research stands up to independent rather than comparative scrutiny. The methods followed a four-step mixed-methods approach newly constructed by the first author: (1) the scope of the research question was defined and inclusion criteria were developed; (2) synthesis tables were constructed to organise the data; (3) bibliometrics configured the data; and (4) studies selected for inclusion in the review were critically appraised using a critical interpretive synthesis (Dixon-Woods et al., 2006). The paper outlines the research process as well as the findings. Results showed that, of the 240 papers analysed, 27% used ad hoc or no references to qualitative methodology; methodological terms such as thematic analysis or constant comparative methods were used inconsistently; qualitative was a catch-all panacea rather than a methodology with well-argued terms or contextual definition. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Learning Entrepreneurship in Higher Education

    ERIC Educational Resources Information Center

    Taatila, Vesa P.

    2010-01-01

    Purpose: There is a constant need to produce more entrepreneurial graduates from higher education institutions. This paper aims to present and discuss several successful cases of entrepreneurial learning environments in order to suggest some important aspects that higher education institutions should consider. Design/methodology/approach: The…

  14. Preloading To Accelerate Slow-Crack-Growth Testing

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Choi, Sung R.; Pawlik, Ralph J.

    2004-01-01

    An accelerated-testing methodology has been developed for measuring the slow-crack-growth (SCG) behavior of brittle materials. Like the prior methodology, the accelerated-testing methodology involves dynamic fatigue (constant stress-rate) testing, in which a load or a displacement is applied to a specimen at a constant rate. SCG parameters or life-prediction parameters needed for designing components made of the same material as that of the specimen are calculated from the relationship between (1) the strength of the material as measured in the test and (2) the applied stress rate used in the test. Despite its simplicity and convenience, dynamic fatigue testing as practiced heretofore has one major drawback: it is extremely time-consuming, especially at low stress rates. The present accelerated methodology reduces the time needed to test a specimen at a given rate of applied load, stress, or displacement. Instead of starting the test from zero applied load or displacement as in the prior methodology, one preloads the specimen and increases the applied load at the specified rate (see Figure 1). One might expect the preload to alter the results of the test, and indeed it does, but fortunately it is possible to account for the effect of the preload in interpreting the results. The accounting is done by calculating the normalized strength (defined as the strength in the presence of preload divided by the strength in the absence of preload) as a function of (1) the preloading factor (defined as the preload stress divided by the strength in the absence of preload) and (2) a SCG parameter, denoted n, that is used in a power-law crack-speed formulation. Figure 2 presents numerical results from this theoretical calculation.
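
    Under the standard power-law crack-velocity assumption used in dynamic fatigue analysis, the accounting for preload can be sketched as follows (a hedged reconstruction consistent with the description above, not quoted from the cited work):

```latex
% Power-law SCG, v = A K^n: failure under a stress history \sigma(t) is governed
% by the integral \int_0^{t_f} \sigma(t)^n \, dt reaching a material constant D.
% Without preload (\sigma = \dot{\sigma} t):  \sigma_f^{\,n+1} = (n+1)\,\dot{\sigma}\,D.
% With preload \sigma_p, loading runs from \sigma_p instead of 0, so
%   \sigma_{f,p}^{\,n+1} - \sigma_p^{\,n+1} = (n+1)\,\dot{\sigma}\,D = \sigma_f^{\,n+1}.
% With preloading factor \alpha = \sigma_p/\sigma_f, the normalized strength is then
\[
  \frac{\sigma_{f,p}}{\sigma_f} \;=\; \left(1 + \alpha^{\,n+1}\right)^{\frac{1}{n+1}},
\]
% which stays close to 1 for modest \alpha or large n: the preload barely
% perturbs the measured strength while cutting the test time roughly by \alpha.
```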

  15. Calculation of exchange coupling constants in triply-bridged dinuclear Cu(II) compounds based on spin-flip constricted variational density functional theory.

    PubMed

    Seidu, Issaka; Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2012-03-08

    The performance of the second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) for the calculation of the exchange coupling constant (J) is assessed by application to a series of triply bridged Cu(II) dinuclear complexes. A comparison of the J values based on SF-CV(2)-DFT with those obtained by the broken-symmetry (BS) DFT method and experiment is provided. It is demonstrated that our methodology constitutes a viable alternative to the BS-DFT method. The strong dependence of the calculated exchange coupling constants on the applied functionals is demonstrated. Both SF-CV(2)-DFT and BS-DFT afford the best agreement with experiment for hybrid functionals.

  16. Methodological Choices in Muscle Synergy Analysis Impact Differentiation of Physiological Characteristics Following Stroke

    PubMed Central

    Banks, Caitlin L.; Pai, Mihir M.; McGuirk, Theresa E.; Fregly, Benjamin J.; Patten, Carolynn

    2017-01-01

    Muscle synergy analysis (MSA) is a mathematical technique that reduces the dimensionality of electromyographic (EMG) data. Used increasingly in biomechanics research, MSA requires methodological choices at each stage of the analysis. Differences in methodological steps affect the overall outcome, making it difficult to compare results across studies. We applied MSA to EMG data collected from individuals post-stroke identified as either responders (RES) or non-responders (nRES) on the basis of a critical post-treatment increase in walking speed. Importantly, no clinical or functional indicators identified differences between the cohort of RES and nRES at baseline. For this exploratory study, we selected the five highest RES and five lowest nRES available from a larger sample. Our goal was to assess how the methodological choices made before, during, and after MSA affect the ability to differentiate two groups with intrinsic physiologic differences based on MSA results. We investigated 30 variations in MSA methodology to determine which choices allowed differentiation of RES from nRES at baseline. Trial-to-trial variability in time-independent synergy vectors (SVs) and time-varying neural commands (NCs) were measured as a function of: (1) number of synergies computed; (2) EMG normalization method before MSA; (3) whether SVs were held constant across trials or allowed to vary during MSA; and (4) synergy analysis output normalization method after MSA. MSA methodology had a strong effect on our ability to differentiate RES from nRES at baseline. Across all 10 individuals and MSA variations, two synergies were needed to reach an average of 90% variance accounted for (VAF). Based on effect sizes, differences in SV and NC variability between groups were greatest using two synergies with SVs that varied from trial-to-trial. Differences in SV variability were clearest using unit magnitude per trial EMG normalization, while NC variability was less sensitive to EMG normalization method. No outcomes were greatly impacted by output normalization method. MSA variability for some, but not all, methods successfully differentiated intrinsic physiological differences inaccessible to traditional clinical or biomechanical assessments. Our results were sensitive to methodological choices, highlighting the need for disclosure of all aspects of MSA methodology in future studies. PMID:28912707
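
    MSA is typically implemented with non-negative matrix factorization (NMF); as a hedged, generic sketch of the "number of synergies needed for 90% VAF" step (not the authors' pipeline; the EMG matrix here is a random placeholder):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
emg = np.abs(rng.standard_normal((8, 500)))   # 8 muscles x 500 time samples
emg /= emg.max(axis=1, keepdims=True)         # unit-magnitude EMG normalization

def vaf(X, W, H):
    """Variance accounted for by the synergy reconstruction W @ H."""
    return 1.0 - np.sum((X - W @ H) ** 2) / np.sum(X ** 2)

for k in range(1, 6):
    model = NMF(n_components=k, init="nndsvd", max_iter=2000, random_state=0)
    W = model.fit_transform(emg)   # synergy vectors: muscles x k
    H = model.components_          # neural commands:  k x time
    print(f"{k} synergies: VAF = {vaf(emg, W, H):.3f}")
```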

  17. Methodological Choices in Muscle Synergy Analysis Impact Differentiation of Physiological Characteristics Following Stroke.

    PubMed

    Banks, Caitlin L; Pai, Mihir M; McGuirk, Theresa E; Fregly, Benjamin J; Patten, Carolynn

    2017-01-01

    Muscle synergy analysis (MSA) is a mathematical technique that reduces the dimensionality of electromyographic (EMG) data. Used increasingly in biomechanics research, MSA requires methodological choices at each stage of the analysis. Differences in methodological steps affect the overall outcome, making it difficult to compare results across studies. We applied MSA to EMG data collected from individuals post-stroke identified as either responders (RES) or non-responders (nRES) on the basis of a critical post-treatment increase in walking speed. Importantly, no clinical or functional indicators identified differences between the cohort of RES and nRES at baseline. For this exploratory study, we selected the five highest RES and five lowest nRES available from a larger sample. Our goal was to assess how the methodological choices made before, during, and after MSA affect the ability to differentiate two groups with intrinsic physiologic differences based on MSA results. We investigated 30 variations in MSA methodology to determine which choices allowed differentiation of RES from nRES at baseline. Trial-to-trial variability in time-independent synergy vectors (SVs) and time-varying neural commands (NCs) were measured as a function of: (1) number of synergies computed; (2) EMG normalization method before MSA; (3) whether SVs were held constant across trials or allowed to vary during MSA; and (4) synergy analysis output normalization method after MSA. MSA methodology had a strong effect on our ability to differentiate RES from nRES at baseline. Across all 10 individuals and MSA variations, two synergies were needed to reach an average of 90% variance accounted for (VAF). Based on effect sizes, differences in SV and NC variability between groups were greatest using two synergies with SVs that varied from trial-to-trial. Differences in SV variability were clearest using unit magnitude per trial EMG normalization, while NC variability was less sensitive to EMG normalization method. No outcomes were greatly impacted by output normalization method. MSA variability for some, but not all, methods successfully differentiated intrinsic physiological differences inaccessible to traditional clinical or biomechanical assessments. Our results were sensitive to methodological choices, highlighting the need for disclosure of all aspects of MSA methodology in future studies.

  18. Theory and simulations of adhesion receptor dimerization on membrane surfaces.

    PubMed

    Wu, Yinghao; Honig, Barry; Ben-Shaul, Avinoam

    2013-03-19

    The equilibrium constants of trans and cis dimerization of membrane-bound (2D) and freely moving (3D) adhesion receptors are expressed and compared using elementary statistical thermodynamics. Both processes are mediated by the binding of extracellular subdomains whose range of motion in the 2D environment is reduced upon dimerization, defining a thin reaction shell where dimer formation and dissociation take place. We show that the ratio between the 2D and 3D equilibrium constants can be expressed as a product of individual factors describing, respectively, the spatial ranges of motion of the adhesive domains and their rotational freedom within the reaction shell. The results predicted by the theory are compared to those obtained from a novel (to our knowledge) dynamical simulations methodology, whereby pairs of receptors perform realistic translational, internal, and rotational motions in 2D and 3D. We use cadherins as our model system. The theory and simulations explain how the strengths of cis and trans interactions of adhesive receptors are affected both by their presence in the constrained intermembrane space and by the 2D environment of membrane surfaces. Our work provides fundamental insights into the mechanism of lateral clustering of adhesion receptors after cell-cell contact and, more generally, into the formation of lateral microclusters of proteins on cell surfaces. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  19. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  20. Divergent Perceptions: Parental Acceptance and Adolescents' Psychosocial Adjustment.

    ERIC Educational Resources Information Center

    Eguia, Maria E.

    This study examined whether divergent parent-adolescent perceptions regarding parental acceptance predicted adolescent adjustment when the level of parental acceptance (as perceived by the adolescent) was held constant, a methodological and theoretical issue largely ignored by previous research. Subjects were 192 intact, primarily dual-earner…

  1. Evolving cell models for systems and synthetic biology.

    PubMed

    Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio

    2010-03-01

    This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
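
    The four fitness-aggregation schemes compared in the paper can be written compactly; a hedged sketch with hypothetical objective values (the paper's exact normalization and weighting details may differ):

```python
import numpy as np

def equally_weighted_sum(f):
    return float(np.mean(f))

def normalized_sum(f, f_min, f_max):
    # rescale each objective to [0, 1] before averaging
    return float(np.mean((f - f_min) / (f_max - f_min)))

def randomly_weighted_sum(f, rng):
    w = rng.random(len(f))         # fresh random weights each evaluation
    return float(np.dot(w / w.sum(), f))

def equally_weighted_product(f):
    return float(np.prod(f) ** (1.0 / len(f)))

# example: distances of a candidate model's output from four target behaviours
f = np.array([0.12, 3.4, 0.8, 1.9])
rng = np.random.default_rng(42)
print(equally_weighted_sum(f),
      normalized_sum(f, np.zeros(4), np.full(4, 5.0)),
      randomly_weighted_sum(f, rng),
      equally_weighted_product(f))
```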

  2. A Qualitative Assessment of the Content Validity of the ICECAP-A and EQ-5D-5L and Their Appropriateness for Use in Health Research

    PubMed Central

    Keeley, Thomas; Al-Janabi, Hareth; Lorgelly, Paula; Coast, Joanna

    2013-01-01

    Purpose The ICECAP-A and EQ-5D-5L are two index measures appropriate for use in health research. Assessment of content validity allows understanding of whether a measure captures the most relevant and important aspects of a concept. This paper reports a qualitative assessment of the content validity and appropriateness for use of the EQ-5D-5L and ICECAP-A measures, using novel methodology. Methods In-depth semi-structured interviews were conducted with research professionals in the UK and Australia. Informants were purposively sampled based on their professional role. Data were analysed in an iterative, thematic and constant comparative manner. A two-stage investigation (the comparative direct approach) was developed to address the methodological challenges of the content validity research and allow rigorous assessment. Results Informants viewed the ICECAP-A as an assessment of the broader determinants of quality of life, but lacking in assessment of health-related determinants. The EQ-5D-5L was viewed as offering good coverage of health determinants, but as lacking in assessment of these broader determinants. Informants held some concerns about the content or wording of the Self-care, Pain/Discomfort and Anxiety/Depression items (EQ-5D-5L) and the Enjoyment, Achievement and Attachment items (ICECAP-A). Conclusion Using rigorous qualitative methodology, the results suggest that the ICECAP-A and EQ-5D-5L hold acceptable levels of content validity and are appropriate for use in health research. This work adds expert opinion to the emerging body of research using patients and the public to validate these measures. PMID:24367708

  3. Platelet-rich plasma in arthroscopic rotator cuff repair: a meta-analysis of randomized controlled trials.

    PubMed

    Zhao, Jia-Guo; Zhao, Li; Jiang, Yan-Xia; Wang, Zeng-Liang; Wang, Jia; Zhang, Peng

    2015-01-01

    The purpose of this study was to appraise the retear rate and clinical outcomes of platelet-rich plasma use in patients undergoing arthroscopic full-thickness rotator cuff repair. We searched the Cochrane Library, PubMed, and EMBASE databases for randomized controlled trials comparing the outcomes of arthroscopic rotator cuff surgery with or without the use of platelet-rich plasma. Methodological quality was assessed by the Detsky quality scale. When there was no high heterogeneity, we used a fixed-effects model. Dichotomous variables were presented as risk ratios (RRs) with 95% confidence intervals (CIs), and continuous data were measured as mean differences with 95% CIs. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system was used to assess the quality of evidence for each individual outcome. Eight randomized controlled trials were included, with the sample size ranging from 28 to 88. Overall methodological quality was high. Fixed-effects analysis showed that differences were not significant between the 2 groups in retear rate (RR, 0.94; 95% CI, 0.70 to 1.25; P = .66), Constant score (mean difference, 1.12; 95% CI, -1.38 to 3.61; P = .38), and University of California at Los Angeles (UCLA) score (mean difference, -0.68; 95% CI, -2.00 to 0.65; P = .32). The strength of GRADE evidence was categorized respectively as low for retear, moderate for Constant score, and low for UCLA shoulder score. Our meta-analysis does not support the use of platelet-rich plasma in the arthroscopic repair of full-thickness rotator cuff tears over repairs without platelet-rich plasma because of similar retear rates and clinical outcomes. Level II, meta-analysis of Level I and II randomized controlled trials. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
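
    For orientation, the fixed-effects pooling used for such dichotomous outcomes follows the standard inverse-variance scheme on the log scale (generic formulas, not the trial data):

```latex
% Inverse-variance fixed-effect pooling of k trials on the log risk-ratio scale:
\[
  \hat{\theta} \;=\; \frac{\sum_{i=1}^{k} w_i \,\ln \mathrm{RR}_i}{\sum_{i=1}^{k} w_i},
  \qquad
  w_i = \frac{1}{\operatorname{Var}(\ln \mathrm{RR}_i)},
  \qquad
  \mathrm{SE}(\hat{\theta}) = \Big(\textstyle\sum_{i} w_i\Big)^{-1/2},
\]
% with the pooled risk ratio and its 95% confidence interval recovered as
\[
  \mathrm{RR} = e^{\hat{\theta}},
  \qquad
  \mathrm{CI}_{95\%} = e^{\hat{\theta} \,\pm\, 1.96\,\mathrm{SE}(\hat{\theta})}.
\]
```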

  4. The efficacy of therapeutic ultrasound for rotator cuff tendinopathy: A systematic review and meta-analysis.

    PubMed

    Desmeules, François; Boudreault, Jennifer; Roy, Jean-Sébastien; Dionne, Clermont; Frémont, Pierre; MacDermid, Joy C

    2015-08-01

    A systematic review and meta-analysis on the efficacy of therapeutic ultrasound (US) in adults suffering from rotator cuff tendinopathy. A literature search was conducted in four databases for randomized controlled trials (RCT) published until 12/2013, comparing the efficacy of US to any other interventions in adults suffering from rotator cuff tendinopathy. The Cochrane Risk of Bias tool was used to evaluate the risk of bias of included studies. Data were summarized qualitatively or quantitatively. Eleven RCTs with a low mean methodological score (50.0% ± 15.6%) were included. Therapeutic US did not provide greater benefits than a placebo intervention or advice in terms of pain reduction and functional improvement. When provided in conjunction with exercise, US therapy is not superior to exercise alone in terms of pain reduction and functional improvement (pooled mean difference of the Constant-Murley score: -0.26 with 95% confidence interval of -3.84 to 3.32). Laser therapy was found superior to therapeutic US in terms of pain reduction. Based on low to moderate level evidence, therapeutic US does not provide any benefit compared to a placebo or advice, to laser therapy or when combined to exercise. More methodologically sound studies on the efficacy of therapeutic US are warranted. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. New Tools for Design

    ERIC Educational Resources Information Center

    Halliburton, Cal; Roza, Victoria

    2006-01-01

    Technology educators are constantly in search of new tools and methods to enhance the education of their students. This article is an excerpt from a longer article published in "The Technology Teacher" that introduced the technology education community to a research- and knowledge-based methodology for design--invention and innovation. This…

  6. Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons

    NASA Technical Reports Server (NTRS)

    Arunkumar, Satyanarayana; Przekop, Adam

    2010-01-01

    Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS(TradeMark)/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.
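
    A hedged toy of the COSTR idea at a single material point: once any stress component exceeds its allowable, the point instantaneously carries no load. A max-stress check stands in for whatever failure criteria the actual VUMAT applies, and all values are hypothetical.

```python
import numpy as np

# allowable in-plane strengths (MPa): fiber, transverse, shear -- hypothetical
ALLOWABLE = np.array([2000.0, 60.0, 90.0])

def costr_update(stress, failed):
    """Instantaneous complete stress reduction (COSTR) at a material point."""
    if failed or np.any(np.abs(stress) > ALLOWABLE):
        return np.zeros_like(stress), True   # complete, instantaneous reduction
    return stress, False

failed = False
load_history = [np.array([500.0, 20.0, 30.0]),
                np.array([900.0, 70.0, 40.0]),   # transverse allowable exceeded
                np.array([1200.0, 80.0, 50.0])]
for step, s in enumerate(load_history):
    s_eff, failed = costr_update(s, failed)
    print(f"step {step}: effective stress = {s_eff}, failed = {failed}")
```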

  7. Development of quantitative radioactive methodologies on paper to determine important lateral-flow immunoassay parameters.

    PubMed

    Mosley, Garrett L; Nguyen, Phuong; Wu, Benjamin M; Kamei, Daniel T

    2016-08-07

    The lateral-flow immunoassay (LFA) is a well-established diagnostic technology that has recently seen significant advancements due in part to the rapidly expanding fields of paper diagnostics and paper-fluidics. As LFA-based diagnostics become more complex, it becomes increasingly important to quantitatively determine important parameters during the design and evaluation process. However, current experimental methods for determining these parameters have certain limitations when applied to LFA systems. In this work, we describe our novel methods of combining paper and radioactive measurements to determine nanoprobe molarity, the number of antibodies per nanoprobe, and the forward and reverse rate constants for nanoprobe binding to immobilized target on the LFA test line. Using a model LFA system that detects for the presence of the protein transferrin (Tf), we demonstrate the application of our methods, which involve quantitative experimentation and mathematical modeling. We also compare the results of our rate constant experiments with traditional experiments to demonstrate how our methods more appropriately capture the influence of the LFA environment on the binding interaction. Our novel experimental approaches can therefore more efficiently guide the research process for LFA design, leading to more rapid advancement of the field of paper-based diagnostics.
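
    The forward and reverse rate constants referred to above enter through the usual bimolecular binding kinetics; the generic form is (the paper's radioactive read-out model is more detailed):

```latex
% Nanoprobe P binding immobilized target T at the LFA test line:
\[
  \frac{d[PT]}{dt} \;=\; k_{\mathrm{on}}\,[P]\,[T] \;-\; k_{\mathrm{off}}\,[PT],
  \qquad
  K_D \;=\; \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}
        \;=\; \frac{[P]_{\mathrm{eq}}\,[T]_{\mathrm{eq}}}{[PT]_{\mathrm{eq}}}.
\]
```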

  8. Experimental determination of thermodynamic equilibrium in biocatalytic transamination.

    PubMed

    Tufvesson, Pär; Jensen, Jacob S; Kroutil, Wolfgang; Woodley, John M

    2012-08-01

    The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones. Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore, in this communication we suggest a simple experimental methodology which we hope will stimulate more accurate determination of thermodynamic equilibria when reporting the results of transaminase-catalyzed reactions in order to increase understanding of the relationship between substrate and product molecular structure on reaction thermodynamics. Copyright © 2012 Wiley Periodicals, Inc.
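
    For the generic transamination of a ketone, the equilibrium constant in question takes the standard mass-action form (stated for orientation; the paper's contribution is a reliable way to measure it):

```latex
% ketone + amine donor  <=>  chiral amine + co-product ketone
\[
  K \;=\; \frac{[\text{chiral amine}]_{\mathrm{eq}}\;[\text{co-product ketone}]_{\mathrm{eq}}}
               {[\text{ketone}]_{\mathrm{eq}}\;[\text{amine donor}]_{\mathrm{eq}}}.
\]
```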

  9. Features of the Thermodynamics of Trivalent Lanthanide/Actinide Distribution Reactions by Tri-n-Octylphosphine Oxide and Bis(2-EthylHexyl) Phosphoric Acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Travis S. Grimes; Peter R. Zalupski

    2014-11-01

    A new methodology has been developed to study the thermochemical features of the biphasic transfer reactions of trisnitrato complexes of lanthanides and americium by a mono-functional solvating ligand (tri-n-octyl phosphine oxide, TOPO). Stability constants for successive nitrato complexes (M(NO3)x(3-x)(aq), where M is Eu3+, Am3+ or Cm3+) were determined to assist in the calculation of the extraction constant, Kex, for the metal ions under study. Enthalpies of extraction (ΔHextr) for the lanthanide series (excluding Pm3+) and Am3+ by TOPO have been measured using isothermal titration calorimetry. The observed ΔHextr were found to be constant at ~29 kJ mol-1 across the series from La3+ to Er3+, with a slight decrease observed from Tm3+ to Lu3+. These heats were found to be consistent with enthalpies determined using van 't Hoff analysis of temperature-dependent extraction studies. A complete set of thermodynamic parameters (ΔG, ΔH, ΔS) was calculated for Eu(NO3)3, Am(NO3)3 and Cm(NO3)3 extraction by TOPO and for Am3+ and Cm3+ extraction by bis(2-ethylhexyl) phosphoric acid (HDEHP). A discussion comparing the energetics of these systems is offered. The measured biphasic extraction heats for the transplutonium elements, ΔHextr, presented in these studies are the first direct measurements obtained using two-phase calorimetric techniques.
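
    The consistency check between the calorimetric heats and the temperature-dependent extraction data rests on the standard van 't Hoff relation:

```latex
% van 't Hoff analysis of the temperature dependence of the extraction constant:
\[
  \ln K_{\mathrm{ex}} \;=\; -\frac{\Delta H^{\circ}}{R}\,\frac{1}{T} \;+\; \frac{\Delta S^{\circ}}{R},
  \qquad
  \Delta G^{\circ} \;=\; -RT \ln K_{\mathrm{ex}} \;=\; \Delta H^{\circ} - T\,\Delta S^{\circ},
\]
% so the slope of ln K_ex versus 1/T gives the extraction enthalpy that the
% isothermal titration calorimetry measures directly.
```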

  10. My Teaching Philosophy

    ERIC Educational Resources Information Center

    Mambetaliev, Askarbek

    2007-01-01

    Since the collapse of the Soviet Union the Ministry of Education of the Kyrgyz Republic has included a few social science disciplines in the list of the Educational State Standards, though the content of these subjects and teaching methodologies are still weak. One of the problems, which I constantly face in Kyrgyzstan when developing a new…

  11. Performance in Physiology Evaluation: Possible Improvement by Active Learning Strategies

    ERIC Educational Resources Information Center

    Montrezor, Luís H.

    2016-01-01

    The evaluation process is complex and extremely important in the teaching/learning process. Evaluations are constantly employed in the classroom to assist students in the learning process and to help teachers improve the teaching process. The use of active methodologies encourages students to participate in the learning process, encourages…

  12. A Multivariate Solution of the Multivariate Ranking and Selection Problem

    DTIC Science & Technology

    1980-02-01

    Taneja (1972)), a′a for a vector of constants c (Krishnaiah and Rizvi (1966)), the generalized variance (Gnanadesikan and Gupta (1970)), iegier (1976...Olkin, I. and Sobel, M. (1977). Selecting and Ordering Populations: A New Statistical Methodology, John Wiley & Sons, Inc., New York. Gnanadesikan

  13. Middle Grade Students' Concept Images of Algebraic Concepts

    ERIC Educational Resources Information Center

    Tekin-Sitrava, Reyhan

    2017-01-01

    This study investigates middle school students' concept images of the algebraic concepts of term, constant term, variable, and coefficient. The study also aimed to explore students' performance in defining these concepts correctly. A phenomenological method was used to support the methodological perspective and to reveal the findings of the study.…

  14. Effective Second Language Writing. TESOL Classroom Practice Series

    ERIC Educational Resources Information Center

    Kasten, Susan, Ed.

    2010-01-01

    The classroom practices discussed in "Effective Second Language Writing" reflect various trends and methodologies; however, the underlying theme in this volume of the Classroom Practice Series is the need for clear and meaningful communication between ESL writers and their readers. While approaches differ, two core beliefs are constant: ESL…

  15. Study on improving the turbidity measurement of the absolute coagulation rate constant.

    PubMed

    Sun, Zhiwei; Liu, Jie; Xu, Shenghua

    2006-05-23

    The existing theories dealing with the evaluation of the absolute coagulation rate constant by turbidity measurement were experimentally tested for suspensions of different particle sizes (radius = a) at incident wavelengths (λ) ranging from near-infrared to ultraviolet light. When the size parameter α = 2πa/λ > 3, the rate-constant data from previous theories for fixed-sized particles show significant inconsistencies at different light wavelengths. We attribute this problem to the imperfection of these theories in describing the light scattering from doublets through their evaluation of the extinction cross section. The evaluation of the rate constants by all previous theories becomes untenable as the size parameter increases, which limits the applicable range of the turbidity measurement. By using the T-matrix method, we present a robust solution for evaluating the extinction cross section of doublets formed in the aggregation. Our experiments show that this new approach is effective in extending the applicability range of the turbidity methodology and increasing measurement accuracy.

  16. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to build a solid foundation for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches selected reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA) across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Methodology development for evaluation of selective-fidelity rotorcraft simulation

    NASA Technical Reports Server (NTRS)

    Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel

    1992-01-01

    This paper addresses the initial step toward the goal of establishing performance and handling qualities acceptance criteria for real-time rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework the simulator is classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also provides a technique to determine the required levels of subsystem fidelity for a specific task.

  18. A comparison of kinesthetic-tactual and visual displays via a critical tracking task. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.

    1979-01-01

    The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Furthermore, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure of tactual tracking as it is of visual tracking.
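
    A hedged sketch of the critical tracking idea: a first-order plant is made progressively more unstable while a crudely modelled operator (a delayed proportional controller) tries to keep it near zero; the instability level at loss of control is the critical score. Dynamics, delay and gain below are illustrative, not the study's experimental values.

```python
# Critical tracking toy: dx/dt = lambda*x + u, with lambda ramping upward
DT, DELAY_S, GAIN = 0.01, 0.2, 1.5          # time step, operator delay, gain
delay_steps = int(DELAY_S / DT)

def lambda_at_loss(lambda_rate=0.05, x_limit=10.0):
    x, lam = 0.1, 0.1
    u_buf = [0.0] * delay_steps             # buffer implements the reaction delay
    while abs(x) < x_limit:
        u_buf.append(-GAIN * lam * x)       # operator responds to what it saw
        u = u_buf.pop(0)                    # ... DELAY_S seconds earlier
        x += DT * (lam * x + u)
        lam += DT * lambda_rate             # continuously destabilize the plant
    return lam                              # critical instability score

print("lambda at loss of control:", round(lambda_at_loss(), 3))
```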

  19. A methodology to determine the level of automation to improve the production process and reduce the ergonomics index

    NASA Astrophysics Data System (ADS)

    Chan-Amaya, Alejandro; Anaya-Pérez, María Elena; Benítez-Baltazar, Víctor Hugo

    2017-08-01

    Companies are constantly looking for productivity improvements to increase their competitiveness. The use of automation technologies is a tool that has been proven effective for achieving this. Some companies are not familiar with the process of acquiring automation technologies; they therefore abstain from investment and thereby miss the opportunity to take advantage of it. The present document proposes a methodology to determine the level of automation appropriate for the production process, thereby improving production while taking the ergonomics factor into consideration.

  20. Measuring small compartment dimensions by probing diffusion dynamics via Non-uniform Oscillating-Gradient Spin-Echo (NOGSE) NMR.

    PubMed

    Shemesh, Noam; Alvarez, Gonzalo A; Frydman, Lucio

    2013-12-01

    Noninvasive measurements of microstructure in materials, cells, and in biological tissues, constitute a unique capability of gradient-assisted NMR. Diffusion-diffraction MR approaches pioneered by Callaghan demonstrated this ability; Oscillating-Gradient Spin-Echo (OGSE) methodologies tackle the demanding gradient amplitudes required for observing diffraction patterns by utilizing constant-frequency oscillating gradient pairs that probe the diffusion spectrum, D(ω). Here we present a new class of diffusion MR experiments, termed Non-uniform Oscillating-Gradient Spin-Echo (NOGSE), which dynamically probe multiple frequencies of the diffusion spectral density at once, thus affording direct microstructural information on the compartment's dimension. The NOGSE methodology applies N constant-amplitude gradient oscillations; N-1 of these oscillations are spaced by a characteristic time x, followed by a single gradient oscillation characterized by a time y, such that the diffusion dynamics is probed while keeping (N-1)x+y≡TNOGSE constant. These constant-time, fixed-gradient-amplitude, multi-frequency attributes render NOGSE particularly useful for probing small compartment dimensions with relatively weak gradients - alleviating difficulties associated with probing D(ω) frequency-by-frequency or with varying relaxation weightings, as in other diffusion-monitoring experiments. Analytical descriptions of the NOGSE signal are given, and the sequence's ability to extract small compartment sizes with a sensitivity towards length to the sixth power, is demonstrated using a microstructural phantom. Excellent agreement between theory and experiments was evidenced even upon applying weak gradient amplitudes. An MR imaging version of NOGSE was also implemented in ex vivo pig spinal cords and mouse brains, affording maps based on compartment sizes. The effects of size distributions on NOGSE are also briefly analyzed. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Generating or developing grounded theory: methods to understand health and illness.

    PubMed

    Woods, Phillip; Gapp, Rod; King, Michelle A

    2016-06-01

    Grounded theory is a qualitative research methodology that aims to explain social phenomena, e.g. why particular motivations or patterns of behaviour occur, at a conceptual level. Developed in the 1960s by Glaser and Strauss, the methodology has been reinterpreted by Strauss and Corbin in more recent times, resulting in different schools of thought. Differences arise from different philosophical perspectives concerning knowledge (epistemology) and the nature of reality (ontology), demanding that researchers make clear theoretical choices at the commencement of their research when choosing this methodology. Compared to other qualitative methods, it has the ability to achieve understanding of, rather than simply describe, a social phenomenon. Achieving understanding, however, requires theoretical sampling to choose interviewees who can contribute most to the research and to understanding of the phenomenon, and constant comparison of interviews to evaluate the same event or process in different settings or situations. Sampling continues until conceptual saturation is reached, i.e. when no new concepts emerge from the data. Data analysis focusses on categorising data (finding the main elements of what is occurring and why), and on describing those categories in terms of properties (conceptual characteristics that define the category and give meaning) and dimensions (the variations within properties which produce specificity and range). Ultimately, a core category which theoretically explains how all other categories are linked together is developed from the data. While achieving theoretical abstraction in the core category, it should be logical and capture all of the variation within the data. Theory development requires understanding of the methodology, not just working through a set of procedures. This article provides a basic overview, set in the literature surrounding grounded theory, for those wanting to increase their understanding and the quality of their research output.

  2. Creep force modelling for rail traction vehicles based on the Fastsim algorithm

    NASA Astrophysics Data System (ADS)

    Spiryagin, Maksym; Polach, Oldrich; Cole, Colin

    2013-11-01

    The evaluation of creep forces is a complex task and their calculation is a time-consuming process for multibody simulation (MBS). A methodology of creep forces modelling at large traction creepages has been proposed by Polach [Creep forces in simulations of traction vehicles running on adhesion limit. Wear. 2005;258:992-1000; Influence of locomotive tractive effort on the forces between wheel and rail. Veh Syst Dyn. 2001(Suppl);35:7-22] adapting his previously published algorithm [Polach O. A fast wheel-rail forces calculation computer code. Veh Syst Dyn. 1999(Suppl);33:728-739]. The most common method for creep force modelling used by software packages for MBS of running dynamics is the Fastsim algorithm by Kalker [A fast algorithm for the simplified theory of rolling contact. Veh Syst Dyn. 1982;11:1-13]. However, the Fastsim code has some limitations which do not allow modelling the creep force - creep characteristic in agreement with measurements for locomotives and other high-power traction vehicles, mainly for large traction creep at low-adhesion conditions. This paper describes a newly developed methodology based on a variable contact flexibility increasing with the ratio of the slip area to the area of adhesion. This variable contact flexibility is introduced in a modification of Kalker's code Fastsim by replacing the constant Kalker's reduction factor, widely used in MBS, by a variable reduction factor together with a slip-velocity-dependent friction coefficient decreasing with increasing global creepage. The proposed methodology is presented in this work and compared with measurements for different locomotives. The modification allows use of the well recognised Fastsim code for simulation of creep forces at large creepages in agreement with measurements without modifying the proven modelling methodology at small creepages.
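
    A Polach-type saturation law with the two ingredients named above (separate reduction factors k_A for the adhesion area and k_S for the slip area, plus a friction coefficient that falls with slip velocity) can be sketched as follows; the functional forms follow Polach's published model, but the parameter values are purely illustrative and this is not the authors' calibrated implementation.

```python
import numpy as np

# illustrative wheel-rail contact parameters
Q    = 100e3          # wheel load, N
a, b = 6e-3, 6e-3     # contact ellipse semi-axes, m
C    = 2.0e12         # contact shear stiffness, N/m^3 -- illustrative
mu0, A, B = 0.4, 0.4, 0.6   # static friction, ratio limit, exponent
v    = 20.0           # vehicle speed, m/s

def creep_force(s, kA=1.0, kS=0.4):
    """Tangential force for longitudinal creepage s (Polach-type law)."""
    w  = s * v                                  # slip velocity
    mu = mu0 * ((1 - A) * np.exp(-B * w) + A)   # friction falls with slip velocity
    eps = (2.0 / 3.0) * (C * np.pi * a**2 * b) / (Q * mu) * s
    return (2 * Q * mu / np.pi) * (kA * eps / (1 + (kA * eps) ** 2)
                                   + np.arctan(kS * eps))

for s in (0.001, 0.01, 0.05, 0.3):
    print(f"creepage {s:5.3f}: F = {creep_force(s) / 1e3:6.1f} kN")
```

    Lowering k_S relative to k_A flattens the characteristic at large creepages, which is the low-adhesion behaviour the paper tunes against locomotive measurements.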

  3. Using Teradata University Network (TUN), a Free Internet Resource for Teaching and Learning

    ERIC Educational Resources Information Center

    Winter, Robert; Gericke, Anke; Bucher, Tobias

    2008-01-01

    Business intelligence and information logistics have become an important part of teaching curricula in recent years due to the increased demand for adequately trained graduates. Since these fields are characterized by a high amount of software and methodology innovations, teaching materials and teaching aids require constant updating. Teradata has…

  4. Cutting through the Hype: Evaluating the Innovative Potential of New Educational Technologies through Business Model Analysis

    ERIC Educational Resources Information Center

    Kalman, Yoram M.

    2016-01-01

    In an era when novel educational technologies are constantly introduced to the marketplace, often accompanied by hyperbolic claims that these ground-breaking innovations will transform the educational landscape, decision makers in educational institutions need a methodological approach for examining the innovative potential of new educational…

  5. Exploring Listeners' Real-Time Reactions to Regional Accents

    ERIC Educational Resources Information Center

    Watson, Kevin; Clark, Lynn

    2015-01-01

    Evaluative reactions to language stimuli are presumably dynamic events, constantly changing through time as the signal unfolds, yet the tools we usually use to capture these reactions provide us with only a snapshot of this process by recording reactions at a single point in time. This paper outlines and evaluates a new methodology which employs…

  6. Colleges and Universities Want to Be Your Friend: Communicating via Online Social Networking

    ERIC Educational Resources Information Center

    Wandel, Tamara L.

    2008-01-01

    This article presents a compilation of data regarding the role of online social networks within campus communities, specifically for nonacademic purposes. Both qualitative and quantitative data methodologies are used to provide a unique perspective on a constantly evolving topic. Interviews of students and administrators allow for candid…

  7. Discursive Shadowing in Linguistic Ethnography. Situated Practices and Circulating Discourses in Multilingual Schools

    ERIC Educational Resources Information Center

    Dewilde, Joke; Creese, Angela

    2016-01-01

    We consider discursive shadowing as methodology in linguistic ethnography and how it refines our analyses of participants' situated practices. In addition to the constant and extended company the researcher and key participant keep with one another in the field, shadowing in a linguistic ethnographic approach includes the ubiquitous…

  8. "Constantly in the Making": Pedagogical Characteristics of Education for Sustainability in Postsecondary Classrooms

    ERIC Educational Resources Information Center

    Belue Buckley, Jessica

    2015-01-01

    Using a grounded theory methodology with observation of 67 courses and interviews with 42 individuals, including faculty, staff, and students, the author highlights three pedagogical characteristics of postsecondary educators who engage in education for sustainability (EfS). Educators teach beyond content, incorporate a values orientation, and use…

  9. Impact of Hands-On Research Experience on Students' Learning in an Introductory Management Information System Course

    ERIC Educational Resources Information Center

    Wu, Yun; Sankar, Chetan S.

    2013-01-01

    Although students in Introductory Information Systems courses are taught new technology concepts, the complexity and constantly changing nature of these technologies makes it challenging to deliver the concepts effectively. Aiming to improve students' learning experiences, this research utilized the five phases of design science methodology to…

  10. Precipitable water vapour content from ESR/SKYNET sun-sky radiometers: validation against GNSS/GPS and AERONET over three different sites in Europe

    NASA Astrophysics Data System (ADS)

    Campanelli, Monica; Mascitelli, Alessandra; Sanò, Paolo; Diémoz, Henri; Estellés, Victor; Federico, Stefano; Iannarelli, Anna Maria; Fratarcangeli, Francesca; Mazzoni, Augusto; Realini, Eugenio; Crespi, Mattia; Bock, Olivier; Martínez-Lozano, Jose A.; Dietrich, Stefano

    2018-01-01

    The estimation of the precipitable water vapour content (W) with high temporal and spatial resolution is of great interest to both meteorological and climatological studies. Several methodologies based on remote sensing techniques have recently been developed to obtain accurate and frequent measurements of this atmospheric parameter. Among them, the relatively low cost and easy deployment of sun-sky radiometers, or sun photometers, operating in several international networks, has allowed the development of automatic estimations of W from these instruments with high temporal resolution. However, the main difficulty of this methodology is the estimation of the sun-photometric calibration parameters. The objective of this paper is to validate a new methodology based on the hypothesis that the calibration parameters characterizing the atmospheric transmittance at 940 nm depend on the vertical profiles of temperature, air pressure and moisture typical of each measurement site. Obtaining the calibration parameters requires simultaneous seasonal measurements of W from independent sources, taken over a large range of solar zenith angles and covering a wide range of W. In this work, yearly GNSS/GPS datasets were used to obtain a table of photometric calibration constants, and the methodology was applied and validated at three European ESR-SKYNET network sites characterized by different atmospheric and climatic conditions: Rome, Valencia and Aosta. Results were validated against the GNSS/GPS and AErosol RObotic NETwork (AERONET) W estimations. In both validations the agreement was very high, with percentage RMSDs of about 6%, 13% and 8% for the GPS intercomparison at Rome, Aosta and Valencia, respectively, and of 8% for the AERONET comparison at Valencia. Analysing the results by W classes, the present methodology was found to clearly improve the W estimation at low W content when compared against AERONET in terms of percentage bias, reducing the bias with respect to GPS (taken as the reference) from 5.76% to 0.52%.
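
    As a hedged illustration of the calibration step (not the authors' code), the sketch below assumes the widely used parameterization of the 940 nm water-vapour band transmittance, T = exp(-a(mW)^b), and fits the site-dependent constants a and b to coincident GNSS/GPS values of W; the data arrays are invented for the example.

        import numpy as np
        from scipy.optimize import curve_fit

        # Coincident data (illustrative): optical air mass m, GPS-derived W (cm),
        # and the measured 940 nm transmittance after removing aerosol and
        # Rayleigh contributions.
        m = np.array([1.2, 1.5, 2.0, 2.8, 3.5, 4.5])
        W = np.array([0.8, 1.1, 1.6, 2.0, 2.6, 3.1])
        T_wv = np.array([0.86, 0.81, 0.74, 0.67, 0.60, 0.54])

        def transmittance(mW, a, b):
            # standard parameterization of the water vapour band transmittance
            return np.exp(-a * mW**b)

        (a_fit, b_fit), _ = curve_fit(transmittance, m * W, T_wv, p0=(0.1, 0.6))

        # Once a and b are calibrated, W follows from a measured transmittance:
        def retrieve_W(T, m, a, b):
            return (-np.log(T) / a) ** (1.0 / b) / m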

  11. Toxicant induced behavioural aberrations in larval zebrafish are dependent on minor methodological alterations.

    PubMed

    Fraser, Thomas W K; Khezri, Abdolrahman; Jusdado, Juan G H; Lewandowska-Sabat, Anna M; Henry, Theodore; Ropstad, Erik

    2017-07-05

    Alterations in zebrafish motility are used to identify neurotoxic compounds, but few studies have reported how methodology may affect results. To investigate this, we exposed embryos to bisphenol A (BPA) or tetrabromobisphenol A (TBBPA) before assessing larval motility. Embryos were maintained on a day/night cycle (DN) or in constant darkness, were reared in 96- or 24-well plates (BPA only), and behavioural tests were carried out at 96, 100, or 118 (BPA only) hours post fertilisation (hpf). We found that the prior photo-regime, larval age, and/or arena size influence behavioural outcomes in response to toxicant exposure. For example, methodology determined whether 10 μM BPA induced hyperactivity, hypoactivity, or had no behavioural effect. Furthermore, the minimum effect concentration was not consistent between different methodologies. Finally, we observed that a mechanism previously used to explain hyperactivity following BPA exposure does not appear to explain the hypoactivity observed following minor alterations in methodology. Therefore, we demonstrate how methodology can have notable implications for dose responses and behavioural outcomes in larval zebrafish motility following identical chemical exposures. As such, our results have significant consequences for human and environmental risk assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Adding value to the learning process by online peer review activities: towards the elaboration of a methodology to promote critical thinking in future engineers

    NASA Astrophysics Data System (ADS)

    Dominguez, Caroline; Nascimento, Maria M.; Payan-Carreira, Rita; Cruz, Gonçalo; Silva, Helena; Lopes, José; Morais, Maria da Felicidade A.; Morais, Eva

    2015-09-01

    Considering the results of research on the benefits and difficulties of peer review, this paper describes how teaching faculty, interested in endorsing the acquisition of communication and critical thinking (CT) skills among engineering students, have been implementing a learning methodology through online peer review activities. When introducing a new methodology, it is important to weigh the advantages found against the conditions that might have restrained the activity outcomes, thereby modulating its overall efficiency. Our results show that several factors are decisive for the success of the methodology: the use of specific and detailed orientation guidelines for CT skills, training students in how to deliver meaningful feedback, the opportunity to counter-argue, the selection of good assignment examples, and the teacher's constant monitoring of the activity. The results also address other aspects of the methodology, such as the thinking-skills evaluation tools (grades and tests) that best suit our reality. An improved methodology is proposed that takes the encountered limitations into account, offering other interested institutions the possibility to use, test, and improve it.

  13. Higher success rate with transcranial electrical stimulation of motor-evoked potentials using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery.

    PubMed

    Shigematsu, Hideki; Kawaguchi, Masahiko; Hayashi, Hironobu; Takatani, Tsunenori; Iwata, Eiichiro; Tanaka, Masato; Okuda, Akinori; Morimoto, Yasuhiko; Masuda, Keisuke; Tanaka, Yuu; Tanaka, Yasuhito

    2017-10-01

    During spine surgery, the spinal cord is electrophysiologically monitored via transcranial electrical stimulation of motor-evoked potentials (TES-MEPs) to prevent injury. Transcranial electrical stimulation of motor-evoked potential involves the use of either constant-current or constant-voltage stimulation; however, there are few comparative data available regarding their ability to adequately elicit compound motor action potentials. We hypothesized that the success rates of TES-MEP recordings would be similar between constant-current and constant-voltage stimulations in patients undergoing spine surgery. The objective of this study was to compare the success rates of TES-MEP recordings between constant-current and constant-voltage stimulation. This is a prospective, within-subject study. Data from 100 patients undergoing spinal surgery at the cervical, thoracic, or lumbar level were analyzed. The success rates of the TES-MEP recordings from each muscle were examined. Transcranial electrical stimulation with constant-current and constant-voltage stimulations at the C3 and C4 electrode positions (international "10-20" system) was applied to each patient. Compound muscle action potentials were bilaterally recorded from the abductor pollicis brevis (APB), deltoid (Del), abductor hallucis (AH), tibialis anterior (TA), gastrocnemius (GC), and quadriceps (Quad) muscles. The success rates of the TES-MEP recordings from the right Del, right APB, bilateral Quad, right TA, right GC, and bilateral AH muscles were significantly higher using constant-voltage stimulation than those using constant-current stimulation. The overall success rates with constant-voltage and constant-current stimulations were 86.3% and 68.8%, respectively (risk ratio 1.25 [95% confidence interval: 1.20-1.31]). The success rates of TES-MEP recordings were higher using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Ultrasonic-generated fluid velocity with Sovereign WhiteStar micropulse and continuous phacoemulsification.

    PubMed

    Steinert, Roger F; Schafer, Mark E

    2006-02-01

    To evaluate and compare ultrasonic turbulence created by conventional and micropulse ultrasound technology. Sonora Medical Systems, Longmont, Colorado, USA. A high-resolution digital ultrasound probe imaged the zone around a phacoemulsification tip. Doppler analysis allowed determination of flow. The fluid velocity was measured at 4 levels of ultrasound power at a constant flow, comparing the ultrasonic conditions of continuous energy to WhiteStar micropulses. In addition to the normal baseline irrigation and aspiration, fluid movement was detected directly below the phaco tip, produced by a nonlinear effect known as acoustic streaming. Acoustic streaming increased with increased phacoemulsification power for both conditions. At each of the 4 levels of power, fluid velocity away from the tip was less with micropulse technology than with continuous phacoemulsification. The demonstrated decrease in acoustic streaming flow away from the phaco tip with Sovereign WhiteStar micropulse technology compared to conventional ultrasound provides an objective explanation for clinical observations of increased stability of nuclear fragments at the tip and less turbulence in the anterior chamber during phacoemulsification. This methodology can be used to examine and compare fluid flow and turbulence under a variety of clinically relevant conditions.

  15. Methodological tools for the collection and analysis of participant observation data using grounded theory.

    PubMed

    Laitinen, Heleena; Kaunonen, Marja; Astedt-Kurki, Päivi

    2014-11-01

    To give clarity to the analysis of participant observation in nursing when implementing the grounded theory method. Participant observation (PO) is a method of collecting data that reveals the reality of daily life in a specific context. In grounded theory, interviews are the primary method of collecting data, but PO gives a distinctive insight, revealing what people are really doing instead of what they say they are doing. However, more focus is needed on the analysis of PO. An observational study was carried out to gain awareness of nursing care and its electronic documentation in four acute care wards in hospitals in Finland. Discussion of using the grounded theory method and PO as a data collection tool. The following methodological tools are discussed: an observational protocol, jotting of notes, microanalysis, the use of questioning, constant comparison, and writing and illustrating. Each tool has specific significance in collecting and analysing data, working in constant interaction. Grounded theory and participant observation supplied rich data and revealed the complexity of the daily reality of acute care. In this study, the methodological tools provided a base for the study at the research sites and outside. The process as a whole was challenging. It was time-consuming and it required rigorous and simultaneous data collection and analysis, including reflective writing. Using these methodological tools helped the researcher stay focused from data collection and analysis to building theory. Using PO as a data collection method in qualitative nursing research provides insights that cannot be seen or revealed by other data collection methods, and it is not commonly discussed in nursing research. Therefore, this paper can provide a useful tool for those who intend to use PO and grounded theory in their nursing research.

  16. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

    This thesis develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and collaboration, and risk assessment. The analysis dissects and compares three potential design methodologies, including net assessment, scenarios and…

  17. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    PubMed

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
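
    For readers unfamiliar with the baseline update rule, the sketch below shows one normalized LMS step; the paper's fuzzy modulation is reduced to a placeholder gain g, since its membership functions are not reproduced here.

        import numpy as np

        def nlms_update(w, x, d, mu=0.5, eps=1e-6, g=1.0):
            """One normalized LMS step. w: weight vector, x: regressor,
            d: desired output. g stands in for the paper's fuzzy gain,
            which shrinks the step when the plant is poorly excited."""
            e = d - np.dot(w, x)                            # a-priori output error
            w = w + (g * mu * e / (eps + np.dot(x, x))) * x
            return w, e

    Normalizing by the regressor energy keeps the effective learning rate bounded; modulating g on-line is what lets the identifier slow down when the setpoint stops exciting the plant.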

  18. Quantitative determination of band distortions in diamond attenuated total reflectance infrared spectra.

    PubMed

    Boulet-Audet, Maxime; Buffeteau, Thierry; Boudreault, Simon; Daugey, Nicolas; Pézolet, Michel

    2010-06-24

    Due to its unmatched hardness and chemical inertia, diamond offers many advantages over other materials for extreme conditions and routine analysis by attenuated total reflection (ATR) infrared spectroscopy. Its low refractive index can offer up to a 6-fold absorbance increase compared to germanium. Unfortunately, for strong bands it also results in spectral distortions compared to transmission experiments. The aim of this paper is to present a methodological approach to determine quantitatively the degree of the spectral distortions in ATR spectra. This approach requires the determination of the optical constants (refractive index and extinction coefficient) of the investigated sample. As a typical example, the optical constants of the fibroin protein of the silkworm Bombyx mori have been determined from the polarized ATR spectra obtained using both diamond and germanium internal reflection elements. The positions found for the amide I band by germanium and diamond ATR are respectively 6 and 17 cm(-1) lower than the true value determined from the k(nu) spectrum, which is calculated to be 1659 cm(-1). To determine quantitatively the effect of relevant parameters such as the film thickness and the protein concentration, various spectral simulations have also been performed. Using a thinner film probed by light polarized in the plane of incidence, and diluting the protein sample, can help in obtaining ATR spectra that are closer to their transmittance counterparts. To extend this study to any system, the ATR distortion amplitude has been evaluated using spectral simulations performed for bands of various intensities and widths. From these simulations, a simple empirical relationship has been found to estimate the band shift from the experimental band height and width, which could be of practical use for ATR users. This paper shows that the determination of optical constants provides an efficient way to recover the true spectrum shape and band frequencies of distorted ATR spectra.

  19. Weighted Ensemble Simulation: Review of Methodology, Applications, and Software

    PubMed Central

    Zuckerman, Daniel M.; Chong, Lillian T.

    2018-01-01

    The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling—the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes—protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation. PMID:28301772
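
    The heart of the WE strategy is a split/merge resampling step that holds the walker count per bin near a target while conserving total probability weight exactly; the sketch below is an illustrative implementation of that bookkeeping, not the WESTPA code.

        import random

        def resample_bin(walkers, target):
            """walkers: list of (state, weight); returns ~target walkers whose
            weights still sum to the bin's original total weight."""
            walkers = sorted(walkers, key=lambda sw: sw[1])
            # Merge: combine the two lightest walkers; the survivor is chosen
            # with probability proportional to its weight (statistically exact).
            while len(walkers) > target:
                (s1, w1), (s2, w2) = walkers[0], walkers[1]
                survivor = s1 if random.random() < w1 / (w1 + w2) else s2
                walkers = [(survivor, w1 + w2)] + walkers[2:]
                walkers.sort(key=lambda sw: sw[1])
            # Split: replicate the heaviest walker, halving its weight.
            while 0 < len(walkers) < target:
                s, w = walkers.pop()          # heaviest (list sorted ascending)
                walkers += [(s, w / 2.0), (s, w / 2.0)]
                walkers.sort(key=lambda sw: sw[1])
            return walkers

    Because splitting and merging leave the weight sum unchanged, averages over the ensemble remain unbiased, which is what permits direct estimation of rate constants.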

  1. Bayesian methodology incorporating expert judgment for ranking countermeasure effectiveness under uncertainty: example applied to at grade railroad crossings in Korea.

    PubMed

    Washington, Simon; Oh, Jutaek

    2006-03-01

    Transportation professionals are sometimes required to make difficult transportation safety investment decisions in the face of uncertainty. In particular, an engineer may be expected to choose among an array of technologies and/or countermeasures to remediate perceived safety problems when: (1) little information is known about the countermeasure effects on safety; (2) information is known but comes from different regions, states, or countries where a direct generalization may not be appropriate; (3) the technologies and/or countermeasures are relatively untested; or (4) costs prohibit the full and careful testing of each of the candidate countermeasures via before-after studies. An informed and well-considered decision based on the best possible engineering knowledge and information is imperative due to the potential impact on the numbers of human injuries and deaths that may result from these investments. This paper describes the formalization and application of a methodology to evaluate the safety benefit of countermeasures in the face of uncertainty. To illustrate the methodology, 18 countermeasures for improving safety of at grade railroad crossings (AGRXs) in the Republic of Korea are considered. Akin to "stated preference" methods in travel survey research, the methodology applies random selection and laws of large numbers to derive accident modification factor (AMF) densities from expert opinions. In a full Bayesian analysis framework, the collective opinions in the form of AMF densities (data likelihood) are combined with prior knowledge (AMF density priors) for the 18 countermeasures to obtain 'best' estimates of AMFs (AMF posterior credible intervals). The countermeasures are then compared and recommended based on the largest safety returns with minimum risk (uncertainty). To the authors' knowledge, the complete methodology is new and has not previously been applied or reported in the literature. The results demonstrate that the methodology is able to discern anticipated safety benefit differences across candidate countermeasures. For the 18 countermeasures considered in this analysis, it was found that the top three performing countermeasures for reducing crashes are in-vehicle warning systems, obstacle detection systems, and constant warning time systems.
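
    In outline, the combination step is an ordinary Bayes update on a grid of candidate AMF values; the sketch below uses placeholder Gaussian densities in place of the paper's elicited expert and prior densities.

        import numpy as np

        amf = np.linspace(0.2, 1.4, 601)   # candidate accident modification factors
        prior = np.exp(-0.5 * ((amf - 0.9) / 0.25) ** 2)       # prior (placeholder)
        likelihood = np.exp(-0.5 * ((amf - 0.7) / 0.15) ** 2)  # pooled expert density (placeholder)

        posterior = prior * likelihood
        posterior /= np.trapz(posterior, amf)   # normalize to a proper density

        # 95% credible interval from the posterior CDF
        cdf = np.cumsum(posterior) * (amf[1] - amf[0])
        lo, hi = np.interp([0.025, 0.975], cdf, amf)

    Ranking countermeasures by posterior mean while penalizing wide credible intervals reproduces the "largest safety return with minimum risk" criterion described above.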

  2. Full-Envelope Launch Abort System Performance Analysis Methodology

    NASA Technical Reports Server (NTRS)

    Aubuchon, Vanessa V.

    2014-01-01

    The implementation of a new dispersion methodology is described, which disperses abort initiation altitude or time along with all other Launch Abort System (LAS) parameters during Monte Carlo simulations. In contrast, the standard methodology assumes that an abort initiation condition is held constant (e.g., aborts initiated at altitude for Mach 1, altitude for maximum dynamic pressure, etc.) while dispersing other LAS parameters. The standard method results in large gaps in performance information due to the discrete nature of initiation conditions, while the full-envelope dispersion method provides a significantly more comprehensive assessment of LAS abort performance for the full launch vehicle ascent flight envelope and identifies performance "pinch-points" that may occur at flight conditions outside of those contained in the discrete set. The new method has significantly increased the fidelity of LAS abort simulations and confidence in the results.

  3. Decision analysis to complete diagnostic research by closing the gap between test characteristics and cost-effectiveness.

    PubMed

    Schaafsma, Joanna D; van der Graaf, Yolanda; Rinkel, Gabriel J E; Buskens, Erik

    2009-12-01

    The lack of a standard methodology in diagnostic research impedes adequate evaluation before implementation of constantly developing diagnostic techniques. We discuss the methodology of diagnostic research and underscore the relevance of decision analysis in the process of evaluation of diagnostic tests. Overview and conceptual discussion. Diagnostic research requires a stepwise approach comprising assessment of test characteristics followed by evaluation of added value, clinical outcome, and cost-effectiveness. These multiple goals are generally incompatible with a randomized design. Decision-analytic models provide an important alternative through integration of the best available evidence. Thus, critical assessment of clinical value and efficient use of resources can be achieved. Decision-analytic models should be considered part of the standard methodology in diagnostic research. They can serve as a valid alternative to diagnostic randomized clinical trials (RCTs).

  4. Tuning of Terahertz Resonances of Pyridyl Benzamide Derivatives by Electronegative Atom Substitution

    NASA Astrophysics Data System (ADS)

    Dash, Jyotirmayee; Ray, Shaumik; Devi, Nirmala; Basutkar, Nitin; Gonnade, Rajesh G.; Ambade, Ashootosh V.; Pesala, Bala

    2018-05-01

    N-(pyridin-2-yl) benzamide (Ph2AP)-based organic molecules with prominent terahertz (THz) signatures (less than 5 THz) have been synthesized. The THz resonances are tuned by substituting the most electronegative atom, fluorine, at the ortho (2F-Ph2AP), meta (3F-Ph2AP), and para (4F-Ph2AP) positions in a Ph2AP molecule. Substituting fluorine varies the charge distribution of the atoms forming the hydrogen bond; the resulting change in hydrogen-bond strength tunes the THz resonances. The tuning of the lower THz resonances of 2F-Ph2AP, 3F-Ph2AP, and 4F-Ph2AP has been explained in terms of the compliance constant (relaxed force constant). Four-molecule cluster simulations have been carried out using Gaussian09 software to calculate the compliance constants of the hydrogen bonds. Crystal structure simulations of the above molecules using CRYSTAL14 software have been carried out to understand the origin of the THz resonances. It has been observed that THz resonances are shifted to higher frequencies with stronger hydrogen bonds. The study shows that 3F-Ph2AP and 4F-Ph2AP have higher hydrogen bond strength, and hence the THz resonances originating from stretching of intermolecular hydrogen bonds are shifted to higher frequencies compared to 2F-Ph2AP. The methodology presented here will help in designing novel organic molecules by substituting various electronegative atoms in order to achieve prominent THz resonances.

  5. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE PAGES

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...

    2017-09-07

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
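
    For comparison with the corrected analytical solution, the numerical reference can be written compactly; the sketch below integrates planar Fickian diffusion with a composition-dependent coefficient D(c) by explicit finite differences (the D(c) law is an illustrative stand-in, not the study's parameterization).

        import numpy as np

        def diffuse(c, D_of_c, dx, t_end):
            """Explicit finite differences for dc/dt = d/dx(D(c) dc/dx),
            zero-flux boundaries; c is the concentration profile."""
            t = 0.0
            while t < t_end:
                D = D_of_c(c)
                dt = 0.4 * dx**2 / D.max()         # explicit stability limit
                D_face = 0.5 * (D[1:] + D[:-1])    # diffusivity at cell faces
                flux = -D_face * np.diff(c) / dx
                dcdt = np.zeros_like(c)
                dcdt[1:-1] = -np.diff(flux) / dx
                dcdt[0] = -flux[0] / dx            # zero flux at outer faces
                dcdt[-1] = flux[-1] / dx
                c = c + dt * dcdt
                t += dt
            return c

        # Plasticising component: D rises steeply with its own concentration
        D_of_c = lambda c: 1e-17 * 10.0 ** (4.0 * c)    # m^2/s, illustrative
        c0 = np.zeros(100)
        c0[:5] = 1.0                                    # surface-enriched start
        profile = diffuse(c0, D_of_c, dx=1e-8, t_end=0.5)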

  7. Simultaneous measurement of the maximum oscillation amplitude and the transient decay time constant of the QCM reveals stiffness changes of the adlayer.

    PubMed

    Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L

    2003-10-01

    Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.
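
    Extracting A(0) and tau from each 300 ms acquisition amounts to fitting an exponentially damped sinusoid to the ring-down recorded after the drive is switched off; a minimal sketch with synthetic data (not the authors' acquisition code):

        import numpy as np
        from scipy.optimize import curve_fit

        def ringdown(t, A0, tau, f, phi):
            # freely decaying oscillation of the quartz after drive switch-off
            return A0 * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

        # Synthetic transient: 5 MHz crystal sampled at 100 MS/s for 0.1 ms
        t = np.arange(0.0, 1e-4, 1e-8)
        v = ringdown(t, 1.0, 2e-5, 5.0e6, 0.3) + 0.01 * np.random.randn(t.size)

        p0 = (1.0, 1e-5, 5.0e6, 0.0)               # coarse initial guesses
        (A0, tau, f, phi), _ = curve_fit(ringdown, t, v, p0=p0)

    Tracking A0 and tau alongside f and D is what lets adsorption be distinguished from changes in the adlayer's stiffness or the surrounding liquid.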

  8. Generating Fatigue Crack Growth Thresholds with Constant Amplitude Loads

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Newman, James C., Jr.; Forman, Royce G.

    2002-01-01

    The fatigue crack growth threshold, defining crack growth as either very slow or nonexistent, has been traditionally determined with standardized load reduction methodologies. Some experimental procedures tend to induce load history effects that result in remote crack closure from plasticity. This history can affect the crack driving force, i.e. during the unloading process the crack will close first at some point along the wake, reducing the effective load at the crack tip. One way to reduce the effects of load history is to propagate a crack under constant amplitude loading. As a crack propagates under constant amplitude loading, the stress intensity factor, K, will increase, as will the crack growth rate, da/dN. A fatigue crack growth threshold test procedure is developed and experimentally validated that does not produce load history effects and can be conducted at a specified stress ratio, R.
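
    Under constant amplitude loading, the driving force grows with the crack itself; the sketch below integrates a Paris-type growth law for a centre crack to show K and da/dN rising monotonically as the crack extends (the material constants are illustrative, not measured values).

        import math

        C, m = 1e-11, 3.0    # Paris constants, illustrative (m/cycle, MPa*sqrt(m))
        dS = 60.0            # constant-amplitude stress range, MPa
        a = 0.002            # initial half crack length, m

        for cycle in range(200001):
            dK = dS * math.sqrt(math.pi * a)   # Delta-K for a centre crack (F = 1)
            a += C * dK**m                     # Paris growth increment per cycle
            if cycle % 50000 == 0:
                print(f"N={cycle:6d}  a={a*1e3:6.3f} mm  dK={dK:5.2f} MPa*sqrt(m)")

    Because the load amplitude never steps down, no plasticity-induced closure is left in the wake, which is the property the proposed threshold test exploits.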

  9. Hafnium transistor process design for neural interfacing.

    PubMed

    Parent, David W; Basham, Eric J

    2009-01-01

    A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology comprises 1-D analytical equations for specifying the threshold voltage and doping profiles, together with 1-D MIS Technical Computer Aided Design (TCAD) to design a process implementing a specific threshold voltage, which minimized simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide (HfO) films were grown and characterized for dielectric constant and fixed oxide charge at various annealing temperatures, two important design variables in threshold voltage design.
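
    The 1-D analytical step reduces to the textbook MIS threshold relation, with the measured dielectric constant and fixed oxide charge entering through the insulator capacitance; the sketch below uses illustrative values, not the paper's extracted film data.

        import math

        q, k, T = 1.602e-19, 1.381e-23, 300.0
        eps0, ni = 8.854e-12, 1.0e16            # F/m; Si intrinsic carriers per m^3
        eps_si = 11.7 * eps0

        def threshold_voltage(Na, t_ox, k_ox, Qf, Vfb=-0.9):
            """Textbook MIS V_T: Na doping (m^-3), t_ox thickness (m),
            k_ox relative permittivity, Qf fixed oxide charge (C/m^2)."""
            phi_f = (k * T / q) * math.log(Na / ni)           # Fermi potential
            Cox = k_ox * eps0 / t_ox                          # insulator capacitance per area
            Qdep = math.sqrt(4.0 * q * eps_si * Na * phi_f)   # depletion charge at threshold
            return Vfb + 2.0 * phi_f + Qdep / Cox - Qf / Cox

        # e.g. a 10 nm hafnium oxide film with k ~ 17 and modest fixed charge:
        print(threshold_voltage(Na=1e23, t_ox=10e-9, k_ox=17.0, Qf=2e-5))

    The high-k film's large Cox shrinks both the depletion and fixed-charge terms, which is why the anneal-dependent dielectric constant and Qf are the two design variables singled out above.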

  10. Warpage minimization on wheel caster by optimizing process parameters using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    In the injection moulding process, it is important to raise productivity continually while minimizing waste such as warpage defects. This study therefore concerns minimizing the warpage defect on a wheel caster part. Apart from eliminating product waste, this project also identifies the best optimization settings using response surface methodology. The research studied five parameters: A-packing pressure, B-packing time, C-mold temperature, D-melting temperature and E-cooling time. The optimization showed that packing pressure is the most significant parameter. Warpage was improved by 42.64%, from 0.6524 mm to 0.3742 mm.

  11. Model for the Effect of Fiber Bridging on the Fracture Resistance of Reinforced-Carbon-Carbon

    NASA Technical Reports Server (NTRS)

    Chan, Kwai S.; Lee, Yi-Der; Hudak, Stephen J., Jr.

    2009-01-01

    A micromechanical methodology has been developed for analyzing fiber bridging and resistance-curve behavior in reinforced-carbon-carbon (RCC) panels with a three-dimensional (3D) composite architecture and a silicon carbide (SiC) surface coating. The methodology involves treating fiber bridging traction on the crack surfaces in terms of a weight function approach and a bridging law that relates the bridging stress to the crack opening displacement. A procedure has been developed to deduce material constants in the bridging law from the linear portion of the K-resistance curve. This report describes the application of these procedures and the resulting outcomes.

  12. Pirates and Power: What Captain Jack Sparrow, His Friends, and His Foes Can Teach Us about Power Bases

    ERIC Educational Resources Information Center

    Williams, Jennifer R.

    2006-01-01

    Leadership educators are constantly looking for new and inventive ways to teach leadership theory. Because leadership educators realize principles of andragogy and experiential education work well with leadership theories, instructors find movies are a great way to infuse leadership theory with novel teaching methodology. "Movies, like…

  13. Spacecraft detumbling through energy dissipation

    NASA Technical Reports Server (NTRS)

    Fitz-Coy, Norman; Chatterjee, Anindya

    1993-01-01

    The attitude motion of a tumbling, rigid, axisymmetric spacecraft is considered. A methodology for detumbling the spacecraft through energy dissipation is presented. The differential equations governing this motion are stiff, and therefore an approximate solution, based on the variation of constants method, is developed and utilized in the analysis of the detumbling strategy. Stability of the detumbling process is also addressed.

  14. The Classroom "Is" the Newsroom: CNA: A Wire Service Journalism Training Model to Bridge the Theory versus Practice Dichotomy

    ERIC Educational Resources Information Center

    Tulloch, Christopher David; Mas i Manchon, Lluis

    2018-01-01

    Although recent journalism education literature has promoted constant innovation, creative curriculum design, and the adaptation of teaching methodologies to the requirements of a fiercely competitive media marketplace, the permanent face-off between the academy and the profession has often led to theoretical models distanced from real newsroom…

  15. Meta-Design as a Pedagogical Framework for Encouraging Student Agency and Democratizing the Classroom

    ERIC Educational Resources Information Center

    Hethrington, Christopher

    2015-01-01

    As diverse social and economic pressures are applied to post-secondary education, innovative approaches to pedagogical methodology are required. Given that the new norm in both industry and academia is that of constant change, a flexible and responsive approach is required along with a framework that empowers students with the skills to become…

  16. Critical role of morphology on the dielectric constant of semicrystalline polyolefins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Misra, Mayank; Kumar, Sanat K., E-mail: sk2794@columbia.edu; Mannodi-Kanakkithodi, Arun

    2016-06-21

    A particularly attractive method to predict the dielectric properties of materials is density functional theory (DFT). While this method is very popular, its large computational requirements allow practical treatments of unit cells with just a small number of atoms in an ordered array, i.e., in a crystalline morphology. By comparing DFT and Molecular Dynamics (MD) simulations on the same ordered arrays of functional polyolefins, we confirm that both methodologies yield identical estimates for the dipole moments and hence the ionic component of the dielectric storage modulus. Additionally, MD simulations of more realistic semi-crystalline morphologies yield estimates for this polar contribution that are in good agreement with the limited experiments in this field. However, these predictions are up to 10 times larger than those for pure crystalline simulations. Here, we show that the constraints provided by the surrounding chains significantly impede dipolar relaxations in the crystalline regions, whereas amorphous chains must sample all configurations to attain their fully isotropic spatial distributions. These results, which suggest that the amorphous phase is the dominant player in this context, argue strongly that the proper polymer morphology needs to be modeled to ensure accurate estimates of the ionic component of the dielectric constant.

  17. Implications of effluent organic matter and its hydrophilic fraction on zinc(II) complexation in rivers under strong urban pressure: aromaticity as an inaccurate indicator of DOM-metal binding.

    PubMed

    Louis, Yoann; Pernet-Coudrier, Benoît; Varrault, Gilles

    2014-08-15

    The zinc binding characteristics of dissolved organic matter (DOM) fractions from the Seine River Basin were studied after being separated and extracted according to their polarity: hydrophobic, transphilic, and hydrophilic. The applied experimental methodology was based on a determination of labile zinc species by means of differential pulse anodic stripping voltammetry (DPASV) at increasing concentrations of total zinc on a logarithmic scale and at fixed pH, ionic strength, and temperature. Fitting the DOM fractions with two discrete classes of ligands allowed determination of the conditional zinc binding constants (Ki) as well as the total ligand densities (LiT). The binding constants obtained for each DOM fraction were then compared and discussed with respect to the hydrophobic/hydrophilic nature and sample origin. Results highlighted a strong complexation of zinc to the effluent organic matter and especially the most hydrophilic fraction, which also displayed a very low specific UV absorbance. Although the biotic ligand model takes into account the quality of DOM through UV absorbance in the predictions of metal bioavailability and toxicity, this correction is not efficient for urban waters. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Nuclear Proliferation Technology Trends Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, Michael D.; Coles, Garill A.; Talbert, Robert J.

    2005-10-04

    A process is underway to develop mature, integrated methodologies to address nonproliferation issues. A variety of methodologies (both qualitative and quantitative) are being considered. All have one thing in common, a need for a consistent set of proliferation related data that can be used as a basis for application. One approach to providing a basis for predicting and evaluating future proliferation events is to understand past proliferation events, that is, the different paths that have actually been taken to acquire or attempt to acquire special nuclear material. In order to provide this information, this report describing previous material acquisition activities (obtained from open source material) has been prepared. This report describes how, based on an evaluation of historical trends in nuclear technology development, conclusions can be reached concerning: (1) The length of time it takes to acquire a technology; (2) The length of time it takes for production of special nuclear material to begin; and (3) The type of approaches taken for acquiring the technology. In addition to examining time constants, the report is intended to provide information that could be used to support the use of the different non-proliferation analysis methodologies. Accordingly, each section includes: (1) Technology description; (2) Technology origin; (3) Basic theory; (4) Important components/materials; (5) Technology development; (6) Technological difficulties involved in use; (7) Changes/improvements in technology; (8) Countries that have used/attempted to use the technology; (9) Technology Information; (10) Acquisition approaches; (11) Time constants for technology development; and (12) Required Concurrent Technologies.

  19. Comparative assessment of smallholder sustainability using an agricultural sustainability framework and a yield based index insurance: A case study

    NASA Astrophysics Data System (ADS)

    Moshtaghi, Mehrdad; Adla, Soham; Pande, Saket; Disse, Markus; Savenije, Hubert

    2017-04-01

    The concept of sustainability is central to smallholder agriculture, as subsistence farming is constantly impacted by livelihood insecurity and is constrained by access to capital, water technology and alternative employment opportunities. This study compares two approaches that aim to quantify smallholder sustainability but differ in their underlying principles, methodologies for assessment and reporting, and applications. Yield-index-based insurance can protect smallholder agriculture and move it towards greater economic sustainability, because smallholder income depends on selling crops and the insurance scheme is based on crop yields. In this research, the insurance trigger is set on the basis of yields in previous years. Crop yields are calculated every year through socio-hydrological modelling, and the smallholder receives an indemnity when the crop yield falls below the average of the previous five years (a crop failure). The FAO Sustainability Assessment of Food and Agriculture (SAFA) is an inclusive and comprehensive framework for sustainability assessment in the food and agricultural sector. It follows the UN definition of the 4 dimensions of sustainability (good governance, environmental integrity, economic resilience and social well-being) and includes 21 themes and 58 sub-themes with a multi-indicator approach. The direct sustainability corresponding to the FAO SAFA economic resilience dimension is compared with the indirect notion of sustainability derived from the yield-based index insurance. A semi-synthetic comparison is conducted to understand the differences in the underlying principles, methodologies and application of the two approaches. Both approaches are applied to data from smallholder regions of Marathwada in Maharashtra (India), which experienced a severe rise in farmer suicides in the 2000s, attributed to a combination of socio-hydrological factors.
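
    The trigger logic described above is compact enough to state directly; a minimal sketch, with the indemnity scaling left as an assumed placeholder:

        def indemnity(yields, price, scale=1.0):
            """Pay when the current yield falls below the mean of the five
            preceding years; yields is a chronological list, newest last."""
            *history, current = yields[-6:]
            trigger = sum(history) / len(history)
            shortfall = max(0.0, trigger - current)
            return scale * price * shortfall

        # A crop failure year against a five-year average of 2.0 t/ha:
        print(indemnity([2.1, 1.9, 2.2, 2.0, 1.8, 1.2], price=300.0))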

  20. Getting through to circadian oscillators: why use constant routines?

    NASA Technical Reports Server (NTRS)

    Duffy, Jeanne F.; Dijk, Derk-Jan

    2002-01-01

    Overt 24-h rhythmicity is composed of both exogenous and endogenous components, reflecting the product of multiple (periodic) feedback loops with a core pacemaker at their center. Researchers attempting to reveal the endogenous circadian (near 24-h) component of rhythms commonly conduct their experiments under constant environmental conditions. However, even under constant environmental conditions, rhythmic changes in behavior, such as food intake or the sleep-wake cycle, can contribute to observed rhythmicity in many physiological and endocrine variables. Assessment of characteristics of the core circadian pacemaker and its direct contribution to rhythmicity in different variables, including rhythmicity in gene expression, may be more reliable when such periodic behaviors are eliminated or kept constant across all circadian phases. This is relevant for the assessment of the status of the circadian pacemaker in situations in which the sleep-wake cycle or food intake regimes are altered because of external conditions, such as in shift work or jet lag. It is also relevant for situations in which differences in overt rhythmicity could be due to changes in either sleep oscillatory processes or circadian rhythmicity, such as advanced or delayed sleep phase syndromes, in aging, or in particular clinical conditions. Researchers studying human circadian rhythms have developed constant routine protocols to assess the status of the circadian pacemaker in constant behavioral and environmental conditions, whereas this technique is often thought to be unnecessary in the study of animal rhythms. In this short review, the authors summarize constant routine methodology and what has been learned from constant routines and argue that animal and human circadian rhythm researchers should (continue to) use constant routines as a step on the road to getting through to central and peripheral circadian oscillators in the intact organism.

  1. Relativistic force field: parametric computations of proton-proton coupling constants in (1)H NMR spectra.

    PubMed

    Kutateladze, Andrei G; Mukhina, Olga A

    2014-09-05

    Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
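
    The parametrization step is, at bottom, a linear regression of computed Fermi contacts against experimental couplings performed separately for each atom-type/hybridization class; a minimal sketch with invented data:

        import numpy as np

        def fit_scaling(fc, J_exp):
            """Least-squares J = a*fc + b for one atom-type class."""
            a, b = np.polyfit(fc, J_exp, 1)
            resid = a * np.asarray(fc) + b - np.asarray(J_exp)
            return a, b, np.sqrt(np.mean(resid**2))

        # e.g. sp3 CH-CH pairs: computed Fermi contacts vs experimental J (Hz)
        fc = [4.1, 6.8, 9.5, 11.2, 2.3]
        J_exp = [3.9, 6.5, 9.3, 11.0, 2.0]
        a, b, rmsd = fit_scaling(fc, J_exp)

    Keeping one (a, b) pair per class is what makes the scheme "not unlike a force field": the physics stays in the cheap Fermi contact calculation, and the class-wise regression absorbs the systematic errors.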

  2. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656

  3. Improving core outcome set development: qualitative interviews with developers provided pointers to inform guidance.

    PubMed

    Gargon, Elizabeth; Williamson, Paula R; Young, Bridget

    2017-06-01

    The objective of the study was to explore core outcome set (COS) developers' experiences of their work to inform methodological guidance on COS development and identify areas for future methodological research. Semistructured, audio-recorded interviews with a purposive sample of 32 COS developers. Analysis of transcribed interviews was informed by the constant comparative method and framework analysis. Developers found COS development to be challenging, particularly in relation to patient participation and accessing funding. Their accounts raised fundamental questions about the status of COS development and whether it is consultation or research. Developers emphasized how the absence of guidance had affected their work and identified areas where guidance or evidence about COS development would be useful including, patient participation, ethics, international development, and implementation. They particularly wanted guidance on systematic reviews, Delphi, and consensus meetings. The findings raise important questions about the funding, status, and process of COS development and indicate ways that it could be strengthened. Guidance could help developers to strengthen their work, but over specification could threaten quality in COS development. Guidance should therefore highlight common issues to consider and encourage tailoring of COS development to the context and circumstances of particular COS. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Selection of ionic liquids for enhancing the gas solubility of volatile organic compounds.

    PubMed

    Gonzalez-Miquel, Maria; Palomar, Jose; Rodriguez, Francisco

    2013-01-10

    A systematic thermodynamic analysis has been carried out for selecting cations and anions to enhance the absorption of volatile organic compounds (VOCs) at low concentration in gaseous streams by ionic liquids (ILs), using the COSMO-RS methodology. The predictive capability of the computational procedure was validated by comparing experimental and COSMO-RS calculated Henry's law constant data over a sample of 125 gaseous solute-IL systems. For more than 2400 solute-IL mixtures evaluated, including 9 solutes and 270 ILs, it was found that the lower the activity coefficient at infinite dilution (γ(∞)) of solutes in the ILs, the more exothermic the excess enthalpy (H(E)) of the equimolar IL-solute mixtures. Then, the solubility of a representative sample of VOC solutes with very different chemical nature was screened in a wide number of ILs using the COSMO-RS methodology by means of the γ(∞) and H(E) parameters, establishing criteria to select the IL structures that promote favorable solute-solvent intermolecular interactions. As a result of this analysis, a classification of VOCs with respect to their potential solubility in ILs was proposed, providing insights for rationally selecting the cationic and anionic species for the possible development of absorption treatments of VOC pollutants based on IL systems.
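
    The link between the screening parameter and the measured gas solubility is the standard relation K_H = γ(∞) · p_sat under the usual Raoult reference state; a one-function sketch with illustrative numbers:

        def henry_constant(gamma_inf, p_sat):
            """Henry's law constant (units of p_sat) of a solute in an IL from
            its infinite-dilution activity coefficient and vapour pressure."""
            return gamma_inf * p_sat

        # Lower gamma_inf -> lower K_H -> higher VOC solubility in the IL
        print(henry_constant(gamma_inf=0.35, p_sat=12.3))   # kPa, illustrative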

  5. Non-destructive tests for railway evaluation: Detection of fouling and joint interpretation of GPR and track geometric parameters - COST Action TU1208

    NASA Astrophysics Data System (ADS)

    Solla, Mercedes; Fontul, Simona; Marecos, Vânia; Loizos, Andreas

    2016-04-01

    In recent years, high-performance railway lines have increased in both number and capability. Like all types of infrastructure, railways have to maintain proper behaviour during their entire life cycle. This work focuses on the GPR method and its capability to detect defects in both the infrastructure and superstructure of railways. Different GPR systems and frequency antennas (air-coupled with antennas of 1.0 and 1.8 GHz, and ground-coupled with antennas of 1.0 and 2.3 GHz) were compared to establish the best procedures. For the assessment of the ground conditions, both GPR systems were used in combination with Falling Weight Deflectometer (FWD) load tests, in order to evaluate the bearing capacity of the subgrade. Moreover, Light Falling Weight Deflectometer (LFWD) measurements were performed to validate the interpretation of the damaged areas identified from GPR and FWD tests. Finally, to corroborate the joint interpretation of GPR and FWD-LFWD, drill cores were extracted in the damaged areas identified based on the field data. Comparing all the data, a good agreement was obtained between the methods when identifying both anomalous deflections and reflections. It was also demonstrated that ground-coupled systems have clear advantages compared to air-coupled systems, since these antennas provide both better signal penetration and the vertical resolution to detect fine details like cracking. Regarding the assessment of thickness, three different high-speed track infrastructure solutions were constructed in a physical model, using asphalt as the subballast layer. Four different antennas were used, two ground- and two air-coupled systems. Two different methodologies were used to calibrate the velocity of wave propagation: coring and metal plate. Comparing the results obtained, it was observed that the ground-coupled system provided higher values of wave velocity than the air-coupled system. The velocity values were also obtained by the amplitude (metal plate) method with the air-coupled system; these velocity values were similar to those obtained with the ground-coupled system when using the coring method. Some laboratory tests were also performed in this work to evaluate the dielectric constants for different levels of ballast fouling (0, 7.5 and 15%). The effect of the presence of water on the dielectric constant was also evaluated by simulating different water contents: 5.5, 10 and 14%. Different GPR systems and configurations were used. The results demonstrated that dielectric values increase with increasing fouling. The dielectric constants also increase with increasing water content. However, the analysis of all the results revealed that the values are more sensitive to the fouling level than to the water content variation. The dielectric constants obtained with a frequency of 1.0 GHz were slightly lower than those obtained with the higher frequencies of 1.8 and 2.3 GHz. Additionally, the dielectric constants obtained for all the measurements, with increasing fouling conditions and water contents, at a frequency of 1.0 GHz were also different. Thus, the dielectric constant values obtained with the ground-coupled antenna were slightly lower than those obtained with the air-coupled antenna.
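
    Both calibration routes reduce to short formulas; the sketch below implements the metal-plate (surface reflection) estimate of the relative permittivity for an air-coupled antenna and the layer thickness from two-way travel time (the amplitudes and times are illustrative).

        C0 = 0.2998  # free-space wave speed, m/ns

        def eps_r_metal_plate(A_surface, A_metal):
            """Relative permittivity from the surface reflection amplitude,
            normalized by the reflection off a metal plate (perfect reflector)."""
            R = A_surface / A_metal
            return ((1 + R) / (1 - R)) ** 2

        def layer_thickness(t_ns, eps_r):
            """Thickness from two-way travel time and relative permittivity."""
            v = C0 / eps_r ** 0.5
            return v * t_ns / 2.0

        print(eps_r_metal_plate(0.35, 1.0))          # -> ~4.3
        print(layer_thickness(t_ns=6.0, eps_r=4.3))  # -> ~0.43 m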

  6. A methodologic approach for normalizing angular work and velocity during isotonic and isokinetic eccentric training.

    PubMed

    Guilhem, Gaël; Cornu, Christophe; Guével, Arnaud

    2012-01-01

    Resistance exercise training is commonly performed against a constant external load (isotonic) or at a constant velocity (isokinetic). Researchers comparing the effectiveness of isotonic and isokinetic resistance-training protocols need to equalize the mechanical stimulus (work and velocity) applied. To examine whether the standardization protocol could be adjusted and applied to an eccentric training program. Controlled laboratory study. Controlled research laboratory. Twenty-one male sport science students (age = 20.6 ± 1.5 years, height = 178.0 ± 4.0 cm, mass = 74.5 ± 9.1 kg). Participants performed 9 weeks of isotonic (n = 11) or isokinetic (n = 10) eccentric training of the knee extensors that was designed so they would perform the same amount of angular work at the same mean angular velocity. Angular work and angular velocity. The isotonic and isokinetic groups performed the same total amount of work (-185.2 ± 6.5 kJ and -184.4 ± 8.6 kJ, respectively) at the same angular velocity (21 ± 1°/s and 22°/s, respectively) with the same number of repetitions (8.0 and 8.0, respectively). Bland-Altman analysis showed that work (bias = 2.4%) and angular velocity (bias = 0.2%) were equalized over 9 weeks between the modes of training. The procedure developed allows angular work and velocity to be standardized over 9 weeks of isotonic and isokinetic eccentric training of the knee extensors. This method could be useful in future studies in which researchers compare neuromuscular adaptations induced by each training mode with respect to rehabilitating patients after musculoskeletal injury.
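
    The two quantities being equalized follow directly from the dynamometer traces; a minimal sketch (arrays invented for the example) of per-repetition angular work and mean angular velocity:

        import numpy as np

        def work_and_velocity(torque, angle_deg, t):
            """Angular work (J) and mean angular velocity (deg/s) of one
            repetition from sampled torque (N*m), joint angle (deg), time (s)."""
            theta = np.radians(angle_deg)
            work = np.trapz(torque, theta)        # W = integral of tau dtheta
            mean_vel = abs(angle_deg[-1] - angle_deg[0]) / (t[-1] - t[0])
            return work, mean_vel

        t = np.linspace(0.0, 3.0, 300)
        angle = np.linspace(30.0, 95.0, 300)      # eccentric knee extension range
        torque = -200.0 * np.ones(300)            # resisting (eccentric) torque
        print(work_and_velocity(torque, angle, t))   # negative work, ~22 deg/s

    Summing the per-repetition work across sessions and adjusting the number of repetitions is one straightforward way to keep the two training modes matched, as the abstract's near-identical totals illustrate.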

  7. Two-phase simulations of the full load surge in Francis turbines

    NASA Astrophysics Data System (ADS)

    Wack, J.; Riedelbauch, S.

    2016-11-01

    At off-design conditions, Francis turbines experience cavitation, which may reduce the power output and can cause severe damage in the machine. Certain conditions can cause self-excited oscillations of the vortex rope in the draft tube at the full load operating point. For the presented work, two-phase simulations are carried out at model scale on a domain ranging from the inlet of the spiral case to the outlet of the draft tube. At different locations, wall pressure measurements are available and compared to the simulation results. Furthermore, the dynamics of the cavity volume in the draft tube cone and at the trailing edge of the runner blades are investigated by comparison with high-speed visualizations. To account for the self-excited behaviour, proper boundary conditions need to be set. In this work, the focus lies on the treatment of the boundary condition at the inlet. In a first step, the dynamic behaviour of the cavity regions is investigated using a constant mass flow. Thereafter, oscillations of the total pressure and mass flow rate are prescribed using various frequencies and amplitudes. This methodology makes it possible to examine the response of the cavity dynamics to different excitations. It can be observed that setting a constant mass flow boundary condition is not suitable to account for the self-excited behaviour. Prescribing the total pressure has the result that the frequency of the vapour volume oscillation is the same as the frequency of the excitation signal. Contrary to that, for an excitation with a mass flow boundary condition, the response of the system is not equal to the excitation.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuniga-Gutierrez, Bernardo, E-mail: bzuniga.51@gmail.com; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H¹²C–¹²CH–DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  9. Concepts and Methodology for Labour Market Forecasts by Occupation and Qualification in the Context of a Flexible Labour Market.

    ERIC Educational Resources Information Center

    Borghans, Lex; de Grip, Andries; Heijke, Hans

    The problem of planning and making labor market forecasts by occupation and qualification in the context of a constantly changing labor market was examined. The examination focused on the following topics: assumptions, benefits, and pitfalls of the labor requirement model of projecting future imbalances between labor supply and demand for certain…

  10. Eigenspace Design of Helicopter Flight Control Systems

    DTIC Science & Technology

    1990-11-01

    … control laws. The methodology detailed in this report allows the designer to synthesize control laws which result in desirable response types such as attitude … it is simple to relate the desired frequency response characteristics to the natural frequencies and damping factors or the time constants of the …

  11. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

    Traditional approaches to modeling microgrids capture the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open- and closed-loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover-McFarlane controller, a gain-scheduled Glover-McFarlane controller, and a bumpless transfer controller, which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain-scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant-gain droop controllers with Glover-McFarlane controllers and shows a clear advantage of the Glover-McFarlane approach.
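    For context, the baseline the advanced controllers are compared against is the conventional constant-gain droop law. A minimal sketch (Python), with illustrative gains and setpoints that are assumptions rather than values from this work:

```python
# Minimal sketch of P-f / Q-V droop: references fall linearly with measured
# real and reactive power, emulating a synchronous machine. Gains assumed.

def droop(p_meas: float, q_meas: float,
          f0: float = 60.0, v0: float = 1.0,
          p0: float = 0.0, q0: float = 0.0,
          m_p: float = 0.01, n_q: float = 0.05) -> tuple[float, float]:
    """Return (frequency, voltage) references for one inverter."""
    f_ref = f0 - m_p * (p_meas - p0)   # P-f droop
    v_ref = v0 - n_q * (q_meas - q0)   # Q-V droop
    return f_ref, v_ref

print(droop(p_meas=0.8, q_meas=0.2))  # -> (59.992, 0.99) in per-unit/Hz
```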

  12. Tethered particle analysis of supercoiled circular DNA using peptide nucleic acid handles.

    PubMed

    Norregaard, Kamilla; Andersson, Magnus; Nielsen, Peter Eigil; Brown, Stanley; Oddershede, Lene B

    2014-09-01

    This protocol describes how to monitor individual naturally supercoiled circular DNA plasmids bound via peptide nucleic acid (PNA) handles between a bead and a surface. The protocol was developed for single-molecule investigation of the dynamics of supercoiled DNA, and it allows the investigation of both the dynamics of the molecule itself and of its interactions with a regulatory protein. Two bis-PNA clamps designed to bind with extremely high affinity to predetermined homopurine sequence sites in supercoiled DNA are prepared: one conjugated with digoxigenin for attachment to an anti-digoxigenin-coated glass cover slide, and one conjugated with biotin for attachment to a submicron-sized streptavidin-coated polystyrene bead. Plasmids are constructed, purified and incubated with the PNA handles. The dynamics of the construct is analyzed by tracking the tethered bead using video microscopy: less supercoiling results in more movement, and more supercoiling results in less movement. In contrast to other single-molecule methodologies, the current methodology allows for studying DNA in its naturally supercoiled state with constant linking number and constant writhe. The protocol has potential for use in studying the influence of supercoils on the dynamics of DNA and its associated proteins, e.g., topoisomerase. The procedure takes ~4 weeks.

  13. On the Lennard-Jones and Devonshire theory for solid state thermodynamics

    NASA Astrophysics Data System (ADS)

    Lustig, Rolf

    2017-06-01

    The Lennard-Jones and Devonshire theory is developed into a self-consistent scheme for essentially complete thermodynamic information. The resulting methodology is compared with molecular simulation of the Lennard-Jones system in the face-centred-cubic solid state over an extensive range of state points. The thermal and caloric equations of state are in almost perfect agreement along the entire fluid-solid coexistence lines over more than six orders of magnitude in pressure. For homogeneous densities greater than twice the solid triple point density, the theory is essentially exact for derivatives of the Helmholtz energy. However, the fluid-solid phase equilibria are in disagreement with simulation. It is shown that the theory is in error by an additive constant to the Helmholtz energy A/(NkBT). Empirical inclusion of the error term makes all fluid-solid equilibria indistinguishable from exact results. Some arguments about the origin of the error are given.

  14. Pilot scale intensification of rubber seed (Hevea brasiliensis) oil via chemical interesterification using hydrodynamic cavitation technology.

    PubMed

    Bokhari, Awais; Yusup, Suzana; Chuah, Lai Fatt; Klemeš, Jiří Jaromír; Asif, Saira; Ali, Basit; Akbar, Majid Majeed; Kamil, Ruzaimah Nik M

    2017-10-01

    Chemical interesterification of rubber seed oil has been investigated for four different designed orifice devices in a pilot scale hydrodynamic cavitation (HC) system. Upstream pressures within 1-3.5 bar induced cavities to intensify the process. The optimal orifice plate geometry was found to be a plate with 21 holes of 1 mm diameter at 3 bar inlet pressure. The optimisation results of interesterification were revealed by response surface methodology: methyl acetate to oil molar ratio of 14:1, catalyst amount of 0.75 wt.% and reaction time of 20 min at 50 °C. HC was compared to mechanical stirring (MS) at the optimised values. The reaction rate constant and the frequency factor of HC were 3.4-fold and 3.2-fold higher, respectively, than those of MS. The interesterified product was characterised following the EN 14214 and ASTM D 6751 international standards. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Optical Sensing of the Fatigue Damage State of CFRP under Realistic Aeronautical Load Sequences

    PubMed Central

    Zuluaga-Ramírez, Pablo; Arconada, Álvaro; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    We present an optical sensing methodology to estimate the fatigue damage state of structures made of carbon fiber reinforced polymer (CFRP) by measuring variations in surface roughness. Variable amplitude loads (VAL), which represent realistic loads during aeronautical missions of fighter aircraft (FALSTAFF), were applied to coupons until failure. Stiffness degradation and surface roughness variations were measured during the life of the coupons, giving a Pearson correlation of 0.75 between the two variables. The data were compared with a previous study for Constant Amplitude Load (CAL), with similar results. The conclusions suggest that surface roughness measured in strategic zones is a useful technique for structural health monitoring of CFRP structures, and that it is independent of the type of load applied. Surface roughness can be measured in the field by optical techniques such as speckle, confocal profilometers and interferometry, among others. PMID:25760056
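    The reported statistic is a plain Pearson correlation between the two measured series. A minimal sketch (Python) on synthetic placeholder data, not the study's measurements:

```python
# Minimal sketch: correlate stiffness degradation with surface roughness.
import numpy as np

degradation = 1 - np.array([1.00, 0.97, 0.93, 0.90, 0.85, 0.78])  # 1 - E/E0
roughness = np.array([0.80, 0.86, 0.90, 0.97, 1.05, 1.20])        # Ra, a.u.

r = np.corrcoef(degradation, roughness)[0, 1]
print(f"Pearson r = {r:.2f}")  # the study reports r = 0.75 on its real data
```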

  16. In-Service Physical Educators' Experiences of Online Adapted Physical Education Endorsement Courses.

    PubMed

    Sato, Takahiro; Haegele, Justin A; Foot, Rachel

    2017-04-01

    The purpose of this study was to investigate in-service physical education (PE) teachers' experiences during online adapted physical education (APE) graduate courses. Based on andragogy theory (adult learning theory), we employed a descriptive qualitative methodology using an explanatory case study design. The participants (6 female and 3 male) were in-service PE teachers enrolled in an online graduate APE endorsement program. Data collection included journal reflection reports and face-to-face interviews. A constant comparative method was used to interpret the data. Three interrelated themes emerged from the participants' narratives. The first theme, instructor communication, described the advantages and disadvantages the participants perceived regarding communication while enrolled in the online APE graduate courses. The second theme, bulletin board discussion experiences, described participants' perceptions of the use of the bulletin board discussion forum. Lastly, the final theme, assessment experiences, described how the participants learned knowledge and skills related to assessment and evaluation through the online courses.

  17. Women's bleeding patterns: ability to recall and predict menstrual events. World Health Organization Task Force on Psychosocial Research in Family, Planning, Special Programme of Research, Development and Research Training in Human Reproduction.

    PubMed

    1981-01-01

    Objective records of the occurrence of menstrual bleeding were compared with women's subjective assessments of the timing and duration of these events. The number of days a woman experienced bleeding during each episode was relatively constant; however, the length of the bleeding episode varied greatly among the 13 cultures studied. A greater understanding of menstrual patterns is possible if the pattern is seen as a succession of discrete events rather than as a whole. A more careful use of terminology relating to these discrete events would provide greater understanding of menstruation for the woman concerned and those advising her. The methodology employed in the collection of data about menstrual events among illiterate women is described, and suggestions are given as to how such information can be most efficiently obtained.

  18. Integrating a Patient-Controlled Admission Program Into Mental Health Hospital Service: A Multicenter Grounded Theory Study.

    PubMed

    Ellegaard, Trine; Bliksted, Vibeke; Mehlsen, Mimi; Lomborg, Kirsten

    2018-05-01

    Patient-controlled admissions (PCAs) enable mental health patients, by means of a contract, to initiate an admission at a mental health hospital unit without going through traditional admission procedures. This study was part of a 3-year Danish multicenter project in which we explored how mental health professionals experienced and managed the implementation of a PCA program. The methodology was grounded theory and the sample included 26 participants. We performed a constant comparative analysis to explore the concerns, attitudes, and strategies of mental health professionals. We developed a model of how the mental health professionals strove to integrate PCA into clinical practice. The process was motivated by the idea of establishing a partnership with patients and involved two interrelated strategies for managing (a) the patient-related duties and (b) the admission contracts. The professionals moved from a phase of professional discomfort to a phase of professional awareness, and ended up with professional comprehension.

  19. Feedback linearization based control of a variable air volume air conditioning system for cooling applications.

    PubMed

    Thosar, Archana; Patra, Amit; Bhattacharyya, Souvik

    2008-07-01

    The design of a nonlinear control system for a Variable Air Volume Air Conditioning (VAVAC) plant through feedback linearization is presented in this article. VAVAC systems attempt to reduce building energy consumption while maintaining the primary role of air conditioning. The temperature of the space is maintained at a constant level by establishing a balance between the cooling load generated in the space and the air supply delivered to meet the load. The dynamic model of a VAVAC plant is derived and formulated as a MIMO bilinear system. Feedback linearization is applied for decoupling and linearization of the nonlinear model. Simulation results for a laboratory-scale plant are presented to demonstrate the potential of this methodology to maintain comfort while achieving energy-optimal performance. Results obtained with a conventional PI controller and a feedback linearizing controller are compared, and the superiority of the proposed approach is clearly established.
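    The core idea can be illustrated on a toy model. The sketch below (Python) applies input-output feedback linearization to a scalar bilinear plant standing in for the zone-temperature dynamics; the plant coefficients, gains and setpoints are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: cancel the bilinearity of xdot = a*x + b*x*u so the
# closed loop behaves as the linear system xdot = v.

def fbl_control(x: float, v: float, a: float = -0.1, b: float = 0.5) -> float:
    """Feedback-linearizing input, valid away from the singularity x = 0."""
    return (v - a * x) / (b * x)

x, x_ref, dt = 24.0, 22.0, 0.1           # initial state, setpoint, time step
for _ in range(50):
    v = -2.0 * (x - x_ref)               # outer loop: desired linear dynamics
    u = fbl_control(x, v)
    x += (-0.1 * x + 0.5 * x * u) * dt   # plant integration (explicit Euler)
print(round(x, 2))                       # approaches 22.0
```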

  20. The effects of physical aging at elevated temperatures on the viscoelastic creep on IM7/K3B

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Feldman, Mark

    1994-01-01

    Physical aging at elevated temperature of the advanced composite IM7/K3B was investigated through the use of creep compliance tests. Testing consisted of short-term isothermal creep/recovery tests, with the creep segments performed at constant load. The matrix-dominated transverse tensile and in-plane shear behavior were measured at temperatures ranging from 200 to 230 C. Through the use of time-based shifting procedures, the aging shift factors, shift rates and momentary master curve parameters were found at each temperature. These material parameters were used as input to a predictive methodology based upon effective time theory and linear viscoelasticity combined with classical lamination theory. Long-term creep compliance test data were compared to predictions to verify the method. The model was then used to predict the long-term creep behavior for several general laminates.

  1. Quantifying driver's field-of-view in tractors: methodology and case study.

    PubMed

    Gilad, Issachar; Byran, Eyal

    2015-01-01

    When driving a car, visual awareness is important for operating and controlling the vehicle. Operating a tractor is even more complex, because the driving is always accompanied by another task (e.g., ploughing) that demands constant changes of body posture to achieve the needed Field-of-View (FoV). The cockpit must therefore be well designed to provide the best FoV. Today, the driver's FoV is analyzed mostly by computer simulations of a cockpit model with a Digital Human Model (DHM) positioned inside. The outcome is an 'Eye view' that displays what the DHM 'sees'. This paper suggests a new approach that adds quantitative information to the current display, presented on three tractor models as case studies. Based on the results, the design can be modified. This may assist the engineer in analyzing, comparing and improving the design to better address the driver's needs.

  2. An efficient and provable secure revocable identity-based encryption scheme.

    PubMed

    Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia

    2014-01-01

    Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
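    The complete subtree method mentioned above determines, at each key update, the smallest set of subtree roots covering exactly the non-revoked users, so key-update material is published only for those nodes. A minimal sketch (Python); the heap-style node labels and toy tree size are illustrative, not the paper's construction:

```python
# Minimal sketch of the complete subtree (CS) cover computation.

def cover(node: int, depth: int, max_depth: int, revoked: set[int]) -> list[int]:
    """Return subtree roots (root = 1, children 2n and 2n+1) whose leaves
    are all non-revoked; recurse into any subtree holding a revoked leaf."""
    if depth == max_depth:                       # leaf level
        return [] if node in revoked else [node]
    span = 1 << (max_depth - depth)              # number of leaves below
    first_leaf = node << (max_depth - depth)
    if not any(first_leaf <= r < first_leaf + span for r in revoked):
        return [node]                            # whole subtree is clean
    return (cover(2 * node, depth + 1, max_depth, revoked)
            + cover(2 * node + 1, depth + 1, max_depth, revoked))

# 8 users are the leaves 8..15 of a depth-3 tree; revoke the user at leaf 9.
print(cover(1, 0, 3, revoked={9}))  # -> [8, 5, 3]: key updates for 3 nodes
```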

  3. Unified nonlinear analysis for nonhomogeneous anisotropic beams with closed cross sections

    NASA Technical Reports Server (NTRS)

    Atilgan, Ali R.; Hodges, Dewey H.

    1991-01-01

    A unified methodology for geometrically nonlinear analysis of nonhomogeneous, anisotropic beams is presented. A 2D cross-sectional analysis and a nonlinear 1D global deformation analysis are derived from the common framework of a 3D, geometrically nonlinear theory of elasticity. The only restrictions are that the strain and local rotation are small compared to unity and that warping displacements are small relative to the cross-sectional dimensions. It is concluded that the warping solutions can be affected by large deformation and that this could alter the incremental stiffness of the section. It is shown that sectional constants derived from the published linear analysis can be used in the present nonlinear 1D analysis governing the global deformation of the beam, which is based on intrinsic equations for nonlinear beam behavior. Excellent correlation is obtained with published experimental results for both isotropic and anisotropic beams undergoing large deflections.

  4. Filter design for cancellation of baseline-fluctuation in needle EMG recordings.

    PubMed

    Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J

    2006-01-01

    Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as it may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology is to estimate the spectral content of the BLF, and then to use this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively considers the baseline to be a constant potential shift (without any fluctuation). Application of the technique to real and simulated EMG signals shows the superior performance of our approach in terms of both visual inspection and the merit figures.
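    The final filtering step described above can be sketched as follows (Python); the sampling rate and estimated BLF cutoff are assumptions for illustration, not the paper's values:

```python
# Minimal sketch: high-pass FIR filtering of the estimated BLF band.
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 10_000      # sampling rate [Hz], typical for needle EMG (assumed)
f_blf = 15.0     # estimated upper edge of the BLF spectrum [Hz] (assumed)

# Odd tap count gives a Type I linear-phase filter suitable for high-pass.
taps = firwin(numtaps=501, cutoff=f_blf, fs=fs, pass_zero=False)

def remove_blf(emg: np.ndarray) -> np.ndarray:
    """Zero-phase filtering so EMG waveform morphology is not distorted."""
    return filtfilt(taps, [1.0], emg)

t = np.arange(fs) / fs
emg = np.random.randn(fs) + 0.5 * np.sin(2 * np.pi * 2.0 * t)  # signal + BLF
clean = remove_blf(emg)
```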

  5. Magnetic Resonance Fingerprinting

    PubMed Central

    Ma, Dan; Gulani, Vikas; Seiberlich, Nicole; Liu, Kecheng; Sunshine, Jeffrey L.; Duerk, Jeffrey L.; Griswold, Mark A.

    2013-01-01

    Summary Magnetic Resonance (MR) is an exceptionally powerful and versatile measurement technique. The basic structure of an MR experiment has remained nearly constant for almost 50 years. Here we introduce a novel paradigm, Magnetic Resonance Fingerprinting (MRF) that permits the non-invasive quantification of multiple important properties of a material or tissue simultaneously through a new approach to data acquisition, post-processing and visualization. MRF provides a new mechanism to quantitatively detect and analyze complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to specifically identify the presence of a target material or tissue, which will increase the sensitivity, specificity, and speed of an MR study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern recognition algorithm, MRF inherently suppresses measurement errors and thus can improve accuracy compared to previous approaches. PMID:23486058
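    The pattern-recognition step MRF relies on can be sketched compactly: each measured signal evolution is matched to the precomputed dictionary entry with the largest normalized inner product. A minimal sketch (Python) on a synthetic placeholder dictionary, not actual Bloch-simulated evolutions:

```python
# Minimal sketch of MRF dictionary matching on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((1000, 300))   # 1000 candidate (T1,T2) entries
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(signal: np.ndarray) -> int:
    """Index of the dictionary entry with the largest normalized inner
    product with the measured signal evolution."""
    return int(np.argmax(dictionary @ (signal / np.linalg.norm(signal))))

voxel = dictionary[123] + 0.1 * rng.standard_normal(300)  # noisy measurement
print(match(voxel))  # -> 123: matching tolerates measurement error
```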

  6. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
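    A minimal sketch (Python) of the modeling step described, using a synthetic residual and assumed ARMA(1,1) orders rather than the identified flight-test model:

```python
# Minimal sketch: fit an ARMA process to a detrended residual by maximum
# likelihood. The residual here is synthetic AR(1)-coloured noise.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
residual = rng.standard_normal(500)      # stand-in for the range residual
for k in range(1, residual.size):
    residual[k] += 0.6 * residual[k - 1]

res = ARIMA(residual, order=(1, 0, 1)).fit()  # ARMA(1,1), no differencing
print(res.params)                             # constant, AR, MA, variance
```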

  7. High temperature materials characterization

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.

    1990-01-01

    A lab facility for measuring elastic moduli up to 1700 C was constructed and delivered. It was shown that the ultrasonic method can be used to determine elastic constants of materials from room temperature to their melting points. Coupling high-frequency acoustic energy into the specimen remains a difficult task; even now, new coupling materials and higher power ultrasonic pulsers are being suggested. The surface was only scratched in terms of showing the full capabilities of either technique used, especially since there is such a large learning curve in developing proper methodologies for taking measurements into the high temperature region. The laser acoustic system does not seem to have sufficient precision at this time to replace the normal buffer rod methodology.
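    For reference, the relations by which elastic constants follow from measured ultrasonic wave speeds in an isotropic solid are standard; a minimal sketch (Python) with illustrative, roughly alumina-like inputs rather than the facility's data:

```python
# Minimal sketch: isotropic elastic constants from wave velocities.

def elastic_constants(v_l: float, v_s: float, rho: float) -> dict[str, float]:
    """Young's modulus E, shear modulus G and Poisson ratio nu from
    longitudinal (v_l) and shear (v_s) velocities [m/s], density [kg/m3]."""
    g = rho * v_s ** 2
    nu = (v_l ** 2 - 2 * v_s ** 2) / (2 * (v_l ** 2 - v_s ** 2))
    e = 2 * g * (1 + nu)
    return {"E_GPa": e / 1e9, "G_GPa": g / 1e9, "nu": nu}

print(elastic_constants(v_l=10_800, v_s=6_400, rho=3_950))
# -> roughly E ~ 400 GPa, G ~ 160 GPa, nu ~ 0.23 (alumina-like)
```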

  8. Boundary Layer Protuberance Simulations in Channel Nozzle Arc-Jet

    NASA Technical Reports Server (NTRS)

    Marichalar, J. J.; Larin, M. E.; Campbell, C. H.; Pulsonetti, M. V.

    2010-01-01

    Two protuberance designs were modeled in the channel nozzle of the NASA Johnson Space Center Atmospheric Reentry Materials and Structures Facility with the Data-Parallel Line Relaxation computational fluid dynamics code. The heating on the protuberance was compared to nominal baseline heating at a single fixed arc-jet condition in order to obtain heating augmentation factors for flight traceability in the Boundary Layer Transition Flight Experiment on Space Shuttle Orbiter flights STS-119 and STS-128. The arc-jet simulations were performed in conjunction with the actual ground tests performed on the protuberances. The arc-jet simulations included non-uniform inflow conditions based on the current best practices methodology and used variable enthalpy and constant mass flow rate across the throat. Channel walls were modeled as fully catalytic isothermal surfaces, while the test section (consisting of Reaction Cured Glass tiles) was modeled as a partially catalytic radiative equilibrium wall. The results of the protuberance and baseline simulations were compared to the applicable ground test results, and the effects of the protuberance shock on the opposite channel wall were investigated.

  9. The effects of ionic strength and organic matter on virus inactivation at low temperatures: general likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    NASA Astrophysics Data System (ADS)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
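    The GLUE procedure itself can be sketched in a few lines: sample the inactivation rate from a prior, score each sample with an informal likelihood, and retain the behavioural sets. A minimal sketch (Python) on synthetic data, with an assumed likelihood measure and threshold:

```python
# Minimal sketch of GLUE for a constant-rate inactivation model.
import numpy as np

rng = np.random.default_rng(42)
t = np.array([0.0, 10, 20, 30, 45, 60])            # days
log_c = -0.05 * t + rng.normal(0, 0.05, t.size)    # synthetic log10(C/C0)

def model(k: float) -> np.ndarray:
    return -k * t                                  # constant inactivation rate

samples = rng.uniform(0.0, 0.2, 20_000)            # prior on k [1/day]
sse = np.array([np.sum((model(k) - log_c) ** 2) for k in samples])
likelihood = np.exp(-sse / sse.min())              # informal likelihood (assumed)
behavioural = samples[likelihood > 0.1]            # retain plausible sets

print(f"k: median {np.median(behavioural):.3f}, "
      f"90% band [{np.percentile(behavioural, 5):.3f}, "
      f"{np.percentile(behavioural, 95):.3f}] 1/day")
```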

  10. Development, refinement, and testing of a short term solar flare prediction algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jesse B., Jr.

    1993-01-01

    During the period included in this report, the expenditure of time and effort, and progress toward performance of the tasks and accomplishing the goals set forth in the two year research grant proposal, consisted primarily of calibration and analysis of selected data sets. The heliographic limits of 30 degrees from central meridian were continued. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time involved by the Principal Investigator during this reporting period was limited, partially due to illness and partially resulting from other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists), incorporates sets of constants which vary according to the wave length of the observation data set. One input constant is then varied interactively to correct for observing conditions, etc., to result in a maximum magnetic field strength (in the calibrated data), based on a separate analysis. There is some insecurity in the methodology and the selection of variables to yield the most self-consistent results for variable maximum field strengths and for variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets, and separate analyses to differing maximum field strength - toward standardizing methodology and technique for the most self-consistent results for the large number of cases. It may be necessary to recalibrate some of the analyses, but the sc analyses are retained on the optical disks and can still be used with recalibration where necessary. Only the extracted parameters will be changed.

  11. Automatic exposure control systems designed to maintain constant image noise: effects on computed tomography dose and noise relative to clinically accepted technique charts.

    PubMed

    Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; Kofler, James M; McCollough, Cynthia H

    2015-01-01

    The objective was to compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with those of clinically accepted technique charts and AEC systems designed to vary image noise. A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise-based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects.
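    The qualitative mechanism can be sketched with a simple attenuation model: if attenuation grows exponentially with patient width, holding noise constant forces dose to grow exponentially too. A minimal sketch (Python); the effective attenuation coefficient is an illustrative assumption and the output is not intended to reproduce the paper's numbers:

```python
# Minimal sketch: noise ~ exp(mu*w/2)/sqrt(dose), so constant noise
# implies dose ~ exp(mu*w).
import math

MU = 0.18  # effective linear attenuation coefficient [1/cm] (assumed)

def relative_dose_constant_noise(width_cm: float, ref_cm: float = 28.0) -> float:
    """Dose relative to a reference width when image noise is held constant."""
    return math.exp(MU * (width_cm - ref_cm))

for w in (12, 20, 28, 44, 50):
    print(f"{w} cm: {relative_dose_constant_noise(w):6.2f}x")
```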

  12. Structured settlement annuities, part 2: mortality experience 1967--95 and the estimation of life expectancy in the presence of excess mortality.

    PubMed

    Singer, R B; Schmidt, C J

    2000-01-01

    The mortality experience for structured settlement (SS) annuitants issued both standard (Std) and substandard (SStd) has been reported twice previously by the Society of Actuaries (SOA), but the 1995 mortality described here has not previously been published. We describe the 1995 SS mortality in detail, and we also discuss the methodology of calculating life expectancy (e), contrasting three different life-table models. With SOA permission, we present in four tables the unpublished results of its 1995 SS mortality experience by Std and SStd issue, sex, and a combination of 8 age and 6 duration groups. Overall results, with expected mortality from the 1983a Individual Annuity Table, showed a mortality ratio (MR) of about 140% for Std cases and about 650% for all SStd cases. Life expectancy in a group with excess mortality may be computed either by adding the decimal excess death rate (EDR) to q' for each year of attained age to age 109, or by multiplying q' by the decimal MR for each year to age 109. An example is given for men aged 60 with localized prostate cancer; annual EDRs from a large published cancer study are used at durations 0-24 years, and the last EDR is assumed constant to age 109. This value of e is compared with e from constant initial values of EDR or MR after the first year. Interrelations of age, sex, e, and EDR and MR are discussed and illustrated with tabular data. It is shown that a constant MR for life-table calculation of e consistently overestimates projected annual mortality at older attained ages and underestimates e. The EDR method, approved for reserve calculations, is also recommended for use in underwriting conversion tables.
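    The EDR method described above can be sketched directly. In the sketch below (Python), the standard mortality rates are a crude Gompertz stand-in rather than the 1983a annuity table:

```python
# Minimal sketch: life expectancy from a standard table with an excess
# death rate (EDR) added to q' at each attained age up to 109.

def life_expectancy(age: int, edr: float = 0.0, end_age: int = 109) -> float:
    surv, e = 1.0, 0.0
    for a in range(age, end_age + 1):
        q_std = min(1.0, 0.0001 * 1.09 ** (a - 20))  # stand-in standard q'
        q = min(1.0, q_std + edr)                    # EDR method: q = q' + EDR
        e += surv * (1 - q / 2)                      # half-year convention
        surv *= 1 - q
    return e

print(f"e at 60, standard : {life_expectancy(60):.1f} y")
print(f"e at 60, EDR 0.02 : {life_expectancy(60, edr=0.02):.1f} y")
```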

  13. Simultaneous measurement of glucose transport and utilization in the human brain

    PubMed Central

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, KMt and Vmaxt, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMRglc) obtained from other tracer studies, such as 13C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state 1H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions for constant CMRglc, this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain. PMID:21791622
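    For orientation, a common form of a reversible, non-steady-state Michaelis-Menten transport model of this kind is sketched below; the notation is assumed rather than quoted from the study, and the exact parameterization used there may differ:

    $$\frac{dG_{\mathrm{brain}}}{dt} = T_{\max}\,\frac{G_{\mathrm{plasma}} - G_{\mathrm{brain}}/V_d}{K_t + G_{\mathrm{plasma}} + G_{\mathrm{brain}}/V_d} - \mathrm{CMR}_{\mathrm{glc}},$$

    where $T_{\max}$ is the maximum transport capacity, $K_t$ the apparent Michaelis constant, $V_d$ the brain glucose distribution volume, and $\mathrm{CMR}_{\mathrm{glc}}$ the cerebral metabolic rate of glucose. Fitting the dynamic and steady-state data jointly is what yields $T_{\max}$, $K_t$ and $\mathrm{CMR}_{\mathrm{glc}}$ simultaneously.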

  14. Three Dimensions and Four Levels: Towards a Methodology for Comparative Religious Education

    ERIC Educational Resources Information Center

    Bråten, Oddrun Marie Hovde

    2015-01-01

    This article is an abstract of a suggested methodology for comparative studies in religious education. It is based on a study where religious education in state schools in England and Norway were compared. The methodology is a synthesis of two sets of ideas. The first is an idea of three dimensions in comparative education: supranational, national…

  15. Accelerated Testing Methodology for the Determination of Slow Crack Growth of Advanced Ceramics

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Salem, Jonathan A.; Gyekenyesi, John P.

    1997-01-01

    Constant stress-rate (dynamic fatigue) testing has been used for several decades to characterize the slow crack growth behavior of glass and ceramics at both ambient and elevated temperatures. The advantage of constant stress-rate testing over other methods lies in its simplicity: strengths are measured in a routine manner at four or more stress rates by applying a constant crosshead speed or constant loading rate. The slow crack growth parameters (n and A) required for design can be estimated from the relationship between strength and stress rate. With the proper use of preloading in constant stress-rate testing, an appreciable saving of test time can be achieved: if a preload corresponding to 50% of the strength is applied to the specimen prior to testing, 50% of the test time can be saved, as long as the strength remains unchanged regardless of the applied preload. In fact, it has been a common, empirical practice in strength testing of ceramics or optical fibers to apply some preloading (less than 40%). The purpose of this work is to study the effect of preloading on strength, to lay a theoretical foundation for this empirical practice. For this purpose, analytical and numerical solutions of strength as a function of preloading were developed. To verify the solutions, constant stress-rate testing using glass and alumina at room temperature, and alumina, silicon nitride, and silicon carbide at elevated temperatures, was conducted over a range of preloads from 0 to 90%.
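    A sketch of the underlying relations, with symbols taken from standard slow-crack-growth analysis rather than quoted from the paper: in a constant stress-rate test the strength scales with the applied rate as

    $$\log \sigma_f = \frac{1}{n+1}\,\log \dot{\sigma} + \log D,$$

    so $n$ follows from the slope of strength versus stress rate on log-log axes. For an instantaneous preload at a fraction $\alpha$ of the nominal strength, integrating the same power-law crack-growth damage gives a strength correction of the form

    $$\sigma_{f,\alpha} = \sigma_f \left(1 + \alpha^{\,n+1}\right)^{1/(n+1)},$$

    which is negligible for typical $n \ge 10$ even at $\alpha = 0.5$, consistent with the observed insensitivity of strength to preload.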

  16. Ab-Initio Molecular Dynamics Simulations of Molten Ni-Based Superalloys (Preprint)

    DTIC Science & Technology

    2011-10-01

    … in liquid-metal density with composition and temperature across the solidification zone. Here, fundamental properties of molten Ni-based alloys, required for modeling these instabilities, are … temperature is assessed in model Ni-Al-W and RENE-N4 alloys. Calculations are performed using a recently implemented constant pressure methodology (NPT) which …

  17. A Simple But Comprehensive Methodology To Determine Gas-Phase Emissions Of Motor Vehicles With Extractive FTIR Spectrometry

    NASA Astrophysics Data System (ADS)

    Reyes, F. M.; Jaczilevich, A.; Grutter, M. A.; Huerta, M. A.; Rincón, P.; Rincón, R.; González, R.

    2004-12-01

    In this contribution, a methodology to acquire valuable information on the chemical composition and evolution of vehicular emissions is presented. With this innovative experimental set-up, it is possible to obtain real-time emissions of the combustion products without the need for dilution or sample collection. Key pollutants such as CO, CO2, H2CO, CH4, NO, N2O, NH3, SO2, CH3OH, acetylene, ethylene, ethane and total hydrocarbons, most of which are neither regulated nor measured by current emissions control programs, can be accurately monitored with a single instrument. An FTIR spectrometer is used for the analysis of a constant flow of sample gas from the tail-pipe into a stainless-steel cylindrical cell of constant volume (1). The cell is heated to 185 °C to avoid condensation, the pressure is kept constant, and a multi-pass optical arrangement (2) is used to transmit the modulated infrared beam several times through the cell to improve the sensitivity. The total flow from the exhaust, used for calculating the emission, can be continuously determined from differential pressure measurements with a Pitot tube calibrated against a hot-wire device. This simple methodology is proposed for performing state-of-the-art evaluations of the emission behavior of new technologies, reformulated fuels and emission control devices. The results presented here were obtained on a dynamometer running FTP-75 and driving cycles typical for Mexico City (3,4). References: 1. Grutter M. "Multi-Gas Analysis using FTIR Spectroscopy over Mexico City." Atmosfera 16, 1-16 (2003). 2. White J.U. "Long optical paths of large aperture." J. Opt. Soc. Am. 32, 285-288 (1942). 3. Santiago Cruz L. and P.I. Rincón. "Instrumentation of the Emission Control Laboratory at the Engineering School of the National Autonomous University of Mexico." Instrumentation and Development 4, 19-24 (2000). 4. González Oropeza R. and A. Galván Zacarías. "Desarrollo de ciclos de manejo característicos de la Ciudad de México" ["Development of driving cycles characteristic of Mexico City"]. Memorias del IX Congreso Anual, Soc. Mex. de Ing. Mec., 535-544 (2003).

  18. Uncertainty quantification of reaction mechanisms accounting for correlations introduced by rate rules and fitted Arrhenius parameters

    DOE PAGES

    Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...

    2013-02-23

    We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements, arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and the predicted ignition time, outlining the role of correlations and considering both accuracy and computational efficiency.
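    The essential computational idea, propagating a joint (correlated) density for the Arrhenius parameters through the rate law, can be sketched as follows (Python). The modified Arrhenius form is standard; the mean vector and covariance are illustrative assumptions:

```python
# Minimal sketch: sample correlated (ln A, b, E) from a joint normal and
# propagate through k(T) = A T^b exp(-E/RT). A strong lnA-E correlation
# is typical of fitted Arrhenius expressions.
import numpy as np

R = 8.314  # J/(mol K)
mean = np.array([20.0, 0.5, 40_000.0])             # ln A, b, E (assumed)
cov = np.array([[0.25,  0.0,   285.0],
                [0.0,   0.01,  0.0],
                [285.0, 0.0,   4.0e5]])            # corr(lnA, E) ~ 0.9

rng = np.random.default_rng(7)
lnA, b, E = rng.multivariate_normal(mean, cov, size=5000).T

T = 1000.0
k = np.exp(lnA) * T ** b * np.exp(-E / (R * T))
print(f"k(1000 K): median {np.median(k):.3e}, "
      f"95% band [{np.percentile(k, 2.5):.3e}, {np.percentile(k, 97.5):.3e}]")
```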

  19. First-principles chemical kinetic modeling of methyl trans-3-hexenoate epoxidation by HO 2

    DOE PAGES

    Cagnina, S.; Nicolle, Andre; de Bruin, T.; ...

    2017-02-16

    The design of innovative combustion processes relies on a comprehensive understanding of biodiesel oxidation kinetics. The present study aims at unraveling the reaction mechanism involved in the epoxidation of a realistic biodiesel surrogate, methyl trans-3-hexenoate, by hydroperoxy radicals using a bottom-up theoretical kinetics methodology. The obtained rate constants are in good agreement with experimental data for alkene epoxidation by HO2. The impact of temperature and pressure on epoxidation pathways involving H-bonded and non-H-bonded conformers was assessed. As a result, the obtained rate constant was implemented into a state-of-the-art detailed combustion mechanism, resulting in fairly good agreement with engine experiments.

  20. METHODOLOGICAL NOTES: On the redefinition of the kilogram and ampere in terms of fundamental physical constants

    NASA Astrophysics Data System (ADS)

    Karshenboim, Savelii G.

    2006-09-01

    In the summer of 2005, a meeting of the Consultative Committee for Units of the International Committee on Weights and Measures took place. One of the topics discussed at the meeting was a possible redefinition of the kilogram in terms of fundamental physical constants — a question of relevance to a wide circle of specialists, from school teachers to physicists performing research in a great variety of fields. In this paper, the current situation regarding this question is briefly reviewed and its discussion at the Consultative Committee for Units and other bodies involved is covered. Other issues related to the International System of Units (SI) and broached at the meeting are also discussed.

  1. Passivity-based Robust Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)

    2000-01-01

    This report provides a brief summary of the research work performed over the duration of the cooperative research agreement between NASA Langley Research Center and Kansas State University. The cooperative agreement, originally of three years' duration, was extended by another year through a no-cost extension in order to accomplish the goals of the project. The main objective of the research was to develop passivity-based robust control methodology for passive and non-passive aerospace systems. The focus of the first year's research was limited to the investigation of passivity-based methods for the robust control of Linear Time-Invariant (LTI) single-input single-output (SISO), open-loop stable, minimum-phase non-passive systems. The second year's focus was mainly on extending the passivity-based methodology to a larger class of non-passive LTI systems, including unstable and nonminimum-phase SISO systems. For LTI non-passive systems, five different passification methods were developed. The primary effort during years three and four was on the development of passification methodology for MIMO systems, the development of methods for checking robustness of passification, and the development of synthesis techniques for passifying compensators. For passive LTI systems, an optimal synthesis procedure was also developed for the design of constant-gain positive-real controllers. For nonlinear passive systems, a numerical optimization-based technique was developed for the synthesis of constant as well as time-varying gain positive-real controllers. The passivity-based control design methodology developed during this project was demonstrated by application to various benchmark examples. These example systems included a longitudinal model of an F-18 High Alpha Research Vehicle (HARV) for pitch axis control, NASA's supersonic transport wind tunnel model, the ACC benchmark model, a 1-D acoustic duct model, a piezo-actuated flexible link model, and NASA's Benchmark Active Controls Technology (BACT) wing model. Some of the stability results for linear passive systems were also extended to nonlinear passive systems. Several publications and conference presentations resulted from this research.
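    The basic property the passification methods target admits a simple numerical check. A minimal sketch (Python, not from the report): an LTI transfer function G(s) is positive real on the imaginary axis only if Re G(jw) >= 0 for all w; the example system is an illustrative assumption.

```python
# Minimal sketch: grid test of Re G(jw) >= 0 for G = num/den.
import numpy as np

def is_positive_real(num: list[float], den: list[float],
                     w: np.ndarray) -> bool:
    """Check the real part of G(jw) on a frequency grid (necessary
    condition for positive realness of a stable G)."""
    jw = 1j * w
    g = np.polyval(num, jw) / np.polyval(den, jw)
    return bool(np.all(g.real >= -1e-12))

w = np.logspace(-2, 3, 2000)
print(is_positive_real([1.0, 1.0], [1.0, 2.0, 1.0], w))  # 1/(s+1) -> True
```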

  2. Effect of Load Rate on Ultimate Tensile Strength of Ceramic Matrix Composites at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2001-01-01

    The strengths of three continuous fiber-reinforced ceramic composites, including SiC/CAS-II, SiC/MAS-5 and SiC/SiC, were determined as a function of test rate in air at 1100 to 1200 C. All three composite materials exhibited a strong dependency of strength on test rate, similar to the behavior observed in many advanced monolithic ceramics at elevated temperatures. The application of the preloading technique, as well as the prediction of life from one loading configuration (constant stress rate) to another (constant stress loading), suggested that the overall macroscopic failure mechanism of the composites is governed by a power-law type of damage evolution/accumulation, analogous to the slow crack growth commonly observed in advanced monolithic ceramics. It was further found that constant stress-rate testing could be used as an alternative life prediction test methodology even for composite materials, at least for a short range of lifetimes and when ultimate strength is used as the failure criterion.

  3. Sensor and Methodology for Dielectric Analysis of Vegetal Oils Submitted to Thermal Stress

    PubMed Central

    Stevan, Sergio Luiz; Paiter, Leandro; Ricardo Galvão, José; Vieira Roque, Daniely; Sidinei Chaves, Eduardo

    2015-01-01

    The disposal of vegetable oils used for frying food represents a social problem. The residual oil can be recycled and returned to the production line as biodiesel, as soap, or as putty. The state of the residual oil is determined according to its physicochemical characteristics, whose values define its economically viable destination. However, physicochemical analysis involves high costs, long analysis times and the cost of transporting samples. This study presents the use of a capacitive sensor and a quick and inexpensive method to correlate the physicochemical variables with the dielectric constant of the material, subjecting oil samples to thermal cycling. The proposed method reduces the cost of characterizing residual oil and shortens the analysis time. In addition, the method allows an assessment of the quality of the vegetable oil during use. The experimental results show that the dielectric constant increases with temperature, which facilitates measurement and classification of the dielectric constant at considerably higher temperatures. The results also confirm a definitive degradation in used oil and a correlation between the dielectric constant of the sample and the results of the physicochemical analysis (iodine value, acid value, viscosity and refractive index). PMID:26501293

  4. Sensor and methodology for dielectric analysis of vegetal oils submitted to thermal stress.

    PubMed

    Stevan, Sergio Luiz; Paiter, Leandro; Galvão, José Ricardo; Roque, Daniely Vieira; Chaves, Eduardo Sidinei

    2015-10-16

    The disposal of vegetable oils used for frying food represents a social problem. The residual oil can be recycled and returned to the production line as biodiesel, as soap, or as putty. The state of the residual oil is determined according to its physicochemical characteristics, whose values define its economically viable destination. However, physicochemical analysis involves high costs, long analysis times and the cost of transporting samples. This study presents the use of a capacitive sensor and a quick and inexpensive method to correlate the physicochemical variables with the dielectric constant of the material, subjecting oil samples to thermal cycling. The proposed method reduces the cost of characterizing residual oil and shortens the analysis time. In addition, the method allows an assessment of the quality of the vegetable oil during use. The experimental results show that the dielectric constant increases with temperature, which facilitates measurement and classification of the dielectric constant at considerably higher temperatures. The results also confirm a definitive degradation in used oil and a correlation between the dielectric constant of the sample and the results of the physicochemical analysis (iodine value, acid value, viscosity and refractive index).
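    The measurement principle of such a capacitive sensor reduces to a capacitance ratio. A minimal sketch (Python) with hypothetical readings, not the paper's calibration data:

```python
# Minimal sketch: relative permittivity from the ratio of the filled-cell
# capacitance to the empty-cell (air) capacitance of an ideal cell.

def dielectric_constant(c_sample_pF: float, c_empty_pF: float) -> float:
    """eps_r = C_sample / C_empty for an ideal parallel-plate cell."""
    return c_sample_pF / c_empty_pF

# Hypothetical readings while the oil is heated through a thermal cycle:
for temp_C, c_pF in [(25, 30.2), (100, 31.5), (180, 33.4)]:
    print(f"{temp_C:3d} C: eps_r = {dielectric_constant(c_pF, 10.0):.2f}")
```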

  5. Theoretical speciation of ethylenediamine-N-(o-hydroxyphenylacetic)-N'-(p-hydroxyphenylacetic) acid (o,p-EDDHA) in agronomic conditions.

    PubMed

    Yunta, Felipe; García-Marco, Sonia; Lucena, Juan J

    2003-08-27

    The presence of ethylenediamine-N-(o-hydroxyphenylacetic)-N'-(p-hydroxyphenylacetic) acid (o,p-EDDHA) as the second largest component in commercial EDDHA iron chelates has recently been demonstrated. Here, the speciation of o,p-EDDHA is reported, obtained by applying a novel methodology through the determination of its complexing capacity, protonation constants, and Ca(2+), Mg(2+), Cu(2+), and Fe(3+) stability constants. The pM values and species distributions in solution, hydroponic, and soil conditions were obtained. Due to the para position of one phenol group in o,p-EDDHA, the protonation constants and the Ca and Mg stability constants differ from those of the o,o-EDDHA and p,p-EDDHA regioisomers. The o,p-EDDHA/Fe(3+) stability constants are higher than those of EDTA/Fe(3+) but lower than those of o,o-EDDHA/Fe(3+). The sequence obtained for pFe is o,o-EDDHA/Fe(3+) ≥ o,p-EDDHA/Fe(3+) > EDTA/Fe(3+). o,p-EDDHA/Fe(3+) can be used as an iron chelate in hydroponic conditions. It can also be used in soils with limited Cu availability.

  6. Determination of Henry’s Law Constants Using Internal Standards with Benchmark Values

    EPA Science Inventory

    It is shown that Henry's law constants can be experimentally determined by comparing the headspace content of compounds with known constants to interpolate the constants of other compounds. Studies were conducted over a range of water temperatures to identify the temperature dependence…

  7. Developing comparative criminology and the case of China: an introduction.

    PubMed

    Liu, Jianhong

    2007-02-01

    Although comparative criminology has made significant development during the past decade or so, systematic empirical research has only developed along a few topics. Comparative criminology has never occupied a central position in criminology. This article analyzes the major theoretical and methodological impediments in the development of comparative criminology. It stresses a need to shift methodology from a conventional primary approach that uses the nation as the unit of analysis to an in-depth case study method as a primary methodological approach. The article maintains that case study method can overcome the limitation of its descriptive tradition and become a promising methodological approach for comparative criminology.

  8. Automatic Exposure Control Systems Designed to Maintain Constant Image Noise: Effects on Computed Tomography Dose and Noise Relative to Clinically Accepted Technique Charts

    PubMed Central

    Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; Kofler, James M.; McCollough, Cynthia H.

    2015-01-01

    Objective To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with clinically accepted technique charts and AEC systems designed to vary image noise. Materials and Methods A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. Results For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise–based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Conclusions Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects. PMID:25938214

  9. Numerical methods on European option second order asymptotic expansions for multiscale stochastic volatility

    NASA Astrophysics Data System (ADS)

    Canhanga, Betuel; Ni, Ying; Rančić, Milica; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    After Black-Scholes proposed a model for pricing European options in 1973, Cox, Ross and Rubinstein in 1979, and Heston in 1993, showed that the constant volatility assumption made by Black-Scholes was one of the main reasons the model was unable to capture some market details. Instead of constant volatilities, they introduced stochastic volatilities into the asset dynamics. In 2009, Christoffersen empirically showed "why multifactor stochastic volatility models work so well". Four years later, Chiarella and Ziveyi solved the model proposed by Christoffersen. They considered an underlying asset whose price is governed by two-factor stochastic volatilities of mean-reversion type. Applying Fourier transforms, Laplace transforms and the method of characteristics, they presented a semi-analytical formula to compute an approximate price for American options. The heavy computation involved in the Chiarella and Ziveyi approach motivated the authors of this paper, in 2014, to investigate another methodology to compute European option prices in a Christoffersen-type model. Using the first- and second-order asymptotic expansion method, we presented a closed-form solution for the European option, and provided experimental and numerical studies investigating the accuracy of the approximation formulae given by the first-order asymptotic expansion. In the present paper we perform experimental and numerical studies for the second-order asymptotic expansion and compare the obtained results with the results presented by Chiarella and Ziveyi.

  10. Land-use change trajectories up to 2050. Insights from a global agro-economic model comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitz, Christoph; van Meijl, Hans; Kyle, G. Page

    Changes in agricultural land use have important implications for environmental services. Previous studies of agricultural land-use futures have been published indicating large uncertainty due to different model assumptions and methodologies. In this article we present a first comprehensive comparison of global agro-economic models that have harmonized drivers of population, GDP, and biophysical yields. The comparison allows us to ask two research questions: (1) How much cropland will be used under different socioeconomic and climate change scenarios? (2) How can differences in model results be explained? The comparison includes four partial and six general equilibrium models that differ in how they model land supply and the amount of potentially available land. We analyze results of two different socioeconomic scenarios and three climate scenarios (one with constant climate). Most models (7 out of 10) project an increase of cropland of 10–25% by 2050 compared to 2005 (under constant climate), but one model projects a decrease. Pasture land expands in some models, which further increases the threat to natural vegetation. Across all models most of the cropland expansion takes place in South America and sub-Saharan Africa. In general, the strongest differences in model results are related to differences in the costs of land expansion, the endogenous productivity responses, and the assumptions about potential cropland.

  11. Comparing Teacher-Directed and Computer-Assisted Constant Time Delay for Teaching Functional Sight Words to Students with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Coleman, Mari Beth; Hurley, Kevin J.; Cihak, David F.

    2012-01-01

    The purpose of this study was to compare the effectiveness and efficiency of teacher-directed and computer-assisted constant time delay strategies for teaching three students with moderate intellectual disability to read functional sight words. Target words were those found in recipes and were taught via teacher-delivered constant time delay or…

  12. First-principles calculation of photo-induced electron transfer rate constants in phthalocyanine-C60 organic photovoltaic materials: Beyond Marcus theory

    NASA Astrophysics Data System (ADS)

    Lee, Myeong H.; Dunietz, Barry D.; Geva, Eitan

    2014-03-01

    Classical Marcus theory is commonly adopted for solvent-mediated charge transfer (CT) processes to obtain the CT rate constant, but it can become questionable when intramolecular vibrational modes dominate the CT process, as in OPV devices, because Marcus theory treats these modes classically and therefore nuclear tunneling is not accounted for. We present a computational scheme to obtain the electron transfer rate constant beyond classical Marcus theory. Within this approach, the nuclear vibrational modes are treated quantum-mechanically and a short-time approximation is avoided. Ab initio calculations are used to obtain the basic parameters needed for calculating the electron transfer rate constant. We apply our methodology to the phthalocyanine (H2PC)-C60 organic photovoltaic system, where one C60 acceptor and one or two H2PC donors are included to model the donor-acceptor interface configuration. We obtain the electron transfer and recombination rate constants for all accessible charge transfer (CT) states, from which the CT exciton dynamics is determined by employing a master equation. The role of higher-lying excited states in CT exciton dynamics is discussed. This work is pursued as part of the Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the US Department of Energy Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0000957.
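
    For orientation, the classical Marcus expression that this work goes beyond can be sketched as follows; the formula is standard textbook Marcus theory, and the coupling, reorganization energy, and driving force below are placeholders rather than the computed H2PC-C60 parameters:

        import math

        HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
        KB   = 8.617333262e-5    # Boltzmann constant, eV/K

        def marcus_rate(H_da, lam, dG, T=300.0):
            """Classical Marcus electron-transfer rate constant (1/s).

            H_da : donor-acceptor electronic coupling (eV)
            lam  : reorganization energy (eV)
            dG   : driving force, Delta G (eV)
            Nuclear modes are treated classically, so tunneling is absent,
            which is exactly the limitation the paper's method removes.
            """
            prefac = (2.0 * math.pi / HBAR) * H_da**2
            gauss = math.exp(-(dG + lam)**2 / (4.0 * lam * KB * T))
            return prefac * gauss / math.sqrt(4.0 * math.pi * lam * KB * T)

        # Placeholder values for illustration only.
        print(f"k_ET ~ {marcus_rate(H_da=0.01, lam=0.3, dG=-0.3):.3e} s^-1")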

  13. Dried Blood Spot Methodology in Combination With Liquid Chromatography/Tandem Mass Spectrometry Facilitates the Monitoring of Teriflunomide

    PubMed Central

    Lunven, Catherine; Turpault, Sandrine; Beyer, Yann-Joel; O'Brien, Amy; Delfolie, Astrid; Boyanova, Neli; Sanderink, Ger-Jan; Baldinetti, Francesca

    2016-01-01

    Background: Teriflunomide, a once-daily oral immunomodulator approved for treatment of relapsing-remitting multiple sclerosis, is eliminated slowly from plasma. If plasma concentrations of teriflunomide need to be lowered rapidly, an accelerated elimination procedure using cholestyramine or activated charcoal may be used. The current bioanalytical assay for determination of plasma teriflunomide concentration requires laboratory facilities for blood centrifugation and plasma storage. An alternative method, with potential for greater convenience, is dried blood spot (DBS) methodology. Analytical and clinical validations are required to switch from plasma to DBS (finger-prick sampling) methodology. Methods: Using blood samples from healthy subjects, an LC-MS/MS assay method for quantification of teriflunomide in DBS over a range of 0.01–10 mcg/mL was developed and validated for specificity, selectivity, accuracy, precision, reproducibility, and stability. Results were compared with those from the current plasma assay for determination of plasma teriflunomide concentration. Results: The method was specific and selective relative to endogenous compounds, with a process efficiency of ∼88% and no matrix effect. Inaccuracy and imprecision for intraday and interday analyses were <15% at all concentrations tested. Quantification of teriflunomide in the DBS assay was not affected by blood deposit volume or punch position within the spot, and hematocrit level had a limited but acceptable effect on measurement accuracy. Teriflunomide was stable for at least 4 months at room temperature, and for at least 24 hours at 37°C with and without 95% relative humidity, to cover sampling, drying, and shipment conditions in the field. The correlation between DBS and plasma concentrations (R2 = 0.97), with an average blood-to-plasma ratio of 0.59, was concentration independent and constant over time. Conclusions: DBS sampling is a simple and practical method for monitoring teriflunomide concentrations. PMID:27015245

  14. Social costs of illegal drugs, alcohol and tobacco in the European Union: A systematic review.

    PubMed

    Barrio, Pablo; Reynolds, Jillian; García-Altés, Anna; Gual, Antoni; Anderson, Peter

    2017-09-01

    Drug use accounts for one of the main disease groups in Europe, with relevant consequences for society. There is an increasing need to evaluate the economic consequences of drug use in order to develop appropriate policies. Here, we review the social costs of illegal drugs, alcohol and tobacco in the European Union. A systematic search of relevant databases was conducted. Grey literature and previous systematic reviews were also searched. Studies reporting on social costs of illegal drugs, alcohol and tobacco were included. Methodology, cost components as well as costs were assessed from individual studies. To compare across studies, final costs were converted to 2014 Euros. Forty-five studies reported in 43 papers met the inclusion criteria (11 for illegal drugs, 26 for alcohol and 8 for tobacco). While direct costs related to treatment of substance use and comorbidities were consistently included, there was high variability in the remaining cost components. Total costs also showed great variability. Per capita costs for the year 2014 ranged from €0.38 to €78 for illegal drugs, from €26 to €1500 for alcohol and from €10.55 to €391 for tobacco. Drug use imposes a heavy economic burden on Europe. However, given the high heterogeneity in existing methodologies, and in order to better assess the burden and thus develop adequate policies, standardised methodological guidance is needed. [Barrio P, Reynolds J, García-Altés A, Gual A, Anderson P. Social costs of illegal drugs, alcohol and tobacco in the European Union: A systematic review. Drug Alcohol Rev 2017;00:000-000]. © 2017 Australasian Professional Society on Alcohol and other Drugs.

  15. Nuclear binding energy using semi empirical mass formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankita, E-mail: ankitagoyal@gmail.com; Suthar, B.

    2016-05-06

    In the present communication, the semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with various sets of constants given by different researchers. The calculated values are compared with experimental data, and a comparative study to identify suitable constants is added using error plots. The study is extended to find the most suitable set of constants to minimize the error.
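
    A minimal sketch of the liquid-drop binding energy, using one commonly quoted set of coefficients (coefficient sets vary by author, which is precisely the comparison the abstract describes; treat the defaults below as one textbook choice, in MeV):

        def semf_binding_energy(Z, A, aV=15.75, aS=17.8, aC=0.711, aA=23.7, aP=11.18):
            """Semi-empirical (Bethe-Weizsaecker) binding energy in MeV.

            Default coefficients are one textbook set; other authors' sets can
            be passed in to reproduce the comparison against experiment.
            """
            N = A - Z
            pairing = 0.0
            if Z % 2 == 0 and N % 2 == 0:
                pairing = +aP / A**0.5     # even-even nuclei
            elif Z % 2 == 1 and N % 2 == 1:
                pairing = -aP / A**0.5     # odd-odd nuclei
            return (aV * A
                    - aS * A**(2.0 / 3.0)
                    - aC * Z * (Z - 1) / A**(1.0 / 3.0)
                    - aA * (A - 2 * Z)**2 / A
                    + pairing)

        # Example: Fe-56 (Z=26); the experimental binding energy is ~492.3 MeV.
        print(f"B(Fe-56) ~ {semf_binding_energy(26, 56):.1f} MeV")

    The gap between the printed value and experiment is the kind of error the abstract's error plots quantify when ranking constant sets.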

  16. Theoretical study of the gas-phase reactions of iodine atoms ((2)P(3/2)) with H(2), H(2)O, HI, and OH.

    PubMed

    Canneaux, Sébastien; Xerri, Bertrand; Louis, Florent; Cantrel, Laurent

    2010-09-02

    The rate constants of the reactions of iodine atoms with H(2), H(2)O, HI, and OH have been estimated using 39, 21, 13, and 39 different levels of theory, respectively, and have been compared to the available literature values over the temperature range of 250-2500 K. The aim of this methodological work is to demonstrate that standard theoretical methods are adequate to obtain quantitative rate constants for reactions involving iodine-containing species. Geometry optimizations and vibrational frequency calculations are performed using three methods (MP2, MPW1K, and BHandHLYP) combined with three basis sets (cc-pVTZ, cc-pVQZ, and 6-311G(d,p)). Single-point energy calculations are performed with the highly correlated ab initio coupled cluster method in the space of single, double, and triple (perturbatively) electron excitations CCSD(T) using the cc-pVnZ (n = T, Q, and 5), aug-cc-pVnZ (n = T, Q, and 5), 6-311G(d,p), 6-311+G(3df,2p), and 6-311++G(3df,3pd) basis sets. Canonical transition state theory with a simple Wigner tunneling correction is used to predict the rate constants as a function of temperature. The CCSD(T)/cc-pVnZ//MP2/cc-pVTZ (n = T and Q), CCSD(T)/6-311+G(3df,2p)//MP2/6-311G(d,p), and CCSD(T)/6-311++G(3df,3pd)//MP2/6-311G(d,p) levels of theory provide accurate kinetic rate constants when compared to available literature data. The use of the CCSD(T)/cc-pVQZ//MP2/cc-pVTZ and CCSD(T)/6-311++G(3df,3pd) levels of theory allows one to obtain better agreement with the literature data for all reactions with the exception of the I + H(2) reaction R(1). This computational procedure has also been used to predict rate constants for some reactions for which no experimental data exist. The use of quantum chemistry tools could therefore be extended to other elements and applied to develop kinetic networks involving various fission products, steam, and hydrogen in the absence of literature data. The final objective is to implement the kinetics of gaseous reactions in the ASTEC (Accident Source Term Evaluation Code) code to improve the speciation of fission products, which can be transported along the Reactor Coolant System (RCS) of a Pressurized Water Reactor (PWR) in case of a severe accident.
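
    The kinetic step can be sketched generically: canonical transition-state theory with the simple Wigner tunneling correction named above. The partition-function ratio, barrier height, and imaginary frequency below are placeholder inputs, not the paper's CCSD(T) values:

        import math

        KB = 1.380649e-23      # Boltzmann constant, J/K
        H  = 6.62607015e-34    # Planck constant, J*s
        C  = 2.99792458e10     # speed of light, cm/s

        def wigner_kappa(nu_imag_cm, T):
            """Wigner tunneling correction from the imaginary frequency (cm^-1)."""
            x = H * nu_imag_cm * C / (KB * T)
            return 1.0 + x**2 / 24.0

        def tst_rate(T, q_ratio, E0_J, nu_imag_cm):
            """Canonical TST: k = kappa * (kB*T/h) * (Q_TS/Q_R) * exp(-E0/kB*T)."""
            return (wigner_kappa(nu_imag_cm, T) * (KB * T / H) * q_ratio
                    * math.exp(-E0_J / (KB * T)))

        # Placeholder inputs for illustration only: a 50 kJ/mol barrier.
        E0 = 50e3 / 6.02214076e23   # barrier per molecule, J
        print(f"k(300 K) ~ {tst_rate(300.0, 1e-2, E0, 1200.0):.3e} (units set by Q ratio)")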

  17. Servo-control for maintaining abdominal skin temperature at 36°C in low birth weight infants.

    PubMed

    Sinclair, J C

    2000-01-01

    Randomized trials have shown that the neonatal mortality rate of low birth-weight babies can be reduced by keeping them warm. For low birth-weight babies nursed in incubators, warm conditions may be achieved either by heating the air to a desired temperature, or by servo-controlling the baby's body temperature at a desired set-point. The objective was to determine, in low birth weight infants, the effect on death and other important clinical outcomes of targeting body temperature rather than air temperature as the end-point of control of incubator heating. The standard search strategy of the Cochrane Neonatal Collaborative Review Group was used. Included were randomized or quasi-randomized trials which test the effects of having the heat output of the incubator servo-controlled from body temperature compared with setting a constant incubator air temperature. Trial methodologic quality was systematically assessed. Outcome measures included death, timing of death, cause of death, and other clinical outcomes. Categorical outcomes were analyzed using relative risk and risk difference. Meta-analysis assumed a fixed effect model. Compared to setting a constant incubator air temperature of 31.8°C, servo-control of abdominal skin temperature at 36°C reduces the neonatal death rate among low birth weight infants: relative risk 0.72 (95% CI 0.54, 0.97); risk difference -12.7% (95% CI -23.9, -1.6). This effect is even greater among VLBW infants. During at least the first week after birth, low birth weight babies should be provided with a carefully regulated thermal environment that is near the thermoneutral point. For LBW babies in incubators, this can be achieved by adjusting incubator temperature to maintain an anterior abdominal skin temperature of at least 36°C, using either servo-control or frequent manual adjustment of incubator air temperature.

  18. A Methodologic Approach for Normalizing Angular Work and Velocity During Isotonic and Isokinetic Eccentric Training

    PubMed Central

    Guilhem, Gaël; Cornu, Christophe; Guével, Arnaud

    2012-01-01

    Context: Resistance exercise training commonly is performed against a constant external load (isotonic) or at a constant velocity (isokinetic). Researchers comparing the effectiveness of isotonic and isokinetic resistance-training protocols need to equalize the mechanical stimulus (work and velocity) applied. Objective: To examine whether the standardization protocol could be adjusted and applied to an eccentric training program. Design: Controlled laboratory study. Setting: Controlled research laboratory. Patients or Other Participants: Twenty-one sport science male students (age = 20.6 ± 1.5 years, height = 178.0 ± 4.0 cm, mass = 74.5 ± 9.1 kg). Intervention(s): Participants performed 9 weeks of isotonic (n = 11) or isokinetic (n = 10) eccentric training of knee extensors that was designed so they would perform the same amount of angular work at the same mean angular velocity. Main Outcome Measure(s): Angular work and angular velocity. Results: The isotonic and isokinetic groups performed the same total amount of work (−185.2 ± 6.5 kJ and −184.4 ± 8.6 kJ, respectively) at the same angular velocity (21 ± 1°/s and 22°/s, respectively) with the same number of repetitions (8.0 and 8.0, respectively). Bland-Altman analysis showed that work (bias = 2.4%) and angular velocity (bias = 0.2%) were equalized over 9 weeks between the modes of training. Conclusions: The procedure developed allows angular work and velocity to be standardized over 9 weeks of isotonic and isokinetic eccentric training of the knee extensors. This method could be useful in future studies in which researchers compare neuromuscular adaptations induced by each type of training mode with respect to rehabilitating patients after musculoskeletal injury. PMID:22488276

  19. Methodology, Technical Approach and Measurement Techniques for Testing of TPM Thermal Protection Materials in IPM Plasmatrons

    DTIC Science & Technology

    2000-04-01

    (Abstract not cleanly recoverable from OCR; the surviving fragments describe measurements of the surface temperature of glass-silicide coated samples as a function of HF-generator anode power, experiments on the boundary layer spectrum, and tests in a stagnation-point configuration.)

  20. Quantifying the Molecular Origins of Opposite Solvent Effects on Protein-Protein Interactions

    PubMed Central

    Vagenende, Vincent; Han, Alvin X.; Pek, Han B.; Loo, Bernard L. W.

    2013-01-01

    Although the nature of solvent-protein interactions is generally weak and non-specific, addition of cosolvents such as denaturants and osmolytes strengthens protein-protein interactions for some proteins, whereas it weakens protein-protein interactions for others. This is exemplified by the puzzling observation that addition of glycerol oppositely affects the association constants of two antibodies, D1.3 and D44.1, with lysozyme. To resolve this conundrum, we develop a methodology based on the thermodynamic principles of preferential interaction theory and the quantitative characterization of local protein solvation from molecular dynamics simulations. We find that changes of preferential solvent interactions at the protein-protein interface quantitatively account for the opposite effects of glycerol on the antibody-antigen association constants. Detailed characterization of local protein solvation in the free and associated protein states reveals how opposite solvent effects on protein-protein interactions depend on the extent of dewetting of the protein-protein contact region and on structural changes that alter cooperative solvent-protein interactions at the periphery of the protein-protein interface. These results demonstrate the direct relationship between macroscopic solvent effects on protein-protein interactions and atom-scale solvent-protein interactions, and establish a general methodology for predicting and understanding solvent effects on protein-protein interactions in diverse biological environments. PMID:23696727

  2. The effects of varied versus constant high-, medium-, and low-preference stimuli on performance.

    PubMed

    Wine, Byron; Wilder, David A

    2009-01-01

    The purpose of the current study was to compare the delivery of varied versus constant high-, medium-, and low-preference stimuli on performance of 2 adults on a computer-based task in an analogue employment setting. For both participants, constant delivery of the high-preference stimulus produced the greatest increases in performance over baseline; the varied presentation produced performance comparable to constant delivery of medium-preference stimuli. Results are discussed in terms of their implications for the selection and delivery of stimuli as part of employee performance-improvement programs in the field of organizational behavior management.

  3. Identification of Proteins Modulated in the Date Palm Stem Infested with Red Palm Weevil (Rhynchophorus ferrugineus Oliv.) Using Two Dimensional Differential Gel Electrophoresis and Mass Spectrometry

    PubMed Central

    Rasool, Khawaja Ghulam; Khan, Muhammad Altaf; Aldawood, Abdulrahman Saad; Tufail, Muhammad; Mukhtar, Muhammad; Takeda, Makio

    2015-01-01

    A state-of-the-art proteomic methodology using Matrix Assisted Laser Desorption/Ionization-Time of Flight (MALDI TOF) has been employed to characterize peptides modulated in the date palm stem subsequent to infestation with red palm weevil (RPW). Our analyses revealed 32 differentially expressed peptides associated with RPW infestation in date palm stem. To identify RPW infestation-associated peptides (I), artificially wounded plants (W) were used as an additional control besides uninfested plants, the conventional control (C). A consistent, distinctive pattern of differential expression in infested (I) and wounded (W) stem samples compared to control (C) was observed. The upregulated proteins showed relative fold intensity in the order I > W, and the downregulated spots followed the trend W > I, a quite interesting pattern. This study also reveals that artificial wounding of the date palm stem affects almost the same proteins as infestation; however, the relative intensity is considerably lower than in infested samples for both up- and downregulated spots. All 32 differentially expressed spots were subjected to MALDI-TOF analysis for their identification, and we were able to match 21 proteins in the existing databases. The significantly modulated expression pattern of a number of peptides in infested plants suggests the possibility of developing a quick and reliable molecular methodology for detecting date palm plants infested with RPW. PMID:26287180

  5. [A method for forecasting the seasonal dynamic of malaria in the municipalities of Colombia].

    PubMed

    Velásquez, Javier Oswaldo Rodríguez

    2010-03-01

    To develop a methodology for forecasting the seasonal dynamic of malaria outbreaks in the municipalities of Colombia. Epidemiologic ranges were defined by multiples of 50 cases for the six municipalities with the highest incidence, 25 cases for the municipalities that ranked 10th and 11th by incidence, 10 for the municipality that ranked 193rd, and 5 for the municipality that ranked 402nd. The specific probability values for each epidemiologic range appearing in each municipality, as well as the S/k value--the ratio between entropy (S) and the Boltzmann constant (k)--were calculated for each three-week set, along with the differences in this ratio between consecutive sets of weeks. These mathematical ratios were used to determine the values for forecasting the case dynamic, which were compared with the actual epidemiologic data from the period 2003-2007. The probability of the epidemiologic ranges appearing ranged from 0.019 to 1.00, while the differences in the S/k ratio between the sets of consecutive weeks ranged from 0.23 to 0.29. Three ratios were established to determine whether the dynamic corresponded to an outbreak. These ratios were corroborated with real epidemiological data from 810 Colombian municipalities. This methodology allows us to forecast the malaria case dynamic and outbreaks in the municipalities of Colombia and can be used in planning interventions and public health policies.
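
    A sketch of the entropy ratio S/k described above, computed from the probabilities of the epidemiologic ranges observed in a three-week set (our reading of the stated definitions; the window contents are invented for illustration):

        import math
        from collections import Counter

        def entropy_over_k(range_labels):
            """S/k = -sum(p * ln p) over the epidemiologic ranges observed in a
            set of weeks (S is entropy, k the Boltzmann constant)."""
            counts = Counter(range_labels)
            total = sum(counts.values())
            return -sum((n / total) * math.log(n / total) for n in counts.values())

        # Hypothetical ranges (multiples of 50 cases) hit in two consecutive
        # three-week sets; the forecasting signal is the difference in S/k.
        window1 = ["0-50", "50-100", "50-100"]
        window2 = ["50-100", "100-150", "100-150"]
        print(entropy_over_k(window2) - entropy_over_k(window1))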

  6. Method for experimental investigation of transient operation on Laval test stand for model size turbines

    NASA Astrophysics Data System (ADS)

    Fraser, R.; Coulaud, M.; Aeschlimann, V.; Lemay, J.; Deschenes, C.

    2016-11-01

    With the growing proportion of intermittent energy sources such as wind and solar, hydroelectricity has become a first-class source of peaking power for regulating the grid. The resulting increase in start-stop cycles may cause premature ageing of runners, both through a higher number of stress-fluctuation cycles and through higher absolute stress levels. Aiming to sustain good-quality development of fully homologous scale-model turbines, the Hydraulic Machines Laboratory (LAMH) of Laval University has developed a methodology for operating model-size turbines on its test stand in transient regimes such as start-up, stop, or load rejection. This methodology maintains a constant head while the wicket gates are opening or closing at a speed that is representative, at model scale, of prototype operation. This paper first presents the model opening speed derived from dimensionless numbers, then the methodology itself and its application. Both its limitations and the first results using a bulb turbine are then detailed.

  7. Automated combinatorial method for fast and robust prediction of lattice thermal conductivity

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
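
    The Slack relation at the heart of this screening can be written as kappa_L = A(gamma) * Mbar * theta_a^3 * delta / (gamma^2 * T). A sketch using one commonly used gamma-dependent prefactor (the Morelli-Slack formulation; the inputs below are rough silicon-like placeholders, not values from the 42-compound test set):

        def slack_kappa(M_avg, theta_a, delta, gamma, T=300.0):
            """Slack lattice thermal conductivity (W/m/K), Morelli-Slack form.

            M_avg   : average atomic mass (amu)
            theta_a : acoustic Debye temperature (K), roughly theta_D / n**(1/3)
            delta   : cube root of the average volume per atom (Angstrom)
            gamma   : Grueneisen parameter
            One common gamma-dependent prefactor A(gamma) is assumed here.
            """
            A = 2.43e-6 / (1.0 - 0.514 / gamma + 0.228 / gamma**2)
            return A * M_avg * theta_a**3 * delta / (gamma**2 * T)

        # Rough silicon-like inputs as a sanity check (expect order 100 W/m/K).
        print(f"kappa_L ~ {slack_kappa(28.1, 512.0, 2.71, 1.06):.0f} W/m/K")

    The abstract's point is that the prediction is sensitive to which definitions of theta_a and gamma are fed into such a function, which is what the combinatorial test over 42 compounds resolves.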

  8. Design of high-linear CMOS circuit using a constant transconductance method for gamma-ray spectroscopy system

    NASA Astrophysics Data System (ADS)

    Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.

    2011-02-01

    We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant-gm (transconductance) method which can significantly improve the linearity of the amplified signals by using a large time constant and the time-invariant characteristics of an amplifier. The constant-gm behavior is obtained by a feedback control which keeps the transconductance of the input transistor constant. To verify the performance of the proposed circuit, the time constant variations for the channel resistances are simulated with the TSMC 0.18 μm transistor parameters using HSPICE, and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for input amplitudes up to 100 mV. These are significantly smaller values compared to a conventional ICON's 1.39% and 19.43%, respectively, under the same conditions.

  9. Short-term standard litter decomposition across three different ecosystems in middle taiga zone of West Siberia

    NASA Astrophysics Data System (ADS)

    Filippova, Nina V.; Glagolev, Mikhail V.

    2018-03-01

    The method of standard litter (tea) decomposition was implemented to compare decomposition rate constants (k) between different peatland ecosystems and coniferous forests in the middle taiga zone of West Siberia (near Khanty-Mansiysk). The standard protocol of the TeaComposition initiative was used to make the data usable for comparisons among different sites and zonobiomes worldwide. This article sums up the results of short-term decomposition (3 months) on the local scale. The values of the decomposition rate constants differed significantly between the three ecosystem types: they were higher in the forest than in the bogs, and treed bogs had a lower decomposition constant than Sphagnum lawns. In general, the decomposition rate constants were close to those reported earlier for similar climatic conditions and habitats.
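
    With a single short-term harvest, the decomposition rate constant follows from the remaining mass fraction under the standard single-exponential litter model. A minimal sketch (the masses below are invented, not the study's data):

        import math

        def decomposition_k(m0, mt, t_days):
            """First-order litter decomposition rate constant (1/day):
            m(t) = m0 * exp(-k * t)  =>  k = -ln(mt / m0) / t."""
            return -math.log(mt / m0) / t_days

        # Hypothetical tea-bag mass loss over a 3-month (90-day) incubation.
        print(f"k = {decomposition_k(m0=2.00, mt=1.20, t_days=90):.4f} per day")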

  10. Constant-pH Molecular Dynamics Study of Kyotorphin in an Explicit Bilayer

    PubMed Central

    Magalhães, Pedro R.; Machuqueiro, Miguel; Baptista, António M.

    2015-01-01

    To our knowledge, we present the first constant-pH molecular dynamics study of the neuropeptide kyotorphin in the presence of an explicit lipid bilayer. The overall conformation freedom of the peptide was found to be affected by the interaction with the membrane, in accordance with previous results using different methodologies. Analysis of the interactions between the N-terminus amine group of the peptide and several lipid atoms shows that the membrane is able to stabilize both ionized and neutral forms of kyotorphin, resulting in a pKa value that is similar to the one obtained in water. This illustrates how a detailed molecular model of the membrane leads to rather different results than would be expected from simply regarding it as a low-dielectric slab. PMID:25954885

  11. Enhanced electrohydrodynamic force generation in a two-stroke cycle dielectric-barrier-discharge plasma actuator

    NASA Astrophysics Data System (ADS)

    Sato, Shintaro; Takahashi, Masayuki; Ohnishi, Naofumi

    2017-05-01

    An approach for electrohydrodynamic (EHD) force production is proposed with a focus on a charge cycle on a dielectric surface. The cycle, consisting of positive-charging and neutralizing strokes, is completely different from the conventional methodology, which involves a negative-charging stroke, in that the dielectric surface charge is constantly positive. The two-stroke charge cycle is realized by applying a DC voltage combined with repetitive pulses. Simulation results indicate that the negative pulse eliminates the surface charge accumulated during the constant-voltage phase, resulting in repetitive EHD force generation. The time-averaged EHD force increases almost linearly with increasing repetitive pulse frequency and becomes one order of magnitude larger than that driven by a sinusoidal voltage of the same peak-to-peak amplitude.

  12. Integrated Control Using the SOFFT Control Structure

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1996-01-01

    The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT), thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law for understanding the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant-gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.

  13. Simultaneous measurement of glucose transport and utilization in the human brain.

    PubMed

    Shestov, Alexander A; Emir, Uzay E; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R; Öz, Gülin

    2011-11-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, K(M)(t) and V(max)(t), in humans have so far been obtained by measuring steady-state brain glucose levels by proton ((1)H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMR(glc)) obtained from other tracer studies, such as (13)C NMR. Here we present a new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state (1)H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from the literature and from the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating the assumption of a constant CMR(glc), this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than the maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for the human brain.
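
    A sketch of the model class described here: a reversible Michaelis-Menten transport term across the blood-brain barrier feeding a constant utilization term, integrated against a (here constant) plasma glucose level. This is a generic formulation under simplifying assumptions, not the authors' exact parameterization:

        import numpy as np
        from scipy.integrate import odeint

        def brain_glucose(G_brain, t, Tmax, Kt, CMRglc, G_plasma):
            """dG_brain/dt from reversible Michaelis-Menten transport across
            the blood-brain barrier minus a (here constant) utilization rate."""
            influx = Tmax * G_plasma / (Kt + G_plasma)
            efflux = Tmax * G_brain / (Kt + G_brain)
            return influx - efflux - CMRglc

        t = np.linspace(0.0, 120.0, 200)          # minutes
        sol = odeint(brain_glucose, y0=1.2, t=t,
                     args=(0.8, 5.0, 0.3, 17.0))  # placeholder kinetic values
        print(f"steady-state brain glucose ~ {sol[-1, 0]:.2f} (model units)")

    Fitting a curve like sol to the measured brain glucose time course is what allows transport and utilization parameters to be extracted simultaneously.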

  14. A comparative study of soviet versus western helicopters. Part 2: Evaluation of weight, maintainability and design aspects of major components

    NASA Technical Reports Server (NTRS)

    Stepniewski, W. Z.; Shinn, R. A.

    1983-01-01

    A detailed comparative insight into the design and operational philosophies of Soviet vs. Western helicopters is provided. This is accomplished by examining conceptual approaches, producibility and maintainability, and weight trends/prediction methodology. Extensive application of the Soviet (Tishchenko) methodology to various weight classes of helicopters is compared with the results of Western-based methodology.

  15. In vitro quantification of the size distribution of intrasaccular voids left after endovascular coiling of cerebral aneurysms.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than those of the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted in the void sizes, but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and can be used to compare and optimize coil configurations as well as coiling techniques.

  16. The role of platelet-rich plasma in arthroscopic rotator cuff repair: a systematic review with quantitative synthesis.

    PubMed

    Chahal, Jaskarndip; Van Thiel, Geoffrey S; Mall, Nathan; Heard, Wendell; Bach, Bernard R; Cole, Brian J; Nicholson, Gregory P; Verma, Nikhil N; Whelan, Daniel B; Romeo, Anthony A

    2012-11-01

    Despite the theoretic basis and interest in using platelet-rich plasma (PRP) to improve the potential for rotator cuff healing, there remains ongoing controversy regarding its clinical efficacy. The objective of this systematic review was to identify and summarize the available evidence to compare the efficacy of arthroscopic rotator cuff repair in patients with full-thickness rotator cuff tears who were concomitantly treated with PRP. We searched the Cochrane Central Register of Controlled Trials, Medline, Embase, and PubMed for eligible studies. Two reviewers selected studies for inclusion, assessed methodologic quality, and extracted data. Pooled analyses were performed using a random effects model to arrive at summary estimates of treatment effect with associated 95% confidence intervals. Five studies (2 randomized and 3 nonrandomized with comparative control groups) met the inclusion criteria, with a total of 261 patients. Methodologic quality was uniformly sound as assessed by the Detsky scale and Newcastle-Ottawa Scale. Quantitative synthesis of all 5 studies showed that there was no statistically significant difference in the overall rate of rotator cuff retear between patients treated with PRP and those treated without PRP (risk ratio, 0.77; 95% confidence interval, 0.48 to 1.23). There were also no differences in the pooled Constant score; Simple Shoulder Test score; American Shoulder and Elbow Surgeons score; University of California, Los Angeles shoulder score; or Single Assessment Numeric Evaluation score. PRP does not have an effect on overall retear rates or shoulder-specific outcomes after arthroscopic rotator cuff repair. Additional well-designed randomized trials are needed to corroborate these findings. Level III, systematic review of Level I, II, and III studies. Copyright © 2012 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  17. Neural Network and Response Surface Methodology for Rocket Engine Component Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Papita, Nilay; Shyy, Wei; Tucker, P. Kevin; Griffin, Lisa W.; Haftka, Raphael; Fitz-Coy, Norman; McConnaughey, Helen (Technical Monitor)

    2000-01-01

    The goal of this work is to compare the performance of response surface methodology (RSM) and two types of neural networks (NN) in aiding preliminary design of two rocket engine components. A data set of 45 training points and 20 test points obtained from a semi-empirical model based on three design variables is used for a shear coaxial injector element. Data for supersonic turbine design are based on six design variables, with 76 training data and 18 test data obtained from simplified aerodynamic analysis. Several RS and NN are first constructed using the training data. The test data are then employed to select the best RS or NN. Quadratic and cubic response surfaces, radial basis neural networks (RBNN) and back-propagation neural networks (BPNN) are compared. Two-layered RBNN are generated using two different training algorithms, namely solverbe and solverb. A two-layered BPNN is generated with a Tan-Sigmoid transfer function. Various issues related to the training of the neural networks are addressed, including the number of neurons, error goals, spread constants and the accuracy of different models in representing the design space. A search for the optimum design is carried out using a standard gradient-based optimization algorithm over the response surfaces represented by the polynomials and trained neural networks. Usually a cubic polynomial performs better than the quadratic polynomial, but exceptions have been noticed. Among the NN choices, the RBNN designed using solverb yields more consistent performance for both engine components considered. The training of RBNN is easier as it requires linear regression. This, coupled with the consistency in performance, promises the possibility of it being used as an optimization strategy for engineering design problems.
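
    The response-surface step can be sketched generically: fit a full quadratic polynomial to the training points by least squares, then score it on held-out test points. The data below are synthetic stand-ins for the 45/20-point, three-variable injector data set, not the actual semi-empirical model outputs:

        import numpy as np

        def quadratic_features(X):
            """Columns: 1, x_i, x_i*x_j (i<=j) for a full quadratic response surface."""
            n, d = X.shape
            cols = [np.ones(n)]
            cols += [X[:, i] for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        X_train, X_test = rng.uniform(-1, 1, (45, 3)), rng.uniform(-1, 1, (20, 3))
        truth = lambda X: 1 + X @ [0.5, -1.0, 2.0] + 0.8 * X[:, 0] * X[:, 2]
        y_train = truth(X_train) + 0.05 * rng.standard_normal(45)

        # Least-squares fit on training data, then evaluate on test data,
        # mirroring the paper's model-selection step between RS and NN.
        beta, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)
        rmse = np.sqrt(np.mean((quadratic_features(X_test) @ beta - truth(X_test))**2))
        print(f"test RMSE = {rmse:.3f}")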

  18. Detecting organisational innovations leading to improved ICU outcomes: a protocol for a double-blinded national positive deviance study of critical care delivery

    PubMed Central

    Jopling, Jeffrey K; Scott, Jennifer Yang; Ramsey, Meghan; Vranas, Kelly; Wagner, Todd H; Milstein, Arnold

    2017-01-01

    Introduction There is substantial variability in intensive care unit (ICU) utilisation and quality of care. However, the factors that drive this variation are poorly understood. This study uses a novel adaptation of the positive deviance approach (a methodology used in public health that assumes solutions to challenges already exist within the system) to detect innovations that are likely to improve intensive care. Methods and analysis We used the Philips eICU Research Institute database, containing 3.3 million patient records from over 50 health systems across the USA. Acute Physiology and Chronic Health Evaluation IVa scores were used to identify the study cohort, which included ICU patients whose outcomes were felt to be most sensitive to organisational innovations. The primary outcomes included mortality and length of stay. Outcome measurements were directly standardised, and bootstrapped CIs were calculated with adjustment for false discovery rate. Using purposive sampling, we then generated a blinded list of five positive outliers and five negative comparators. Using rapid qualitative inquiry (RQI), blinded interdisciplinary site visit teams will conduct interviews and observations using a team ethnography approach. After data collection is completed, the data will be unblinded and analysed using a cross-case method to identify themes, patterns and innovations using a constant comparative grounded theory approach. This process detects the innovations in intensive care and supports an evaluation of how positive deviance and RQI methods can be adapted to healthcare. Ethics and dissemination The study protocol was approved by the Stanford University Institutional Review Board (reference: 39509). We plan on publishing study findings and methodological guidance in peer-reviewed academic journals, white papers and presentations at conferences. PMID:28615274

  19. A Practical Methodology for Disaggregating the Drivers of Drug Costs Using Administrative Data.

    PubMed

    Lungu, Elena R; Manti, Orlando J; Levine, Mitchell A H; Clark, Douglas A; Potashnik, Tanya M; McKinley, Carol I

    2017-09-01

    Prescription drug expenditures represent a significant component of health care costs in Canada, with estimates of $28.8 billion spent in 2014. Identifying the major cost drivers and the effect they have on prescription drug expenditures allows policy makers and researchers to interpret current cost pressures and anticipate future expenditure levels. To identify the major drivers of prescription drug costs and to develop a methodology to disaggregate the impact of each of the individual drivers. The methodology proposed in this study uses the Laspeyres approach for cost decomposition. This approach isolates the effect of the change in a specific factor (e.g., price) by holding the other factor(s) (e.g., quantity) constant at the base-period value. The Laspeyres approach is expanded to a multi-factorial framework to isolate and quantify several factors that drive prescription drug cost. Three broad categories of effects are considered: volume, price and drug-mix effects. For each category, important sub-effects are quantified. This study presents a new and comprehensive methodology for decomposing the change in prescription drug costs over time including step-by-step demonstrations of how the formulas were derived. This methodology has practical applications for health policy decision makers and can aid researchers in conducting cost driver analyses. The methodology can be adjusted depending on the purpose and analytical depth of the research and data availability. © 2017 Journal of Population Therapeutics and Clinical Pharmacology. All rights reserved.
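
    A sketch of the Laspeyres-style decomposition at the core of the methodology, reduced to two factors (price and quantity) with invented numbers; the paper extends the same idea to a multi-factorial framework that also isolates drug-mix effects:

        def laspeyres_decomposition(p0, q0, p1, q1):
            """Split the change in total drug cost into a price effect (quantity
            held at base-period values) and a volume effect (price held at
            base-period values), plus the residual interaction term."""
            cost0 = sum(p * q for p, q in zip(p0, q0))
            cost1 = sum(p * q for p, q in zip(p1, q1))
            price_effect = sum((pn - po) * qo for po, qo, pn in zip(p0, q0, p1))
            volume_effect = sum(po * (qn - qo) for po, qo, qn in zip(p0, q0, q1))
            interaction = (cost1 - cost0) - price_effect - volume_effect
            return price_effect, volume_effect, interaction

        # Two hypothetical drugs: base vs. current period prices and quantities.
        print(laspeyres_decomposition(p0=[10.0, 4.0], q0=[100, 500],
                                      p1=[11.0, 3.5], q1=[120, 650]))

    The three returned terms sum exactly to the total cost change, which is what makes such a decomposition usable for attributing expenditure growth to individual drivers.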

  20. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant

    ERIC Educational Resources Information Center

    Beach, Darrell H.

    1969-01-01

    Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)
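
    The underlying relation is Delta G(0) = -RT ln K, so K = exp(-Delta G(0)/RT). A one-function sketch (standard thermodynamics; the free-energy value is an arbitrary example):

        import math

        R = 8.314  # gas constant, J/(mol*K)

        def equilibrium_constant(dG0_kJ_per_mol, T=298.15):
            """K from the standard free energy change: K = exp(-dG0 / (R*T))."""
            return math.exp(-dG0_kJ_per_mol * 1000.0 / (R * T))

        # Example: dG0 = -11.4 kJ/mol at 25 C gives K of roughly 1e2.
        print(f"K = {equilibrium_constant(-11.4):.3g}")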

  1. [Comparative analysis of the efficacy of a playful-narrative program to teach mathematics at pre-school level].

    PubMed

    Gil Llario, M D; Vicent Catalá, Consuelo

    2009-02-01

    In this paper, the effectiveness of a programme comprising several components that are meant to consolidate mathematical concepts and abilities at the pre-school level is analyzed, and its instructional methodology is compared to other methodologies. One hundred 5- to 6-year-old children made up the sample, which was distributed across the following conditions: (1) traditional methodology; (2) methodology with perceptual and manipulative components, and (3) methodology with language and playful components. Mathematical competence was assessed with the Mathematical Criterial Pre-school Test and the subtest of quantitative-numeric concepts of BADyG. Participants were evaluated before and after the academic course during which they followed one of these methodologies. The results show that the programme with language and playful components is more effective than the traditional methodology (p<.000) and also more effective than the perceptual and manipulative methodology (p<.000). Implications of the results for instructional practices are analyzed.

  2. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  3. Susceptibility constants of airborne bacteria to dielectric barrier discharge for antibacterial performance evaluation.

    PubMed

    Park, Chul Woo; Hwang, Jungho

    2013-01-15

    Dielectric barrier discharge (DBD) is a promising method for removing contaminant bioaerosols. The collection efficiency of a DBD reactor is an important factor in determining the reactor's removal efficiency. Without considering collection, simply defining the inactivation efficiency from colony counts with the DBD switched on and off may lead to overestimation of the inactivation efficiency of the DBD reactor. One-pass removal tests of bioaerosols were carried out to deduce the inactivation efficiency of the DBD reactor using both aerosol- and colony-counting methods. Our DBD reactor showed good performance in removing test bioaerosols at an applied voltage of 7.5 kV and a residence time of 0.24 s, with η(CFU), η(Number), and η(Inactivation) values of 94%, 64%, and 83%, respectively. Additionally, we introduce the susceptibility constant of bioaerosols to DBD as a quantitative parameter for the performance evaluation of a DBD reactor. The modified susceptibility constant, which is the ratio of the susceptibility constant to the volume of the plasma reactor, has been successfully demonstrated for the performance evaluation of different-sized DBD reactors under different DBD operating conditions. Our methodology can be used for design optimization, performance evaluation, and prediction of power consumption of DBD for industrial applications. Copyright © 2012 Elsevier B.V. All rights reserved.
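
    If first-order (exponential) inactivation kinetics are assumed, a susceptibility constant can be backed out of a measured inactivation efficiency. The sketch below is our simplified reading, with residence time standing in for whatever exposure metric the study actually adopts:

        import math

        def susceptibility_constant(eta_inactivation, exposure):
            """Assuming first-order kinetics, survival = exp(-Z * exposure),
            so Z = -ln(1 - eta) / exposure.  The choice of exposure metric is
            our assumption, not the paper's definition."""
            return -math.log(1.0 - eta_inactivation) / exposure

        # Abstract's operating point: 83% inactivation over a 0.24 s residence
        # time, using residence time itself as the exposure metric here.
        print(f"Z ~ {susceptibility_constant(0.83, 0.24):.2f} per second")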

  4. Critical Thinking: Comparing Instructional Methodologies in a Senior-Year Learning Community

    ERIC Educational Resources Information Center

    Zelizer, Deborah A.

    2013-01-01

    This quasi-experimental, nonequivalent control group study compared the impact of Ennis's (1989) mixed instructional methodology to the immersion methodology on the development of critical thinking in a multicultural, undergraduate senior-year learning community. A convenience sample of students (n =171) were selected from four sections of a…

  5. Optical waveguides having flattened high order modes

    DOEpatents

    Messerly, Michael Joseph; Beach, Raymond John; Heebner, John Edward; Dawson, Jay Walter; Pax, Paul Henry

    2014-08-05

    A deterministic methodology is provided for designing optical fibers that support field-flattened, ring-like higher order modes. The effective and group indices of its modes can be tuned by adjusting the widths of the guide's field-flattened layers or the average index of certain groups of layers. The approach outlined here provides a path to designing fibers that simultaneously have large mode areas and large separations between the propagation constants of its modes.

  6. Axiomatic Analysis

    DTIC Science & Technology

    1978-09-01

    (Abstract not cleanly recoverable from OCR; the surviving fragments cite Hochstein and Shapley (1976a, b) and Levick as directly supporting the contention under discussion, note positive aspects of many of the surveyed methodologies, and sketch a typed statement about control structures of the form y = A(x), where y and x are INTEGERS and A is a constant FUNCTION.)

  7. A Joint Replenishment Inventory Model with Lost Sales

    NASA Astrophysics Data System (ADS)

    Devy, N. L.; Ai, T. J.; Astanti, R. D.

    2018-04-01

    This paper deals with a two-item joint replenishment inventory problem in which the demand for each item is constant and deterministic. Inventory replenishment is conducted periodically every T time units, and joint replenishment of both items is possible; item i is replenished every ZiT time units. Replenishments are instantaneous. All shortages are treated as lost sales, and the maximum allowance for lost sales of item i is Si. A mathematical model is formulated to determine the basic time cycle T, the replenishment multipliers Zi, and the maximum lost sales Si that minimize the total cost per unit time. A solution methodology is proposed to solve the model, and a numerical example demonstrates its effectiveness.
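
    A sketch of how a model of this type is commonly searched: for candidate integer multipliers Z_i, compute the cost-optimal basic cycle T and keep the cheapest combination. The cost structure below is a generic joint-replenishment form with the lost-sales terms omitted for brevity, not the paper's exact formulation:

        import math
        from itertools import product

        # Generic joint replenishment inputs (invented values).
        K = 100.0           # major ordering cost per joint replenishment
        k = [10.0, 15.0]    # minor ordering cost of each item
        d = [400.0, 150.0]  # constant demand rates
        h = [0.8, 1.2]      # holding costs per unit per unit time

        def cost_given_Z(Z):
            """Optimal basic cycle T and cost for integer multipliers Z_i:
            TC(T, Z) = (K + sum k_i/Z_i)/T + (T/2) * sum h_i*d_i*Z_i."""
            order = K + sum(ki / zi for ki, zi in zip(k, Z))
            hold = sum(hi * di * zi for hi, di, zi in zip(h, d, Z))
            T = math.sqrt(2.0 * order / hold)
            return order / T + 0.5 * T * hold, T

        best = min((cost_given_Z(Z) + (Z,) for Z in product(range(1, 6), repeat=2)),
                   key=lambda r: r[0])
        print(f"best cost {best[0]:.2f} at T = {best[1]:.3f}, Z = {best[2]}")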

  8. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  9. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
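
    The simplest of the compared approaches, crude Monte Carlo, can be sketched directly: draw random strength and stress, and count failures of the limit-state function g = strength - stress. The normal distributions below are synthetic illustrations, not composite laminae data:

        import numpy as np

        rng = np.random.default_rng(42)

        def mc_reliability(n=1_000_000):
            """Estimate P(failure) = P(g < 0) with g = strength - stress."""
            strength = rng.normal(loc=600.0, scale=40.0, size=n)  # e.g., MPa
            stress = rng.normal(loc=450.0, scale=50.0, size=n)
            return np.mean(strength - stress < 0.0)

        pf = mc_reliability()
        print(f"P_f ~ {pf:.2e}, reliability ~ {1 - pf:.5f}")

    Importance sampling improves on this by concentrating draws near the failure surface, which is what makes the advanced technique viable at small failure probabilities.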

  10. Deduced elasticity of sp3-bonded amorphous diamond

    NASA Astrophysics Data System (ADS)

    Ballato, J.; Ballato, A.

    2017-11-01

    Amorphous diamond was recently synthesized using high temperature and pressure techniques [Z. Zeng, L. Yang, Q. Zeng, H. Lou, H. Sheng, J. Wen, D. J. Miller, Y. Meng, W. Yang, W. L. Mao, and H. K. Mao, Nat. Commun. 8, 322 (2017)]. Here, selected physical properties of this new phase of carbon are deduced using an extension of the Voigt-Reuss-Hill (VRHx) methodology whereby single crystal values are averaged over all orientations to yield values for the amorphous analog. Specifically, the elastic constants were deduced to be c11 = 1156.5 GPa, c12 = 87.6 GPa, and c44 = 534.5 GPa, whereas the Young's modulus, bulk modulus, and Poisson's ratio were also estimated to be 1144.2 GPa, 443.9 GPa, and 0.0704, respectively. These numbers are compared with experimental and theoretical literature values for other allotropic forms, specifically, Lonsdaleite, and two forms each of graphite and amorphous carbon. It is unknown at this time how the high temperature and pressure synthesis approach employed influences the structure, hence properties, of amorphous diamond at room temperature. However, the values provided herein constitute a baseline against which future structure/property/processing analyses can be compared.
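
    For cubic symmetry, the Voigt-Reuss-Hill averages follow in closed form from c11, c12, and c44, so the quoted moduli can be checked directly. The short script below applies the standard cubic VRH formulas to the abstract's elastic constants and recovers the reported bulk modulus, Young's modulus, and Poisson's ratio.

      # Reproducing the quoted isotropic moduli from the deduced cubic
      # elastic constants via the standard Voigt-Reuss-Hill averages.
      c11, c12, c44 = 1156.5, 87.6, 534.5        # GPa, from the abstract

      K = (c11 + 2 * c12) / 3                    # bulk modulus (Voigt = Reuss for cubic)
      G_V = (c11 - c12 + 3 * c44) / 5            # Voigt shear bound
      G_R = 5 * c44 * (c11 - c12) / (4 * c44 + 3 * (c11 - c12))  # Reuss bound
      G = 0.5 * (G_V + G_R)                      # Hill average

      E = 9 * K * G / (3 * K + G)                # Young's modulus
      nu = (3 * K - 2 * G) / (2 * (3 * K + G))   # Poisson's ratio

      print(f"K = {K:.1f} GPa, E = {E:.1f} GPa, nu = {nu:.4f}")
      # -> K = 443.9 GPa, E = 1144.2 GPa, nu = 0.0704, matching the abstract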

  11. Calculation of kinetic rate constants from thermodynamic data

    NASA Technical Reports Server (NTRS)

    Marek, C. John

    1995-01-01

    A new scheme for relating the absolute value of the kinetic rate constant k to the thermodynamic equilibrium constant Kp is developed for gases. In this report, the forward and reverse rate constants are individually related to the thermodynamic data. The kinetic rate constants computed from thermodynamics compare well with current kinetic rate constants. This method is self-consistent and does not require extensive rules. It is first demonstrated and calibrated by computing the formation of HBr from H2 and Br2. The method is then applied to other reactions.
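
    The underlying relation, that detailed balance fixes the ratio of forward to reverse rate constants at the thermodynamic equilibrium constant, can be illustrated in a few lines. All numerical values below are invented for illustration, and the Kp/Kc distinction for reactions with a change in mole number is ignored; this is not the report's calibration scheme.

      # Minimal illustration of detailed balance: k_f / k_r = K_eq, with
      # K_eq obtained from thermodynamic data. All numbers are invented.
      from math import exp

      R = 8.314          # gas constant, J/(mol K)
      T = 1500.0         # temperature, K

      dG = -35_000.0     # assumed standard Gibbs energy of reaction, J/mol
      K_eq = exp(-dG / (R * T))   # equilibrium constant from thermodynamics

      k_f = 2.0e9        # assumed forward rate constant
      k_r = k_f / K_eq   # reverse rate constant fixed by detailed balance

      print(f"K_eq = {K_eq:.3e}, k_r = {k_r:.3e}")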

  12. Comparative Physical Education and Sport. Second Edition.

    ERIC Educational Resources Information Center

    Bennett, Bruce L.; And Others

    Educational theories and practice in the field of physical education and sport in various countries are discussed and compared. Chapters address: (1) comparative physical education and sport; (2) history and methodology of comparative education; (3) history and methodology of comparative physical education and sport; (4) physical education in the…

  13. Reaction rate constants and mean population percentage for nitrifiers in an alternating oxidation ditch system.

    PubMed

    Mantziaras, I D; Katsiri, A

    2011-01-01

    This paper presents a methodology for the determination of reaction rate constants for nitrifying bacteria and their mean population percentage in biomass in an alternating oxidation ditch system. The method used is based on the growth rate equations of the ASM1 model (IWA) (Henze et al. in Activated sludge models ASM1, ASM2, ASM2d, and ASM3. IWA Scientific and Technical Report no. 9, IWA Publishing, London, UK, 2000) and the application of mass balance equations for nitrifiers and ammonium nitrogen in an operational cycle of the ditch system. The system consists of two ditches operating in four phases. Data from a large-scale oxidation ditch pilot plant with a total volume of 120 m^3, collected over an experimental period of 8 months, was used. The maximum specific growth rate for autotrophs (μ_A) and the half-saturation constant for ammonium nitrogen (K_NH) were found to be 0.36 day^-1 and 0.65 mg NH4-N/L, respectively. Additionally, the average population percentage of nitrifiers in the biomass was estimated to be around 3%.
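
    The growth expression behind these constants is a Monod term in ammonium nitrogen. The sketch below evaluates it with the reported values; note that the full ASM1 autotroph rate also includes an oxygen switching function, omitted here for brevity.

      # Evaluating the Monod growth term with the reported constants
      # (mu_A = 0.36 d^-1, K_NH = 0.65 mg NH4-N/L); oxygen term omitted.
      mu_A = 0.36      # maximum specific growth rate for autotrophs, 1/day
      K_NH = 0.65      # half-saturation constant for ammonium N, mg/L

      def mu(S_NH):
          """Specific nitrifier growth rate at ammonium concentration S_NH."""
          return mu_A * S_NH / (K_NH + S_NH)

      for S in (0.5, 2.0, 10.0):   # example ammonium levels, mg NH4-N/L
          print(f"S_NH = {S:5.1f} mg/L -> mu = {mu(S):.3f} 1/day")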

  14. Interaction between a cationic porphyrin and ctDNA investigated by SPR, CV and UV-vis spectroscopy.

    PubMed

    Xu, Zi-Qiang; Zhou, Bo; Jiang, Feng-Lei; Dai, Jie; Liu, Yi

    2013-10-01

    The interaction between ctDNA and a cationic porphyrin was studied in this work. The binding process was monitored in detail by surface plasmon resonance (SPR) spectroscopy. The association rate constant, dissociation rate constant, and binding constant calculated by global analysis were 2.4×10^2 ± 26.4 M^-1 s^-1, 0.011 ± 0.0000056 s^-1, and 2.18×10^4 M^-1, respectively. The results were confirmed by cyclic voltammetry and UV-vis absorption spectroscopy, which gave binding constants of 8.28×10^4 M^-1 and 6.73×10^4 M^-1 at 298 K, respectively. The covalent immobilization of ctDNA onto gold surfaces modified with three different compounds was also investigated by SPR. These compounds all contain a sulfhydryl group but have different terminal functional groups. The results indicated that the 11-MUA (HS(CH2)10COOH)-modified gold film is more suitable for studying DNA-drug interactions. Copyright © 2013 Elsevier B.V. All rights reserved.
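
    For 1:1 binding kinetics, the equilibrium association constant follows directly from the two rate constants, which makes the reported values easy to cross-check:

      # Consistency check of the SPR kinetics: for 1:1 binding,
      # K_A = k_on / k_off and K_D = 1 / K_A.
      k_on = 2.4e2     # association rate constant, M^-1 s^-1 (abstract)
      k_off = 0.011    # dissociation rate constant, s^-1 (abstract)

      K_A = k_on / k_off
      print(f"K_A = {K_A:.3g} M^-1, K_D = {1 / K_A:.3g} M")
      # -> K_A ~ 2.18e4 M^-1, matching the reported binding constant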

  15. Nonlinear stability and control study of highly maneuverable high performance aircraft, phase 2

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.

    1992-01-01

    This research should lead to the development of new nonlinear methodologies for the adaptive control and stability analysis of high angle-of-attack aircraft such as the F18 (HARV). The emphasis has been on nonlinear adaptive control, but the associated model development, system identification, stability analysis, and simulation are performed in some detail as well. Various models under investigation for different purposes are summarized in tabular form. Models and simulation for the longitudinal dynamics have been developed for all types except the nonlinear ordinary differential equation model. Briefly, the studies completed indicate that nonlinear adaptive control can outperform linear adaptive control for rapid maneuvers with large changes in alpha. The transient responses are compared where the desired alpha varies from 5 degrees to 60 degrees to 30 degrees and back to 5 degrees, all in about 16 sec. Here, the horizontal stabilator is the only control used, with an assumed first-order linear actuator with a 1/30 sec time constant.
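
    The actuator model mentioned at the end is a standard first-order lag. A minimal sketch, with invented command values, integrates d(delta)/dt = (delta_cmd - delta)/tau for tau = 1/30 s:

      # First-order actuator lag with a 1/30 s time constant, integrated
      # by forward Euler. Command values are invented for illustration.
      import numpy as np

      tau = 1.0 / 30.0               # actuator time constant, s
      dt = 0.001                     # integration step, s
      t = np.arange(0.0, 0.5, dt)

      delta_cmd = np.where(t < 0.25, 10.0, -5.0)   # step commands, deg
      delta = np.zeros_like(t)
      for k in range(1, t.size):
          delta[k] = delta[k - 1] + dt * (delta_cmd[k - 1] - delta[k - 1]) / tau

      print(f"deflection at t = 0.2 s: {delta[t.searchsorted(0.2)]:.2f} deg")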

  16. Uncovering the features of negotiation in developing the patient-nurse relationship.

    PubMed

    Stoddart, Kathleen; Bugge, Carol

    2012-02-01

    This article describes a study that set out to explore the interaction between patients and nurses in community practice settings, in order to understand the social meanings and understandings brought to the interaction and at play within it. The study used a grounded theory methodology with traditional procedures. Driven by constant comparative analysis, data were collected by non-participant observation and informal and semi-structured interviews in four community health centres. Eighteen patients and 18 registered practice nurses participated. Negotiation was found to be a fundamental process in patient-nurse interaction. Navigation, socio-cultural characteristics and power and control were found to be key properties of negotiation. The negotiation processes for developing understanding required patients and nurses to draw upon social meanings and understandings generated from within and beyond their current interaction. Social meanings and understandings created within and beyond the health-care setting influence negotiation. The developmental nature of negotiation in interaction is an important dimension of the patient-nurse relationship in community practice.

  17. An investigation of the self-heating phenomenon in viscoelastic materials subjected to cyclic loadings accounting for prestress

    NASA Astrophysics Data System (ADS)

    de Lima, A. M. G.; Rade, D. A.; Lacerda, H. B.; Araújo, C. A.

    2015-06-01

    It has been demonstrated by many authors that the internal damping mechanism of viscoelastic materials offers many possibilities for practical engineering applications. However, traditional procedures for the analysis and design of viscoelastic dampers subjected to cyclic loadings generally assume a uniform, constant temperature and do not take the self-heating phenomenon into account. Moreover, for viscoelastic materials subjected to dynamic loadings superimposed on static preloads, such as engine mounts, these procedures can lead to poor designs or even severe failures, since the energy dissipated within the volume of the material leads to temperature rises. In this paper, a hybrid numerical-experimental investigation of the effects of static preloads on the self-heating phenomenon in viscoelastic dampers subjected to harmonic loadings is reported. After presenting the theoretical foundations, the numerical and experimental results obtained in terms of the temperature evolutions at different points within the volume of the viscoelastic material for various static preloads are compared, and the main features of the methodology are discussed.

  18. Embedded function methods for supersonic turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    He, J.; Kazakia, J. Y.; Walker, J. D. A.

    1990-01-01

    The development of embedded functions to represent the mean velocity and total enthalpy distributions in the wall layer of a supersonic turbulent boundary layer is considered. The asymptotic scaling laws (in the limit of large Reynolds number) for high speed compressible flows are obtained to facilitate eventual implementation of the embedded functions in a general prediction method. A self-consistent asymptotic structure is derived, as well as a compressible law of the wall in which the velocity and total enthalpy are logarithmic within the overlap zone, but in the Howarth-Dorodnitsyn variable. Simple outer region turbulence models are proposed (some of which are modifications of existing incompressible models) to reflect the effects of compressibility. As a test of the methodology and the new turbulence models, a set of self-similar outer region profiles is obtained for constant pressure flow; these are then coupled with embedded functions in the wall layer. The composite profiles thus obtained are compared directly with experimental data and good agreement is obtained for flows with Mach numbers up to 10.

  19. X-ray and Electrochemical Impedance Spectroscopy Diagnostic Investigations of Liquid Water in Polymer Electrolyte Membrane Fuel Cell Gas Diffusion Layers

    NASA Astrophysics Data System (ADS)

    Antonacci, Patrick

    In this thesis, electrochemical impedance spectroscopy (EIS) and synchrotron x-ray radiography were utilized to characterize the impact of liquid water distributions in polymer electrolyte membrane fuel cell (PEMFC) gas diffusion layers (GDLs) on fuel cell performance. These diagnostic techniques were used to quantify the effects of the visualized liquid water on equivalent resistances measured through EIS. The effects of varying the thickness of the microporous layer (MPL) of GDLs were studied using these diagnostic techniques. In a first study on the feasibility of this methodology, two fuel cell cases with a 100 μm-thick and a 150 μm-thick MPL were compared under constant current density operation. In a second study with 10, 30, 50, and 100 μm-thick MPLs, the liquid water in the cathode substrate was demonstrated to affect mass transport resistance, while the liquid water content in the anode (from back diffusion) affected membrane hydration, as evidenced through ohmic resistance measurements.

  20. A Grounded Theory Investigation Into Sophomore Students' Recall of Depression During Their Freshman Year in College: A Pilot Study.

    PubMed

    Brandy, Julie M; Kessler, Theresa A; Grabarek, Christina H

    2018-04-17

    Using a grounded theory approach, this descriptive qualitative study was conducted with sophomore students to understand the meaning participants gave their freshman experiences with depression. Twelve participants were recruited using scripted class announcements across campus. After informed consent, interviews began with the question: What was the experience of your freshman year in college? All interviews were completed with the primary investigator and transcribed verbatim. Interviews were analyzed using constant comparative methodology. Data collection continued until saturation was achieved. Four major categories emerged, including the category of symptoms and emotions. This category included the subcategories expressions of stress, changes in eating habits, sleep issues, and procrastination. Descriptive examples of each were found throughout the interview data. With greater understanding of living with depression as a college freshman, health care and college student affairs professionals will have additional evidence to guide their practices. [Journal of Psychosocial Nursing and Mental Health Services, xx(x), xx-xx.]. Copyright 2018, SLACK Incorporated.

  1. Analysis of the bond-valence method for calculating 29Si and 31P magnetic shielding in covalent network solids.

    PubMed

    Holmes, Sean T; Alkan, Fahri; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2016-07-05

    29Si and 31P magnetic-shielding tensors in covalent network solids have been evaluated using periodic and cluster-based calculations. The cluster-based computational methodology employs pseudoatoms to reduce the net charge (resulting from missing coordination on the terminal atoms) through valence modification of terminal atoms using bond-valence theory (VMTA/BV). The magnetic-shielding tensors computed with the VMTA/BV method are compared to magnetic-shielding tensors determined with the periodic GIPAW approach. The cluster-based all-electron calculations agree with experiment better than the GIPAW calculations, particularly for predicting absolute magnetic shielding and for predicting chemical shifts. The performance of the DFT functionals CA-PZ, PW91, PBE, rPBE, PBEsol, WC, and PBE0 is assessed for the prediction of 29Si and 31P magnetic-shielding constants. Calculations using the hybrid functional PBE0, in combination with the VMTA/BV approach, result in excellent agreement with experiment. © 2016 Wiley Periodicals, Inc.

  2. Measuring Dilution of Microbicide Gels with Optical Imaging

    PubMed Central

    Drake, Tyler K.; Shah, Tejen; Peters, Jennifer J.; Wax, Adam; Katz, David F.

    2013-01-01

    We present a novel approach for measuring topical microbicide gel dilution using optical imaging. The approach compares gel thickness measurements from fluorimetry and multiplexed low coherence interferometry (mLCI) in order to calculate the dilution of a gel. As a microbicide gel becomes diluted at fixed thickness, its mLCI thickness measurement remains constant, while the fluorimetry signal decreases in intensity. The difference between the two measurements is related to the extent of gel dilution. These two optical modalities are implemented in a single endoscopic instrument that enables simultaneous data collection. A preliminary validation study was performed with in vitro placebo gel measurements taken in a controlled test socket. It was found that the change in slope of the regression line between fluorimetry and mLCI based measurements indicates dilution. A dilution calibration curve was then generated by repeating the test socket measurements with serial dilutions of placebo gel with vaginal fluid simulant. This methodology can provide valuable dilution information on candidate microbicide products, which could substantially enhance our understanding of their in vivo functioning. PMID:24340006

  3. From feelings of imprisonment to group cohesion: A qualitative analysis of group analytic psychotherapy with dual diagnosed patients admitted to an acute inpatient psychiatric unit.

    PubMed

    Sánchez Morales, Lidia; Eiroa-Orosa, Francisco José; Valls Llagostera, Cristina; González Pérez, Alba; Alberich, Cristina

    2018-05-01

    Group cohesion, the establishment of hope, and the expression of feelings have been said to be the basic ingredients of group psychotherapy. To date, there is little literature describing therapeutic processes in short-stay settings such as acute psychiatric wards and with special patient groups such as those with addictions. Our goal with this study is to describe and analyze group processes in such contexts. We used a qualitative methodology combining constant comparative methods and hermeneutical triangulation to analyze therapeutic narratives in the context of a group analytic process carried out following Foulkes' and Yalom's styles. The results provide a picture of the therapeutic process, including the use of norms to strengthen group cohesion and facilitate the expression of emotions in early stages of group development. This analysis is intended to be a guide for practitioners implementing group therapy in contexts involving several constraints, such as acute psychiatric wards.

  4. Improved detection of radioactive material using a series of measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jenelle

    The goal of this project is to develop improved algorithms for the detection of radioactive sources that have a low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background, the decision threshold (y*) is calculated by taking a long background count and limiting the false-positive (alpha) error to 5%. Some problems with this method include: the background is constantly changing due to natural environmental fluctuations, and large amounts of data taken as the detector continuously scans are not utilized. Rather than looking at a single measurement, this work investigates looking at a series of N measurements and develops an appropriate decision threshold for exceeding the single-measurement threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
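
    An n-of-N rule of this kind can be sized with the binomial tail. In the sketch below, the per-measurement exceedance probability p and the series length N are illustrative assumptions; the code finds the smallest n that keeps the overall false-alarm probability at or below 5%, assuming independent measurements.

      # Sizing an n-of-N decision rule with the binomial tail. The
      # per-trial probability p and series length N are assumptions.
      from math import comb

      def prob_at_least(n, N, p):
          """P(at least n of N independent exceedances)."""
          return sum(comb(N, k) * p**k * (1 - p)**(N - k)
                     for k in range(n, N + 1))

      p, N, alpha = 0.05, 20, 0.05
      n_star = next(n for n in range(1, N + 1)
                    if prob_at_least(n, N, p) <= alpha)
      print(f"alarm if >= {n_star} of {N} exceedances "
            f"(false-alarm prob = {prob_at_least(n_star, N, p):.4f})")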

  5. Shifting contours of boundaries: an exploration of inter-agency integration between hospital and community interprofessional diabetes programs.

    PubMed

    Wong, Rene; Breiner, Petra; Mylopoulos, Maria

    2014-09-01

    This article reports on research into the relationships that emerged between hospital-based and community-based interprofessional diabetes programs involved in inter-agency care. Using constructivist grounded theory methodology we interviewed a purposive theoretical sample of 21 clinicians and administrators from both types of programs. Emergent themes were identified through a process of constant comparative analysis. Initial boundaries were constructed based on contrasts in beliefs, practices and expertise. In response to bureaucratic and social pressures, boundaries were redefined in a way that created role uncertainty and disempowered community programs, ultimately preventing collaboration. We illustrate the dynamic and multi-dimensional nature of social and symbolic boundaries in inter-agency diabetes care and the tacit ways in which hospitals can maintain a power position at the expense of other actors in the field. As efforts continue in Canada and elsewhere to move knowledge and resources into community sectors, we highlight the importance of hospitals seeing beyond their own interests and adopting more altruistic models of inter-agency integration.

  6. Investigation of two- and three-bond carbon-hydrogen coupling constants in cinnamic acid based compounds.

    PubMed

    Pierens, Gregory K; Venkatachalam, Taracad K; Reutens, David C

    2016-12-01

    Two- and three-bond coupling constants (2J(HC) and 3J(HC)) were determined for a series of 12 substituted cinnamic acids using a selective 2D inphase/antiphase (IPAP) heteronuclear single quantum multiple bond correlation (HSQMBC) experiment and 1D proton-coupled 13C NMR experiments. The coupling constants from the two methods were compared and found to give very similar values. The results showed coupling constant values ranging from 1.7 to 9.7 Hz and 1.0 to 9.6 Hz for the IPAP-HSQMBC and the direct 13C NMR experiments, respectively. The experimental values of the coupling constants were compared with values calculated by density functional theory (DFT) and were found to be in good agreement for 3J(HC). However, the DFT method underestimated the 2J(HC) coupling constants. Knowing the limitations of the measurement and calculation of these multibond coupling constants will add confidence to the assignment of conformation or stereochemical aspects of complex molecules such as natural products. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Instanton rate constant calculations close to and above the crossover temperature.

    PubMed

    McConnell, Sean; Kästner, Johannes

    2017-11-15

    Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.

  8. Data Analysis and Its Impact on Predicting Schedule & Cost Risk

    DTIC Science & Technology

    2006-03-01

    ...variance of the error term by performing a Breusch-Pagan test for constant variance (Neter et al., 1996:239). In order to test the normality of... is constant variance. Using Microsoft Excel®, we calculate a p-value of 0.225678 for the Breusch-Pagan test. We again compare this p-value to... calculate a p-value of 0.121211092 for the Breusch-Pagan test. We again compare this p-value to an alpha of 0.05, indicating our assumption of constant variance...

  9. Health Systems and Their Assessment: A Methodological Proposal of the Synthetic Outcome Measure

    PubMed Central

    Romaniuk, Piotr; Kaczmarek, Krzysztof; Syrkiewicz-Świtała, Magdalena; Holecki, Tomasz; Szromek, Adam R.

    2018-01-01

    The effectiveness of health systems is an area of constant interest for public health researchers and practitioners. The varied approach to effectiveness itself has resulted in numerous methodological proposals related to its measurement. The limitations of the currently used methods lead to a constant search for better tools for the assessment of health systems. This article shows the possibilities of using the health system synthetic outcome measure (SOM) for this purpose. It is an original tool using 41 indicators referring to the epidemiological situation, health behaviors, and factors related to the health-care system, which allows a relatively quick and easy assessment of the health system in terms of its effectiveness. Constructing the measure of health system functioning in this way allowed its presentation in a dynamic perspective, i.e., assessing not only the health system itself at a given moment in time but also changes in the value of the effectiveness measures. In order to demonstrate the cognitive value of the SOM, an analysis of the effectiveness of health systems in 21 countries of Central and Eastern Europe during the transformation period was carried out. The mean SOM values calculated on the basis of the component measures made it possible to differentiate countries in terms of the effectiveness of their health systems. Considering the whole period, a similar level of health system effects can be observed in Slovenia, Croatia, the Czech Republic, Slovakia, Poland, Macedonia, and Albania. In the middle group were Hungary, Romania, Latvia, Lithuania, Georgia, Estonia, Bulgaria, Belarus, and Armenia. The third group, weakest in terms of achieved effects, was formed by the health systems of Ukraine, Moldova, and Russia. The presented method allows for the analysis of health system outcomes from a comparative angle, eliminating the arbitrariness of pinpointing a model solution as a potential reference point in the assessment of the systems. The measure, with the use of additional statistical tools to establish correlations with elements of the external and internal environment of a health system, allows for analyses of the conditions underlying differences in the effects of health system operation and the circumstances affecting the effectiveness of reform processes.

  10. Probabilistic material strength degradation model for Inconel 718 components subjected to high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    This report presents the results of both the fifth and sixth year efforts of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA). The research included on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for five variables, namely, high temperature, high-cycle and low-cycle mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was also developed. Using an updated version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of high-cycle mechanical fatigue, creep, and thermal fatigue was performed. Then, using the current version of PROMISS, entitled PROMISS94, a second sensitivity study including the effect of low-cycle mechanical fatigue, as well as the three previous effects, was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies were conducted comparing the combination of high-cycle mechanical fatigue and high temperature effects predicted by the model with the combination observed in experiment. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.

  12. Finite-Temperature Behavior of PdHx Elastic Constants Computed by Direct Molecular Dynamics

    DOE PAGES

    Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...

    2017-05-30

    In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH0.6 only match well with ultrasonic data at 10 K, whereas at 500 K the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.

  13. First Principles Investigation of Fluorine Based Strontium Series of Perovskites

    NASA Astrophysics Data System (ADS)

    Erum, Nazia; Azhar Iqbal, Muhammad

    2016-11-01

    Density functional theory is used to explore the structural, elastic, and mechanical properties of SrLiF3, SrNaF3, SrKF3, and SrRbF3 fluoroperovskite compounds by means of an ab-initio Full Potential-Linearized Augmented Plane Wave (FP-LAPW) method. Several lattice parameters are employed to obtain an accurate equilibrium volume (Vo). The resultant quantities include the ground state energy, elastic constants, shear modulus, bulk modulus, Young's modulus, Cauchy's pressure, Poisson's ratio, shear constant, elastic anisotropy factor, Kleinman's parameter, melting temperature, and Lamé's coefficients. The calculated structural parameters via DFT as well as analytical methods are found to be consistent with experimental findings. Chemical bonding is used to investigate the corresponding chemical trends, which confirm a combination of covalent-ionic behavior. Furthermore, electron density plots as well as elastic and mechanical properties are reported for the first time, revealing that the fluorine-based strontium series of perovskites is mechanically stable and possesses weak resistance to shear deformation compared with its resistance to unidirectional compression, while brittleness and ionic behavior dominate and decrease from SrLiF3 to SrRbF3. The calculated Cauchy's pressure, Poisson's ratio, and B/G ratio also confirm the ionic nature of these compounds. The present methodology represents an effective and powerful approach to calculating the whole set of elastic and mechanical parameters, which would help in understanding various physical phenomena and empower device engineers to implement these materials in numerous applications.

  14. Probing cochlear tuning and tonotopy in the tiger using otoacoustic emissions.

    PubMed

    Bergevin, Christopher; Walsh, Edward J; McGee, JoAnn; Shera, Christopher A

    2012-08-01

    Otoacoustic emissions (sound emitted from the ear) allow cochlear function to be probed noninvasively. The emissions evoked by pure tones, known as stimulus-frequency emissions (SFOAEs), have been shown to provide reliable estimates of peripheral frequency tuning in a variety of mammalian and non-mammalian species. Here, we apply the same methodology to explore peripheral auditory function in the largest member of the cat family, the tiger (Panthera tigris). We measured SFOAEs in 9 unique ears of 5 anesthetized tigers. The tigers, housed at the Henry Doorly Zoo (Omaha, NE), were of both sexes and ranged in age from 3 to 10 years. SFOAE phase-gradient delays are significantly longer in tigers--by approximately a factor of two above 2 kHz and even more at lower frequencies--than in domestic cats (Felis catus), a species commonly used in auditory studies. Based on correlations between tuning and delay established in other species, our results imply that cochlear tuning in the tiger is significantly sharper than in domestic cat and appears comparable to that of humans. Furthermore, the SFOAE data indicate that tigers have a larger tonotopic mapping constant (mm/octave) than domestic cats. A larger mapping constant in tiger is consistent both with auditory brainstem response thresholds (that suggest a lower upper frequency limit of hearing for the tiger than domestic cat) and with measurements of basilar-membrane length (about 1.5 times longer in the tiger than domestic cat).

  15. Self-ordered, controlled structure nanoporous membranes using constant current anodization.

    PubMed

    Lee, Kwan; Tang, Yun; Ouyang, Min

    2008-12-01

    We report a constant current (CC) based anodization technique to fabricate and control the structure of mechanically stable anodic aluminum oxide (AAO) membranes with a long-range ordered hexagonal nanopore pattern. For the first time we show that the interpore distance (Dint) of a self-ordered nanopore feature can be continuously tuned over a broad range with CC anodization and is uniquely defined by the conductivity of the sulfuric acid electrolyte. We further demonstrate that this technique can offer new degrees of freedom for engineering planar nanopore structures by finely tailoring the CC based anodization process. Our results not only facilitate further understanding of the self-ordering mechanism of alumina membranes but also provide a fast, simple (without requirement of prepatterning or a preoxide layer), and flexible methodology for controlling complex nanoporous structures, thus offering promising practical applications in nanotechnology.

  16. Variability of phase and amplitude fronts due to horizontal refraction in shallow water.

    PubMed

    Katsnelson, Boris G; Grigorev, Valery A; Lynch, James F

    2018-01-01

    The variability of the interference pattern of a narrow-band sound signal in a shallow water waveguide in the horizontal plane in the presence of horizontal stratification, in particular due to linear internal waves, is studied. It is shown that lines of constant phase (a phase front) and lines of constant amplitude/envelope (an amplitude front) for each waveguide mode may have different directions in the spatial vicinity of the point of reception. The angle between them depends on the waveguide's parameters, the mode number, and the sound frequency. Theoretical estimates and data processing methodology for obtaining these angles from experimental data recorded by a horizontal line array are proposed. The behavior of the angles, which are obtained for two episodes from the Shallow Water 2006 (SW06) experiment, show agreement with the theory presented.

  17. In-situ GPR test for three-dimensional mapping of the dielectric constant in a rock mass

    NASA Astrophysics Data System (ADS)

    Elkarmoty, Mohamed; Colla, Camilla; Gabrielli, Elena; Papeschi, Paolo; Bonduà, Stefano; Bruno, Roberto

    2017-11-01

    Ground Penetrating Radar (GPR) is used to detect subsurface anomalies in several applications. The more accurately the velocity of propagation or the dielectric constant is estimated, the more accurately anomalies can be located at their true subsurface depth. Since many GPR applications are performed in rock masses of a non-homogeneous, discontinuous nature, errors in estimating a bulk velocity of propagation or dielectric constant are possible. This paper presents a new in-situ GPR test for mapping the variability of the dielectric constant in a rock mass. The main aim is to investigate to what extent the dielectric constant varies at the micro and macro scale of a typical rock mass and to draw the attention of GPR users to rock mass mediums. The methodology of this research is based on the insertion of steel rods in a rock mass, thus acting as reflectors. The velocity of propagation can then be modeled, from hyperbolic reflections, in the form of velocity pathways from antenna positions to a buried rod. Each pathway is characterized by discrete points which are assumed in three dimensions to be centers of micro cubes of rock mass. This allows converting the velocity of propagation into a dielectric constant for mapping and modeling the dielectric constant in a volumetric rock mass using a volumetric data visualization software program (Voxler). In a case study, 6 steel drilling rods were diagonally inserted in a vertical face of a bench in a sandstone quarry. Five equally spaced parallel lines, almost perpendicular to the orientations of the rods, were surveyed with a dual-frequency GPR antenna of 200 and 600 MHz. The results show that the dielectric constant varies randomly at the micro and macro scale, both in single radargrams and in the volumetric rock mass. The proposed method can be useful if incorporated in signal processing software programs, particularly in the presence of subsurface utilities of known geometry and dimension, allowing conversion of two-way travel time, through portions of a radargram, into more reliable depths using discrete dielectric constant values instead of one value for a whole radargram.
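
    The conversion at the heart of the method is elementary: a known reflector depth and a picked two-way travel time give a velocity, and the velocity gives the dielectric constant. A minimal sketch with invented numbers:

      # Relative permittivity from a known reflector depth and a picked
      # two-way travel time: v = 2d / t, eps_r = (c / v)^2.
      C0 = 2.998e8                 # speed of light in vacuum, m/s

      def dielectric_constant(depth_m, twt_ns):
          """Relative permittivity from depth and two-way time."""
          v = 2.0 * depth_m / (twt_ns * 1e-9)   # propagation velocity, m/s
          return (C0 / v) ** 2

      # Invented example: a 0.60 m deep rod with an 8 ns two-way time
      # gives v = 0.15 m/ns and eps_r ~ 4.0.
      print(f"eps_r = {dielectric_constant(0.60, 8.0):.2f}")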

  18. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude.

    PubMed

    Keefe, Douglas H; Feeney, M Patrick; Hunter, Lisa L; Fitzpatrick, Denis F

    2017-01-01

    Methods for calibrating transient acoustic stimuli in the human ear canal are contrasted that utilize measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by the acoustically estimated ear-canal area at the probe tip and the length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency, and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz.

  19. Constant-roll tachyon inflation and observational constraints

    NASA Astrophysics Data System (ADS)

    Gao, Qing; Gong, Yungui; Fei, Qin

    2018-05-01

    For constant-roll tachyon inflation, we derive analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts, and the tensor-to-scalar ratio to first order in ε1 by using the method of Bessel function approximation. The derived ns-r results are compared with the observations; we find that only the constant-roll inflation with ηH being a constant is consistent with the observations, and the observations constrain the constant-roll inflation to be slow-roll inflation. The tachyon potential is also reconstructed for the constant-roll inflation that is consistent with the observations.

  20. Development of an improved method of consolidating fatigue life data

    NASA Technical Reports Server (NTRS)

    Leis, B. N.; Sampath, S. G.

    1978-01-01

    A fatigue data consolidation model that incorporates recent advances in life prediction methodology was developed. A combined analytic and experimental study of fatigue of notched 2024-T3 aluminum alloy under constant amplitude loading was carried out. Because few systematic and complete data sets for 2024-T3 were available, the program generated data for fatigue crack initiation and separation failure for both zero and nonzero mean stresses. Consolidations of these data are presented.

  1. Accounting for the drug life cycle and future drug prices in cost-effectiveness analysis.

    PubMed

    Hoyle, Martin

    2011-01-01

    Economic evaluations of health technologies typically assume constant real drug prices and model only the cohort of patients currently eligible for treatment. It has recently been suggested that, in the UK, we should assume that real drug prices decrease at 4% per annum and, in New Zealand, that real drug prices decrease at 2% per annum and at patent expiry the drug price falls. It has also recently been suggested that we should model multiple future incident cohorts. In this article, the cost effectiveness of drugs is modelled based on these ideas. Algebraic expressions are developed to capture all costs and benefits over the entire life cycle of a new drug. The lifetime of a new drug in the UK, a key model parameter, is estimated as 33 years, based on the historical lifetime of drugs in England over the last 27 years. Under the proposed methodology, cost effectiveness is calculated for seven new drugs recently appraised in the UK. Cost effectiveness as assessed in the future is also estimated. Whilst the article is framed in mathematics, the findings and recommendations are also explained in non-mathematical language. The 'life-cycle correction factor' is introduced, which is used to convert estimates of cost effectiveness as traditionally calculated into estimates under the proposed methodology. Under the proposed methodology, all seven drugs appear far more cost effective in the UK than published. For example, the incremental cost-effectiveness ratio decreases by 46%, from £61,900 to £33,500 per QALY, for cinacalcet versus best supportive care for end-stage renal disease, and by 45%, from £31,100 to £17,000 per QALY, for imatinib versus interferon-α for chronic myeloid leukaemia. Assuming real drug prices decrease over time, the chance that a drug is publicly funded increases over time, and is greater when modelling multiple cohorts than with a single cohort. Using the methodology (compared with traditional methodology) all drugs in the UK and New Zealand are predicted to be more cost effective. It is suggested that the willingness-to-pay threshold should be reduced in the UK and New Zealand. The ranking of cost effectiveness will change with drugs assessed as relatively more cost effective and medical devices and surgical procedures relatively less cost effective than previously thought. The methodology is very simple to implement. It is suggested that the model should be parameterized for other countries.
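
    The effect of an assumed real price decline on discounted lifetime drug cost can be illustrated with a simple two-stream comparison. The sketch below uses the 33-year life cycle estimated in the article but an assumed 3.5% discount rate and a uniform annual quantity of drug use; the resulting cost ratio is only a crude stand-in for the article's life-cycle correction factor.

      # Discounted lifetime drug cost with and without a real price
      # decline. Discount rate and uniform-use assumptions are invented.
      years = 33        # drug life cycle, per the article's estimate
      disc = 0.035      # assumed annual discount rate
      decline = 0.04    # assumed annual real price decline (UK suggestion)

      cost_constant = sum(1.0 / (1 + disc) ** t for t in range(years))
      cost_declining = sum((1 - decline) ** t / (1 + disc) ** t
                           for t in range(years))

      ratio = cost_declining / cost_constant
      # ratio < 1: the drug looks cheaper, hence more cost effective
      print(f"discounted cost ratio = {ratio:.2f}")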

  2. Methodological and hermeneutic reduction - a study of Finnish multiple-birth families.

    PubMed

    Heinonen, Kristiina

    2015-07-01

    To describe reduction as a method in methodological and hermeneutic reduction and the hermeneutic circle using van Manen's principles, with the empirical example of the lifeworlds of multiple-birth families in Finland. Reduction involves several levels that can be distinguished for their methodological usefulness. Researchers can use reduction in different ways and dimensions for their methodological needs. Open interviews with public health nurses, family care workers and parents of twins. The systematic literature and knowledge review shows there were no articles on multiple-birth families that used van Manen's method. This paper presents reduction as a method that uses the hermeneutic circle. The lifeworlds of multiple-birth families consist of three core themes: 'A state of constant vigilance'; 'Ensuring that they can continue to cope'; and 'Opportunities to share with other people'. Reduction allows us to perform deep phenomenological-hermeneutic research and understand people's lifeworlds. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to comprehensively understand their situation; they also need further tools and training to be able to empower parents of twins. The many variations in adapting reduction mean its use can be very complex and confusing. This paper adds to the discussion of phenomenology, hermeneutic study and reduction.

  3. Computational simulation of probabilistic lifetime strength for aerospace materials subjected to high temperature, mechanical fatigue, creep and thermal fatigue

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Bast, Callie C.; Trimble, Greg A.

    1992-01-01

    This report presents the results of a fourth year effort of a research program conducted for NASA-LeRC by the University of Texas at San Antonio (UTSA). The research included on-going development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with regression analysis of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from industry and the open literature for materials typically used in aerospace propulsion system components. Material data for Inconel 718 have been analyzed using the developed methodology.

  4. Computational simulation of probabilistic lifetime strength for aerospace materials subjected to high temperature, mechanical fatigue, creep, and thermal fatigue

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Bast, Callie C.; Trimble, Greg A.

    1992-01-01

    The results of a fourth year effort of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA) are presented. The research included on-going development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue, or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation was randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with regression analysis of that data, thereby predicting values for the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from industry and the open literature for materials typically used in aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.

  5. Wind Tunnel to Atmospheric Mapping for Static Aeroelastic Scaling

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Spain, Charles V.; Rivera, J. A.

    2004-01-01

    Wind Tunnel to Atmospheric Mapping (WAM) is a methodology for scaling and testing a static aeroelastic wind tunnel model. The WAM procedure employs scaling laws to define a wind tunnel model and wind tunnel test points such that the static aeroelastic flight test data and wind tunnel data will be correlated throughout the test envelopes. This methodology extends the notion that a single test condition - a combination of Mach number and dynamic pressure - can be matched by wind tunnel data. The primary requirements for effecting this extension are matching flight Mach numbers, maintaining a constant dynamic pressure scale factor, and setting the dynamic pressure scale factor in accordance with the stiffness scale factor. The scaling is enabled by capabilities of the NASA Langley Transonic Dynamics Tunnel (TDT) and by relaxation of scaling requirements present in the dynamic problem that are not critical to the static aeroelastic problem. The methodology is exercised in two example scaling problems: an arbitrarily scaled wing and a practical application to the scaling of the Active Aeroelastic Wing flight vehicle for testing in the TDT.

  6. A temperature compensation methodology for piezoelectric based sensor devices

    NASA Astrophysics Data System (ADS)

    Wang, Dong F.; Lou, Xueqiao; Bao, Aijian; Yang, Xu; Zhao, Ji

    2017-08-01

    A temperature compensation methodology that combines a negative temperature coefficient (NTC) thermistor with the temperature characteristics of a piezoelectric material is proposed to improve the measurement accuracy of piezoelectric sensing based devices. A disk-shaped piezoelectric element is used for characterization and also to verify the effectiveness of the proposed compensation method. The measured output voltage shows a nearly linear relationship with respect to the applied pressure when the proposed temperature compensation method is introduced over a temperature range of 25-65 °C. As a result, the maximum measurement accuracy is observed to be improved by 40%, and the higher the temperature, the more effective the method. The effective temperature range of the proposed method is theoretically analyzed by introducing the constant coefficient of the thermistor (B), the resistance at the initial temperature (R0), and the paralleled resistance (Rx). The proposed methodology can not only eliminate the influence of the piezoelectric material's temperature-dependent characteristics on the sensing accuracy but also decrease the power consumption of piezoelectric sensing based devices through the simplified sensing structure.
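
    A minimal sketch of the elements named here, an NTC thermistor following R(T) = R0*exp(B*(1/T - 1/T0)) placed in parallel with a fixed resistance Rx, with all component values assumed for illustration:

      # NTC thermistor in parallel with a fixed resistor. Component
      # values (B, R0, Rx) are illustrative assumptions.
      from math import exp

      B = 3950.0        # assumed thermistor constant, K
      R0 = 10_000.0     # assumed resistance at T0, ohms
      T0 = 298.15       # reference temperature, K
      Rx = 10_000.0     # assumed paralleled resistance, ohms

      def r_parallel(T_celsius):
          """Thermistor-Rx parallel resistance at a given temperature."""
          T = T_celsius + 273.15
          r_t = R0 * exp(B * (1.0 / T - 1.0 / T0))
          return r_t * Rx / (r_t + Rx)

      for tc in (25, 45, 65):   # the abstract's 25-65 degC working range
          print(f"{tc:2d} degC -> {r_parallel(tc):7.0f} ohm")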

  7. Creep Life Prediction of Ceramic Components Using the Finite Element Based Integrated Design Program (CARES/Creep)

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1997-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
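
    The time-stepping scheme described above reduces to a time-fraction damage sum. The sketch below substitutes an assumed power-law creep-rupture life for the modified Monkman-Grant criterion and uses an invented relaxing stress history; it illustrates the bookkeeping only, not the CARES/Creep model itself.

      # Time-stepped creep damage accumulation: hold the stress fixed
      # within each step and sum per-step damage until D >= 1. The
      # rupture-time law and all values are illustrative assumptions.
      def rupture_time(stress_mpa):
          """Assumed power-law creep-rupture life, hours."""
          A, n = 1.0e12, 4.0
          return A * stress_mpa ** (-n)

      dt = 100.0                                    # time step, hours
      stress_history = [120.0, 110.0, 95.0, 90.0]   # relaxing stress, MPa

      D, t = 0.0, 0.0
      while D < 1.0:
          stress = stress_history[min(int(t // dt), len(stress_history) - 1)]
          D += dt / rupture_time(stress)   # time-fraction damage increment
          t += dt
      print(f"predicted creep life ~ {t:.0f} h (D = {D:.3f})")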

  8. Understanding Nutritional Epidemiology and Its Role in Policy

    PubMed Central

    Satija, Ambika; Yu, Edward; Willett, Walter C; Hu, Frank B

    2015-01-01

    Nutritional epidemiology has recently been criticized on several fronts, including the inability to measure diet accurately, and for its reliance on observational studies to address etiologic questions. In addition, several recent meta-analyses with serious methodologic flaws have arrived at erroneous or misleading conclusions, reigniting controversy over formerly settled debates. All of this has raised questions regarding the ability of nutritional epidemiologic studies to inform policy. These criticisms, to a large degree, stem from a misunderstanding of the methodologic issues of the field and the inappropriate use of the drug trial paradigm in nutrition research. The exposure of interest in nutritional epidemiology is human diet, which is a complex system of interacting components that cumulatively affect health. Consequently, nutritional epidemiology constantly faces a unique set of challenges and continually develops specific methodologies to address these. Misunderstanding these issues can lead to the nonconstructive and sometimes naive criticisms we see today. This article aims to clarify common misunderstandings of nutritional epidemiology, address challenges to the field, and discuss the utility of nutritional science in guiding policy by focusing on 5 broad questions commonly asked of the field. PMID:25593140

  9. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction of the flow variables is done, which is second-order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second-order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors, and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements and to deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant-section wing at subsonic, transonic, and supersonic conditions. These results are compared with experimental data and numerical results of other researchers. Performance studies are under way for a variety of network topologies.
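
    The time integrator described is a multistage explicit scheme of a form common in finite-volume solvers. The sketch below uses one conventional set of stage coefficients (an assumption, not taken from the report) and exercises it on a scalar decay problem standing in for the Euler residual.

      # Explicit three-stage Runge-Kutta update of the multistage form
      # u_k = u0 + alpha_k * dt * R(u_{k-1}). The stage coefficients are
      # one common second-order-accurate choice, assumed for illustration.
      import numpy as np

      def rk3_step(u, residual, dt, alphas=(0.25, 0.5, 1.0)):
          """One multistage step for du/dt = R(u)."""
          u0, uk = u.copy(), u
          for a in alphas:
              uk = u0 + a * dt * residual(uk)
          return uk

      # Toy usage on du/dt = -u, standing in for the Euler residual.
      u = np.array([1.0])
      for _ in range(100):
          u = rk3_step(u, lambda v: -v, dt=0.01)
      print(f"u(1) = {u[0]:.6f} (exact {np.exp(-1):.6f})")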

  10. Automated and comprehensive link engineering supporting branched, ring, and mesh network topologies

    NASA Astrophysics Data System (ADS)

    Farina, J.; Khomchenko, D.; Yevseyenko, D.; Meester, J.; Richter, A.

    2016-02-01

    Link design, while relatively easy in the past, can become quite cumbersome with complex channel plans and equipment configurations. The task of designing optical transport systems and selecting equipment is often performed by an applications or sales engineer using simple tools, such as custom Excel spreadsheets. Eventually, every individual has their own version of the spreadsheet as well as their own methodology for building the network. This approach becomes unmanageable very quickly and leads to mistakes, bending of the engineering rules and installations that do not perform as expected. We demonstrate a comprehensive planning environment, which offers an efficient approach to unify, control and expedite the design process by controlling libraries of equipment and engineering methodologies, automating the process and providing the analysis tools necessary to predict system performance throughout the system and for all channels. In addition to the placement of EDFAs and DCEs, performance analysis metrics are provided at every step of the way. Metrics that can be tracked include power, CD and OSNR, SPM, XPM, FWM and SBS. Automated routine steps assist in design aspects such as equalization, padding and gain setting for EDFAs, the placement of ROADMs and transceivers, and creating regeneration points. DWDM networks consisting of a large number of nodes and repeater huts, interconnected in linear, branched, mesh and ring network topologies, can be designed much faster when compared with conventional design methods. Using flexible templates for all major optical components, our technology-agnostic planning approach supports the constant advances in optical communications.

  11. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
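
    As a minimal sketch of the statistically based estimation step, the fit below matches a transient conduction model (a semi-infinite solid under constant surface heat flux is assumed here for self-containedness) to synthetic Pyrex-like temperature data; the flux, sensor depth, noise level, and starting values are all illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      q, x = 1000.0, 0.005   # surface flux (W/m^2) and sensor depth (m), assumed

      def temp_rise(t, k, a7):
          """Semi-infinite solid under constant surface flux; a7 = alpha/1e-7."""
          a = a7 * 1e-7
          s = 2.0 * np.sqrt(a * t)
          return (2*q/k) * np.sqrt(a*t/np.pi) * np.exp(-(x/s)**2) \
                 - (q*x/k) * erfc(x/s)

      t = np.linspace(1.0, 60.0, 30)                # seconds
      rng = np.random.default_rng(0)
      data = temp_rise(t, 1.1, 6.0) + rng.normal(0, 0.02, t.size)  # Pyrex-like

      (k_est, a7_est), cov = curve_fit(temp_rise, t, data, p0=[1.0, 5.0])
      print(f"k ~ {k_est:.2f} W/m-K, alpha ~ {a7_est*1e-7:.2e} m^2/s")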

  12. Working group written presentation: Solar radiation

    NASA Technical Reports Server (NTRS)

    Slemp, Wayne S.

    1989-01-01

    The members of the Solar Radiation Working Group identified two major solar radiation technology needs: (1) generation of a long-term flight database; and (2) development of a standardized UV testing methodology. The flight database should include 1- to 5-year exposure of optical filters, windows, thermal control coatings, hardened coatings, polymeric films, and structural composites. The UV flux and wavelength distribution, as well as particulate radiation flux and energy, should be measured during this flight exposure. A standard testing methodology is needed to establish techniques for highly accelerated UV exposure that correlate well with flight test data. Currently, UV exposure can only be accelerated to about 3 solar constants while still correlating well with flight exposure data. With space missions of up to 30 years, acceleration rates of 30 to 100X are needed for efficient laboratory testing.

  13. Simple methodologies to estimate the energy amount stored in a tree due to an explosive seed dispersal mechanism

    NASA Astrophysics Data System (ADS)

    do Carmo, Eduardo; Goncalves Hönnicke, Marcelo

    2018-05-01

    There are different ways to introduce and illustrate energy concepts to introductory physics students. The explosive seed dispersal mechanism found in a variety of trees could be one of them. Sibipiruna trees bear fruits (pods) that exhibit such an explosive mechanism. During the explosion, the pods throw seeds several meters away. In this manuscript we show simple methodologies to estimate the amount of energy stored in a Sibipiruna tree due to such a process. Two different physics approaches were used to carry out this study: monitoring the explosive seed dispersal mechanism indoors and in situ, and measuring the elastic constant of the pod shell. An energy of the order of kJ was found to be stored in a single tree due to such an explosive mechanism.
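
    A back-of-envelope version of the elastic-constant approach: treat each pod shell as a spring, take the stored energy per pod as kx²/2, and multiply by the number of pods on one tree. Every number below is an assumption chosen only to show that the total plausibly lands in the kJ range.

      # Elastic energy stored per pod, E = (1/2) k x^2, scaled to one tree.
      k = 150.0        # N/m, pod-shell elastic constant (assumed)
      x = 0.02         # m, shell deflection at release (assumed)
      n_pods = 20_000  # pods on a single tree (assumed)

      E_pod = 0.5 * k * x**2
      E_tree = n_pods * E_pod
      print(f"{E_pod:.3f} J per pod -> {E_tree/1e3:.2f} kJ per tree")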

  14. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  15. Proportional-delayed controllers design for LTI-systems: a geometric approach

    NASA Astrophysics Data System (ADS)

    Hernández-Díez, J.-E.; Méndez-Barrios, C.-F.; Mondié, S.; Niculescu, S.-I.; González-Galván, E. J.

    2018-04-01

    This paper focuses on the design of P-δ controllers for single-input-single-output linear time-invariant systems. The basis of this work is a geometric approach that allows partitioning the parameter space into regions with a constant number of unstable roots. This methodology defines the hyper-planes separating the aforementioned regions and characterises the way in which the number of unstable roots changes when crossing such a hyper-plane. The main contribution of the paper is that it provides an explicit tool to find P-δ gains ensuring the stability of the closed-loop system. In addition, the proposed methodology allows the design of a non-fragile controller with a desired exponential decay rate σ. Several numerical examples illustrate the results, and a haptic experimental set-up shows the effectiveness of P-δ controllers.
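
    To make the geometric step concrete, the sketch below sweeps the imaginary axis and solves the closed-loop characteristic equation for the controller gains at each frequency; the resulting curve is the stability crossing boundary in gain space. The second-order plant and the delay value are assumptions for illustration, not taken from the paper.

      import numpy as np

      # Sweep s = j*omega and solve 1 + (kp + kd*exp(-s*tau)) * G(s) = 0
      # for (kp, kd); the traced curve separates gain-space regions with
      # a constant number of unstable roots.
      tau = 0.1                                 # assumed delay
      G = lambda s: 1.0 / (s**2 + s + 1.0)      # assumed example plant

      omega = np.linspace(0.01, 20.0, 2000)
      rhs = -1.0 / G(1j * omega)                # kp + kd*e^{-j w tau} = rhs
      kd = -rhs.imag / np.sin(omega * tau)      # imaginary part fixes kd
      kp = rhs.real - kd * np.cos(omega * tau)  # real part then fixes kp
      print(kp[:3], kd[:3])                     # points on the boundary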

  16. The Constant Comparative Analysis Method Outside of Grounded Theory

    ERIC Educational Resources Information Center

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  17. Intimate Partner Violence, 1993-2010

    MedlinePlus

    ... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  18. Development of linear free energy relationships for aqueous phase radical-involved chemical reactions.

    PubMed

    Minakata, Daisuke; Mezyk, Stephen P; Jones, Jace W; Daws, Brittany R; Crittenden, John C

    2014-12-02

    Aqueous phase advanced oxidation processes (AOPs) produce hydroxyl radicals (HO•) which can completely oxidize electron rich organic compounds. The proper design and operation of AOPs require that we predict the formation and fate of the byproducts and their associated toxicity. Accordingly, there is a need to develop a first-principles kinetic model that can predict the dominant reaction pathways that potentially produce toxic byproducts. We have published some of our efforts on predicting the elementary reaction pathways and the HO• rate constants. Here we develop linear free energy relationships (LFERs) that predict the rate constants for aqueous phase radical reactions. The LFERs relate experimentally obtained kinetic rate constants to quantum mechanically calculated aqueous phase free energies of activation. The LFERs have been applied to 101 reactions, including (1) HO• addition to 15 aromatic compounds; (2) addition of molecular oxygen to 65 carbon-centered aliphatic and cyclohexadienyl radicals; (3) disproportionation of 10 peroxyl radicals, and (4) unimolecular decay of nine peroxyl radicals. The LFERs correlations predict the rate constants within a factor of 2 from the experimental values for HO• reactions and molecular oxygen addition, and a factor of 5 for peroxyl radical reactions. The LFERs and the elementary reaction pathways will enable us to predict the formation and initial fate of the byproducts in AOPs. Furthermore, our methodology can be applied to other environmental processes in which aqueous phase radical-involved reactions occur.
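
    The LFER step reduces to a linear fit between measured log rate constants and computed aqueous free energies of activation; a minimal sketch with synthetic numbers (the slope, intercept, and data are illustrative, not the paper's correlations):

      import numpy as np

      # Fit log10(k) = a*dG_act + b, then use it to predict a rate
      # constant from a new quantum-chemical activation free energy.
      dG_act = np.array([10.0, 12.5, 15.0, 17.5, 20.0])  # kcal/mol (synthetic)
      log_k  = np.array([9.8, 9.1, 8.3, 7.6, 6.9])       # log10(k) (synthetic)

      a, b = np.polyfit(dG_act, log_k, 1)
      k_pred = 10 ** (a * 14.0 + b)                      # predict at dG = 14
      print(f"log10 k = {a:.3f}*dG + {b:.2f}; k(14 kcal/mol) ~ {k_pred:.2e}")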

  19. Steady-State Computation of Constant Rotational Rate Dynamic Stability Derivatives

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Green, Lawrence L.

    2000-01-01

    Dynamic stability derivatives are essential to predicting the open and closed loop performance, stability, and controllability of aircraft. Computational determination of constant-rate dynamic stability derivatives (derivatives of aircraft forces and moments with respect to constant rotational rates) is currently performed indirectly with finite differencing of multiple time-accurate computational fluid dynamics solutions. Typical time-accurate solutions require excessive amounts of computational time to complete. Formulating Navier-Stokes (N-S) equations in a rotating noninertial reference frame and applying an automatic differentiation tool to the modified code has the potential for directly computing these derivatives with a single, much faster steady-state calculation. The ability to rapidly determine static and dynamic stability derivatives by computational methods can benefit multidisciplinary design methodologies and reduce dependency on wind tunnel measurements. The CFL3D thin-layer N-S computational fluid dynamics code was modified for this study to allow calculations on complex three-dimensional configurations with constant rotation rate components in all three axes. These CFL3D modifications also have direct application to rotorcraft and turbomachinery analyses. The modified CFL3D steady-state calculation is a new capability that showed excellent agreement with results calculated by a similar formulation. The application of automatic differentiation to CFL3D allows the static stability and body-axis rate derivatives to be calculated quickly and exactly.

  20. Navigating the grounded theory terrain. Part 2.

    PubMed

    Hunter, Andrew; Murphy, Kathy; Grealish, Annmarie; Casey, Dympna; Keady, John

    2011-01-01

    In this paper, the choice of classic grounded theory is discussed and justified in the context of the first author's PhD research. The methodological discussion takes place within the context of PhD research entitled: Development of a stakeholder-led framework for a structured education programme that will prepare nurses and healthcare assistants to deliver a psychosocial intervention for people with dementia. There is a lack of research and limited understanding of the effect of psychosocial interventions on people with dementia. The first author considered classic grounded theory a suitable research methodology to investigate this area, as it is held to be ideal for areas of research where there is little understanding of the social processes at work. The literature relating to the practical application of classic grounded theory is illustrated using examples relating to four key grounded theory components: theory development (using constant comparison and memoing); methodological rigour; emergence of a core category; and inclusion of self and engagement with participants. Following discussion of the choice and application of classic grounded theory, this paper explores the need for researchers to visit and understand the various grounded theory options. This paper argues that researchers new to grounded theory must be familiar with and understand the various options. The researchers will then be able to apply the methodologies they choose consistently and critically. Doing so will allow them to develop theory rigorously, and they will ultimately be able to better defend their final methodological destinations.

  1. Analytical Prediction of Lower Leg Injury in a Vehicular Mine Blast Event

    DTIC Science & Technology

    2010-01-01

    the spring constant of the tibia is nearly arbitrary; the spring constant of the boot assumes a hard ethylene propylene diene monomer (EPDM) rubber ... the sole of the boot. The significantly lower spring constant of the EPDM rubber in the sole compared to the bone structures greatly diminished the

  2. Potential for utilization of algal biomass for components of the diet in CELSS

    NASA Technical Reports Server (NTRS)

    Kamarei, A. R.; Nakhost, Z.; Karel, M.

    1986-01-01

    The major nutritional components of the green alga (Scenedesmus obliquus) grown in a Constant Cell Density Apparatus were determined. Suitable methodology was developed to prepare protein isolates from which three major undesirable components of these cells (i.e., cell walls, nucleic acids, and pigments) were either removed or substantially reduced. Results showed that processing of green algae to a protein isolate enhances its potential nutritional and organoleptic acceptability as a diet component in a Controlled Ecological Life Support System (CELSS).

  3. A Study to Develop a Timeline for Sequencing the Major Transitional Tasks in the Fort Sill Hospital Transition Plan

    DTIC Science & Technology

    1988-07-01

    efforts. 7. Ensure that transition planning materials and manuals are prepared. 8. Ensure proper orientation of staff. 9. Ensure all involved parties...software package which will integrate with the project methodology: User's manual that is "user friendly". Tutorial with sample data for learning the...program. Menus for direction and assistance. On-line help to avoid constant referral to the manual. Technical support available via telephone. Demo

  4. EVALUATING THE IMPORTANCE OF FACTORS IN ANY GIVEN ORDER OF FACTORING.

    PubMed

    Humphreys, L G; Tucker, L R; Dachler, P

    1970-04-01

    A methodology has been described and illustrated for obtaining an evaluation of the importance of the factors in a particular order of factoring that does not require factoring beyond that order. For example, one can estimate the intercorrelations of the original measures with the perturbations of the first-order factors held constant or, the reverse, estimate the contribution to the intercorrelations of the original measures from the first-order factors alone. Similar operations are possible at higher orders.
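
    As a small illustration of the second operation, assuming a single first-order factor with made-up loadings, the factor's contribution to the intercorrelations is the outer product of the loading vector, and subtracting it "holds the factor constant":

      import numpy as np

      # Made-up loadings of three measures on one first-order factor.
      L = np.array([[0.8], [0.7], [0.6]])
      # Made-up observed intercorrelations of the original measures.
      R = np.array([[1.00, 0.58, 0.46],
                    [0.58, 1.00, 0.44],
                    [0.46, 0.44, 1.00]])

      implied = L @ L.T        # contribution from the factor alone
      residual = R - implied   # correlations with the factor held constant
      print(implied.round(2))
      print(residual.round(2))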

  5. Forecasting Flying Hour Costs of the B-1, B-2, and the B-52 Bomber Aircraft

    DTIC Science & Technology

    2008-03-01

    reject the null hypothesis that the residuals are normally distributed. Likewise, in the Breusch Pagan test, a p-value greater than 0.05 means we...normality or constant variance, it will be noted in the results tables in Chapter IV. The Shapiro Wilk and Breusch Pagan tests are also very...the model; and the results of the Shapiro Wilk, Breusch Pagan, and Durbin Watson tests. Summary This chapter outlines the methodology used in

  6. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude

    PubMed Central

    Keefe, Douglas H.; Feeney, M. Patrick; Hunter, Lisa L.; Fitzpatrick, Denis F.

    2017-01-01

    Procedures for specifying transient acoustic stimuli in the human ear canal are contrasted that utilize measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by the acoustically estimated ear-canal area at the probe tip and the length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency, and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz. PMID:28147608

  7. A full set of langatate high-temperature acoustic wave constants: elastic, piezoelectric, dielectric constants up to 900°C.

    PubMed

    Davulis, Peter M; da Cunha, Mauricio Pereira

    2013-04-01

    A full set of langatate (LGT) elastic, dielectric, and piezoelectric constants with their respective temperature coefficients up to 900°C is presented, and the relevance of the dielectric and piezoelectric constants and temperature coefficients are discussed with respect to predicted and measured high-temperature SAW propagation properties. The set of constants allows for high-temperature acoustic wave (AW) propagation studies and device design. The dielectric constants and polarization and conductive losses were extracted by impedance spectroscopy of parallel-plate capacitors. The measured dielectric constants at high temperatures were combined with previously measured LGT expansion coefficients and used to determine the elastic and piezoelectric constants using resonant ultrasound spectroscopy (RUS) measurements at temperatures up to 900°C. The extracted LGT piezoelectric constants and temperature coefficients show that e11 and e14 change by up to 62% and 77%, respectively, for the entire 25°C to 900°C range when compared with room-temperature values. The LGT high-temperature constants and temperature coefficients were verified by comparing measured and predicted phase velocities (vp) and temperature coefficients of delay (TCD) of SAW delay lines fabricated along 6 orientations in the LGT plane (90°, 23°, Ψ) up to 900°C. For the 6 tested orientations, the predicted SAW vp agree within 0.2% of the measured vp on average and the calculated TCD is within 9.6 ppm/°C of the measured value on average over the temperature range of 25°C to 900°C. By including the temperature dependence of both dielectric and piezoelectric constants, the average discrepancies between predicted and measured SAW properties were reduced, on average: 77% for vp, 13% for TCD, and 63% for the turn-over temperatures analyzed.

  8. Solar Cell Calibration and Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave

    1997-01-01

    The increasing complexity of space solar cells and the increasing international markets for both cells and arrays have resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AM0 spectrum and constant, recommend laboratory measurement practices, and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD 15387, 'Requirements for Measurement and Calibration Procedures for Space Solar Cells', was discussed with a focus on the scope of the document, a definition of a primary standard cell, and the required error analysis for all measurement techniques. Working groups addressed the issues of the Air Mass Zero (AM0) solar constant and spectrum, laboratory measurement techniques, and the international round-robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.

  9. Solar Cell Calibration and Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave

    2004-01-01

    The increasing complexity of space solar cells and the increasing international markets for both cells and arrays have resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AM0 spectrum and constant, recommend laboratory measurement practices, and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD 15387, "Requirements for Measurement and Calibration Procedures for Space Solar Cells", was discussed with a focus on the scope of the document, a definition of a primary standard cell, and the required error analysis for all measurement techniques. Working groups addressed the issues of the Air Mass Zero (AM0) solar constant and spectrum, laboratory measurement techniques, and the international round-robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.

  10. Fatigue life and crack growth prediction methodology

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Phillips, E. P.; Everett, R. A., Jr.

    1993-01-01

    The capabilities of a plasticity-induced crack-closure model and life-prediction code to predict fatigue crack growth and fatigue lives of metallic materials are reviewed. Crack-tip constraint factors, to account for three-dimensional effects, were selected to correlate large-crack growth rate data as a function of the effective-stress-intensity factor range (delta(K(sub eff))) under constant-amplitude loading. Some modifications to the delta(K(sub eff))-rate relations were needed in the near threshold regime to fit small-crack growth rate behavior and endurance limits. The model was then used to calculate small- and large-crack growth rates, and in some cases total fatigue lives, for several aluminum and titanium alloys under constant-amplitude, variable-amplitude, and spectrum loading. Fatigue lives were calculated using the crack growth relations and microstructural features like those that initiated cracks. Results from the tests and analyses agreed well.
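
    A minimal cycle-by-cycle version of the life calculation: grow a crack from a microstructural initial size under a Paris-type effective stress-intensity law until it reaches a critical size. The growth constants, stress range, geometry factor, and crack sizes are illustrative assumptions, not the model's calibrated values.

      import numpy as np

      # da/dN = C * dKeff**m, with dKeff = Y * dS * sqrt(pi * a).
      C, m = 1.0e-10, 3.0      # assumed growth-law constants (m/cycle, MPa*sqrt(m))
      dS = 300.0               # effective stress range, MPa (assumed)
      Y = 0.73                 # geometry factor, small surface crack (assumed)
      a, a_crit = 20e-6, 5e-3  # initial and critical crack sizes, m (assumed)

      N = 0
      while a < a_crit:
          dKeff = Y * dS * np.sqrt(np.pi * a)
          a += C * dKeff ** m  # growth in one cycle
          N += 1
      print(f"predicted fatigue life: {N} cycles")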

  11. Economic design of control charts considering process shift distributions

    NASA Astrophysics Data System (ADS)

    Vommi, Vijayababu; Kasarapu, Rukmini V.

    2014-09-01

    Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs considered constant shifts to occur in the mean of the process for a given assignable cause. This assumption has been criticized by many researchers, since it may not be realistic for an assignable cause to produce a constant shift whenever it occurs. To overcome this difficulty, in the present work, a distribution for the shift parameter has been considered instead of a single value for a given assignable cause. Duncan's economic design model for the control chart has been extended to incorporate the distribution for the process shift parameter. It is proposed to minimize the total expected loss-cost to obtain the control chart parameters. Further, three types of process shift distributions, namely positively skewed, uniform, and negatively skewed, are considered, and situations where it is appropriate to use the suggested methodology are recommended.
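
    A minimal sketch of the central idea: score a candidate chart design by averaging its performance over a distribution of shifts rather than a single fixed shift. The triangular shift distribution, the reduced cost expression, and all weights below are illustrative placeholders for the full Duncan-type loss-cost model.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      # Assumed positively skewed shift distribution (in sigma units).
      shifts = rng.triangular(0.5, 1.0, 3.0, 10_000)

      def expected_cost(n, k):
          d = shifts * np.sqrt(n)
          beta = norm.cdf(k - d) - norm.cdf(-k - d)  # per-sample miss prob.
          arl1 = np.mean(1.0 / (1.0 - beta))         # expected detection delay
          false_alarms = 2.0 * norm.sf(k)            # in-control signal rate
          # Placeholder weights standing in for Duncan-type cost terms.
          return 1.0 * n + 40.0 * arl1 + 500.0 * false_alarms

      best = min((expected_cost(n, k), n, round(k, 1))
                 for n in range(2, 11) for k in np.arange(2.0, 3.6, 0.1))
      print("min expected cost %.1f at n=%d, k=%.1f" % best)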

  12. Oxidation kinetics of conducting polymers: poly-3,4-ethylenedioxythiophene

    NASA Astrophysics Data System (ADS)

    Caballero Romero, Maria

    Films of poly-3,4-ethylenedioxythiophene (PEDOT) perchlorate used as electrodes in liquid electrolytes incorporate anions and solvent during oxidation for charge and osmotic balance: the film swells. During reduction the film shrinks and closes its structure, trapping counterions and, through expulsion of counterions and solvent, reaching increasingly packed conformational states. Here, by stepping the potential from the same reduced initial state to the same oxidized final state, the rate coefficient, the activation energy, and the reaction orders with respect to the counterion concentration in solution and to the concentration of active centers in the polymer film were obtained, following the usual methodology of chemical and electrochemical kinetics. The full methodology was then repeated using a different reduced-shrunk or conformationally compacted initial state each time. Those initial states were attained by reducing the oxidized film at increasingly cathodic potentials for the same reduction time. More deeply reduced and conformationally compacted states give slower subsequent oxidation rates on stepping to the same anodic potential each time. The activation energy, the rate coefficient, and the reaction orders change with increasing conformational compaction of the initial state: decreasing rate constants and increasing activation energies are obtained for PEDOT oxidation from increasingly compacted initial states. The experimental activation energy presents two linear ranges as a function of the initial reduced-compacted state. Using as initial states for the oxidation open structures attained by reduction at low cathodic potentials, the activation energies obtained were constant: namely, the chemical activation energy. Using as initial states more deeply reduced, closed, and packed conformational structures, the activation energy includes two components: the constant chemical energy plus the conformational energy required to relax the conformational structure, generating the free volume that allows the entrance of the balancing counterions required for the reaction. The conformational energy increases linearly as a function of the reduction-compaction potential. The kinetic magnitudes thus include conformational and structural information: chemical kinetics becomes structural (or conformational) chemical kinetics.

  13. Determination of a Degradation Constant for CYP3A4 by Direct Suppression of mRNA in a Novel Human Hepatocyte Model, HepatoPac.

    PubMed

    Ramsden, Diane; Zhou, Jin; Tweedie, Donald J

    2015-09-01

    Accurate determination of rates of de novo synthesis and degradation of cytochrome P450s (P450s) has been challenging. There is a high degree of variability in the multiple published values of turnover for specific P450s that is likely exacerbated by differences in methodologies. For CYP3A4, reported half-life values range from 10 to 140 hours. An accurate value for kdeg has been identified as a major limitation for prediction of drug interactions involving mechanism-based inhibition and/or induction. Estimation of P450 half-life from in vitro test systems, such as human hepatocytes, is complicated by differential decreased enzyme function over culture time, attenuation of the impact of enzyme loss through inclusion of glucocorticoids in media, and viability limitations over long-term culture times. HepatoPac overcomes some of these challenges by providing extended stability of enzymes (2.5 weeks in our hands). As such it is a unique tool for studying rates of enzyme degradation achieved through modulation of enzyme levels. CYP3A4 mRNA levels were rapidly depleted by >90% using either small interfering RNA or addition of interleukin-6, which allowed an estimation of the degradation rate constant for CYP3A protein over an incubation time of 96 hours. The degradation rate constant of 0.0240 ± 0.005 hour(-1) was reproducible in hepatocytes from five different human donors. These donors also reflected the overall population with respect to CYP3A5 genotype. This methodology can be applied to additional enzymes and may provide a more accurate in vitro derived kdeg value for predicting clinical drug-drug interaction outcomes. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
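
    Once mRNA (and hence new synthesis) is suppressed, the remaining enzyme should decay exponentially, so kdeg is minus the slope of log activity versus time; a minimal sketch with synthetic data chosen to land near the reported value:

      import numpy as np

      # With synthesis suppressed, activity decays as A(t) = A0*exp(-kdeg*t);
      # kdeg is minus the slope of ln(activity) vs. time. Data are synthetic.
      t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])           # hours
      activity = np.array([100.0, 58.0, 33.0, 19.0, 11.0])  # % of baseline

      kdeg = -np.polyfit(t, np.log(activity), 1)[0]
      print(f"kdeg ~ {kdeg:.4f} 1/h, half-life ~ {np.log(2)/kdeg:.1f} h")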

  14. Employment of Personnel at the Tucson Border Patrol Station

    DTIC Science & Technology

    2017-06-09

    RESEARCH METHODOLOGY How should the Tucson Border Patrol Station optimally employ personnel? Using a case study research methodology provided...BORSTAR provide better capabilities to respond and greater mobility in risk management. The methodologies of case study comparatives include the...

  15. Constant amplitude and post-overload fatigue crack growth behavior in PM aluminum alloy AA 8009

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.

    1992-01-01

    A recently developed, rapidly solidified, powder metallurgy, dispersion strengthened aluminum alloy, AA 8009, was fatigue tested at room temperature in lab air. Constant amplitude/constant delta kappa and single spike overload conditions were examined. High fatigue crack growth rates and low crack closure levels compared to typical ingot metallurgy aluminum alloys were observed. It was proposed that minimal crack roughness, crack path deflection, and limited slip reversibility, resulting from the ultra-fine microstructure, were responsible for the relatively poor da/dN-delta kappa performance of AA 8009 as compared to that of typical IM aluminum alloys.

  16. Constant amplitude and post-overload fatigue crack growth behavior in PM aluminum alloy AA 8009

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.

    1991-01-01

    A recently developed, rapidly solidified, powder metallurgy, dispersion strengthened aluminum alloy, AA 8009, was fatigue tested at room temperature in lab air. Constant amplitude/constant delta kappa and single spike overload conditions were examined. High fatigue crack growth rates and low crack closure levels compared to typical ingot metallurgy aluminum alloys were observed. It was proposed that minimal crack roughness, crack path deflection, and limited slip reversibility, resulting from ultra-fine microstructure, were responsible for the relatively poor da/dN-delta kappa performance of AA 8009 as compared to that of typical IM aluminum alloys.

  17. Modelling and analysis of creep deformation and fracture in a 1 Cr 1/2 Mo ferritic steel

    NASA Astrophysics Data System (ADS)

    Dyson, B. F.; Osgerby, D.

    A quantitative model, based upon a proposed new mechanism of creep deformation in particle-hardened alloys, has been validated by analysis of creep data from a 13CrMo 4 4 (1Cr 1/2 Mo) material tested under a range of stresses and temperatures. The methodology that has been used to extract the model parameters quantifies, as a first approximation, only the main degradation (damage) processes - in the case of the 1Cr 1/2 Mo steel, these are considered to be the parallel operation of particle coarsening and a progressively increasing stress due to a constant-load boundary condition. These 'global' model parameters can then be modified (only slightly) as required to obtain a detailed description and 'fit' to the rupture lifetime and strain/time trajectory of any individual test. The global model parameter approach may be thought of as predicting average behavior, and the detailed fits as taking account of uncertainties (scatter) due to variability in the material. Using the global parameter dataset, predictions have also been made of behavior under biaxial stressing, constant strain rate, and constant total strain (stress relaxation), and of the likely success or otherwise of metallographic and mechanical remanent lifetime procedures.

  18. Accelerated Testing Methodology Developed for Determining the Slow Crack Growth of Advanced Ceramics

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    1998-01-01

    Constant stress-rate ("dynamic fatigue") testing has been used for several decades to characterize the slow crack growth behavior of glass and structural ceramics at both ambient and elevated temperatures. The advantage of such testing over other methods lies in its simplicity: strengths are measured in a routine manner at four or more stress rates by applying a constant displacement or loading rate. The slow crack growth parameters required for component design can be estimated from a relationship between strength and stress rate. With the proper use of preloading in constant stress-rate testing, test time can be reduced appreciably. If a preload corresponding to 50 percent of the strength is applied to the specimen prior to testing, 50 percent of the test time can be saved as long as the applied preload does not change the strength. In fact, it has been a common, empirical practice in the strength testing of ceramics or optical fibers to apply some preloading (<40 percent). The purpose of this work at the NASA Lewis Research Center is to study the effect of preloading on measured strength in order to add a theoretical foundation to the empirical practice.
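
    In constant stress-rate testing the mean fracture strength varies with the applied stress rate as log(strength) = log(rate)/(n+1) + const, so the slow-crack-growth exponent n falls out of a log-log fit; a minimal sketch with synthetic strengths (the data and the resulting n are illustrative):

      import numpy as np

      # Mean strength vs. stress rate on log-log axes; slope = 1/(n+1).
      rate = np.array([0.1, 1.0, 10.0, 100.0])            # MPa/s
      strength = np.array([310.0, 340.0, 372.0, 408.0])   # MPa (synthetic)

      slope, _ = np.polyfit(np.log10(rate), np.log10(strength), 1)
      n = 1.0 / slope - 1.0
      print(f"slope {slope:.4f} -> slow crack growth parameter n ~ {n:.1f}")
      # A 50% preload would, ideally without changing strength, cut each
      # test's duration in half.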

  19. Precision controlled atomic resolution scanning transmission electron microscopy using spiral scan pathways

    NASA Astrophysics Data System (ADS)

    Sang, Xiahan; Lupini, Andrew R.; Ding, Jilai; Kalinin, Sergei V.; Jesse, Stephen; Unocic, Raymond R.

    2017-03-01

    Atomic-resolution imaging in an aberration-corrected scanning transmission electron microscope (STEM) can enable direct correlation between atomic structure and materials functionality. The fast and precise control of the STEM probe is, however, challenging because the true beam location deviates from the assigned location depending on the properties of the deflectors. To reduce these deviations, i.e. image distortions, we use spiral scanning paths, allowing precise control of a sub-Å sized electron probe within an aberration-corrected STEM. Although spiral scanning avoids the sudden changes in the beam location (fly-back distortion) present in conventional raster scans, it is not distortion-free. “Archimedean” spirals, with a constant angular frequency within each scan, are used to determine the characteristic response at different frequencies. We then show that such characteristic functions can be used to correct image distortions present in more complicated constant linear velocity spirals, where the frequency varies within each scan. Through the combined application of constant linear velocity scanning and beam path corrections, spiral scan images are shown to exhibit less scan distortion than conventional raster scan images. The methodology presented here will be useful for in situ STEM imaging at higher temporal resolution and for imaging beam sensitive materials.
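
    A minimal sketch of the two scan paths: an Archimedean spiral with constant angular frequency, and a constant-linear-velocity spiral, where (for many turns, with arc length approximately r·dθ) keeping the probe speed fixed makes both r and θ grow as √t. The point count and geometry are arbitrary.

      import numpy as np

      # Two spiral scan paths over the same field of view.
      n, r_max, turns = 20_000, 1.0, 50
      t = np.linspace(0.0, 1.0, n)

      # "Archimedean": constant angular frequency, r proportional to theta.
      theta_a = 2 * np.pi * turns * t
      r_a = r_max * t

      # Constant linear velocity: with arc length ~ r*dtheta (many turns),
      # constant probe speed gives r and theta both growing as sqrt(t).
      theta_c = 2 * np.pi * turns * np.sqrt(t)
      r_c = r_max * np.sqrt(t)

      xy_arch = np.c_[r_a * np.cos(theta_a), r_a * np.sin(theta_a)]
      xy_clv = np.c_[r_c * np.cos(theta_c), r_c * np.sin(theta_c)]
      print(xy_arch.shape, xy_clv.shape)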

  20. Precision controlled atomic resolution scanning transmission electron microscopy using spiral scan pathways.

    PubMed

    Sang, Xiahan; Lupini, Andrew R; Ding, Jilai; Kalinin, Sergei V; Jesse, Stephen; Unocic, Raymond R

    2017-03-08

    Atomic-resolution imaging in an aberration-corrected scanning transmission electron microscope (STEM) can enable direct correlation between atomic structure and materials functionality. The fast and precise control of the STEM probe is, however, challenging because the true beam location deviates from the assigned location depending on the properties of the deflectors. To reduce these deviations, i.e. image distortions, we use spiral scanning paths, allowing precise control of a sub-Å sized electron probe within an aberration-corrected STEM. Although spiral scanning avoids the sudden changes in the beam location (fly-back distortion) present in conventional raster scans, it is not distortion-free. "Archimedean" spirals, with a constant angular frequency within each scan, are used to determine the characteristic response at different frequencies. We then show that such characteristic functions can be used to correct image distortions present in more complicated constant linear velocity spirals, where the frequency varies within each scan. Through the combined application of constant linear velocity scanning and beam path corrections, spiral scan images are shown to exhibit less scan distortion than conventional raster scan images. The methodology presented here will be useful for in situ STEM imaging at higher temporal resolution and for imaging beam sensitive materials.

  1. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu

    2015-05-15

    Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}. For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ{sub spec}. Results: For all sources studied, the angular and spatial distributions of φ{sub full} were more complex than the distributions used in φ{sub spec}. Differences between Λ{sub spec} and Λ{sub full} ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ{sub spec}, which caused differences in Λ of up to +5.3% relative to Λ{sub full}. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ{sub spec} caused discrepancies between Λ{sub approx,i} and Λ{sub full} of up to 7.8%. With the exception of the energy distribution, the approximations used in φ{sub spec} contributed to this discrepancy for all source models studied. To improve the accuracy of Λ{sub spec}, the spatial and angular distributions of φ{sub full} could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ{sub full} binned at different spatial and angular resolutions.

  2. The Effects of Varied versus Constant High-, Medium-, and Low-Preference Stimuli on Performance

    ERIC Educational Resources Information Center

    Wine, Byron; Wilder, David A.

    2009-01-01

    The purpose of the current study was to compare the delivery of varied versus constant high-, medium-, and low-preference stimuli on performance of 2 adults on a computer-based task in an analogue employment setting. For both participants, constant delivery of the high-preference stimulus produced the greatest increases in performance over…

  3. Dielectric constant extraction of graphene nanostructured on SiC substrates from spectroscopy ellipsometry measurement using Gauss–Newton inversion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maulina, Hervin; Santoso, Iman, E-mail: iman.santoso@ugm.ac.id; Subama, Emmistasega

    2016-04-19

    The extraction of the dielectric constant of nanostructured graphene on SiC substrates from spectroscopy ellipsometry measurement using the Gauss-Newton inversion (GNI) method has been performed. This study aims to calculate the dielectric constant and refractive index of graphene by extracting the values of ψ and Δ from the spectroscopy ellipsometry measurement using the GNI method and comparing them with a previous result extracted using the Drude-Lorentz (DL) model. The results show that the GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substrates much faster than the DL model. Moreover, the imaginary part of the dielectric constant and the extinction coefficient increase drastically at 4.5 eV, similar to the values extracted using the known DL fitting. The increase is known to be due to interband transitions and the interaction between electrons and electron-holes at the M-point in the Brillouin zone of graphene.
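
    A generic Gauss-Newton least-squares iteration of the kind used for such inversions, shown with a stand-in residual function rather than the actual (ψ, Δ) optical model of graphene on SiC; everything here is an illustrative assumption.

      import numpy as np

      def gauss_newton(residual, p0, tol=1e-10, max_iter=50):
          """Minimize sum(residual(p)**2) by Gauss-Newton iteration."""
          p = np.asarray(p0, dtype=float)
          for _ in range(max_iter):
              r = residual(p)
              # Numerical Jacobian by forward differences.
              eps = 1e-7
              J = np.column_stack([
                  (residual(p + eps * np.eye(p.size)[j]) - r) / eps
                  for j in range(p.size)
              ])
              step = np.linalg.lstsq(J, -r, rcond=None)[0]
              p = p + step
              if np.linalg.norm(step) < tol:
                  break
          return p

      # Toy use: recover (a, b) from samples of y = a*exp(b*x).
      x = np.linspace(0.0, 1.0, 20)
      y = 2.0 * np.exp(-1.5 * x)
      p = gauss_newton(lambda p: p[0] * np.exp(p[1] * x) - y, [1.0, -1.0])
      print(p)  # ~ [2.0, -1.5]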

  4. A methodology for risk analysis based on hybrid Bayesian networks: application to the regasification system of liquefied natural gas onboard a floating storage and regasification unit.

    PubMed

    Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López

    2014-12-01

    This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous event probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating storage and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.

  5. Influence of test procedures on the thermomechanical properties of a 55NiTi shape memory alloy

    NASA Astrophysics Data System (ADS)

    Padula, Santo A., II; Gaydosh, Darrell J.; Noebe, Ronald D.; Bigelow, Glen S.; Garg, Anita; Lagoudas, Dimitris; Karaman, Ibrahim; Atli, Kadri C.

    2008-03-01

    Over the past few decades, binary NiTi shape memory alloys have received attention due to their unique mechanical characteristics, leading to their potential use in low-temperature, solid-state actuator applications. However, prior to using these materials for such applications, the physical response of these systems to mechanical and thermal stimuli must be thoroughly understood and modeled to aid designers in developing SMA-enabled systems. Even though shape memory alloys have been around for almost five decades, very little effort has been made to standardize testing procedures. Although some standards for measuring the transformation temperatures of SMAs are available, no real standards exist for determining the various mechanical and thermomechanical properties that govern the usefulness of these unique materials. Consequently, this study involved testing a 55NiTi alloy using a variety of different test methodologies. All samples tested were taken from the same heat and batch to remove the influence of sample pedigree on the observed results. When the material was tested under constant-stress, thermal-cycle conditions, variations in the characteristic material responses were observed, depending on test methodology. The transformation strain and irreversible strain were impacted more than the transformation temperatures, which only showed an effect with regard to the applied external stress. In some cases, test methodology altered the transformation strain by 0.005-0.01 mm/mm, which translates into a difference in work output capability of approximately 2 J/cm³ (290 in·lbf/in³). These results indicate the need for the development of testing standards so that meaningful data can be generated and successfully incorporated into viable models and hardware. The use of consistent testing procedures is also important when comparing results from one research organization to another. To this end, differences in the observed responses will be presented, contrasted and rationalized, in hopes of eventually developing standardized testing procedures for shape memory alloys.

  6. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices.

    PubMed

    Barker, John R; Martinez, Antonio

    2018-04-04

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.

  7. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices

    NASA Astrophysics Data System (ADS)

    Barker, John R.; Martinez, Antonio

    2018-04-01

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.

  8. Influence of Test Procedures on the Thermomechanical Properties of a 55NiTi Shape Memory Alloy

    NASA Technical Reports Server (NTRS)

    Padula, Santo A., II; Gaydosh, Darrell J.; Noebe, Ronald D.; Bigelow, Glen S.; Garg, Anita; Lagoudas, Dimitris; Karaman, Ibrahim; Atli, Kadri C.

    2008-01-01

    Over the past few decades, binary NiTi shape memory alloys have received attention due to their unique mechanical characteristics, leading to their potential use in low-temperature, solid-state actuator applications. However, prior to using these materials for such applications, the physical response of these systems to mechanical and thermal stimuli must be thoroughly understood and modeled to aid designers in developing SMA-enabled systems. Even though shape memory alloys have been around for almost five decades, very little effort has been made to standardize testing procedures. Although some standards for measuring the transformation temperatures of SMAs are available, no real standards exist for determining the various mechanical and thermomechanical properties that govern the usefulness of these unique materials. Consequently, this study involved testing a 55NiTi alloy using a variety of different test methodologies. All samples tested were taken from the same heat and batch to remove the influence of sample pedigree on the observed results. When the material was tested under constant-stress, thermal-cycle conditions, variations in the characteristic material responses were observed, depending on test methodology. The transformation strain and irreversible strain were impacted more than the transformation temperatures, which only showed an effect with regard to the applied external stress. In some cases, test methodology altered the transformation strain by 0.005-0.01 mm/mm, which translates into a difference in work output capability of approximately 2 J/cu cm (290 in·lbf/cu in). These results indicate the need for the development of testing standards so that meaningful data can be generated and successfully incorporated into viable models and hardware. The use of consistent testing procedures is also important when comparing results from one research organization to another. To this end, differences in the observed responses will be presented, contrasted and rationalized, in hopes of eventually developing standardized testing procedures for shape memory alloys.

  9. Lagrange constraint neural network for audio varying BSS

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume an artificial neural network (ANN) model at all, but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) − λ·C(S, A(x,t)), which incorporates the measurement constraint C(S, A(x,t)) = λ([A]S − X) + (λ₀ − 1)(Σᵢ sᵢ − 1), using the vector Lagrange multiplier λ and the a-priori Shannon entropy f(S) = −Σᵢ sᵢ log sᵢ as the contrast function of an unknown number of independent sources sᵢ. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatial-temporal varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] necessarily fills with diurnal and seasonal variations. Because the ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a-posteriori maximum entropy methodologies, defined by an ANN weight matrix [W] and sigmoid σ post-processing H(Y = σ([W]X)), by Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a-priori source variables S in LCNN; the latter choice is what permits a spatial-temporal varying [A(x,t)], which would not allow the neighborhood pixel average. We expect sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment simulating a varying mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saens' Swan and Rachmaninov's cello concerto).

  10. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass measurements to improve Ra estimation; and satellite-based soil moisture data could improve determination of soil water stress. PMID:19765285
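
    A minimal sketch of the light-use-efficiency skeleton shared by PEMs such as CASA or MOD17; the scalar forms and numbers below are placeholders, not the calibrated values of any specific model in the review.

        # Illustrative LUE chain: GPP = LUE * APAR, NPP = GPP - Ra.
        def npp_lue(par, fapar, t_scalar, w_scalar, lue_max=1.0, ra_fraction=0.5):
            """NPP [g C m-2] from PAR [MJ m-2]. lue_max [g C MJ-1] is varied by
            plant functional type in practice rather than held globally
            constant (a key point of the review)."""
            gpp = lue_max * t_scalar * w_scalar * fapar * par
            return gpp * (1.0 - ra_fraction)   # crude autotrophic-respiration cut

        print(npp_lue(par=10.0, fapar=0.6, t_scalar=0.9, w_scalar=0.8))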

  11. Comparative Historical Approaches in Religious Education Research--Methodological Perspectives

    ERIC Educational Resources Information Center

    Schröder, Bernd

    2016-01-01

    This article summarises the state of comparative historical research in the field of religious education. After describing a range of purposes to be fulfilled by comparative studies, it categorises a number of studies written in either English, French or German according to their methodological approach and subject focus. As a result, a…

  12. Experimental determination of the effective strong coupling constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandre Deur; Volker Burkert; Jian-Ping Chen

    2007-07-01

    We extract an effective strong coupling constant from low Q² data on the Bjorken sum. Using sum rules, we establish its Q²-behavior over the complete Q²-range. The result is compared to effective coupling constants extracted from different processes and to calculations based on Schwinger-Dyson equations, hadron spectroscopy or lattice QCD. Although the connection between the experimentally extracted effective coupling constant and the calculations is not clear, the results agree surprisingly well.
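
    For context, effective charges of this kind are commonly defined through the leading-twist form of the Bjorken sum rule; the following is a hedged reconstruction of that definition (consult the paper for its exact convention):

        \Gamma_1^{p-n}(Q^2) = \frac{g_A}{6}\left[ 1 - \frac{\alpha_{g_1}(Q^2)}{\pi} \right]
        \quad\Longrightarrow\quad
        \alpha_{g_1}(Q^2) = \pi\left[ 1 - \frac{6\,\Gamma_1^{p-n}(Q^2)}{g_A} \right]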

  13. Negative Refraction Angular Characterization in One-Dimensional Photonic Crystals

    PubMed Central

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-01-01

    Background Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and with a theory based on the group velocity developed here. We also analytically derived the negative-refraction correctness condition that gives the angular region where negative refraction occurs. Methodology/Principal Findings By using standard photonic techniques we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with experimental results, an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Conclusions/Significance Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space-filtering applications such as a photonic demultiplexer or for sensing applications. PMID:21494332
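
    A sketch of the output-refraction correction described above: Snell's law with an effective index built from two effective dielectric constants. The -2.0 effective index is an illustrative placeholder, not a value from the paper.

        import numpy as np

        def refraction_angle_deg(theta_inc_deg, n_in=1.0, n_eff=-2.0):
            """Return the refraction angle; a negative sign marks negative
            refraction (ray bent to the same side as the incident ray)."""
            s = n_in * np.sin(np.radians(theta_inc_deg)) / n_eff
            return np.degrees(np.arcsin(s))

        print(refraction_angle_deg(30.0))   # ~ -14.5 degrees for these inputs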

  14. Optimization of Progressive Freeze Concentration on Apple Juice via Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Samsuri, S.; Amran, N. A.; Jusoh, M.

    2018-05-01

    In this work, a progressive freeze concentration (PFC) system was developed to concentrate apple juice and was optimized by response surface methodology (RSM). The effects of various operating conditions, such as coolant temperature, circulation flowrate, circulation time and shaking speed, on the effective partition constant (K) were investigated. A five-level central composite design (CCD) was employed to search for the optimal conditions for concentrating apple juice. A full quadratic model for K was established using the method of least squares. The coefficient of determination (R2) of this model was found to be 0.7792. The optimum conditions were found to be coolant temperature = -10.59 °C, circulation flowrate = 3030.23 mL/min, circulation time = 67.35 minutes and shaking speed = 30.96 rpm. A validation experiment was performed to evaluate the accuracy of the optimization procedure, and the best K value of 0.17 was achieved under the optimized conditions.
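
    A sketch of fitting a full quadratic response-surface model for K by ordinary least squares, as in standard RSM/CCD practice; the design matrix and responses below are made-up placeholders, not the study's data.

        import numpy as np

        def quadratic_design(X):
            """Columns: 1, x_i, x_i^2, x_i*x_j for a full second-order model."""
            n, k = X.shape
            cols = [np.ones(n)]
            cols += [X[:, i] for i in range(k)]
            cols += [X[:, i] ** 2 for i in range(k)]
            cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(30, 4))       # 4 coded factors
        K = 0.3 - 0.05 * X[:, 0] + 0.02 * X[:, 1] ** 2 + rng.normal(0, 0.01, 30)
        beta, *_ = np.linalg.lstsq(quadratic_design(X), K, rcond=None)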

  15. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.

    PubMed

    de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-11

    With computational resources becoming more efficient, more powerful and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to reproduce experimental findings, e.g., spectroscopic parameters and rate constants, accurately, or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.

  16. Parameters optimization of supercritical fluid-CO2 extracts of frankincense using response surface methodology and its pharmacodynamics effects.

    PubMed

    Zhou, Jing; Ma, Xing-miao; Qiu, Bi-Han; Chen, Jun-xia; Bian, Lin; Pan, Lin-mei

    2013-01-01

    The volatile oil fraction of frankincense (Boswellia carterii Birdw.) was extracted with supercritical carbon dioxide under constant pressure (15, 20, or 25 MPa), fixed temperature (40, 50, or 60°C) and given time (60, 90, or 120 min), aiming at the acquisition of enriched fractions containing octyl acetate, a compound of pharmaceutical interest. A mathematical model of the extraction process was created by Box-Behnken design, a popular template for response surface methodology. The response value was characterized by a synthetical score comprising yield (weighted 20%) and octyl acetate content (weighted 80%). The content of octyl acetate was determined by GC. The supercritical fluid extraction showed higher selectivity than conventional steam distillation. Supercritical fluid-CO(2) extraction of frankincense under the optimum conditions proved highly effective, which was also successfully verified by the pharmacological experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
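
    A sketch of the composite response used in the optimization; the min-max normalization ranges are hypothetical placeholders, since the abstract states only the 20/80 weighting.

        # Composite RSM response: yield weighted 20%, octyl acetate content
        # weighted 80%, each scaled to [0, 1] over an assumed range.
        def synthetical_score(yield_pct, octyl_pct,
                              yield_range=(0.0, 10.0), octyl_range=(0.0, 60.0)):
            norm = lambda v, lo, hi: (v - lo) / (hi - lo)
            return (20.0 * norm(yield_pct, *yield_range)
                    + 80.0 * norm(octyl_pct, *octyl_range))

        print(synthetical_score(yield_pct=6.5, octyl_pct=45.0))  # 0-100 scale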

  17. [Methodology for estimating total direct costs of comprehensive care for non-communicable diseases].

    PubMed

    Castillo, Nancy; Malo, Miguel; Villacres, Nilda; Chauca, José; Cornetero, Víctor; de Flores, Karin Roedel; Tapia, Rafaela; Ríos, Raúl

    2017-01-01

    Diseases like diabetes mellitus (DM) and hypertension (HT) generate high costs and are the most common cause of mortality in the Americas. In the case of Peru, given demographic and epidemiological changes, particularly the alarming increase in overweight and obesity, the burden of these diseases is constantly increasing, resulting in the need to budget more financial resources for the health services. The total care costs of these diseases and their complications represent a financial burden that should be considered very carefully by health institutions when they draft their budgets. With this aim, the Pan American Health Organization has assisted the Ministry of Health (MINSA) with a study to estimate these costs. This article graphically describes the methodology developed to estimate the direct costs of comprehensive care for DM and HT to the health services of MINSA and regional governments.

  18. Risk Assessment Methodology for Hazardous Waste Management (1998)

    EPA Pesticide Factsheets

    A methodology is described for systematically assessing and comparing the risks to human health and the environment of hazardous waste management alternatives. The methodology selects and links appropriate models and techniques for performing the process.

  19. Determination of mass density, dielectric, elastic, and piezoelectric constants of bulk GaN crystal.

    PubMed

    Soluch, Waldemar; Brzozowski, Ernest; Lysakowska, Magdalena; Sadura, Jolanta

    2011-11-01

    Mass density, dielectric, elastic, and piezoelectric constants of bulk GaN crystal were determined. Mass density was obtained from the measured ratio of mass to volume of a cuboid. The dielectric constants were determined from the measured capacitances of an interdigital transducer (IDT) deposited on a Z-cut plate and of a parallel plate capacitor fabricated from this plate. The elastic and piezoelectric constants were determined by comparing the measured and calculated SAW velocities and electromechanical coupling coefficients on the Z- and X-cut plates. The following new constants were obtained: mass density ρ = 5986 kg/m(3); relative dielectric constants (at constant strain S) ε(S)(11)/ε(0) = 8.6 and ε(S)(33)/ε(0) = 10.5, where ε(0) is the dielectric constant of free space; elastic constants (at constant electric field E) C(E)(11) = 349.7, C(E)(12) = 128.1, C(E)(13) = 129.4, C(E)(33) = 430.3, and C(E)(44) = 96.5 GPa; and piezoelectric constants e(33) = 0.84, e(31) = -0.47, and e(15) = -0.41 C/m(2).
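
    A sketch of the comparison step: the electromechanical coupling coefficient is commonly estimated from SAW velocities on free and metallized surfaces as K² = 2(v_free - v_metal)/v_free. The velocity numbers below are placeholders, not measurements from the paper.

        def coupling_coefficient(v_free, v_metallized):
            """Standard free/metallized SAW velocity-shift estimate of K^2."""
            return 2.0 * (v_free - v_metallized) / v_free

        print(coupling_coefficient(v_free=3700.0, v_metallized=3697.0))  # ~0.0016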

  20. Frequency modulation atomic force microscopy in ambient environments utilizing robust feedback tuning

    NASA Astrophysics Data System (ADS)

    Kilpatrick, J. I.; Gannepalli, A.; Cleveland, J. P.; Jarvis, S. P.

    2009-02-01

    Frequency modulation atomic force microscopy (FM-AFM) is rapidly evolving as the technique of choice in the pursuit of high resolution imaging of biological samples in ambient environments. The enhanced stability afforded by this dynamic AFM mode combined with quantitative analysis enables the study of complex biological systems, at the nanoscale, in their native physiological environment. The operational bandwidth and accuracy of constant amplitude FM-AFM in low Q environments is heavily dependent on the cantilever dynamics and the performance of the demodulation and feedback loops employed to oscillate the cantilever at its resonant frequency with a constant amplitude. Often researchers use ad hoc feedback gains or instrument default values that can result in an inability to quantify experimental data. Poor choice of gains or exceeding the operational bandwidth can result in imaging artifacts and damage to the tip and/or sample. To alleviate this situation we present here a methodology to determine feedback gains for the amplitude and frequency loops that are specific to the cantilever and its environment, which can serve as a reasonable "first guess," thus making quantitative FM-AFM in low Q environments more accessible to the nonexpert. This technique is successfully demonstrated for the low Q systems of air (Q ˜40) and water (Q ˜1). In addition, we present FM-AFM images of MC3T3-E1 preosteoblast cells acquired using the gains calculated by this methodology demonstrating the effectiveness of this technique.
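
    A hedged sketch of where a "first guess" can come from: the amplitude envelope of a driven cantilever responds with time constant tau = 2Q/omega0, so loop bandwidths are scaled to stay well inside 1/tau. This is a generic heuristic in the spirit of the paper, not its published tuning rules.

        import math

        def first_guess_bandwidth(f0_hz, Q, loop_fraction=0.1):
            """Usable amplitude-loop bandwidth [Hz] from cantilever f0 and Q."""
            tau = 2.0 * Q / (2.0 * math.pi * f0_hz)   # envelope time constant [s]
            return loop_fraction / tau                 # stay well inside 1/tau

        print(first_guess_bandwidth(70e3, 40))   # air-like Q from the paper
        print(first_guess_bandwidth(25e3, 1))    # water-like Q from the paper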

  1. Probabilistic models and uncertainty quantification for the ionization reaction rate of atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.

    2012-05-01

    The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10(11) cm(3) mol(-1) s(-1) at 10,000 K.
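
    A sketch of the multiplicative-error likelihood choice discussed above: data d_i = m_i(theta)·exp(eps_i) with Gaussian eps, i.e. Gaussian residuals in log space. The correlation structure and the radiance solver are omitted; the toy model below is a stand-in.

        import numpy as np

        def log_likelihood(theta_rate, radiance_model, data, sigma):
            """Uncorrelated multiplicative-error log-likelihood (up to constants);
            radiance_model stands in for the 1D flow + Park two-temperature
            solver used in the paper."""
            resid = np.log(data) - np.log(radiance_model(theta_rate))
            return -0.5 * np.sum((resid / sigma) ** 2) - resid.size * np.log(sigma)

        demo_model = lambda k: k * np.array([1.0, 2.0, 3.0])   # toy stand-in
        print(log_likelihood(2.0, demo_model, np.array([2.1, 3.9, 6.2]), 0.05))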

  2. Measuring and monitoring equity in access to deceased donor kidney transplantation.

    PubMed

    Stewart, D E; Wilk, A R; Toll, A E; Harper, A M; Lehman, R R; Robinson, A M; Noreen, S A; Edwards, E B; Klassen, D K

    2018-05-07

    The Organ Procurement and Transplantation Network monitors progress toward strategic goals such as increasing the number of transplants and improving waitlisted patient, living donor, and transplant recipient outcomes. However, a methodology for assessing system performance in providing equity in access to transplants was lacking. We present a novel approach for quantifying the degree of disparity in access to deceased donor kidney transplants among waitlisted patients and determine which factors are most associated with disparities. A Poisson rate regression model was built for each of 29 quarterly, period-prevalent cohorts (January 1, 2010-March 31, 2017; 5 years pre-kidney allocation system [KAS], 2 years post-KAS) of active kidney waiting list registrations. Inequity was quantified as the outlier-robust standard deviation (SD(w)) of predicted transplant rates (log scale) among registrations, after "discounting" for intentional, policy-induced disparities (eg, pediatric priority) by holding such factors constant. The overall SD(w) declined by 40% after KAS implementation, suggesting substantially increased equity. Risk-adjusted, factor-specific disparities were measured with the SD(w) after holding all other factors constant. Disparities associated with calculated panel-reactive antibodies decreased sharply. Donor service area was the factor most associated with access disparities post-KAS. This methodology will help the transplant community evaluate tradeoffs between equity and utility-centric goals when considering new policies and help monitor equity in access as policies change. © 2018 The American Society of Transplantation and the American Society of Transplant Surgeons.
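
    A sketch of the metric's mechanics: fit a Poisson rate model with waiting time as exposure, then take a robust SD of predicted log rates. Covariates and data here are hypothetical, and the paper's exact robust estimator may differ from the IQR-based one used below.

        import numpy as np
        import statsmodels.api as sm

        def equity_sd(X, transplants, years_waiting):
            """Outlier-robust SD of predicted log transplant rates."""
            exog = sm.add_constant(X)
            fit = sm.GLM(transplants, exog,
                         family=sm.families.Poisson(),
                         offset=np.log(years_waiting)).fit()
            log_rate = exog @ fit.params          # predicted log rates
            q75, q25 = np.percentile(log_rate, [75, 25])
            return (q75 - q25) / 1.349            # one common robust-SD choice

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3)); t = rng.uniform(0.5, 3.0, 500)
        y = rng.poisson(t * np.exp(0.2 * X[:, 0]))
        print(equity_sd(X, y, t))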

  3. Servo-control for maintaining abdominal skin temperature at 36C in low birth weight infants.

    PubMed

    Sinclair, J C

    2002-01-01

    Randomized trials have shown that the neonatal mortality rate of low birth-weight babies can be reduced by keeping them warm. For low birth-weight babies nursed in incubators, warm conditions may be achieved either by heating the air to a desired temperature, or by servo-controlling the baby's body temperature at a desired set-point. In low birth weight infants, to determine the effect on death and other important clinical outcomes of targeting body temperature rather than air temperature as the end-point of control of incubator heating. Standard search strategy of the Cochrane Neonatal Review Group. Searches were made of the Cochrane Controlled Trials Register (CCTR) (Cochrane Library, Issue 4, 2001) and MEDLINE, 1966 to November 2001. Randomized or quasi-randomized trials which test the effects of having the heat output of the incubator servo-controlled from body temperature compared with setting a constant incubator air temperature. Trial methodologic quality was systematically assessed. Outcome measures included death, timing of death, cause of death, and other clinical outcomes. Categorical outcomes were analyzed using relative risk and risk difference. Meta-analysis assumed a fixed effect model. Two eligible trials were found. In total, they included 283 babies and 112 deaths. Compared to setting a constant incubator air temperature of 31.8C, servo-control of abdominal skin temperature at 36C reduces the neonatal death rate among low birth weight infants: relative risk 0.72 (95% CI 0.54, 0.97); risk difference -12.7% (95% CI -1.6, -23.9). This effect is even greater among VLBW infants. During at least the first week after birth, low birth weight babies should be provided with a carefully regulated thermal environment that is near the thermoneutral point. For LBW babies in incubators, this can be achieved by adjusting incubator temperature to maintain an anterior abdominal skin temperature of at least 36C, using either servo-control or frequent manual adjustment of incubator air temperature.
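
    A sketch of the pooled effect computation: relative risk with a 95% CI from 2x2 counts via the log-RR normal approximation. The per-arm counts below are invented for illustration; the review reports only the totals (283 babies, 112 deaths) and the pooled RR of 0.72.

        import math

        def relative_risk(d1, n1, d0, n0):
            """RR and 95% CI for d1/n1 events vs d0/n0 events."""
            rr = (d1 / n1) / (d0 / n0)
            se = math.sqrt(1/d1 - 1/n1 + 1/d0 - 1/n0)
            lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
            return rr, lo, hi

        print(relative_risk(d1=47, n1=141, d0=65, n0=142))  # hypothetical arms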

  4. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface

    NASA Astrophysics Data System (ADS)

    Coons, Marc P.; Herbert, John M.

    2018-06-01

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  5. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface.

    PubMed

    Coons, Marc P; Herbert, John M

    2018-06-14

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ε. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ε(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F - (aq), Cl - (aq), neat liquid water, and the hydrated electron, although errors for Li + (aq) and Na + (aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.
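
    A sketch of the core machinery described in this record and the preceding one: a variable-dielectric Poisson solve, div(eps(r) grad phi) = -4*pi*rho. The paper uses a 3D multigrid solver; the 1D Gauss-Seidel relaxation below just illustrates the face-centered stencil.

        import numpy as np

        def solve_poisson_1d(eps, rho, h, sweeps=5000):
            """Dirichlet (phi = 0) boundaries; eps, rho are 1D arrays."""
            n = rho.size
            phi = np.zeros(n)
            for _ in range(sweeps):
                for i in range(1, n - 1):
                    e_plus = 0.5 * (eps[i] + eps[i + 1])    # face-centered eps
                    e_minus = 0.5 * (eps[i] + eps[i - 1])
                    phi[i] = (e_plus * phi[i + 1] + e_minus * phi[i - 1]
                              + 4 * np.pi * rho[i] * h * h) / (e_plus + e_minus)
            return phi

        eps = np.where(np.arange(200) < 100, 1.0, 78.0)   # vapor/water-like step
        rho = np.zeros(200); rho[120] = 1.0               # point-like solute charge
        phi = solve_poisson_1d(eps, rho, h=0.1)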

  6. A method of inferring collision ratio based on maneuverability of own ship under critical collision conditions

    NASA Astrophysics Data System (ADS)

    You, Youngjun; Rhee, Key-Pyo; Ahn, Kyoungsoo

    2013-06-01

    In constructing a collision avoidance system, it is important to determine the time for starting a collision avoidance maneuver. Many researchers have attempted to formulate various indices by applying a range of techniques. Among these indices, the collision risk obtained by combining Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) information with fuzzy theory is the most widely used. However, the collision risk has a limitation, in that the membership functions of DCPA and TCPA are empirically determined. In addition, the collision risk is not able to cover several critical collision conditions where the target ship fails to take appropriate actions. It is therefore necessary to design a new concept based on logical approaches. In this paper, a collision ratio is proposed, which is the expected ratio of unavoidable paths to total paths under suitably characterized operation conditions. Total paths are determined by considering categories such as action space and methodology of avoidance. The International Regulations for Preventing Collisions at Sea (1972) and collision avoidance rules (2001) are considered to resolve the slower ship's dilemma. Different methods, based on a constant-speed model and a simulated-speed model, are used to calculate the relative positions between own ship and the target ship. In the simulated-speed model, fuzzy control is applied to the determination of the command rudder angle. For various encounter situations, the time histories of the collision ratio based on the simulated-speed model are compared with those based on the constant-speed model.
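
    A sketch of the DCPA/TCPA quantities referenced above, computed from the relative position r and relative velocity v of the target ship under the constant-speed model: TCPA = -(r·v)/|v|², DCPA = |r + v·TCPA|. The input values are illustrative.

        import numpy as np

        def cpa(r, v):
            """Return (DCPA, TCPA) for relative position r [m], velocity v [m/s]."""
            v2 = float(np.dot(v, v))
            tcpa = -float(np.dot(r, v)) / v2 if v2 > 0 else 0.0
            dcpa = float(np.linalg.norm(r + v * tcpa))
            return dcpa, tcpa

        print(cpa(r=np.array([2000.0, 1000.0]), v=np.array([-4.0, -2.5])))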

  7. Precursor and Neutral Loss Scans in an RF Scanning Linear Quadrupole Ion Trap

    NASA Astrophysics Data System (ADS)

    Snyder, Dalton T.; Szalwinski, Lucas J.; Schrader, Robert L.; Pirro, Valentina; Hilger, Ryan; Cooks, R. Graham

    2018-03-01

    Methodology for performing precursor and neutral loss scans in an RF scanning linear quadrupole ion trap is described and compared to the unconventional ac frequency scan technique. In the RF scanning variant, precursor ions are mass-selectively excited by a fixed-frequency resonance excitation signal at low Mathieu q while the RF amplitude is ramped linearly to pass ions through the point of excitation, such that the excited ion's m/z varies linearly with time. Ironically, a nonlinear ac frequency scan is still required for ejection of the product ions, since their frequencies vary nonlinearly with the linearly varying RF amplitude. In the case of the precursor scan, the ejection frequency must be scanned so that it stays fixed on a product ion m/z throughout the RF scan, whereas in the neutral loss scan, it must be scanned to maintain a constant mass offset from the excited precursor ions. Both simultaneous and sequential permutation scans are possible; only the former are demonstrated here. The scans described are performed on a variety of samples using different ionization sources: protonated amphetamine ions generated by nanoelectrospray ionization (nESI), explosives ionized by low-temperature plasma (LTP), and chemical warfare agent simulants sampled from a surface and analyzed with swab touch spray (TS). We conclude that the ac frequency scan variant of these MS/MS scans is preferred due to electronic simplicity. In an accompanying manuscript, we thus describe the implementation of orthogonal double resonance precursor and neutral loss scans on the Mini 12 using constant RF voltage.
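
    A sketch of why the ejection frequency must be scanned nonlinearly: at fixed drive frequency, the Mathieu parameter q is proportional to V/(m/z), and the secular frequency follows beta(q) ~ q/sqrt(2) at low q. The geometry and voltages below are generic, not the instrument parameters of the paper.

        import math

        def secular_frequency_hz(mz_th, V_volts, f_drive_hz=1.0e6, r0_m=4.0e-3):
            """Low-q linear-quadrupole relations: q = 4eV/(m r0^2 Omega^2),
            f_sec ~ (q/sqrt(2)) * f_drive / 2."""
            e, amu = 1.602176634e-19, 1.66053907e-27
            omega = 2.0 * math.pi * f_drive_hz
            q = 4.0 * e * V_volts / (mz_th * amu * r0_m**2 * omega**2)
            return (q / math.sqrt(2.0)) * f_drive_hz / 2.0

        # As the RF amplitude ramps, the same m/z sits at a different secular
        # frequency, hence the nonlinear ejection-frequency scan:
        for V in (50.0, 100.0, 150.0):
            print(V, secular_frequency_hz(300.0, V))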

  8. Combination of human acetylcholinesterase and serum albumin sensing surfaces as highly informative analytical tool for inhibitor screening.

    PubMed

    Fabini, Edoardo; Tramarin, Anna; Bartolini, Manuela

    2018-06-05

    In the continuous search for potential drug lead candidates, the availability of highly informative screening methodologies may constitute a decisive element in the selection of best-in-class compounds. In the present study, a surface plasmon resonance (SPR)-based assay was developed and employed to investigate interactions between human recombinant AChE (hAChE) and four known ligands: galantamine, tacrine, donepezil and edrophonium. To this aim, a sensor chip was functionalized with hAChE using mild immobilization conditions to best preserve enzyme integrity. Binding affinities and, for the first time, kinetic rate constants for the formation/disruption of all drug-hAChE complexes were determined. Inhibitors were classified into two groups, slow-reversible and fast-reversible binders, according to their respective target residence times. Combining data on drug-target residence time with data on serum albumin binding levels, a good correlation with potency, in vivo plasma protein binding, and administration regimen was found. The outcomes of this work demonstrate that the developed SPR-based assay is suitable for the screening, binding-affinity ranking and kinetic evaluation of hAChE inhibitors. The proposed method provides a simpler and more cost-effective assay to quantify kinetic rate constants for inhibitor-hAChE interactions than other published methods. Ultimately, the determination of residence time in combination with preliminary ADME studies might constitute a better tool to predict in vivo behaviour, key information in the search for new potential drug candidates. Copyright © 2018 Elsevier B.V. All rights reserved.
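
    A sketch of the 1:1 Langmuir kinetics underlying SPR rate-constant fitting: dR/dt = kon·C·(Rmax - R) - koff·R during association, with residence time tau = 1/koff. All rate and concentration values are illustrative.

        import numpy as np

        def association_curve(t, kon, koff, conc, rmax):
            """Closed-form 1:1 association response R(t)."""
            kobs = kon * conc + koff
            return rmax * kon * conc / kobs * (1.0 - np.exp(-kobs * t))

        t = np.linspace(0, 120, 200)                  # seconds
        R = association_curve(t, kon=1e5, koff=1e-2, conc=50e-9, rmax=100.0)
        print("residence time [s]:", 1.0 / 1e-2)      # tau = 1/koff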

  9. EXPLORING BIASES OF ATMOSPHERIC RETRIEVALS IN SIMULATED JWST TRANSMISSION SPECTRA OF HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in 2018 October, the James Webb Space Telescope (JWST) is expected to revolutionize the field of atmospheric characterization of exoplanets. The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than what has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST, exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical hot Jupiter cloud-free atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. Then, we retrieve these spectra using TauREx, a Bayesian retrieval tool, using two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1σ of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.
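
    A sketch of the quantity at stake in the isothermal approximation: the atmospheric scale height H = kT/(mu·g), which sets the amplitude of transmission-spectrum features. The values below are generic hot-Jupiter numbers, not from the paper.

        k_B, amu = 1.380649e-23, 1.66053907e-27
        T, mu, g = 1500.0, 2.3 * amu, 10.0          # K, kg, m s^-2 (illustrative)
        H = k_B * T / (mu * g)                      # scale height [m]
        print(H / 1e3, "km")                        # ~540 km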

  10. i4OilSpill, an operational marine oil spill forecasting model for Bohai Sea

    NASA Astrophysics Data System (ADS)

    Yu, Fangjie; Yao, Fuxin; Zhao, Yang; Wang, Guansuo; Chen, Ge

    2016-10-01

    Oil spill models can effectively simulate the trajectories and fate of oil slicks, which is an essential element in contingency planning and effective response strategies for oil spill accidents. However, when applied to offshore areas such as the Bohai Sea, the trajectories and fate of oil slicks are affected by factors that vary in time on a regional scale but are assumed constant in most present models. In fact, these factors in offshore regions show much more variation over time than in the deep sea, due to offshore bathymetric and climatic characteristics. In this paper, the challenge of parameterizing these offshore factors is tackled. Remote sensing data of the region are used to analyze the modification of wind-induced drift factors, and a well-suited solution is established as a parameter correction mechanism for oil spill models. The novelty of the algorithm is the self-adaptive modification mechanism for the drift factors, derived from remote sensing data for the targeted sea region, as opposed to the empirical constants used in present models. Against this background, a new regional oil spill model (i4OilSpill) for the Bohai Sea is developed, which simulates oil transformation and fate processes by an Eulerian-Lagrangian methodology. The forecasting accuracy of the proposed model is demonstrated by validation results comparing model simulations with subsequent satellite observations of the Penglai 19-3 oil spill accident. The performance of the model parameter correction mechanism is evaluated by comparison with the real spilled-oil positions extracted from ASAR images.
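
    A sketch of the Lagrangian drift step at the heart of such models: each oil parcel is advected by the current plus a wind-induced drift factor. The 3% factor is a common empirical default, which the paper replaces with a self-adaptive, remote-sensing-derived value.

        import numpy as np

        def advect(pos, u_current, u_wind, wind_factor=0.03, dt=600.0):
            """pos, u_current, u_wind: (n, 2) arrays in metres and m/s."""
            return pos + dt * (u_current + wind_factor * u_wind)

        pos = np.zeros((3, 2))
        pos = advect(pos, u_current=np.full((3, 2), 0.2),
                     u_wind=np.full((3, 2), 8.0))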

  11. Evolution of circadian rhythms in Drosophila melanogaster populations reared in constant light and dark regimes for over 330 generations.

    PubMed

    Shindey, Radhika; Varma, Vishwanath; Nikhil, K L; Sharma, Vijay Kumar

    2017-01-01

    Organisms are believed to have evolved circadian clocks as adaptations to deal with cyclic environmental changes, and therefore it has been hypothesized that evolution in constant environments would lead to regression of such clocks. However, previous studies have yielded mixed results, and evolution of circadian clocks under constant conditions has remained an unsettled topic of debate in circadian biology. In continuation of our previous studies, which reported persistence of circadian rhythms in Drosophila melanogaster populations evolving under constant light, here we intended to examine whether circadian clocks and the associated properties evolve differently under constant light and constant darkness. In this regard, we assayed activity-rest, adult emergence and oviposition rhythms of D. melanogaster populations which have been maintained for over 19 years (~330 generations) under three different light regimes - constant light (LL), light-dark cycles of 12:12 h (LD) and constant darkness (DD). We observed that while circadian rhythms in all the three behaviors persist in both LL and DD stocks with no differences in circadian period, they differed in certain aspects of the entrained rhythms when compared to controls reared in rhythmic environment (LD). Interestingly, we also observed that DD stocks have evolved significantly higher robustness or power of free-running activity-rest and adult emergence rhythms compared to LL stocks. Thus, our study, in addition to corroborating previous results of circadian clock evolution in constant light, also highlights that, contrary to the expected regression of circadian clocks, rearing in constant darkness leads to the evolution of more robust circadian clocks which may be attributed to an intrinsic adaptive advantage of circadian clocks and/or pleiotropic functions of clock genes in other traits.

  12. Teaching for Conceptual Change in a Density Unit Taught to 7th Graders: Comparing Two Teaching Methodologies--Scientific Inquiry and a Traditional Approach

    ERIC Educational Resources Information Center

    Holveck, Susan E.

    2012-01-01

    This mixed methods study was designed to compare the effect of using an inquiry teaching methodology and a more traditional teaching methodology on the learning gains of students who were taught a five-week conceptual change unit on density. Seventh graders (N = 479) were assigned to five teachers who taught the same unit on density using either a…

  13. The Necessity of Company-Grade Air Defense Artillery Officers in the Air Defense and Airspace Management Cells Within the Brigade Combat Team

    DTIC Science & Technology

    2014-06-13

    Utilizing the Army design methodology, the study compares the current training and performance of Air Defense officers to...junior company-grade officers to fulfill the role of ADAM Cell OIC.

  14. A Quantitative Examination of Critical Success Factors Comparing Agile and Waterfall Project Management Methodologies

    ERIC Educational Resources Information Center

    Pedersen, Mitra

    2013-01-01

    This study investigated the rate of success for IT projects using agile and standard project management methodologies. Any successful project requires use of project methodology. Specifically, large projects require formal project management methodologies or models, which establish a blueprint of processes and project planning activities. This…

  15. Theoretical microwave spectral constants for C2N, C2N(+), and C3H

    NASA Technical Reports Server (NTRS)

    Green, S.

    1980-01-01

    Theoretical microwave spectral constants have been computed for C2N, C3H, and C2N(+). For C2N these are compared with values obtained from optical data. Calculated hyperfine constants are also presented for HNC, DNC, and HCNH(+). The possibility of observing these species in dense interstellar clouds is discussed.

  16. The comparison of various approach to evaluation erosion risks and design control erosion measures

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri

    2015-04-01

    At present, a single methodology is used in the Czech Republic to compute and compare erosion risks, and it also includes a method for designing erosion control measures. It is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual soil loss (G), and it is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many problems and damages arise from local episodic erosion events. The extent and impact of these events are conditioned by local precipitation, the current crop phase and the soil conditions. Such erosion events can cause damage to agricultural land, municipal property and hydraulic structures even at locations that appear to be in good condition from the point of view of long-term average annual soil loss. An alternative way to compute and compare erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a site from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural parcel without any barriers that could strongly influence water flow and sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analysis of soil samples taken in the study area. The results of the USLE and MUSLE methodologies and of the mathematical model Erosion 3D were compared. Variances in the spatial distribution of the places with the highest soil erosion were compared and discussed. A further part presents the variance in designed erosion control measures when the designs are based on the different methodologies. The results show the variance in computed erosion risks obtained by the different methodologies. These variances may open a discussion about the different approaches to computing and evaluating erosion risks in areas of differing importance.
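
    A sketch of the long-term average annual soil loss computed with the USLE, A = R·K·LS·C·P (the quantity the Czech methodology denotes G); the factor values below are placeholders, not the study's site data.

        def usle_soil_loss(R, K, LS, C, P):
            """Soil loss [t ha-1 yr-1]: rainfall erosivity R, soil erodibility K,
            slope length-steepness LS, cover-management C, support practice P."""
            return R * K * LS * C * P

        print(usle_soil_loss(R=40.0, K=0.4, LS=1.2, C=0.2, P=1.0))  # ~3.8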

  17. [Methodology of Screening New Antibiotics: Present Status and Prospects].

    PubMed

    Trenin, A S

    2015-01-01

    Due to the extensive distribution of pathogen resistance to available pharmaceuticals and serious problems in the treatment of various infections and tumor diseases, the need for new antibiotics is urgent. The basic methodological approaches to the chemical synthesis of antibiotics and to screening for new antibiotics among natural products, mainly microbial secondary metabolites, are considered in the review. Since natural compounds are highly diverse, screening of such substances gives a good opportunity to discover antibiotics of various chemical structures and mechanisms of action. Such an approach, followed by chemical or biological transformation, is capable of providing health care with new effective pharmaceuticals. The review concentrates mainly on the screening of natural products and on methodological problems such as: isolation of microbial producers from their habitats, cultivation of microorganisms producing appropriate substances, isolation and chemical characterization of microbial metabolites, and identification of the biological activity of the metabolites. The main attention is paid to the problems of microbial secondary metabolism and the design of new models for screening biologically active compounds. The latest achievements in the field of antibiotics and the most promising approaches for future investigations are discussed. The main methodological approach, isolation and cultivation of the producers, remains relevant and needs constant improvement. The efficiency of screening can be increased by more rapid chemical identification of antibiotics and by the design of new screening models based on biological activity detection.

  18. Quantification aspects of constant pressure (ultra) high pressure liquid chromatography using mass-sensitive detectors with a nebulizing interface.

    PubMed

    Verstraeten, M; Broeckhoven, K; Lynen, F; Choikhet, K; Landt, K; Dittmann, M; Witt, K; Sandra, P; Desmet, G

    2013-01-25

    The present contribution investigates the quantitation aspects of mass-sensitive detectors with a nebulizing interface (ESI-MSD, ELSD, CAD) in the constant-pressure gradient elution mode. In this operation mode, the pressure is controlled and maintained at a set value and the liquid flow rate varies with the inverse of the mobile-phase viscosity. As the pressure is continuously kept at the allowable maximum during the entire gradient run, the average liquid flow rate is higher than in the conventional constant-flow-rate operation mode, thus shortening the analysis time. The following three mass-sensitive detectors were investigated: the mass spectrometry detector (MS), the evaporative light scattering detector (ELSD) and the charged aerosol detector (CAD), and a wide variety of samples (phenones, polyaromatic hydrocarbons, wine, cocoa butter) has been considered. It was found that the nebulizing efficiency of the LC interfaces of the three detectors under consideration changes with increasing liquid flow rate. For the MS, the increasing flow rate leads to a lower peak area, whereas for the ELSD the peak area increases compared to the constant-flow-rate mode. The peak area obtained with a CAD is rather insensitive to the liquid flow rate. The reproducibility of the peak area remains similar in both modes, although variation in system permeability compromises the 'long-term' reproducibility. This problem can, however, be overcome by running a flow rate program with an optimized flow rate and composition profile obtained from the constant-pressure mode. In this case, the quantification remains reproducible, despite any occurring variations of the system permeability. Furthermore, the same fragmentation pattern (MS) has been found in the constant-pressure mode as in the customary constant-flow-rate mode. Copyright © 2012 Elsevier B.V. All rights reserved.
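
    A sketch of the operating principle stated above: at fixed pressure drop the flow rate tracks the inverse mobile-phase viscosity, F = dP·Kv/eta. The permeability lump Kv and the viscosity values are illustrative placeholders.

        def flow_rate(dP, eta, Kv=1.0):
            """Darcy-like flow at fixed pressure: dP [arb.], eta [arb.]."""
            return Kv * dP / eta

        # The same pressure drives a higher flow whenever the gradient passes
        # through a lower-viscosity mobile-phase composition:
        print(flow_rate(dP=1000.0, eta=1.0), flow_rate(dP=1000.0, eta=0.6))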

  19. Evaluation of the HARDMAN comparability methodology for manpower, personnel and training

    NASA Technical Reports Server (NTRS)

    Zimmerman, W.; Butler, R.; Gray, V.; Rosenberg, L.

    1984-01-01

    The methodology evaluation and recommendation are part of an effort to improve the Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) to support new acquisitions. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both the MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared to the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seems sound and has good near-term utility to the Army. Recommendations are provided to firm up the problem areas revealed through the evaluation.

  20. Teachers' Self-Perceptions in England and Germany: Methodological Nationalism and Negative Points of Reference in Comparative Research

    ERIC Educational Resources Information Center

    von Bargen, Imke

    2017-01-01

    The field of comparative education traditionally compares nations and cultures to gain a deeper understanding of educational phenomena. The nation-state often functions as the only unit of comparison, which increases methodological nationalism. It is dangerous to draw simplistic conclusions because they focus on national particularities alone and…

  1. NMR shielding and spin–rotation constants of ¹⁷⁵LuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br, ¹²⁷I) molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demissie, Taye B.

    2015-12-31

    This presentation demonstrates the relativistic effects on the spin-rotation constants, absolute nuclear magnetic resonance (NMR) shielding constants and shielding spans of ¹⁷⁵LuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br, ¹²⁷I) molecules. The results are obtained from calculations performed using density functional theory (non-relativistic and four-component relativistic) and coupled-cluster calculations. The spin-rotation constants are compared with available experimental values. In most of the molecules studied, relativistic effects make an order-of-magnitude difference in the NMR absolute shielding constants.

  2. Stress-stress fluctuation formula for elastic constants in the NPT ensemble

    NASA Astrophysics Data System (ADS)

    Lips, Dominik; Maass, Philipp

    2018-05-01

    Several fluctuation formulas are available for calculating elastic constants from equilibrium correlation functions in computer simulations, but the ones available for simulations at constant pressure exhibit slow convergence properties and cannot be used for the determination of local elastic constants. To overcome these drawbacks, we derive a stress-stress fluctuation formula in the NPT ensemble based on known expressions in the NVT ensemble. We validate the formula in the NPT ensemble by calculating elastic constants for the simple nearest-neighbor Lennard-Jones crystal and by comparing the results with those obtained in the NVT ensemble. For both local and bulk elastic constants we find an excellent agreement between the simulated data in the two ensembles. To demonstrate the usefulness of the formula, we apply it to determine the elastic constants of a simulated lipid bilayer.
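
    For orientation, the standard NVT stress-stress fluctuation expression on which such derivations build is sketched below (a hedged reconstruction of the well-known Squire-Holt-Hoover form; the paper's NPT result itself is not reproduced here):

        C_{ijkl} = \langle C^{\mathrm{Born}}_{ijkl} \rangle
                 - \frac{V}{k_B T}\left( \langle \sigma_{ij}\sigma_{kl} \rangle
                   - \langle \sigma_{ij} \rangle \langle \sigma_{kl} \rangle \right)
                 + \frac{N k_B T}{V}\left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right)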

  3. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    NASA Astrophysics Data System (ADS)

    Chang, Joseph C.

    This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (~20 km in scale), in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. It was found that the two components, uncertainty and variability, were comparable for the DP26 field data, with variability more important than uncertainty closer to the source, and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy, due to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well-measured. Another typical application of dispersion modeling is a forensic study, where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization after the start of the Iran-Iraq war in 1981. Therefore the meteorological fields inside Iraq had to be estimated by models such as prognostic mesoscale meteorological models, based on observational data from areas outside of Iraq, and using the global fields simulated by global meteorological models as the initial and boundary conditions for the mesoscale models. It was found that, when comparing model predictions to observations in areas outside of Iraq, the predicted surface wind directions had errors between 30 and 90 deg, but the inter-model differences (or uncertainties) in the predicted surface wind directions inside Iraq, where there were no onsite data, were fairly constant at about 70 deg. (Abstract shortened by UMI.)

  4. Robotic application of a dynamic resultant force vector using real-time load-control: simulation of an ideal follower load on Cadaveric L4-L5 segments.

    PubMed

    Bennett, Charles R; Kelly, Brian P

    2013-08-09

    Standard in-vitro spine testing methods have focused on application of isolated and/or constant load components, while the in-vivo spine is subject to multiple components that can be resolved into resultant dynamic load vectors. To advance towards more in-vivo-like simulations, the objective of the current study was to develop a methodology to apply robotically controlled, non-zero, real-time dynamic resultant forces during flexion-extension on human lumbar motion segment units (MSU), with initial application towards simulation of an ideal follower load (FL) force vector. A proportional-integral-derivative (PID) controller with custom algorithms coordinated the motion of a Cartesian serial manipulator comprised of six axes, each capable of position- or load-control. Six lumbar MSUs (L4-L5) were tested with continuously increasing sagittal plane bending to 8 Nm while force components were dynamically programmed to deliver a resultant 400 N FL that remained normal to the moving midline of the intervertebral disc. Mean absolute load-control tracking errors (TEs) between commanded and experimental loads were computed. Global spinal ranges of motion and sagittal plane inter-body translations were compared to previously published values for non-robotic applications. Mean TEs for zero-commanded force and moment axes were 0.7 ± 0.4 N and 0.03 ± 0.02 Nm, respectively. For non-zero force axes, mean TEs were 0.8 ± 0.8 N, 1.3 ± 1.6 N, and 1.3 ± 1.6 N for Fx, Fz, and the resolved ideal follower load vector FL(R), respectively. Mean extension and flexion ranges of motion were 2.6° ± 1.2° and 5.0° ± 1.7°, respectively. Relative vertebral body translations and rotations were very comparable to data collected with non-robotic systems in the literature. The robotically coordinated Cartesian load-controlled testing system demonstrated robust real-time load-control that permitted application of a real-time dynamic non-zero load vector during flexion-extension. For single-MSU investigations the methodology has potential to overcome conventional follower load limitations, most notably via application outside the sagittal plane. This methodology holds promise for future work aimed at reducing the gap between current in-vitro testing and in-vivo circumstances. Copyright © 2013 Elsevier Ltd. All rights reserved.
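
    A sketch of the real-time load-control loop named above: a discrete PID on force error drives an axis command. The gains and loop rate are illustrative, not the study's controller parameters.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integ, self.prev = 0.0, 0.0

            def step(self, target, measured):
                err = target - measured
                self.integ += err * self.dt
                deriv = (err - self.prev) / self.dt
                self.prev = err
                return self.kp * err + self.ki * self.integ + self.kd * deriv

        loop = PID(kp=0.5, ki=2.0, kd=0.0, dt=0.001)        # 1 kHz loop, toy gains
        cmd = loop.step(target=400.0, measured=392.5)        # follower load [N]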

  5. A cost-based comparison of quarantine strategies for new emerging diseases.

    PubMed

    Mubayi, Anuj; Zaleta, Christopher Kribs; Martcheva, Maia; Castillo-Chávez, Carlos

    2010-07-01

    A classical epidemiological framework is used to provide a preliminary cost analysis of the effects of quarantine and isolation on the dynamics of infectious diseases for which no treatment or immediate diagnosis tools are available. Within this framework we consider the cost incurred from the implementation of three types of dynamic control strategies. Taking the context of the 2003 SARS outbreak in Hong Kong as an example, we use a simple cost function to compare the total cost of each mixed (quarantine and isolation) control strategy from a public health resource allocation perspective. The goal is to extend existing epi-economics methodology by developing a theoretical framework of dynamic quarantine strategies aimed at emerging diseases, drawing upon the large body of literature on the dynamics of infectious diseases. We find that the total cost decreases with increases in the quarantine rates past a critical value, regardless of the resource allocation strategy. In the case of a manageable outbreak, resources must be used early to achieve the best results, whereas in the case of an unmanageable outbreak, a constant-effort strategy seems the best among our limited plausible sets.
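
    A toy sketch of the kind of cost accounting described above: a simple quarantine-augmented SIR-like model where costs accrue per person-day in quarantine and isolation. All rates and unit costs are illustrative, not calibrated to the SARS data or to the paper's model.

        def outbreak_cost(q_rate, beta=0.35, gamma=0.1, c_q=1.0, c_i=2.0,
                          days=200, dt=0.1):
            """Integrate a toy S/Q/I system forward (Euler) and accumulate cost."""
            S, Q, I = 0.999, 0.0, 0.001
            cost = 0.0
            for _ in range(int(days / dt)):
                new_inf = beta * S * I
                quarantined = q_rate * S * I
                S += dt * (-new_inf - quarantined)
                Q += dt * (quarantined - gamma * Q)
                I += dt * (new_inf - gamma * I)
                cost += dt * (c_q * Q + c_i * I)
            return cost

        for q in (0.0, 0.2, 0.5):       # increasing quarantine effort
            print(q, outbreak_cost(q))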

  6. Numerical investigation on layout optimization of obstacles in a three-dimensional passive micromixer.

    PubMed

    Chen, Xueye; Zhao, Zhongyi

    2017-04-29

    This paper aims at the layout optimization design of obstacles in a three-dimensional T-type micromixer. Numerical analysis shows that the direction of the flow velocity changes constantly due to the blocking by obstacles, which produces chaotic convection and effectively increases species mixing. The orthogonal experiment method was applied to determine the effects of some key parameters on mixing efficiency. The weights, in order, are: height of obstacles > geometric shape > symmetry = number of obstacles. Based on the optimized results, a multi-unit obstacle micromixer was designed. Compared with a T-type micromixer, the multi-unit obstacle micromixer is more efficient, and more than 90% mixing efficiency was obtained for a wide range of Péclet numbers. It is demonstrated that the presented optimal design method for obstacle layout in three-dimensional microchannels is a simple and effective technique to improve species mixing in microfluidic devices. The obstacle layout methodology has potential applications in chemical engineering and bioengineering. Copyright © 2017 Elsevier B.V. All rights reserved.
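
    A sketch of the mixing-efficiency measure commonly used for such micromixers: one minus the normalized standard deviation of concentration over a cross-section. This is a standard definition; the paper's exact normalization is not reproduced here, and the sample values are placeholders.

        import numpy as np

        def mixing_efficiency(c, c_mean=0.5):
            """c: concentration samples across the outlet cross-section."""
            sigma = np.sqrt(np.mean((c - c_mean) ** 2))
            sigma_max = np.sqrt(c_mean * (1.0 - c_mean))   # fully segregated
            return 1.0 - sigma / sigma_max

        print(mixing_efficiency(np.array([0.45, 0.52, 0.49, 0.55])))  # ~0.93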

  7. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
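
    A minimal sketch of a standard GA over binary pulse-train patterns (1 = pulse in a time bin), the kind of search space described above; the fitness function is a toy stand-in for the biophysical model evaluations, and none of the paper's specific modifications are reproduced.

        import numpy as np
        rng = np.random.default_rng(1)

        def evolve(fitness, n_bits=50, pop=40, gens=100, p_mut=0.02):
            P = rng.integers(0, 2, size=(pop, n_bits))
            for _ in range(gens):
                scores = np.array([fitness(ind) for ind in P])
                parents = P[np.argsort(scores)[-(pop // 2):]]   # keep top half
                moms = parents[rng.integers(0, pop // 2, pop)]
                dads = parents[rng.integers(0, pop // 2, pop)]
                cut = rng.integers(1, n_bits, pop)              # one-point crossover
                kids = moms.copy()
                for i in range(pop):
                    kids[i, cut[i]:] = dads[i, cut[i]:]
                kids ^= (rng.random(kids.shape) < p_mut).astype(kids.dtype)
                P = kids
            return P[np.argmax([fitness(ind) for ind in P])]

        # Toy fitness: reward pulses but penalize rapid on/off switching.
        best = evolve(lambda ind: ind.sum() - 0.5 * np.abs(np.diff(ind)).sum())
        print(best)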

  8. “I’m Not Going to Die from the AIDS”: Resilience in Aging with HIV Disease

    PubMed Central

    Emlet, Charles A.; Tozay, Shakima; Raveis, Victoria H.

    2011-01-01

    Purpose: Adults aging with HIV/AIDS can experience resilience in spite of the deleterious effects of the disease. This study examines the lived experiences of older adults with HIV/AIDS as they relate to strengths and resilience in dealing with this devastating disease. Design and methods: Semistructured in-depth interviews were conducted with 25 adults, 50 years and older, living with HIV/AIDS. The interview transcripts were analyzed using constant comparative methodology following the tenets of adaptive theory. Results: The majority of informants expressed experiences of resilience and strengths related to living with HIV/AIDS. Seven major themes emerged from the analysis: self-acceptance, optimism, will to live, generativity, self-management, relational living, and independence. Implications: The research identified the importance of strengths and resilience among older adults living with HIV/AIDS. Further research is needed to explore these phenomena with larger samples. Practitioners should identify and implement methods for assessing resilience among older HIV-infected adults. PMID:20650948

  9. Pursuit of psychological well-being (ikigai) and the evolution of self-understanding in the context of caregiving in Japan.

    PubMed

    Yamamoto-Mitani, Noriko; Wallhagen, Margaret I

    2002-12-01

    Using the Japanese concept of ikigai, which describes a certain state of psychological well-being, this study explores how Japanese family caregivers of elderly parents with dementia pursue, maintain, or attempt to regain their psychological well-being in the face of the hardship of caregiving. Twenty-six Japanese women caring for an elderly parent or parent-in-law with dementia were interviewed, and the data were analyzed using constant comparative methodology. Based on the analysis of the interview data, we define ikigai as certain life experiences and/or the positive emotion felt through those experiences that allow the caregiver to judge her life as good and meaningful, and to feel that it is worthwhile to continue living. Caregivers use various means to pursue their ikigai depending on the context of care. The types of pursuit of ikigai are examined in varying contexts of caregiving. Because the data suggest that the ikigai experience influences how the caregivers' self-understanding changes over time, the notion of ikigai is further explored in relation to the construct of self-understanding.

  10. Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector

    NASA Technical Reports Server (NTRS)

    Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming

    1996-01-01

    We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant savings in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of the Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis for new second-order closure schemes and, we believe, compare favorably with the traditional Reynolds stress transport equation.

  11. A functional relation for field-scale nonaqueous phase liquid dissolution developed using a pore network model

    USGS Publications Warehouse

    Dillard, L.A.; Essaid, H.I.; Blunt, M.J.

    2001-01-01

    A pore network model with cubic chambers and rectangular tubes was used to estimate the nonaqueous phase liquid (NAPL) dissolution rate coefficient, Kdiss·ai, and the NAPL/water total specific interfacial area, ai. Kdiss·ai was computed as a function of the modified Peclet number (Pe′) for various NAPL saturations (SN) and ai during drainage and imbibition and during dissolution without displacement. The largest contributor to ai was the interfacial area in the water-filled corners of chambers and tubes containing NAPL. When Kdiss·ai was divided by ai, the resulting curves of the dissolution coefficient Kdiss versus Pe′ suggested that an approximate value of Kdiss could be obtained as a weak function of hysteresis or SN. Spatially and temporally variable maps of Kdiss·ai calculated using the network model were used in field-scale simulations of NAPL dissolution. These simulations were compared to simulations using a constant value of Kdiss·ai and the empirical correlation of Powers et al. [Water Resour. Res. 30(2) (1994b) 321]. Overall, a methodology was developed for incorporating pore-scale processes into field-scale prediction of NAPL dissolution. Copyright © 2001.

  12. Comparison of probability statistics for automated ship detection in SAR imagery

    NASA Astrophysics Data System (ADS)

    Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.

    1998-12-01

    This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a Constant False Alarm Rate filter applied to Synthetic Aperture Radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look, K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.

  13. Speaking rate effects on locus equation slope.

    PubMed

    Berry, Jeff; Weismer, Gary

    2013-11-01

    A locus equation describes a 1st order regression fit to a scatter of vowel steady-state frequency values predicting vowel onset frequency values. Locus equation coefficients are often interpreted as indices of coarticulation. Speaking rate variations with a constant consonant-vowel form are thought to induce changes in the degree of coarticulation. In the current work, the hypothesis that locus slope is a transparent index of coarticulation is examined through the analysis of acoustic samples of large-scale, nearly continuous variations in speaking rate. Following the methodological conventions for locus equation derivation, data pooled across ten vowels yield locus equation slopes that are mostly consistent with the hypothesis that locus equations vary systematically with coarticulation. Comparable analyses between different four-vowel pools reveal variations in the locus slope range and changes in locus slope sensitivity to rate change. Analyses across rate but within vowels are substantially less consistent with the locus hypothesis. Taken together, these findings suggest that the practice of vowel pooling exerts a non-negligible influence on locus outcomes. Results are discussed within the context of articulatory accounts of locus equations and the effects of speaking rate change.
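
    A minimal sketch of how a locus equation is derived, assuming invented F2 values: a first-order fit of vowel-onset frequency against steady-state (midpoint) frequency.

      import numpy as np

      # Locus equation sketch: fit F2 at vowel onset against F2 at the
      # vowel steady state. The formant values below are invented.
      f2_mid   = np.array([2300, 1900, 1200, 900, 1600, 2100])   # Hz, steady state
      f2_onset = np.array([2000, 1750, 1300, 1100, 1550, 1900])  # Hz, vowel onset

      slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
      print(f"locus slope {slope:.2f}, intercept {intercept:.0f} Hz")
      # Slopes near 1 are read as heavy coarticulation; near 0, little.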

  14. A Stopped-Flow Kinetics Experiment for Advanced Undergraduate Laboratories: Formation of Iron(III) Thiocyanate

    NASA Astrophysics Data System (ADS)

    Clark, Charles R.

    1997-10-01

    A series of 15 stopped-flow kinetic experiments relating to the formation of iron(III) thiocyanate at 25.0 °C and I = 1.0 M (NaClO₄) is described. A methodology is given whereby solution preparation and data collection can be carried out within the time scale of a single laboratory period (3-4 h). Kinetic data are obtained using constant [SCN⁻] and at three H⁺ concentrations (0.10, 0.20, 0.30 M) for varying concentrations of Fe³⁺ (ca. 0.0025-0.020 M). Rate data (450 nm) are consistent with rate laws for the forward and reverse reactions: kf = (k₁ + k₂Ka1/[H⁺])[Fe³⁺] and kr = k₋₁ + k₋₂Ka2/[H⁺], respectively, with k₁, k₋₁ corresponding to the rate constants for formation and decay of FeSCN²⁺, k₂, k₋₂ to the rate constants for formation and decay of the FeSCN(OH)⁺ ion, and Ka1, Ka2 to the acid dissociation constants (ionization of coordinated OH₂) of Fe³⁺ and FeSCN²⁺. Using literature values for the latter two quantities (Ka1 = 2.04 × 10⁻³ M, Ka2 = 6.5 × 10⁻⁵ M) allows values for the four rate constants to be obtained. A typical data set is analyzed to give k₁ = 109(10) M⁻¹s⁻¹, k₋₁ = 0.79(0.10) s⁻¹, k₂ = 8020(800) M⁻¹s⁻¹, k₋₂ = 2630(230) s⁻¹. Absorbance change data for the reaction (ΔA) follow the expression ΔA = Alim·Kf·[Fe³⁺]/(1 + Kf·[Fe³⁺]), with Alim corresponding to the absorbance of fully formed FeSCN²⁺ (i.e., free SCN⁻ absent) and Kf to the formation constant of this complex (112(5) M⁻¹ in the example, cf. 138(29) M⁻¹ from the kinetic data).
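
    For readers checking the arithmetic, the sketch below evaluates the quoted rate laws and the absorbance expression with the constants reported above; the concentration grid mirrors the experimental ranges but is otherwise illustrative.

      # Evaluate the rate laws quoted in the abstract with the reported
      # constants; concentrations are example values within the stated ranges.
      k1, k_1 = 109.0, 0.79        # M^-1 s^-1, s^-1  (FeSCN2+ formation/decay)
      k2, k_2 = 8020.0, 2630.0     # FeSCN(OH)+ formation/decay
      Ka1, Ka2 = 2.04e-3, 6.5e-5   # M, acid dissociation constants
      Kf = 112.0                   # M^-1, formation constant of FeSCN2+

      for H in (0.10, 0.20, 0.30):           # M, the three acid concentrations
          for Fe in (0.0025, 0.010, 0.020):  # M, Fe3+ range
              kf_ = (k1 + k2 * Ka1 / H) * Fe   # pseudo-first-order formation
              kr_ = k_1 + k_2 * Ka2 / H        # reverse
              kobs = kf_ + kr_                 # observed relaxation rate
              frac = Kf * Fe / (1 + Kf * Fe)   # DeltaA / Alim, fraction complexed
              print(f"[H+]={H:.2f} [Fe3+]={Fe:.4f}  kobs={kobs:6.2f} s^-1  "
                    f"DeltaA/Alim={frac:.2f}")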

  15. Understanding nutritional epidemiology and its role in policy.

    PubMed

    Satija, Ambika; Yu, Edward; Willett, Walter C; Hu, Frank B

    2015-01-01

    Nutritional epidemiology has recently been criticized on several fronts, including the inability to measure diet accurately, and for its reliance on observational studies to address etiologic questions. In addition, several recent meta-analyses with serious methodologic flaws have arrived at erroneous or misleading conclusions, reigniting controversy over formerly settled debates. All of this has raised questions regarding the ability of nutritional epidemiologic studies to inform policy. These criticisms, to a large degree, stem from a misunderstanding of the methodologic issues of the field and the inappropriate use of the drug trial paradigm in nutrition research. The exposure of interest in nutritional epidemiology is human diet, which is a complex system of interacting components that cumulatively affect health. Consequently, nutritional epidemiology constantly faces a unique set of challenges and continually develops specific methodologies to address these. Misunderstanding these issues can lead to the nonconstructive and sometimes naive criticisms we see today. This article aims to clarify common misunderstandings of nutritional epidemiology, address challenges to the field, and discuss the utility of nutritional science in guiding policy by focusing on 5 broad questions commonly asked of the field. © 2015 American Society for Nutrition.

  16. Role of endothelium sensitivity to shear stress in noradrenaline-induced constriction of feline femoral arterial bed under constant flow and constant pressure perfusions.

    PubMed

    Kartamyshev, Sergey P; Balashov, Sergey A; Melkumyants, Arthur M

    2007-01-01

    The effect of shear stress at the endothelium in the attenuation of the noradrenaline-induced constriction of the femoral vascular bed perfused at a constant blood flow was investigated in 16 anesthetized cats. It is known that the adrenergic vasoconstriction of the femoral vascular bed is considerably greater at a constant pressure perfusion than at a constant blood flow. This difference may depend on the ability of the endothelium to relax smooth muscle in response to an increase in wall shear stress. Since the shear stress is directly related to the blood flow and inversely related to the third power of vessel diameter, vasoconstriction at a constant blood flow increases the wall shear stress that is the stimulus for smooth muscle relaxation opposing constriction. On the other hand, at a constant perfusion pressure, vasoconstriction is accompanied by a decrease in flow rate, which prevents a wall shear stress increase. To reveal the effect of endothelial sensitivity to shear stress, we compared noradrenaline-induced changes in total and proximal arterial resistances during perfusion of the hind limb at a constant blood flow and at a constant pressure in vessels with intact and injured endothelium. We found that in the endothelium-intact bed the same concentration of noradrenaline at a constant flow caused an increase in overall vascular peripheral resistance that was half as large as at a constant perfusion pressure. This difference is mainly confined to the proximal arterial vessels (arteries and large arterioles), whose resistance at a constant flow increased only 0.19 ± 0.03 times compared to that at a constant pressure. The removal of the endothelium only slightly increased constrictor responses at perfusion under a constant pressure (noradrenaline-induced increases of both overall and proximal arterial resistance were augmented by 12%), while the responses of the proximal vessels at a constant flow became 4.7 ± 0.4 times greater than in the endothelium-intact bed. A selective blockade of endothelium sensitivity to shear stress using a glutaraldehyde dimer augmented the constrictor responses of the proximal vessels at a constant flow 4.6-fold (±0.3), but had no significant effect on the responses at a constant pressure. These results are consistent with the conclusion that the difference in constrictor responses at constant flow and pressure perfusions depends mainly on the smooth muscle relaxation caused by increased wall shear stress. © 2007 S. Karger AG, Basel.

  17. Pressure ulcer image segmentation technique through synthetic frequencies generation and contrast variation using toroidal geometry.

    PubMed

    David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García

    2017-01-06

    Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increment in life expectancy, with geriatric and physically disabled patients being the principal groups affected by this condition. Diagnosis and treatment of these injuries usually takes weeks or even months by medical personnel. Using non-invasive techniques, such as image processing, it is possible to conduct an analysis of ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes by using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density over an electric field to describe a relation between a constant density and the image amplitude in a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. Then, the decomposed image is binarized applying Otsu's threshold, allowing for obtaining the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated by assembling a database of 51 images of pressure ulcers, provided by the Centre IGURCO. With the segmentation of these pressure ulcer images it is possible to aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, where the segments obtained using the methodology are compared with the real segments. The proposed technique is compared with two benchmarked algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds. The methodology presents better segmentation results than the benchmarked algorithms, using less computational time and without the need for an initial condition.
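
    The paper's synthetic-frequency and toroidal decomposition steps are specific to the authors' method and are not reproduced here; the sketch below only illustrates the downstream stages named in the abstract (Otsu binarization followed by morphological clean-up), using scikit-image on a synthetic placeholder image.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.morphology import binary_opening, disk, remove_small_objects

      # Otsu threshold + morphological clean-up, the final stages named in
      # the abstract. `gray` stands in for one contrast-decomposed level of
      # the image; the synthetic-frequency step itself is not reproduced.
      rng = np.random.default_rng(0)
      gray = rng.random((128, 128))                   # placeholder grayscale image
      gray[40:90, 30:80] += 1.0                       # a bright "wound" region

      mask = gray > threshold_otsu(gray)              # global Otsu threshold
      mask = binary_opening(mask, disk(3))            # remove thin artifacts
      mask = remove_small_objects(mask, min_size=50)  # keep the main segment
      print("segment pixels:", int(mask.sum()))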

  18. Realist explanatory theory building method for social epidemiology: a protocol for a mixed method multilevel study of neighbourhood context and postnatal depression.

    PubMed

    Eastwood, John G; Jalaludin, Bin B; Kemp, Lynn A

    2014-01-01

    A recent criticism of social epidemiological studies, and multilevel studies in particular, has been a paucity of theory. We present here the protocol for a study that aims to build a theory of the social epidemiology of maternal depression. We use a critical realist approach which is trans-disciplinary, encompassing both quantitative and qualitative traditions, and that assumes both ontological and hierarchical stratification of reality. We describe a critical realist Explanatory Theory Building Method comprising (1) an emergent phase, (2) a construction phase, and (3) a confirmatory phase. A concurrent triangulated mixed method multilevel cross-sectional study design is described. The Emergent Phase uses interviews, focus groups, exploratory data analysis, exploratory factor analysis, regression, and multilevel Bayesian spatial data analysis to detect and describe phenomena. Abductive and retroductive reasoning will be applied to categorical principal component analysis, exploratory factor analysis, regression, coding of concepts and categories, constant comparative analysis, drawing of conceptual networks, and situational analysis to generate theoretical concepts. The Theory Construction Phase will include: (1) defining stratified levels; (2) analytic resolution; (3) abductive reasoning; (4) comparative analysis (triangulation); (5) retroduction; (6) postulate and proposition development; (7) comparison and assessment of theories; and (8) conceptual frameworks and model development. The strength of the critical realist methodology described is the extent to which this paradigm is able to support the epistemological, ontological, axiological, methodological and rhetorical positions of both quantitative and qualitative research in the field of social epidemiology. The extensive multilevel Bayesian studies, intensive qualitative studies, latent variable theory, abductive triangulation, and Inference to Best Explanation provide a strong foundation for Theory Construction. The study will contribute to defining the role that realism and mixed methods can play in explaining the social determinants and developmental origins of health and disease.

  19. Detecting organisational innovations leading to improved ICU outcomes: a protocol for a double-blinded national positive deviance study of critical care delivery.

    PubMed

    Chiou, Howard; Jopling, Jeffrey K; Scott, Jennifer Yang; Ramsey, Meghan; Vranas, Kelly; Wagner, Todd H; Milstein, Arnold

    2017-06-14

    There is substantial variability in intensive care unit (ICU) utilisation and quality of care. However, the factors that drive this variation are poorly understood. This study uses a novel adaptation of the positive deviance approach, a methodology used in public health that assumes solutions to challenges already exist within the system, to detect innovations that are likely to improve intensive care. We used the Philips eICU Research Institute database, containing 3.3 million patient records from over 50 health systems across the USA. Acute Physiology and Chronic Health Evaluation IVa scores were used to identify the study cohort, which included ICU patients whose outcomes were felt to be most sensitive to organisational innovations. The primary outcomes included mortality and length of stay. Outcome measurements were directly standardised, and bootstrapped CIs were calculated with adjustment for false discovery rate. Using purposive sampling, we then generated a blinded list of five positive outliers and five negative comparators. Using rapid qualitative inquiry (RQI), blinded interdisciplinary site visit teams will conduct interviews and observations using a team ethnography approach. After data collection is completed, the data will be unblinded and analysed using a cross-case method to identify themes, patterns and innovations using a constant comparative grounded theory approach. This process detects the innovations in intensive care and supports an evaluation of how positive deviance and RQI methods can be adapted to healthcare. The study protocol was approved by the Stanford University Institutional Review Board (reference: 39509). We plan on publishing study findings and methodological guidance in peer-reviewed academic journals, white papers and presentations at conferences. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Application of quality improvement analytic methodology in emergency medicine research: A comparative evaluation.

    PubMed

    Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B

    2018-05-30

    Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
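
    A minimal sketch of the control-chart idea referenced above, assuming an individuals (XmR) chart and invented weekly mean length-of-stay values; the original study's data and exact chart type may differ.

      import numpy as np

      # Individuals (XmR) control chart: limits preserve the time order of
      # the data. The weekly mean length-of-stay values are invented.
      los = np.array([142, 138, 145, 140, 139, 121, 118, 117, 122, 119, 116])

      mr = np.abs(np.diff(los))                  # moving ranges
      center = los.mean()
      ucl = center + 2.66 * mr.mean()            # standard XmR chart constant
      lcl = center - 2.66 * mr.mean()
      print(f"CL={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
      for t, x in enumerate(los):
          flag = "signal" if (x > ucl or x < lcl) else ""
          print(t, x, flag)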

  1. Emotional resistance building: how family members of loved ones undergoing chemotherapy treatment process their fear of emotional collapse.

    PubMed

    McCarthy, Bridie; Andrews, Tom; Hegarty, Josephine

    2015-04-01

    To explore family members' experiences when their loved one is undergoing chemotherapy treatment as an outpatient for newly diagnosed colorectal cancer and to develop an explanatory theory of how they process their main concern. Most individuals with cancer are now treated as outpatients and cared for by family members. International research highlights the many side effects of chemotherapy, which in the absence of specific information and/or experience can be difficult for family members to deal with. Unmet needs can have an impact on the health of both patients and family members. Classic grounded theory methodology was used for this study. Family members (n = 35) of patients undergoing chemotherapy treatment for cancer were interviewed (June 2010-July 2011). Data were analysed using the concurrent processes of constant comparative analysis, data collection, theoretical sampling and memo writing. The main concern that emerged for participants was fear of emotional collapse. This fear was dealt with through a process conceptualized as 'Emotional Resistance Building'. This is a basic social process with three phases: 'Figuring out', 'Getting on with it' and 'Uncertainty adjustment'. The phases are not linear but interrelated, as participants can be in any one or more of the phases at any one time. This theory has the potential to be used by healthcare professionals working in oncology to support family members of patients undergoing chemotherapy. New ways of supporting family members through this most difficult and challenging period are articulated within this theory. © 2014 John Wiley & Sons Ltd.

  2. Development of USEtox characterisation factors for dishwasher detergents using data made available under REACH.

    PubMed

    Igos, Elorri; Moeller, Ruth; Benetto, Enrico; Biwer, Arno; Guiton, Mélanie; Dieumegard, Philippe

    2014-04-01

    Because of increasingly stringent regulations and customer demand, dishwasher detergent manufacturers are constantly improving the composition of their products toward better environmental performance. In order to quantify the pros and cons of these changes over the lifecycle of detergents, as compared to conventional products, the use of Life Cycle Assessment (LCA) is a meaningful opportunity. However, the application of the methodology is hampered by the lack of Characterisation Factors (CFs) for the specific chemical substances in the detergent compositions, without which these substances cannot be included in the impact assessment of the effluent discharge. In this study we have tackled this problem, taking advantage of the specific case of three dishwasher detergents produced by the Chemolux/McBride group: phosphate-based, eco-labelled and phosphate-free formulations. Nine CFs for freshwater ecotoxicity and seven CFs for human toxicity have been developed, using the USEtox methodology and data made available under the REACH regulation. As a result, the dishwasher effluent composition could be characterised by more than 95% for freshwater ecotoxicity, whereas for human toxicity the percentage was less than 36%, due to the lack of adequate and reliable repeated dose toxicity studies. The main contributing substances to freshwater ecotoxicity were found to be sodium percarbonate and sodium triphosphate, the latter confirming the pertinence of banning phosphates in the detergent industry. Regarding human toxicity, zinc shows the highest contribution. Further comparison to previous studies and sensitivity analysis substantiated the robustness of these conclusions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Design of a modulated orthovoltage stereotactic radiosurgery system.

    PubMed

    Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S

    2017-07-01

    To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.

  4. Utilization of non-conventional systems for conversion of biomass to food components: Potential for utilization of algae in engineered foods

    NASA Technical Reports Server (NTRS)

    Karel, M.; Kamarei, A. R.; Nakhost, Z.

    1985-01-01

    The major nutritional components of the green algae (Scenedesmus obliquus) grown in a Constant Cell density Apparatus were determined. Suitable methodology to prepare proteins from which three major undesirable components of these cells (i.e., cell walls, nucleic acids, and pigments) were either removed or substantially reduced was developed. Results showed that processing of green algae to protein isolate enhances its potential nutritional and organoleptic acceptability as a diet component in a Controlled Ecological Life Support System.

  5. Coating Life Prediction

    NASA Technical Reports Server (NTRS)

    Nesbitt, J. A.; Gedwill, M. A.

    1984-01-01

    Hot-section gas-turbine components typically require some form of coating for oxidation and corrosion protection. Efficient use of coatings requires reliable and accurate predictions of the protective life of the coating. Currently, engine inspections and component replacements are often made on a conservative basis. As a result, there is a constant need to improve and develop the life-prediction capability of metallic coatings for use in various service environments. The present work is aimed at developing an improved methodology for predicting metallic coating lives in an oxidizing environment and in a corrosive environment.

  6. Increased water retention in polymer electrolyte membranes at elevated temperatures assisted by capillary condensation.

    PubMed

    Park, Moon Jeong; Downing, Kenneth H; Jackson, Andrew; Gomez, Enrique D; Minor, Andrew M; Cookson, David; Weber, Adam Z; Balsara, Nitash P

    2007-11-01

    We establish a new systematic methodology for controlling the water retention of polymer electrolyte membranes. Block copolymer membranes comprising hydrophilic phases with widths ranging from 2 to 5 nm become wetter as the temperature of the surrounding air is increased at constant relative humidity. The widths of the moist hydrophilic phases were measured by cryogenic electron microscopy experiments performed on humid membranes. Simple calculations suggest that capillary condensation is important at these length scales. The correlation between moisture content and proton conductivity of the membranes is demonstrated.

  7. A Description of Methodologies Used in Estimation of A-Weighted Sound Levels for FAA Advisory Circular AC-36-3B.

    DTIC Science & Technology

    1982-01-01

    [Excerpted OCR fragments] Symbol definitions: Dia, propeller diameter (expressed in inches); T°F, air temperature in degrees Fahrenheit; T°C, air temperature in degrees Celsius; T:dBA, total dBA. ... an empirical function to the absolute noise level ordinate. The term 240 log(MH) is the most sensitive and important part of the equation. The constant (240 ... Assumptions: 1. Standard day, zero wind, dry, zero-gradient runway, at a sea-level airport. 2. All aircraft operate at maximum takeoff gross weight. 3. All aircraft climb

  8. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  9. Laser ultrasonic investigations of vertical Bridgman crystal growth

    NASA Astrophysics Data System (ADS)

    Queheillalt, Douglas Ted

    The many difficulties associated with the growth of premium quality CdTe and (Cd,Zn)Te alloys have stimulated an interest in the development of a non-invasive ultrasonic approach to monitor critical growth parameters such as the solid-liquid interface position and shape during vertical Bridgman growth. This sensor methodology is based upon the recognition that in most materials, the ultrasonic velocity (and the elastic stiffness constants that control it) of the solid and liquid phases is temperature dependent and an abrupt increase of the longitudinal wave velocity occurs upon solidification. The laser ultrasonic approach has also been used to measure the ultrasonic velocity of solid and liquid Cd0.96Zn0.04Te as a function of temperature up to 1140°C. Using longitudinal and shear wave velocity values together with data for the temperature dependent density allowed a complete evaluation of the temperature dependent single crystal elastic stiffness constants for the solid and the adiabatic bulk modulus for liquid Cd0.96Zn0.04Te. It was found that the ultrasonic velocities exhibited a strong monotonically decreasing trend with temperature in the solid and liquid phases, and the longitudinal wave velocity showed an abrupt, almost 50% decrease upon melting. Because ray propagation in partially solidified bodies is complex and defines the sensing methodology, a ray tracing algorithm has been developed to analyze two-dimensional wave propagation in the diametral plane of cylindrical solid-liquid interfaces. Ray path, wavefront and time-of-flight (TOF) projections for rays that travel from a source to an arbitrarily positioned receiver on the diametral plane have been calculated and compared to experimentally measured data on a model liquid-solid interface. The simulations and the experimental results reveal that the interfacial region can be identified from transmission TOF data and, when used in conjunction with a nonlinear least squares reconstruction algorithm, the interface geometry (i.e. axial location and shape) can be precisely recovered and the ultrasonic velocities of both solid and liquid phases obtained. To gain insight into the melting and solidification process, a single zone VB growth furnace was integrated with the laser ultrasonic sensor system and used to monitor the melting-solidification and directional solidification characteristics of Cd0.96Zn0.04Te.

  10. 3-D modeling of ductile tearing using finite elements: Computational aspects and techniques

    NASA Astrophysics Data System (ADS)

    Gullerud, Arne Stewart

    This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process. The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.

  11. Methods for the guideline-based development of quality indicators--a systematic review

    PubMed Central

    2012-01-01

    Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067

  12. A comparative review of nurse turnover rates and costs across countries.

    PubMed

    Duffield, Christine M; Roche, Michael A; Homer, Caroline; Buchan, James; Dimitrelis, Sofia

    2014-12-01

    To compare nurse turnover rates and costs from four studies in four countries (US, Canada, Australia, New Zealand) that have used the same costing methodology: the original Nursing Turnover Cost Calculation Methodology. Measuring and comparing the costs and rates of turnover is difficult because of differences in definitions and methodologies. Comparative review. Searches were carried out within CINAHL, Business Source Complete and Medline for studies that used the original Nursing Turnover Cost Calculation Methodology and reported on both costs and rates of nurse turnover, published in 2014 or earlier. A comparative review of turnover data was conducted using four studies that employed the original Nursing Turnover Cost Calculation Methodology. Costing data items were converted to percentages, while total turnover costs were converted to US 2014 dollars and adjusted according to inflation rates, to permit cross-country comparisons. Despite using the same methodology, Australia reported significantly higher turnover costs ($48,790) due to higher termination (~50% of indirect costs) and temporary replacement costs (~90% of direct costs). Costs were almost 50% lower in the US ($20,561), Canada ($26,652) and New Zealand ($23,711). Turnover rates also varied significantly across countries, with the highest rate reported in New Zealand (44.3%) followed by the US (26.8%), Canada (19.9%) and Australia (15.1%). A significant proportion of turnover costs is attributed to temporary replacement, highlighting the importance of nurse retention. The authors suggest a minimum dataset is also required to eliminate potential variability across countries, states, hospitals and departments. © 2014 John Wiley & Sons Ltd.

  13. ChargeOut! : discounted cash flow compared with traditional machine-rate analysis

    Treesearch

    Ted Bilek

    2008-01-01

    ChargeOut!, a discounted cash-flow methodology in spreadsheet format for analyzing machine costs, is compared with traditional machine-rate methodologies. Four machine-rate models are compared and a common data set representative of logging skidders’ costs is used to illustrate the differences between ChargeOut! and the machine-rate methods. The study found that the...

  14. Local conformational dynamics in alpha-helices measured by fast triplet transfer.

    PubMed

    Fierz, Beat; Reiner, Andreas; Kiefhaber, Thomas

    2009-01-27

    Coupling fast triplet-triplet energy transfer (TTET) between xanthone and naphthylalanine to the helix-coil equilibrium in alanine-based peptides allowed the observation of local equilibrium fluctuations in alpha-helices on the nanosecond to microsecond time scale. The experiments revealed faster helix unfolding in the terminal regions compared with the central parts of the helix, with time constants varying from 250 ns to 1.4 μs at 5 °C. Local helix formation occurs with a time constant of approximately 400 ns, independent of the position in the helix. Comparing the experimental data with simulations using a kinetic Ising model showed that the experimentally observed dynamics can be explained by a one-dimensional boundary diffusion with position-independent elementary time constants of approximately 50 ns for the addition and approximately 65 ns for the removal of an alpha-helical segment. The elementary time constant for helix growth agrees well with previously measured time constants for formation of short loops in unfolded polypeptide chains, suggesting that helix elongation is mainly limited by a conformational search.
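
    The boundary-diffusion picture can be caricatured in a few lines: a single helix/coil boundary performing a random walk with the ~50 ns addition and ~65 ns removal time constants quoted above. Chain length, time step, and initial state are illustrative; the paper's kinetic Ising model is more detailed.

      import random

      # Toy boundary diffusion: one helix/coil boundary on a short chain,
      # stepping with the elementary time constants quoted in the abstract.
      k_add, k_rem = 1 / 50.0, 1 / 65.0    # events per ns (add / remove segment)
      dt, n_res, steps = 1.0, 16, 200000   # ns, residues, time steps

      boundary = n_res // 2                # residues 0..boundary-1 are helical
      trace = []
      for _ in range(steps):
          r = random.random()
          if r < k_add * dt and boundary < n_res:
              boundary += 1                # grow helix by one segment
          elif r < (k_add + k_rem) * dt and boundary > 0:
              boundary -= 1                # shrink helix by one segment
          trace.append(boundary)
      print("mean helix length:", sum(trace) / len(trace))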

  15. On the anisotropic elastic properties of hydroxyapatite.

    NASA Technical Reports Server (NTRS)

    Katz, J. L.; Ukraincik, K.

    1971-01-01

    Experimental measurements of the isotropic elastic moduli on polycrystalline specimens of hydroxyapatite and fluorapatite are compared with elastic constants measured directly from single crystals of fluorapatite in order to derive a set of pseudo single crystal elastic constants for hydroxyapatite. The stiffness coefficients thus derived are given. The anisotropic and isotropic elastic properties are then computed and compared with similar properties derived from experimental observations of the anisotropic behavior of bone.

  16. Accurate Determination of the Values of Fundamental Physical Constants: The Basis of the New "Quantum" SI Units

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2018-03-01

    The metric system appeared as the system of units designed for macroscopic (laboratory scale) measurements. The progress in accurate determination of the values of quantum constants (such as the Planck constant) in SI units shows that the capabilities in high-precision measurement of microscopic and macroscopic quantities in terms of the same units have increased substantially recently. At the same time, relative microscopic measurements (for example, the comparison of atomic transition frequencies or atomic masses) are often much more accurate than relative measurements of macroscopic quantities. This is the basis for the strategy to define units in microscopic phenomena and then use them on the laboratory scale, which plays a crucial role in practical methodological applications determined by everyday life and technologies. The international CODATA task group on fundamental constants regularly performs an overall analysis of the precision world data (the so-called Adjustment of the Fundamental Constants) and publishes their recommended values. The most recent evaluation was based on the data published by the end of 2014; here, we review the corresponding data and results. The accuracy in determination of the Boltzmann constant has increased, the consistency of the data on determination of the Planck constant has improved; it is these two dimensional constants that will be used in near future as the basis for the new definition of the kelvin and kilogram, respectively. The contradictions in determination of the Rydberg constant and the proton charge radius remain. The accuracy of determination of the fine structure constant and relative atomic weight of the electron has improved. Overall, we give a detailed review of the state of the art in precision determination of the values of fundamental constants. The mathematical procedure of the Adjustment, the new data and results are considered in detail. The limitations due to macroscopic properties of material standards (such as the International prototype of the kilogram) and the isotopic composition of substances involved in precision studies in general (as standard measures for the triple point of water) and, in particular, in the determination of the fundamental constants are discussed. The perspectives of the introduction of the new quantum units, which will be free from the mentioned problems, are considered. Many physicists feel no sympathy for the International system of units (SI), believing that it does not properly reflect the character of physical laws. In fact, there are three parallel systems, namely the systems of quantities, system of their units and the related standards. The definition of the units, in particular, the SI units, above all, reflects our ability to perform precision measurements of physical values under certain conditions, in particular, to create appropriate standards. This requirement is not related to the beauty of fundamental laws of nature. More accurate determination of the fundamental constants is one of the areas where we accumulate such experience.

  17. A Methodology for Loading the Advanced Test Reactor Driver Core for Experiment Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowherd, Wilson M.; Nielsen, Joseph W.; Choe, Dong O.

    In support of experiments in the ATR, a new methodology was devised for loading the ATR Driver Core. This methodology will replace the existing methodology used by the INL Neutronic Analysis group to analyze experiments. This paper presents the as-run analysis for ATR Cycle 152B, specifically comparing measured lobe powers with eigenvalue calculations.

  18. The Development of a Methodology for Estimating the Cost of Air Force On-the-Job Training.

    ERIC Educational Resources Information Center

    Samers, Bernard N.; And Others

    The Air Force uses a standardized costing methodology for resident technical training schools (TTS); no comparable methodology exists for computing the cost of on-the-job training (OJT). This study evaluates three alternative survey methodologies and a number of cost models for estimating the cost of OJT for airmen training in the Administrative…

  19. Toward a Framework for Comparative HRD Research

    ERIC Educational Resources Information Center

    Wang, Greg G.; Sun, Judy Y.

    2012-01-01

    Purpose: This paper seeks to address the recent challenges in the international human resource development (HRD) research and the related methodological strategy. Design/methodology/approach: This inquiry is based on a survey of literatures and integrates various comparative research strategies adopted in other major social science disciplines.…

  20. An equivalent method of mixed dielectric constant in passive microwave/millimeter radiometric measurement

    NASA Astrophysics Data System (ADS)

    Su, Jinlong; Tian, Yan; Hu, Fei; Gui, Liangqi; Cheng, Yayun; Peng, Xiaohui

    2017-10-01

    Dielectric constant is an important quantity for describing the properties of matter. This paper proposes the concept of a mixed dielectric constant (MDC) in passive microwave radiometric measurement. In addition, an MDC inversion method is proposed that utilizes the Ratio of Angle-Polarization Difference (RAPD). The MDCs of several materials are investigated using the RAPD. Brightness temperatures (TBs) calculated from the MDC and from the original dielectric constant are compared. Random errors are added to the simulation to test the robustness of the algorithm. Keywords: passive detection, microwave/millimeter, radiometric measurement, ratio of angle-polarization difference (RAPD), mixed dielectric constant (MDC), brightness temperatures, remote sensing, target recognition.

  1. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case that the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation schemes unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor from the optimal. We also consider the naive algorithm where we choose backlinks from nodes with high PageRank values compared to the outdegree and show that the naive algorithm performs much worse on certain graphs compared to the constant factor approximation scheme.
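
    A minimal sketch of the naive baseline described above, using networkx: add k backlinks to the target from the candidate nodes with the highest PageRank-to-outdegree ratio. The random graph and the choice of k are illustrative.

      import networkx as nx

      # Naive link-building baseline: pick backlink sources by the ratio of
      # their PageRank to their (incremented) outdegree. Graph and k are
      # illustrative, not from the paper.
      G = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)
      target, k = 0, 3

      pr = nx.pagerank(G)
      candidates = [v for v in G if v != target and not G.has_edge(v, target)]
      # value of a backlink from v is roughly pr[v] / (outdeg(v) + 1)
      best = sorted(candidates,
                    key=lambda v: pr[v] / (G.out_degree(v) + 1),
                    reverse=True)[:k]
      G.add_edges_from((v, target) for v in best)
      print("new PageRank of target:", nx.pagerank(G)[target])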

  2. Constant fields and constant gradients in open ionic channels.

    PubMed Central

    Chen, D P; Barcilon, V; Eisenberg, R S

    1992-01-01

    Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between an ion and the charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID:1376159
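
    In hedged form (our notation, not the paper's), the one-dimensional system described above can be written as:

      % One-dimensional Poisson-Nernst-Planck system of the kind the
      % abstract reduces to (notation ours, not the paper's):
      \begin{align*}
        J_i &= -D_i\left(\frac{dc_i}{dx} + \frac{z_i e}{kT}\,c_i\,\frac{d\phi}{dx}\right),
            & \frac{dJ_i}{dx} &= 0 \;\text{(steady state)},\\
        -\varepsilon\,\frac{d^2\phi}{dx^2} &= \sum_i z_i e\,c_i
            & &\text{(Poisson, induced charge only)}.
      \end{align*}
      % When the net-charge term is negligible, the field d(phi)/dx is
      % spatially constant, and integrating J_i recovers the classical
      % constant-field (Goldman-Hodgkin-Katz) flux equation.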

  3. Reflectance and optical constants for Cer-Vit from 250 to 1050 A

    NASA Technical Reports Server (NTRS)

    Osantowski, J. F.

    1974-01-01

    The reflectance for a bowl-feed polished Cer-Vit sample was measured at nine wavelengths and five angles of incidence from 15 to 85 deg. Optical constants were derived by the reflectance-vs-angle-of-incidence method and compared to previously reported values for ultralow-expansion fused silica and several other glasses. Surface-roughness corrections of the reflectance data and optical constants are discussed.

  4. Methodology to design a municipal solid waste generation and composition map: A case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallardo, A., E-mail: gallardo@uji.es; Carlos, M., E-mail: mcarlos@uji.es; Peris, M., E-mail: perism@uji.es

    Highlights: • To draw a waste generation and composition map of a town, many factors must be taken into account. • The proposed methodology offers two different approaches, depending on the available data, combined with geographical information systems. • The methodology has been applied to a Spanish city with success. • The methodology will be a useful tool to organize the municipal solid waste management. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists in defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous around the city, as the number of inhabitants is not constant nor is the economic activity. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that are easier to interpret. The proposed methodology is a useful preliminary tool to organize the MSW collection routes, including selective collection. To verify the methodology, it has been successfully applied to a Spanish town.

  5. Precision controlled atomic resolution scanning transmission electron microscopy using spiral scan pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sang, Xiahan; Lupini, Andrew R.; Ding, Jilai

    Atomic-resolution imaging in an aberration-corrected scanning transmission electron microscope (STEM) can enable direct correlation between atomic structure and materials functionality. The fast and precise control of the STEM probe is, however, challenging because the true beam location deviates from the assigned location depending on the properties of the deflectors. To reduce these deviations, i.e. image distortions, we use spiral scanning paths, allowing precise control of a sub-Å sized electron probe within an aberration-corrected STEM. Although spiral scanning avoids the sudden changes in the beam location (fly-back distortion) present in conventional raster scans, it is not distortion-free. “Archimedean” spirals, with a constant angular frequency within each scan, are used to determine the characteristic response at different frequencies. We then show that such characteristic functions can be used to correct image distortions present in more complicated constant linear velocity spirals, where the frequency varies within each scan. Through the combined application of constant linear velocity scanning and beam path corrections, spiral scan images are shown to exhibit less scan distortion than conventional raster scan images. The methodology presented here will be useful for in situ STEM imaging at higher temporal resolution and for imaging beam sensitive materials.
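
    A minimal sketch contrasting the two spiral parameterizations described above: an Archimedean spiral r = b·θ traced at constant angular frequency, versus the same geometry re-parameterized for constant linear velocity using the large-θ arc-length approximation s ≈ b·θ²/2, so θ(t) = sqrt(2vt/b). All scan sizes and rates here are hypothetical:

      import numpy as np

      def spiral_caf(b, omega, t):
          """Archimedean spiral r = b*theta at constant angular frequency."""
          theta = omega * t
          return b * theta * np.cos(theta), b * theta * np.sin(theta)

      def spiral_clv(b, v, t):
          """Same geometry at (approximately) constant linear velocity:
          arc length s ~ b*theta^2/2 for large theta => theta = sqrt(2*v*t/b)."""
          theta = np.sqrt(2.0 * v * t / b)
          return b * theta * np.cos(theta), b * theta * np.sin(theta)

      t = np.linspace(0.0, 1.0, 2000)                 # one hypothetical 1-s scan
      x1, y1 = spiral_caf(b=0.05, omega=200.0, t=t)   # illustrative units
      x2, y2 = spiral_clv(b=0.05, v=100.0, t=t)

      # The constant-angular-frequency scan speeds up at larger radii,
      # while the CLV version keeps the dwell per unit path roughly uniform:
      speed1 = np.hypot(np.diff(x1), np.diff(y1)) / np.diff(t)
      print(speed1[10], speed1[-1])                   # increases outward for CAF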

  6. Low-level laser therapy on skeletal muscle inflammation: evaluation of irradiation parameters

    NASA Astrophysics Data System (ADS)

    Mantineo, Matías; Pinheiro, João P.; Morgado, António M.

    2014-09-01

    We evaluated the effect of different irradiation parameters in low-level laser therapy (LLLT) for treating inflammation induced in the gastrocnemius muscle of rats, through cytokine concentrations in systemic blood and analysis of muscle tissue. We used continuous (830 and 980 nm) and pulsed illumination (830 nm). Animals were divided into five groups per wavelength (10, 20, 30, 40, and 50 mW), plus a control group. LLLT was applied during 5 days with a constant irradiation time and area. TNF-α, IL-1β, IL-2, and IL-6 cytokines were quantified by ELISA. Inflammatory cells were counted using microscopy. An identical methodology was used with pulsed illumination: average power (40 mW) and duty cycle (80%) were kept constant at five frequencies (5, 25, 50, 100, and 200 Hz). For continuous irradiation, treatment effects occurred for all doses, with a reduction of TNF-α, IL-1β, and IL-6 cytokines and of inflammatory cells. Continuous irradiation at 830 nm was more effective, a result explained by the action spectrum of cytochrome c oxidase (CCO). Best results were obtained for 40 mW, with data suggesting a biphasic dose response. Pulsed-wave irradiation was only effective at higher frequencies, a result that might be related to the rate constants of the CCO internal electron transfer process.

  7. Precision controlled atomic resolution scanning transmission electron microscopy using spiral scan pathways

    DOE PAGES

    Sang, Xiahan; Lupini, Andrew R.; Ding, Jilai; ...

    2017-03-08

    Atomic-resolution imaging in an aberration-corrected scanning transmission electron microscope (STEM) can enable direct correlation between atomic structure and materials functionality. The fast and precise control of the STEM probe is, however, challenging because the true beam location deviates from the assigned location depending on the properties of the deflectors. To reduce these deviations, i.e. image distortions, we use spiral scanning paths, allowing precise control of a sub-Å sized electron probe within an aberration-corrected STEM. Although spiral scanning avoids the sudden changes in the beam location (fly-back distortion) present in conventional raster scans, it is not distortion-free. “Archimedean” spirals, with a constant angular frequency within each scan, are used to determine the characteristic response at different frequencies. We then show that such characteristic functions can be used to correct image distortions present in more complicated constant linear velocity spirals, where the frequency varies within each scan. Through the combined application of constant linear velocity scanning and beam path corrections, spiral scan images are shown to exhibit less scan distortion than conventional raster scan images. The methodology presented here will be useful for in situ STEM imaging at higher temporal resolution and for imaging beam sensitive materials.

  8. Interaction between DNA and Drugs Having Protonable Basic Groups: Characterization through Affinity Constants, Drug Release Kinetics, and Conformational Changes

    PubMed Central

    Alarcón, Liliana P.; Baena, Yolima; Manzo, Rubén H.

    2017-01-01

    This paper reports the in vitro characterization of the interaction between the phosphate groups of DNA and the protonated species of drugs with basic groups, through the determination of the affinity constants, the reversibility of the interaction, and the effect on the secondary structure of the macromolecule. Affinity constants of the counterionic DNA–drug condensation were on the order of 10⁶. The negative electrokinetic potential of DNA decreased with the increase of the proportion of loaded drugs. The drugs were slowly released from the DNA–drug complexes, with release kinetics consistent with the high degree of counterionic condensation. The circular dichroism profile of DNA was not modified by complexation with atenolol, lidocaine, or timolol, but was significantly altered by the more lipophilic drugs benzydamine and propranolol, revealing modifications in the secondary structure of the DNA. The in vitro characterization of such interactions provides a physicochemical basis that would contribute to identifying the effects of this kind of drug in cell cultures, as well as side effects observed in their clinical use. Moreover, this methodology could also be projected to the fields of intracellular DNA transfection and the use of DNA as a carrier of active drugs. PMID:28054999

  9. Electromechanical conversion efficiency for dielectric elastomer generator in different energy harvesting cycles

    NASA Astrophysics Data System (ADS)

    Cao, Jian-Bo; E, Shi-Ju; Guo, Zhuang; Gao, Zhao; Luo, Han-Pin

    2017-11-01

    In order to improve the electromechanical conversion efficiency of dielectric elastomer generators (DEG), and building on a study of DEG energy harvesting cycles at constant voltage, constant charge, and constant electric field intensity, a new combined cycle mode and an optimization theory have been developed in terms of the generating mechanism and the electromechanical coupling process. By controlling the switching point to achieve the best energy conversion cycle, the energy loss in the conversion process is reduced. A DEG test bench was built to carry out comparative experiments. Experimental results show that the collected energy decreases, in order, from the constant voltage cycle to the constant charge cycle to the constant electric field intensity cycle. Due to factors such as internal resistance losses and electrical losses, actual energy values are less than the theoretical values. The electric energy conversion efficiency obtained by combining the constant electric field intensity cycle with the constant charge cycle is larger than that of the constant electric field intensity cycle alone. The relevant conclusions provide a basis for further applications of DEG.
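
    As a rough illustration of why the cycles differ, consider the constant-charge case: with the charge Q held fixed, the stored energy is E = Q²/2C, so electrical energy is gained as the elastomer relaxes and its capacitance falls. A minimal sketch under that ideal lossless assumption; the numerical values below are hypothetical:

      def constant_charge_energy_gain(q, c_stretched, c_relaxed):
          """Ideal electrical energy gained per constant-charge cycle (J).
          q: trapped charge (C); capacitance falls from c_stretched to
          c_relaxed (F) as the elastomer relaxes, raising E = q^2 / (2C)."""
          assert c_relaxed < c_stretched
          return 0.5 * q**2 * (1.0 / c_relaxed - 1.0 / c_stretched)

      # Example: 2 uC trapped at 1.0 nF, relaxing to 0.4 nF
      print(constant_charge_energy_gain(2e-6, 1.0e-9, 0.4e-9))  # ~3 mJ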

  10. Globalization and Its Influence on Comparative Education Methodology

    ERIC Educational Resources Information Center

    Chigisheva, Oksana

    2015-01-01

    The article is devoted to the research of the methodological changes that occur in the field of comparative education as a result of globalization. A deep analysis of the globalization phenomenon is undertaken with a special focus on the differentiation of globalization, internationalisation, regionalisation and integration. UNESCO's role in the…

  11. Network Analysis in Comparative Social Sciences

    ERIC Educational Resources Information Center

    Vera, Eugenia Roldan; Schupp, Thomas

    2006-01-01

    This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…

  12. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    ERIC Educational Resources Information Center

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  13. The transformed-stationary approach: a generic and simplified methodology for non-stationary extreme value analysis

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo

    2016-09-01

    Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
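
    A minimal sketch of the two TS steps described above, assuming a running-mean/running-standard-deviation transformation and a stationary GEV fit on the normalized series. This is an illustrative reading of the method, not the tsEva toolbox itself; the window length and the synthetic data are hypothetical:

      import numpy as np
      from scipy.stats import genextreme

      def running_mean_std(x, window):
          """Slowly varying mean and std over a centered running window."""
          half = window // 2
          mu = np.empty(len(x))
          sd = np.empty(len(x))
          for i in range(len(x)):
              seg = x[max(0, i - half): i + half + 1]
              mu[i], sd[i] = seg.mean(), seg.std()
          return mu, sd

      # Step (i): transform the non-stationary series into a stationary one
      rng = np.random.default_rng(0)
      t = np.arange(40 * 365)                        # hypothetical daily series
      x = 2 + 0.3 * np.sin(2 * np.pi * t / 365) + rng.gumbel(0, 0.5, t.size)
      mu, sd = running_mean_std(x, window=5 * 365)
      y = (x - mu) / sd                              # stationary by construction

      # Stationary GEV fit (constant shape parameter) on annual maxima
      annual_max = y.reshape(40, 365).max(axis=1)
      shape, loc, scale = genextreme.fit(annual_max)

      # Step (ii): reverse-transform a stationary return level into a
      # time-varying one, e.g. the 100-year level at each time step
      z100 = genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)
      return_level_t = mu + sd * z100                # non-stationary return level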

  14. Flexural-torsional vibration of simply supported open cross-section steel beams under moving loads

    NASA Astrophysics Data System (ADS)

    Michaltsos, G. T.; Sarantithou, E.; Sophianopoulos, D. S.

    2005-02-01

    The present work deals with linearized modal analysis of the combined flexural-torsional vibration of simply supported steel beams with open monosymmetric cross-sections, acted upon by a load of constant magnitude traversing the span eccentrically with constant velocity. After thoroughly investigating the free vibrations of the structure, which simulates a commonly used highway bridge, its forced motions under the aforementioned loading type are investigated. Utilizing the capabilities of symbolic computation within modern mathematical software, the effects of the most significant geometrical and cross-sectional beam properties on the free vibration characteristics of the beam are established and presented in tabular and graphical form. Moreover, adopting realistic values for the simplified vehicle model, the effects of eccentricity, load magnitude and corresponding velocity are assessed, and interesting conclusions for structural design purposes are drawn. The proposed methodology may serve as a starting point for further in-depth study of the whole scientific subject, in which sophisticated vehicle models, energy dissipation and more complicated bridge models may be used.

  15. New Ways of Treating Data for Diatomic Molecule 'shelf' and Double-Minimum States

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Tao, Jason; Khanna, Shirin; Pashov, Asen; Tellinghuisen, Joel

    2017-06-01

    Electronic states whose potential energy functions have 'shelf' or double-minimum shapes have always presented special challenges because, as functions of vibrational quantum number, the vibrational energies/spacings and inertial rotational constants either have an abrupt change of character with discontinuous slope or, past a given point, become completely chaotic. The present work shows that a 'traditional' methodology developed for deep 'regular' single-well potentials can also provide accurate 'parameter-fit' descriptions of the v-dependence of the vibrational energies and rotational constants of shelf-state potentials that allow a conventional RKR calculation of their potential energy functions. It is also shown that a merging of Pashov's uniquely flexible 'spline point-wise' potential function representation with Le Roy's 'Morse/Long-Range' (MLR) analytic functional form, which automatically incorporates the correct theoretically known long-range behavior, yields an analytic function that incorporates most of the advantages of both approaches. An illustrative application of this method to data for a double-minimum state of Na_2 will be described.

  16. Fault-free behavior of reliable multiprocessor systems: FTMP experiments in AIRLAB

    NASA Technical Reports Server (NTRS)

    Clune, E.; Segall, Z.; Siewiorek, D.

    1985-01-01

    This report describes a set of experiments which were implemented on the Fault Tolerant Multi-Processor (FTMP) at NASA/Langley's AIRLAB facility. These experiments are part of an effort to formulate and evaluate validation methodologies for fault-tolerant computers. This report deals with the measurement of single parameters (baselines) of a fault-free system. The initial set of baseline experiments led to the following conclusions: (1) the system clock is constant and independent of workload in the tested cases; (2) the instruction execution times are constant; (3) the R4 frame size is 40 ms, with some variation; (4) the frame stretching mechanism has some flaws in its implementation that allow the possibility of an infinite stretching of frame duration. Future experiments are planned. Some will broaden the results of these initial experiments; others will measure the system more dynamically. The implementation of a synthetic workload generation mechanism for FTMP is planned to enhance the experimental environment of the system.

  17. On Generating Fatigue Crack Growth Thresholds

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Newman, James, Jr.; Forman, Royce G.

    2003-01-01

    The fatigue crack growth threshold, defining crack growth as either very slow or nonexistent, has traditionally been determined with standardized load reduction methodologies. These experimental procedures can induce load history effects that result in crack closure. This history can affect the crack driving force: during the unloading process the crack will close first at some point along the wake or blunt at the crack tip, reducing the effective load at the crack tip. One way to reduce the effects of load history is to propagate a crack under constant amplitude loading. As a crack propagates under constant amplitude loading, the stress intensity factor range, Delta K, will increase, as will the crack growth rate, da/dN. A fatigue crack growth threshold test procedure is experimentally validated that does not produce load history effects and can be conducted at a specified stress ratio, R. The authors have chosen to study a ductile aluminum alloy where the plastic deformations generated during testing may be of a magnitude that impacts crack opening.
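
    The constant-amplitude behavior described here (Delta K and da/dN rising as the crack extends) is easy to see with a Paris-law sketch; the power-law constants and geometry factor below are illustrative textbook choices, not values from this study:

      import math

      C, m = 1e-11, 3.0          # hypothetical Paris-law constants
      delta_sigma = 100.0        # constant stress range, MPa
      a, records = 0.001, []     # initial crack length, m

      for cycle in range(0, 200001):
          delta_K = delta_sigma * math.sqrt(math.pi * a)   # center-crack estimate
          dadN = C * delta_K ** m                          # Paris law da/dN
          if cycle % 50000 == 0:
              records.append((cycle, delta_K, dadN))
          a += dadN

      # Both Delta K and da/dN grow monotonically under constant amplitude:
      for cycle, dK, rate in records:
          print(f"N={cycle:6d}  dK={dK:6.2f} MPa*sqrt(m)  da/dN={rate:.3e} m/cycle")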

  18. Comparison of the Experimental Performance of Ferroelectric CPW Circuits with Method of Moment Simulations and Conformal Mapping

    NASA Technical Reports Server (NTRS)

    VanKeuls, Fred W.; Chevalier, Chris T.; Miranda, Felix A.; Carlson, C. M.; Rivkin, T. V.; Parilla, P. A.; Perkins, J. D.; Ginley, D. S.

    2001-01-01

    Experimental measurements of coplanar waveguide (CPW) circuits atop thin films of ferroelectric Ba(x)Sr(1-x)TiO3 (BST) were made as a function of bias from 0 to 200 V and frequency from 0.045 to 20 GHz. The resulting phase shifts are compared with method-of-moments electromagnetic simulations and a conformal mapping analysis to determine the dielectric constant of the BST films. Based on the correlation between the experimental and the modeled data, an analysis of the extent to which the electromagnetic simulators provide reliable values for the dielectric constant of the ferroelectric in these structures has been performed. In addition, to determine how well the modeled data compare with experimental data, the dielectric constant values were also compared to low-frequency measurements of interdigitated capacitor circuits on the same films. Results of these comparisons will be presented.

  19. A dimensionless approach for the runoff peak assessment: effects of the rainfall event structure

    NASA Astrophysics Data System (ADS)

    Gnecco, Ilaria; Palla, Anna; La Barbera, Paolo

    2018-02-01

    The present paper proposes a dimensionless analytical framework to investigate the impact of the rainfall event structure on the hydrograph peak. To this end, a methodology to describe the rainfall event structure is proposed, based on similarity with the depth-duration-frequency (DDF) curves. The rainfall input consists of a constant hyetograph into which all the possible outcomes in the sample space of the rainfall structures can be condensed. Soil abstractions are modelled using the Soil Conservation Service method, and the instantaneous unit hydrograph theory is used to determine the dimensionless form of the hydrograph; the two-parameter gamma distribution is selected to test the proposed methodology. The dimensionless approach is introduced in order to make the analytical framework applicable to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. linear causative and time-invariant system). A set of analytical expressions is derived in the case of a constant-intensity hyetograph to assess the maximum runoff peak with respect to a given rainfall event structure, irrespective of the specific catchment (such as the return period associated with the reference rainfall event). Looking at the results, the curve of the maximum values of the runoff peak reveals a local minimum point corresponding to the design hyetograph derived according to the statistical DDF curve. A specific catchment application is discussed in order to point out the implications of the dimensionless procedure and to provide some numerical examples of the rainfall structures with respect to observed rainfall events; finally, their effects on the hydrograph peak are examined.
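
    A minimal sketch of the two model ingredients named above: Soil Conservation Service (Curve Number) abstraction of a constant-intensity hyetograph, followed by convolution with a two-parameter gamma instantaneous unit hydrograph. The curve number, rainfall, and gamma parameters are all hypothetical:

      import numpy as np
      from scipy.stats import gamma

      dt, cn = 0.1, 75.0                        # time step (h), curve number
      S = 25400.0 / cn - 254.0                  # potential retention, mm
      P_rate, T_rain = 20.0, 3.0                # constant hyetograph: mm/h for 3 h

      t = np.arange(0.0, 24.0, dt)
      P_cum = np.minimum(t, T_rain) * P_rate    # cumulative rainfall, mm
      Ia = 0.2 * S                              # initial abstraction
      Q_cum = np.where(P_cum > Ia,
                       (P_cum - Ia) ** 2 / (P_cum - Ia + S), 0.0)  # SCS runoff
      eff = np.diff(Q_cum, prepend=0.0)         # effective rainfall per step, mm

      iuh = gamma.pdf(t, a=3.0, scale=1.2)      # two-parameter gamma IUH, 1/h
      hydro = np.convolve(eff, iuh)[: t.size]   # direct runoff rate, mm/h
      print(f"peak runoff: {hydro.max():.2f} mm/h at t = {t[hydro.argmax()]:.1f} h")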

  20. MICROBIAL TRANSFORMATION RATE CONSTANTS OF STRUCTURALLY DIVERSE MAN-MADE CHEMICALS

    EPA Science Inventory

    To assist in estimating microbially mediated transformation rates of man-made chemicals from their chemical structures, all second order rate constants that have been measured under conditions that make the values comparable have been extracted from the literature and combined wi...

  1. Methodology to design a municipal solid waste generation and composition map: a case study.

    PubMed

    Gallardo, A; Carlos, M; Peris, M; Colomer, F J

    2014-11-01

    Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize the MSW collection routes, including selective collection. To verify the methodology, it has been successfully applied to a Spanish town. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Methodology to design a municipal solid waste generation and composition map: a case study.

    PubMed

    Gallardo, A; Carlos, M; Peris, M; Colomer, F J

    2015-02-01

    Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize the MSW collection routes, including selective collection. To verify the methodology, it has been successfully applied to a Spanish town. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. KNY Coupling Constants and Form Factors from the Chiral Bag Model

    NASA Astrophysics Data System (ADS)

    Jeong, M. T.; Cheon, Il-T.

    2000-09-01

    The form factors and coupling constants for KNΛ and KNΣ interactions have been calculated in the framework of the Chiral Bag Model with vector mesons. Taking into account vector meson (ρ, ω, K*) field effects, we find -3.88 ≤ gKNΛ ≤ -3.67 and 1.15 ≤ gKNΣ ≤ 1.24, where the quark-meson coupling constants are determined by fitting the renormalized πNN coupling constant, [gπNN(0)]²/4π = 14.3. It is shown that vector mesons make significant contributions to the coupling constants gKNΛ and gKNΣ. Our values lie within the experimental limits of the phenomenological values extracted from kaon photoproduction experiments.

  4. The Theoretical and Methodological Crisis of the Afrocentric Conception.

    ERIC Educational Resources Information Center

    Banks, W. Curtis

    1992-01-01

    Defines the theory of the Afrocentric conception, and comments on Afrocentric research methodology. The Afrocentric conception is likely to succeed if it constructs a particularist theory in contrast to cross-cultural relativism and because it relies on the methodology of the absolute rather than the comparative. (SLD)

  5. Comparing the Energy Content of Batteries, Fuels, and Materials

    ERIC Educational Resources Information Center

    Balsara, Nitash P.; Newman, John

    2013-01-01

    A methodology for calculating the theoretical and practical specific energies of rechargeable batteries, fuels, and materials is presented. The methodology enables comparison of the energy content of diverse systems such as the lithium-ion battery, hydrocarbons, and ammonia. The methodology is relevant for evaluating the possibility of using…

  6. U.S. Comparative and International Graduate Programs: An Overview of Programmatic Size, Relevance, Philosophy, and Methodology

    ERIC Educational Resources Information Center

    Drake, Timothy A.

    2011-01-01

    Previous work has concentrated on the epistemological foundation of comparative and international education (CIE) graduate programs. This study focuses on programmatic size, philosophy, methodology, and pedagogy. It begins by reviewing previous studies. It then provides a theoretical framework and describes the size, relevance, content, and…

  7. Traditional vs. Experiential: A Comparative Study of Instructional Methodologies on Student Achievement in New York City Public Schools

    ERIC Educational Resources Information Center

    Mohan, Subhas

    2015-01-01

    This study explores the differences in student achievement on state standardized tests between experiential learning and direct learning instructional methodologies. Specifically, the study compares student performances in Expeditionary Learning schools, which is a Comprehensive School Reform model that utilizes experiential learning, to their…

  8. Use of Comparative Case Study Methodology for US Public Health Policy Analysis: A Review.

    PubMed

    Dinour, Lauren M; Kwan, Amy; Freudenberg, Nicholas

    There is growing recognition that policies influence population health, highlighting the need for evidence to inform future policy development and reform. This review describes how comparative case study methodology has been applied to public health policy research and discusses the methodology's potential to contribute to this evidence. English-language, peer-reviewed articles published between 1995 and 2012 were sought from 4 databases. Articles were included if they described comparative case studies addressing US public health policy. Two researchers independently assessed the 20 articles meeting review criteria. Case-related characteristics and research design tactics utilized to minimize threats to reliability and validity, such as the use of multiple sources of evidence and a case study protocol, were extracted from each article. Although comparative case study methodology has been used to analyze a range of public health policies at all stages and levels, articles reported an average use of only 3.65 (out of 10) research design tactics. By expanding the use of accepted research design tactics, public health policy researchers can contribute to expanding the evidence needed to advance health-promoting policies.

  9. Ab initio elastic tensor of cubic Ti0.5Al0.5N alloys: Dependence of elastic constants on size and shape of the supercell model and their convergence

    NASA Astrophysics Data System (ADS)

    Tasnádi, Ferenc; Odén, M.; Abrikosov, Igor A.

    2012-04-01

    In this study we discuss the performance of the special quasirandom structure (SQS) method in predicting the elastic properties of B1 (rocksalt) Ti0.5Al0.5N alloy. We use a symmetry-based projection technique, which gives the closest cubic approximate of the elastic tensor and allows us to align the SQSs of different shapes and sizes for a comparison in modeling elastic tensors. We show that the derived closest cubic approximate of the elastic tensor converges faster with respect to SQS size than the elastic tensor itself. That establishes a less demanding computational strategy to achieve convergence for the elastic constants. We determine the cubic elastic constants (Cij) and the Zener-type elastic anisotropy (A) of Ti0.5Al0.5N. Optimal supercells, which capture accurately both the configurational disorder and the cubic symmetry of the elastic tensor, result in C11=447 GPa, C12=158 GPa, and C44=203 GPa with 3% error and A=1.40 with 6% error. In addition, we establish the general importance of selecting proper SQSs with symmetry arguments to reliably model the elasticity of alloys. We suggest the calculation of nine elastic tensor elements: C11, C22, C33, C12, C13, C23, C44, C55, and C66, to analyze the performance of SQSs and predict the elastic constants of cubic alloys. The described methodology is general enough to be extended to alloys with other symmetry at arbitrary composition.
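
    A minimal sketch of the projection step described above: averaging the nine computed tensor elements into the three independent cubic constants and forming the Zener anisotropy A = 2C44/(C11 - C12). Symmetry averaging is one plausible reading of the closest cubic approximate; the individual element values below are hypothetical, chosen only to average to the constants quoted above:

      def closest_cubic(c):
          """Project nine computed elastic tensor elements (GPa) onto the
          three independent cubic constants by symmetry averaging.
          c: dict with keys '11','22','33','12','13','23','44','55','66'."""
          c11 = (c['11'] + c['22'] + c['33']) / 3.0
          c12 = (c['12'] + c['13'] + c['23']) / 3.0
          c44 = (c['44'] + c['55'] + c['66']) / 3.0
          return c11, c12, c44

      def zener_anisotropy(c11, c12, c44):
          return 2.0 * c44 / (c11 - c12)

      # Hypothetical SQS output close to the values quoted above
      elems = {'11': 452, '22': 449, '33': 440, '12': 160, '13': 157,
               '23': 157, '44': 205, '55': 202, '66': 202}
      c11, c12, c44 = closest_cubic(elems)
      print(round(c11), round(c12), round(c44),        # 447 158 203
            round(zener_anisotropy(c11, c12, c44), 2)) # ~1.40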

  10. Establishing the biomechanical properties of the pelvic soft tissues through an inverse finite element analysis using magnetic resonance imaging.

    PubMed

    Silva, M E T; Brandão, S; Parente, M P L; Mascarenhas, T; Natal Jorge, R M

    2016-04-01

    The mechanical characteristics of the female pelvic floor are relevant when explaining pelvic dysfunction. Decreased tissue elasticity often causes an inability to maintain urethral position, also leading to vaginal and rectal descent when coughing or defecating in response to an increase in internal abdominal pressure. These conditions can be associated with changes in the mechanical properties of the supportive structures, namely the pelvic floor muscles, including impairment. In this work, we used an inverse finite element analysis to calculate the material constants for the passive mechanical behavior of the pelvic floor muscles. The numerical model of the pelvic floor muscles and bones was built from magnetic resonance axial images acquired at rest. Muscle deformation, simulating the Valsalva maneuver with a pressure of 4 kPa, was compared with the muscle displacement obtained through additional dynamic magnetic resonance imaging. The difference in displacement was 0.15 mm in the antero-posterior direction and 3.69 mm in the supero-inferior direction, equating to percentage errors of 7.0% and 16.9%, respectively. We obtained the smallest difference in the displacements using an iterative process that converged on the material constants for the Mooney-Rivlin constitutive model (c10 = 11.8 kPa and c20 = 5.53 × 10⁻² kPa). For each iteration, the orthogonal distance between each node in the group of nodes defining the puborectal muscle in the numerical model and in the dynamic magnetic resonance images was computed. With the methodology used in this work, it was possible to obtain in vivo biomechanical properties of the pelvic floor muscles for a specific subject using input information acquired non-invasively. © IMechE 2016.
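
    For context, a minimal sketch of evaluating a two-constant strain energy of the kind reported above under uniaxial incompressible stretch. The form W = c10(I1 - 3) + c20(I1 - 3)² is one common reading of a c10/c20 parameter pair; whether it matches the exact constitutive form used in the study is an assumption:

      def uniaxial_cauchy_stress(stretch, c10, c20):
          """Cauchy stress (kPa) for an incompressible I1-based material,
          W = c10*(I1-3) + c20*(I1-3)^2, under uniaxial stretch."""
          I1 = stretch**2 + 2.0 / stretch
          dW_dI1 = c10 + 2.0 * c20 * (I1 - 3.0)
          # sigma = 2*(lambda^2 - 1/lambda)*dW/dI1 for incompressible uniaxial
          return 2.0 * (stretch**2 - 1.0 / stretch) * dW_dI1

      # Constants reported above: c10 = 11.8 kPa, c20 = 5.53e-2 kPa
      for lam in (1.0, 1.1, 1.3, 1.5):
          print(f"stretch {lam:.1f}: {uniaxial_cauchy_stress(lam, 11.8, 5.53e-2):.2f} kPa")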

  11. High Oxygen Delivery to Preserve Exercise Capacity in Patients with Idiopathic Pulmonary Fibrosis Treated with Nintedanib. Methodology of the HOPE-IPF Study.

    PubMed

    Ryerson, Christopher J; Camp, Pat G; Eves, Neil D; Schaeffer, Michele; Syed, Nafeez; Dhillon, Satvir; Jensen, Dennis; Maltais, Francois; O'Donnell, Denis E; Raghavan, Natya; Roman, Michael; Stickland, Michael K; Assayag, Deborah; Bourbeau, Jean; Dion, Genevieve; Fell, Charlene D; Hambly, Nathan; Johannson, Kerri A; Kalluri, Meena; Khalil, Nasreen; Kolb, Martin; Manganas, Helene; Morán-Mendoza, Onofre; Provencher, Steve; Ramesh, Warren; Rolf, J Douglass; Wilcox, Pearce G; Guenette, Jordan A

    2016-09-01

    Pulmonary rehabilitation improves dyspnea and exercise capacity in idiopathic pulmonary fibrosis (IPF); however, it is unknown whether breathing high amounts of oxygen during exercise training leads to further benefits. Herein, we describe the design of the High Oxygen Delivery to Preserve Exercise Capacity in IPF Patients Treated with Nintedanib study (the HOPE-IPF study). The primary objective of this study is to determine the physiological and perceptual impact of breathing high levels of oxygen during exercise training in patients with IPF who are receiving antifibrotic therapy. HOPE-IPF is a two-arm double-blind multicenter randomized placebo-controlled trial of 88 patients with IPF treated with nintedanib. Patients will undergo 8 weeks of three times weekly aerobic cycle exercise training, breathing a hyperoxic gas mixture with a constant fraction of 60% inhaled oxygen, or breathing up to 40% oxygen as required to maintain an oxygen saturation level of at least 88%. End points will be assessed at baseline, postintervention (Week 8), and follow-up (Week 26). The primary analysis will compare the between-group baseline with post-training change in endurance time during constant work rate cycle exercise tests. Additional analyses will evaluate the impact of training with high oxygen delivery on 6-minute walk distance, dyspnea, physical activity, and quality of life. The HOPE-IPF study will lead to a comprehensive understanding of IPF exercise physiology, with the potential to change clinical practice by indicating the need for increased delivery of supplemental oxygen during pulmonary rehabilitation in patients with IPF. Clinical trial registered with www.clinicaltrials.gov (NCT02551068).

  12. Unmanned aircraft system sense and avoid integrity and continuity

    NASA Astrophysics Data System (ADS)

    Jamoom, Michael B.

    This thesis describes new methods to guarantee the safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is the derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation in which the estimation model is refined to address realistic intruder relative linear accelerations, going beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement with the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
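
    A minimal sketch of the constant-velocity assumption and an innovation-based association test of the kind mentioned above: propagate a constant-velocity state, form the measurement innovation and its covariance, and gate on the normalized innovation squared (a chi-square statistic). This is a generic textbook construction under assumed noise levels, not the thesis's actual filter:

      import numpy as np

      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics (1D)
      H = np.array([[1.0, 0.0]])               # position-only measurement
      Q = 0.01 * np.eye(2)                     # hypothetical process noise
      R = np.array([[4.0]])                    # hypothetical measurement noise

      def predict(x, P):
          return F @ x, F @ P @ F.T + Q

      def innovation_gate(x_pred, P_pred, z, gate=6.63):
          """Chi-square gate on the normalized innovation squared (NIS);
          6.63 is the 99% point for a 1-D measurement."""
          y = z - H @ x_pred                   # innovation
          S = H @ P_pred @ H.T + R             # innovation covariance
          nis = float(y @ np.linalg.inv(S) @ y)
          return nis < gate, nis

      x, P = np.array([0.0, 10.0]), np.eye(2)  # intruder at 0, closing at 10 m/s
      x_pred, P_pred = predict(x, P)
      ok, nis = innovation_gate(x_pred, P_pred, np.array([10.5]))
      print(ok, round(nis, 3))                 # a plausible measurement passes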

  13. Absolute rate constant for the reaction of atomic chlorine with hydrogen peroxide vapor over the temperature range 265-400 K

    NASA Technical Reports Server (NTRS)

    Michael, J. V.; Whytock, D. A.; Lee, J. H.; Payne, W. A.; Stief, L. J.

    1977-01-01

    Rate constants for the reaction of atomic chlorine with hydrogen peroxide were measured from 265-400 K using the flash photolysis-resonance fluorescence technique. Analytical techniques were developed to measure H2O2 under reaction conditions. Due to ambiguity in the interpretation of the analytical results, the data combine to give two equally acceptable representations of the temperature dependence. The results are compared to previous work at 298 K and are theoretically discussed in terms of the mechanism of the reaction. Additional experiments on the H + H2O2 reaction at 298 and 359 K are compared with earlier results from this laboratory and give a slightly revised bimolecular rate constant.

  14. Most systematic reviews of high methodological quality on psoriasis interventions are classified as high risk of bias using ROBIS tool.

    PubMed

    Gómez-García, Francisco; Ruano, Juan; Gay-Mimbrera, Jesus; Aguilar-Luque, Macarena; Sanz-Cabanillas, Juan Luis; Alcalde-Mellado, Patricia; Maestre-López, Beatriz; Carmona-Fernández, Pedro Jesús; González-Padilla, Marcelino; García-Nieto, Antonio Vélez; Isla-Tejera, Beatriz

    2017-12-01

    No gold standard exists for assessing the methodological quality of systematic reviews (SRs). Although Assessing the Methodological Quality of Systematic Reviews (AMSTAR) is widely accepted for analyzing quality, the ROBIS instrument has recently been developed. This study aimed to compare the capacity of both instruments to capture the quality of SRs concerning psoriasis interventions. Systematic literature searches were undertaken in relevant databases. For each review, methodological quality and risk of bias were evaluated using the AMSTAR and ROBIS tools. Descriptive and principal component analyses were conducted to describe similarities and discrepancies between the two assessment tools. We classified 139 intervention SRs as displaying high/moderate/low methodological quality and as high/low risk of bias. A high risk of bias was detected for most SRs classified as displaying high or moderate methodological quality by AMSTAR. When comparing ROBIS result profiles, responses to domain 4 signaling questions showed the greatest differences between bias risk assessments, whereas domain 2 items showed the least. For SRs published about psoriasis, methodological quality remains suboptimal and the risk of bias is elevated, even for SRs exhibiting high methodological quality. Furthermore, the AMSTAR and ROBIS tools may be considered complementary when conducting quality assessment of SRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Potentiostatic pulse-deposition of calcium phosphate on magnesium alloy for temporary implant applications--an in vitro corrosion study.

    PubMed

    Kannan, M Bobby; Wallipa, O

    2013-03-01

    In this study, a magnesium alloy (AZ91) was coated with calcium phosphate using potentiostatic pulse-potential and constant-potential methods, and the in vitro corrosion behaviour of the coated samples was compared with that of the bare metal. In vitro corrosion studies were carried out using electrochemical impedance spectroscopy and potentiodynamic polarization in simulated body fluid (SBF) at 37 °C. Calcium phosphate coatings enhanced the corrosion resistance of the alloy; however, the pulse-potential coating performed better than the constant-potential coating. The pulse-potential coating exhibited ~3 times higher polarization resistance than the constant-potential coating. The corrosion current density obtained from the potentiodynamic polarization curves was significantly lower (~60%) for the pulse-deposition coating than for the constant-potential coating. Post-corrosion analysis revealed only slight corrosion on the pulse-potential coating, whereas the constant-potential coating exhibited a large number of corrosion particles attached to the coating. The better in vitro corrosion performance of the pulse-potential coating can be attributed to its closely packed calcium phosphate particles. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Methodologic Considerations for Quantitative 18F-FDG PET/CT Studies of Hepatic Glucose Metabolism in Healthy Subjects.

    PubMed

    Trägårdh, Malene; Møller, Niels; Sørensen, Michael

    2015-09-01

    PET with the glucose analog (18)F-FDG is used to measure regional tissue metabolism of glucose. However, (18)F-FDG may have affinities different from those of glucose for plasma membrane transporters and intracellular enzymes; the lumped constant (LC) can be used to correct these differences kinetically. The aims of this study were to investigate the feasibility of measuring human hepatic glucose metabolism with dynamic (18)F-FDG PET/CT and to determine an operational LC for (18)F-FDG by comparison with (3)H-glucose measurements. Eight healthy human subjects were included. In all studies, (18)F-FDG and (3)H-glucose were mixed in saline and coadministered. A 60-min dynamic PET recording of the liver was performed for 180 min with blood sampling from catheters in a hepatic vein and a radial artery (concentrations of (18)F-FDG and (3)H-glucose in blood). Hepatic blood flow was determined by indocyanine green infusion. First, 3 subjects underwent studies comparing bolus administration and constant-infusion administration of tracers during hyperinsulinemic-euglycemic clamping. Next, 5 subjects underwent studies comparing fasting and hyperinsulinemic-euglycemic clamping with tracer infusions. Splanchnic extraction fractions of (18)F-FDG (E*) and (3)H-glucose (E) were calculated from concentrations in blood, and the LC was calculated as ln(1 - E*)/ln(1 - E). Volumes of interest were drawn in the liver tissue, and hepatic metabolic clearance of (18)F-FDG (mL of blood/100 mL of liver tissue/min) was estimated. For bolus versus infusion, E* values were always negative when (18)F-FDG was administered as a bolus and were always positive when it was administered as an infusion. For fasting versus clamping, E* values were positive in 4 of 5 studies during fasting and were always positive during clamping. Negative extraction fractions were ascribed to the tracer distribution in the large volume of distribution in the prehepatic splanchnic bed. The LC ranged from 0.43 to 2.53, with no significant difference between fasting and clamping. The large volume of distribution of (18)F-FDG in the prehepatic splanchnic bed may complicate the analysis of dynamic PET data because it represents the mixed tracer input to the liver via the portal vein. Therefore, dynamic (18)F-FDG data for human hepatic glucose metabolism should be interpreted with caution, but constant tracer infusion seems to yield more robust results than bolus injection. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
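
    A minimal sketch of the LC arithmetic defined above, with splanchnic extraction fractions computed from paired arterial and hepatic-vein concentrations (E = (C_a - C_v)/C_a is the standard arteriovenous definition; treating it as the study's exact formula is an assumption, and the concentration values are hypothetical):

      import math

      def extraction_fraction(c_arterial, c_hepatic_vein):
          """Splanchnic extraction fraction from paired concentrations."""
          return (c_arterial - c_hepatic_vein) / c_arterial

      def lumped_constant(e_fdg, e_glucose):
          """LC = ln(1 - E*) / ln(1 - E), as defined in the study."""
          return math.log(1.0 - e_fdg) / math.log(1.0 - e_glucose)

      # Hypothetical steady-infusion samples (arbitrary concentration units)
      e_star = extraction_fraction(100.0, 88.0)   # 18F-FDG: E* = 0.12
      e      = extraction_fraction(100.0, 90.0)   # 3H-glucose: E = 0.10
      print(round(lumped_constant(e_star, e), 2)) # ~1.21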

  17. Effective optical constants of anisotropic materials

    NASA Technical Reports Server (NTRS)

    Aronson, J. R.; Emslie, A. G.

    1980-01-01

    The applicability of a technique for determining the optical constants of soil or aerosol components on the basis of measurements of the reflectance or transmittance of inhomogeneous samples of component material is investigated. Optical constants for a sample of very pure quartzite were obtained by a specular reflection technique, and line parameters were calculated by classical dispersion theory. Predictions of the reflectance of powdered quartz were then derived from optical constants measured for the anisotropic quartzite and for pure quartz crystals, and compared with experimental measurements. The calculated spectra are found to resemble each other moderately well in shape; however, the reflectance level calculated from the pseudo-optical constants (quartzite) is consistently below that calculated from quartz values. The spectrum calculated from the quartz optical constants is also shown to represent the experimental non-reststrahlen features more accurately. It is thus concluded that although optical constants derived from inhomogeneous materials may represent the spectral features of a powdered sample qualitatively, a quantitative fit to observed data is not likely.
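
    The link between optical constants and measured reflectance that underlies this comparison is the Fresnel relation, which at normal incidence reduces to a one-liner. A minimal sketch; the n, k values are hypothetical quartz-like illustrations, not the paper's data:

      def normal_incidence_reflectance(n, k):
          """Fresnel power reflectance at normal incidence from the
          complex refractive index m = n + ik."""
          return ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)

      # Hypothetical values near and away from a reststrahlen band
      for n, k in [(1.5, 0.0), (0.3, 1.2), (2.5, 0.5)]:
          print(f"n={n:.1f} k={k:.1f} -> R={normal_incidence_reflectance(n, k):.3f}")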

  18. Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective.

    PubMed

    2012-04-18

    Rigorous methodological standards help to ensure that medical research produces information that is valid and generalizable, and are essential in patient-centered outcomes research (PCOR). Patient-centeredness refers to the extent to which the preferences, decision-making needs, and characteristics of patients are addressed, and is the key characteristic differentiating PCOR from comparative effectiveness research. The Patient Protection and Affordable Care Act signed into law in 2010 created the Patient-Centered Outcomes Research Institute (PCORI), which includes an independent, federally appointed Methodology Committee. The Methodology Committee is charged to develop methodological standards for PCOR. The 4 general areas identified by the committee in which standards will be developed are (1) prioritizing research questions, (2) using appropriate study designs and analyses, (3) incorporating patient perspectives throughout the research continuum, and (4) fostering efficient dissemination and implementation of results. A Congressionally mandated PCORI methodology report (to be issued in its first iteration in May 2012) will begin to provide standards in each of these areas, and will inform future PCORI funding announcements and review criteria. The work of the Methodology Committee is intended to enable generation of information that is relevant and trustworthy for patients, and to enable decisions that improve patient-centered outcomes.

  19. Stability of aerosol droplets in Bessel beam optical traps under constant and pulsed external forces

    NASA Astrophysics Data System (ADS)

    David, Grégory; Esat, Kıvanç; Hartweg, Sebastian; Cremer, Johannes; Chasovskikh, Egor; Signorell, Ruth

    2015-04-01

    We report on the dynamics of aerosol droplets in optical traps under the influence of additional constant and pulsed external forces. Experimental results are compared with simulations of the three-dimensional droplet dynamics for two types of optical traps, the counter-propagating Bessel beam (CPBB) trap and the quadruple Bessel beam (QBB) trap. Under the influence of a constant gas flow (constant external force), the QBB trap is found to be more stable compared with the CPBB trap. By contrast, under pulsed laser excitation with laser pulse durations of nanoseconds (pulsed external force), the type of trap is of minor importance for the droplet stability. It typically needs pulsed laser forces that are several orders of magnitude higher than the optical forces to induce escape of the droplet from the trap. If the droplet strongly absorbs the pulsed laser light, these escape forces can be strongly reduced. The lower stability of absorbing droplets is a result of secondary thermal processes that cause droplet escape.

  20. Stability of aerosol droplets in Bessel beam optical traps under constant and pulsed external forces.

    PubMed

    David, Grégory; Esat, Kıvanç; Hartweg, Sebastian; Cremer, Johannes; Chasovskikh, Egor; Signorell, Ruth

    2015-04-21

    We report on the dynamics of aerosol droplets in optical traps under the influence of additional constant and pulsed external forces. Experimental results are compared with simulations of the three-dimensional droplet dynamics for two types of optical traps, the counter-propagating Bessel beam (CPBB) trap and the quadruple Bessel beam (QBB) trap. Under the influence of a constant gas flow (constant external force), the QBB trap is found to be more stable compared with the CPBB trap. By contrast, under pulsed laser excitation with laser pulse durations of nanoseconds (pulsed external force), the type of trap is of minor importance for the droplet stability. It typically needs pulsed laser forces that are several orders of magnitude higher than the optical forces to induce escape of the droplet from the trap. If the droplet strongly absorbs the pulsed laser light, these escape forces can be strongly reduced. The lower stability of absorbing droplets is a result of secondary thermal processes that cause droplet escape.

  1. Comparative analysis of three prehospital emergency medical services organizations in India and Pakistan.

    PubMed

    Sriram, V; Gururaj, G; Razzak, J A; Naseer, R; Hyder, A A

    2016-08-01

    Strengthened emergency medical services (EMS) are urgently required in South Asia to reduce needless death and disability. Several EMS models have been introduced in India and Pakistan, and research on these models can facilitate improvements to EMS in the region. Our objective was to conduct a cross-case comparative analysis of three EMS organizations in India and Pakistan - GVK EMRI, Aman Foundation and Rescue 1122 - in order to draw out similarities and differences in their models. Case study methodology was used to systematically explore the organizational models of GVK EMRI (Karnataka, India), Aman Foundation (Karachi, Pakistan), and Rescue 1122 (Punjab, Pakistan). Qualitative methods - interviews, document review and non-participant observation - were utilized, and using a process of constant comparison, data were analysed across cases according to the WHO health system 'building blocks'. Emergent themes under each health system 'building block' of service delivery, health workforce, medical products and technology, health information systems, leadership and governance, and financing were described. Cross-cutting issues not applicable to any single building block were further identified. This cross-case comparison, the first of its kind in low- and middle-income countries, highlights key innovations and lessons, and areas of further research across EMS organizations in India, Pakistan and other resource-poor settings. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  2. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    PubMed

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
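
    A minimal sketch of the two sampling statistics named above as they are conventionally defined in sampling-plan work: relative variation RV = 100 × SEM / mean, and relative net precision RNP = 100 / (RV × cost per sample). Treating these as the paper's exact formulas is an assumption, and the counts and costs below are hypothetical:

      import statistics as st

      def relative_variation(counts):
          """RV = 100 * SEM / mean (lower is more precise)."""
          sem = st.stdev(counts) / len(counts) ** 0.5
          return 100.0 * sem / st.mean(counts)

      def relative_net_precision(counts, hours_per_sample):
          """RNP = 100 / (RV * cost); higher is more cost-efficient."""
          return 100.0 / (relative_variation(counts) * hours_per_sample)

      tap  = [12, 9, 15, 11, 8, 14]    # hypothetical thrips counts per sample
      wash = [13, 10, 16, 12, 9, 15]
      print(relative_net_precision(tap, 0.05),   # flower tapping: cheap
            relative_net_precision(wash, 0.20))  # alcohol wash: costly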

  3. Deterministic Multiaxial Creep and Creep Rupture Enhancements for CARES/Creep Integrated Design Code

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.

    1998-01-01

    High temperature and long duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor program to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of this methodology and the CARES/Creep program.

  4. Deterministic and Probabilistic Creep and Creep Rupture Enhancement to CARES/Creep: Multiaxial Creep Life Prediction of Ceramic Structures Using Continuum Damage Mechanics and the Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1998-01-01

    High temperature and long duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor program to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of this methodology and the CARES/Creep program.
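
    A minimal sketch of the Kachanov-Rabotnov evolution described in the two entries above, under isothermal constant stress: creep rate and damage rate both depend on the current damage, and rupture is reached as the damage parameter omega approaches 1. The specific power-law forms and constants below are a textbook illustration, not the CARES/Creep calibration:

      # Hypothetical isothermal constants (stress in MPa, time in s)
      A, n   = 1e-19, 5.0     # creep-rate law
      B, chi = 2e-16, 4.5     # damage-rate law
      phi    = 4.0            # damage sensitivity exponent

      def creep_rupture(stress):
          """Integrate strain and damage at constant stress until omega -> 1."""
          eps, omega, t = 0.0, 0.0, 0.0
          while omega < 0.99:
              eps_rate   = A * stress**n   / (1.0 - omega)**n    # creep rate
              omega_rate = B * stress**chi / (1.0 - omega)**phi  # damage rate
              dt = 0.001 * (1.0 - omega) / omega_rate            # adaptive step
              eps, omega, t = eps + eps_rate * dt, omega + omega_rate * dt, t + dt
          return t, eps

      t_num, eps_r = creep_rupture(100.0)                 # 100 MPa
      t_analytic = 1.0 / ((phi + 1.0) * B * 100.0**chi)   # closed-form rupture time
      print(f"numeric rupture: {t_num:.3e} s, analytic: {t_analytic:.3e} s")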

  5. A one-dimensional model for gas-solid heat transfer in pneumatic conveying

    NASA Astrophysics Data System (ADS)

    Smajstrla, Kody Wayne

    A one-dimensional ODE model, reduced from a higher-dimensional two-fluid model, is developed to study dilute, two-phase (air and solid particles) flows with heat transfer in a horizontal pneumatic conveying pipe. Instead of using constant air properties (e.g., density, viscosity, thermal conductivity) evaluated at the initial flow temperature and pressure, this model uses an iteration approach to couple the air properties with the flow pressure and temperature. Multiple studies comparing the use of constant or variable air density, viscosity, and thermal conductivity are conducted to study the impact of the changing properties on system performance. The results show that the fully constant property calculation overestimates the results of the fully variable calculation by 11.4%, while the constant density with variable viscosity and thermal conductivity calculation results in an 8.7% overestimation, the constant viscosity with variable density and thermal conductivity overestimates by 2.7%, and the constant thermal conductivity with variable density and viscosity calculation results in a 1.2% underestimation. These results demonstrate that gas properties varying with gas temperature can have a significant impact on a conveying system and that the varying density accounts for the majority of that impact. The accuracy of the model is also validated by comparing the simulation results to experimental values found in the literature.
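
    The iteration approach described above can be sketched as a simple fixed-point loop in which the gas properties are re-evaluated at the current pressure and temperature until the state stops changing. This is an illustrative sketch only; the ideal-gas and Sutherland relations are standard correlations assumed here, and the energy-balance update is a hypothetical placeholder for the model's ODE step at one pipe location.

```python
R_AIR = 287.05  # specific gas constant for air, J/(kg K)

def sutherland_viscosity(T):
    """Sutherland's law for air viscosity (standard correlation, assumed here)."""
    mu0, T0, S = 1.716e-5, 273.15, 110.4
    return mu0 * (T / T0)**1.5 * (T0 + S) / (T + S)

def converge_state(p, T_guess, update_T, tol=1e-6, max_iter=50):
    """Iterate until the gas properties and temperature are mutually consistent.
    update_T stands in for the model's energy balance at one pipe location."""
    T = T_guess
    for _ in range(max_iter):
        rho = p / (R_AIR * T)         # density coupled to pressure and temperature
        mu = sutherland_viscosity(T)  # viscosity coupled to temperature
        T_new = update_T(rho, mu)     # energy balance re-evaluated with new properties
        if abs(T_new - T) < tol:
            break
        T = T_new
    return T, rho, mu

# Toy energy balance (illustrative): temperature shifts with the viscosity ratio.
T, rho, mu = converge_state(101325.0, 300.0, lambda rho, mu: 300.0 + 20.0 * mu / 1.8e-5)
print(f"T = {T:.1f} K, rho = {rho:.3f} kg/m^3, mu = {mu:.2e} Pa s")
```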

  6. Evaluation of the Circulatory Dynamics by using the Windkessel Model in Different Body Positions

    NASA Astrophysics Data System (ADS)

    Kotani, Kiyoshi; Iida, Fumiaki; Ogawa, Yutaro; Takamasu, Kiyoshi; Jimbo, Yasuhiko

    The autonomic nervous system is important in maintaining homeostasis through the opposing effects of sympathetic and parasympathetic nervous activity on organs. However, it is known that the two are at times simultaneously increased or decreased in cases of strong fear or depression. It is therefore necessary to evaluate sympathetic and parasympathetic nervous activity independently. In this paper, we propose a method to evaluate sympathetic nervous activity by analyzing decreases in blood pressure with the Windkessel model. Experiments are performed in sitting and standing positions for 380 s each. First, we evaluate the effect of the analysis length on the Windkessel time constant. We shorten the analysis length by multiplying the length of the blood pressure decrease by constant coefficients (1.0, 0.9, and 0.8) and then cutting out the waveform for analysis. We find that the Windkessel time constant decreases as the analysis length is shortened, which indicates that the analysis length should be matched when different experiments are compared. Second, we compare the Windkessel time constant of sitting to that of standing with matched analysis lengths. The results indicate, with a statistically significant difference (P<0.05), that the Windkessel time constant is larger in the sitting position. Our observations suggest that this difference in the Windkessel time constant is caused by sympathetic nervous activity acting on vascular smooth muscle.
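
    In a two-element Windkessel model the diastolic pressure decay is exponential with time constant tau = R*C, so tau can be estimated by a log-linear fit to the falling segment of the pressure waveform. A minimal sketch follows; the sampling rate and the synthetic decay are illustrative assumptions, not the authors' data.

```python
import numpy as np

def windkessel_tau(t, p):
    """Estimate the Windkessel time constant from a diastolic pressure decay.
    Model: p(t) = p0 * exp(-t / tau), so ln(p) is linear in t with slope -1/tau."""
    slope, _ = np.polyfit(t, np.log(p), 1)
    return -1.0 / slope

# Synthetic decay segment (illustrative): tau = 1.2 s sampled at 100 Hz.
t = np.arange(0.0, 0.6, 0.01)
p = 80.0 * np.exp(-t / 1.2)
print(f"estimated tau = {windkessel_tau(t, p):.2f} s")  # ~1.20
```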

  7. A new approach to assessing the water footprint of wine: an Italian case study.

    PubMed

    Lamastra, Lucrezia; Suciu, Nicoleta Alina; Novelli, Elisa; Trevisan, Marco

    2014-08-15

    Agriculture is the largest freshwater consumer, accounting for 70% of the world's water withdrawal. Water footprints (WFs) are being increasingly used to indicate the impacts of water use by production systems. A new methodology to assess the WF of wine was developed in the framework of the V.I.V.A. project (Valutazione Impatto Viticoltura sull'Ambiente), launched by the Italian Ministry for the Environment in 2011 to improve the Italian wine sector's sustainability. The new methodology enables different vineyards from the same winery to be compared. This was achieved by calculating the gray water footprint, following the Tier III approach proposed by Hoekstra et al. (2011). The impact of water use during the life cycle of grape-wine production was assessed for six different wines from the same winery in Sicily, Italy, using both the newly developed methodology (V.I.V.A.) and the classical methodology proposed by the Water Footprint Network (WFN). In all cases green water was the largest contributor to the WF, but the new methodology also detected differences between vineyards of the same winery. Furthermore, the V.I.V.A. methodology assesses water body contamination by pesticide application, whereas the WFN methodology considers only fertilization. This explains the higher WF of vineyard 4 calculated by V.I.V.A. compared with the WF calculated with the WFN methodology. Comparing the WFs of the wines produced with grapes from the six different vineyards, the factors most greatly influencing the results obtained in this study were: distance from the water body, fertilization rate, and the amount and eco-toxicological behavior of the active ingredients used. Copyright © 2014 Elsevier B.V. All rights reserved.
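
    The gray water footprint at the core of both methodologies is the freshwater volume needed to dilute a pollutant load to ambient water quality standards, commonly written as WFgray = L / (cmax - cnat) (Hoekstra et al., 2011). A minimal sketch with illustrative numbers, not values from the paper:

```python
def gray_water_footprint(load_kg, c_max, c_nat, yield_t):
    """Gray WF in m^3 per tonne of crop: pollutant load (kg) diluted to the
    quality standard; c_max and c_nat are the maximum allowed and natural
    background concentrations in kg/m^3; yield_t is the crop yield in tonnes."""
    return load_kg / (c_max - c_nat) / yield_t

# Illustrative values only: 2 kg of nitrate leached, standard 0.01 kg/m^3,
# zero background, 10 t of grapes -> 20 m^3 per tonne.
print(f"gray WF = {gray_water_footprint(2.0, 0.01, 0.0, 10.0):.0f} m^3/t")
```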

  8. Diversity of nursing student views about simulation design: a q-methodological study.

    PubMed

    Paige, Jane B; Morin, Karen H

    2015-05-01

    Education of future nurses benefits from well-designed simulation activities. Skillful teaching with simulation requires educators to be constantly aware of how students experience learning and perceive educators' actions. Because revision of simulation activities considers feedback elicited from students, it is crucial to understand the perspectives from which students base their responses. In a Q-methodological approach, 45 nursing students rank-ordered 60 opinion statements about simulation design into a distribution grid. Factor analysis revealed that nursing students hold five distinct and uniquely personal perspectives: Let Me Show You, Stand By Me, The Agony of Defeat, Let Me Think It Through, and I'm Engaging and So Should You. Results suggest that nurse educators need to reaffirm that students clearly understand the purpose of each simulation activity. Nurse educators should incorporate presimulation assignments to optimize learning and help allay anxiety. The five perspectives discovered in this study can serve as a tool to discern individual students' learning needs. Copyright 2015, SLACK Incorporated.

  9. A Fatigue Life Prediction Model of Welded Joints under Combined Cyclic Loading

    NASA Astrophysics Data System (ADS)

    Goes, Keurrie C.; Camarao, Arnaldo F.; Pereira, Marcos Venicius S.; Ferreira Batalha, Gilmar

    2011-01-01

    A practical and robust methodology is developed to evaluate the fatigue life of seam welded joints subjected to combined cyclic loading. The fatigue analysis was conducted in a virtual environment. The FE stress results from each loading were imported into the fatigue code FE-Fatigue and combined to perform the fatigue life prediction using the S-N (stress-life) method. The measurement or modelling of the residual stresses resulting from the welding process is not part of this work. However, the thermal and metallurgical effects, such as distortions and residual stresses, were considered indirectly through corrections to the fatigue curves of the samples investigated. A tube-plate specimen was subjected to combined cyclic loading (bending and torsion) with constant amplitude. The virtual durability analysis was calibrated against these laboratory tests and design codes such as BS7608 and Eurocode 3. The feasibility and application of the proposed numerical-experimental methodology and its contributions to technical development are discussed. Major challenges associated with this modelling and proposals for improvement are finally presented.
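
    For S-N based life prediction under combined constant-amplitude loading, a common building block (assumed here; the paper's exact procedure is not reproduced) is Basquin's relation S = Sf' * N^b together with Miner's linear damage accumulation D = sum(n_i / N_i), with failure predicted at D = 1. A minimal sketch with hypothetical weld-class parameters:

```python
def cycles_to_failure(stress_amp, s_f=900.0, b=-0.1):
    """Basquin S-N relation S = s_f * N**b, inverted for N.
    s_f and b are hypothetical parameters, not values from BS7608."""
    return (stress_amp / s_f) ** (1.0 / b)

def miner_damage(blocks):
    """Miner's rule: sum of applied cycles over allowable cycles per block."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# Combined bending and torsion blocks: (equivalent stress amplitude MPa, cycles).
blocks = [(200.0, 5.0e4), (120.0, 2.0e5)]
D = miner_damage(blocks)
print(f"accumulated damage D = {D:.3f} (failure predicted at D >= 1)")
```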

  10. Large Engine Technology (LET) Short Haul Civil Tiltrotor Contingency Power Materials Knowledge and Lifing Methodologies

    NASA Technical Reports Server (NTRS)

    Spring, Samuel D.

    2006-01-01

    This report documents the results of an experimental program conducted on two advanced metallic alloy systems (Rene' 142 directionally solidified (DS) alloy and Rene' N6 single crystal alloy) and the characterization of two distinct internal state variable inelastic constitutive models. The long-term objective of the study was to develop a computational life prediction methodology that can integrate the obtained material data. A specialized test matrix for characterizing advanced unified viscoplastic models was specified and conducted. This matrix included strain-controlled tensile tests with intermittent relaxation tests with 2-hr hold times, constant stress creep tests, stepped creep tests, mixed creep and plasticity tests, cyclic temperature creep tests, and tests in which temperature overloads were present to simulate actual operating conditions for validation of the models. The selected internal state variable models were shown to be capable of representing the material behavior exhibited by the experimental results; however, the program ended prior to final validation of the models.

  11. A methodological proposal to evaluate the cost of duration moral hazard in workplace accident insurance.

    PubMed

    Martín-Román, Ángel; Moral, Alfonso

    2017-12-01

    The cost of duration moral hazard in workplace accident insurance has been amply explored by North-American scholars. Given the current context of financial constraints in public accounts, and particularly in the Social Security system, we feel that the issue merits inquiry in the case of Spain. The present research posits a methodological proposal using the econometric technique of stochastic frontiers, which allows us to break down the duration of work-related leave into what we term "economic days" and "medical days". Our calculations indicate that during the 9-year period spanning 2005-2013, the cost of sick leave amongst full-time salaried workers amounted to 6920 million Euros (in constant 2011 Euros). Of this total, and bearing in mind that "economic days" are those attributable to duration moral hazard, over 3000 million Euros might be linked to workplace absenteeism. It is on this figure where economic policy measures might prove more effective.

  12. E-cigarettes: methodological and ideological issues and research priorities.

    PubMed

    Etter, Jean-François

    2015-02-16

    Cigarette combustion, rather than either tobacco or nicotine, is the cause of a public health disaster. Fortunately, several new technologies that vaporize nicotine or tobacco, and may make cigarettes obsolete, have recently appeared. Research priorities include the effects of vaporizers on smoking cessation and initiation, their safety and toxicity, use by non-smokers, dual use of vaporizers and cigarettes, passive vaping, renormalization of smoking, and the development of messages that effectively communicate the continuum of risk for tobacco and nicotine products. A major difficulty is that we are chasing a moving target. New products constantly appear, and research results are often obsolete by the time they are published. Vaporizers do not need to be safe, only safer than cigarettes. However, harm reduction principles are often misunderstood or rejected. In the context of a fierce ideological debate, and major investments by the tobacco industry, it is crucial that independent researchers provide regulators and the public with evidence-based guidance. The methodological and ideological hurdles on this path are discussed in this commentary.

  13. New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2008-01-01

    A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically based cohesive properties within the finite element region near the growing crack. Another set of CZVEs is then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.
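
    A common continuum form for traction-displacement relationships of the kind extracted from CZVEs is a bilinear cohesive law: traction rises linearly to a peak strength at a critical opening, then softens linearly to zero at final separation. The sketch below is illustrative only; the parameters are hypothetical placeholders, not values from the MD simulations.

```python
def bilinear_czm(delta, delta_c=0.5e-9, delta_f=5.0e-9, t_max=5.0e9):
    """Bilinear traction-separation law (hypothetical parameters).
    delta_c: opening at peak traction; delta_f: opening at full failure;
    t_max: peak cohesive traction in Pa."""
    if delta <= 0.0:
        return 0.0
    if delta < delta_c:
        return t_max * delta / delta_c                          # linear rising branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_c)  # linear softening
    return 0.0                                                  # fully separated

# Work of separation (area under the triangle) = 0.5 * t_max * delta_f.
print(f"G_c = {0.5 * 5.0e9 * 5.0e-9:.2f} J/m^2")  # 12.50
```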

  14. Application of anaerobic granular sludge for competitive biosorption of methylene blue and Pb(II): Fluorescence and response surface methodology.

    PubMed

    Shi, Li; Wei, Dong; Ngo, Huu Hao; Guo, Wenshan; Du, Bin; Wei, Qin

    2015-10-01

    This study assessed the capacity of anaerobic granular sludge (AGS) as a biosorbent to remove Pb(II) and methylene blue (MB) from a multi-component aqueous solution. The biosorption data fitted well to the pseudo-second-order kinetic and Langmuir adsorption isotherm models in both single and binary systems. In competitive biosorption systems, Pb(II) and MB suppressed each other's biosorption capacity. Spectroscopic analyses, including Fourier transform infrared spectroscopy (FTIR) and fluorescence spectroscopy, were integrated to explain this interaction. Hydroxyl and amine groups in AGS were the key functional groups for sorption. Three-dimensional excitation-emission matrix (3D-EEM) spectroscopy showed that two main protein-like substances were identified and quenched when Pb(II) or MB was present. Response surface methodology (RSM) confirmed that the removal efficiencies of Pb(II) and MB peaked when the Pb(II):MB concentration ratio reached a constant value of 1. Copyright © 2015 Elsevier Ltd. All rights reserved.
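
    The two models named above have simple closed forms: pseudo-second-order kinetics q(t) = k2*qe^2*t / (1 + k2*qe*t) and the Langmuir isotherm qe = qmax*KL*Ce / (1 + KL*Ce). A hedged curve-fitting sketch using synthetic data (the constants are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    """Pseudo-second-order uptake: q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Synthetic kinetic data (illustrative): qe = 50 mg/g, k2 = 0.002 g/(mg min).
t = np.linspace(1, 300, 30)
q_obs = pso(t, 50.0, 0.002) * (1 + 0.02 * np.random.default_rng(0).standard_normal(30))
(qe_fit, k2_fit), _ = curve_fit(pso, t, q_obs, p0=[40.0, 1e-3])
print(f"fitted qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg min)")
print(f"Langmuir qe at Ce=10 (illustrative qmax=60, KL=0.1): {langmuir(10.0, 60.0, 0.1):.1f} mg/g")
```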

  15. Utilization of integrated Michaelis-Menten equations for enzyme inhibition diagnosis and determination of kinetic constants using Solver supplement of Microsoft Office Excel.

    PubMed

    Bezerra, Rui M F; Fraga, Irene; Dias, Albino A

    2013-01-01

    Enzyme kinetic parameters are usually determined from initial rates; nevertheless, laboratory instruments only measure substrate or product concentration versus reaction time (progress curves). To overcome this problem we present a methodology which uses integrated models based on the Michaelis-Menten equation. The most severe practical limitation of progress curve analysis occurs when the enzyme shows a loss of activity under the chosen assay conditions. To avoid this problem it is possible to work with the same experimental points used for initial rate determination. This methodology is illustrated by applying integrated kinetic equations to the well-known reaction catalyzed by the enzyme alkaline phosphatase. In this work nonlinear regression was performed with the Solver supplement (Microsoft Office Excel), which is easy to work with and allows the convergence of the SSE (sum of squared errors) to be tracked graphically. The diagnosis of enzyme inhibition was performed according to the Akaike information criterion. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
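
    The integrated Michaelis-Menten equation, S0 - S + Km*ln(S0/S) = Vmax*t, has the closed-form solution S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km)) in terms of the Lambert W function, which makes progress-curve fitting straightforward in Python as well as in Excel's Solver. A minimal sketch with illustrative constants, not the paper's data:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

def substrate(t, vmax, km, s0=1.0):
    """Closed-form integrated Michaelis-Menten progress curve via Lambert W."""
    w = lambertw((s0 / km) * np.exp((s0 - vmax * t) / km))
    return km * np.real(w)

# Synthetic progress curve (illustrative): Vmax = 0.05 mM/s, Km = 0.3 mM.
t = np.linspace(0, 60, 40)
s_obs = substrate(t, 0.05, 0.3)
(vmax_fit, km_fit), _ = curve_fit(lambda tt, v, k: substrate(tt, v, k),
                                  t, s_obs, p0=[0.03, 0.5])
print(f"Vmax = {vmax_fit:.3f} mM/s, Km = {km_fit:.3f} mM")
```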

  16. Multi-Dielectric Brownian Dynamics and Design-Space-Exploration Studies of Permeation in Ion Channels.

    PubMed

    Siksik, May; Krishnamurthy, Vikram

    2017-09-01

    This paper proposes a multi-dielectric Brownian dynamics simulation framework for design-space-exploration (DSE) studies of ion-channel permeation. The goal of such DSE studies is to estimate the channel modeling parameters that minimize the mean-squared error between the simulated and expected "permeation characteristics." To address this computational challenge, we use a methodology based on statistical inference that utilizes knowledge of the channel structure to prune the design space. We demonstrate the proposed framework and DSE methodology using a case study based on the KcsA ion channel, in which the design space is successfully reduced from a 6-D space to a 2-D space. Our results show that the channel dielectric map computed using the framework matches that computed directly using molecular dynamics with an error of 7%. Finally, the scalability and resolution of the model are explored, and it is shown that the memory requirements for DSE remain constant as the number of parameters (degree of heterogeneity) increases.

  17. Rate constants for the reactions of OH with CH3Cl, CH2Cl2, CHCl3, and CH3Br

    NASA Technical Reports Server (NTRS)

    Hsu, K.-J.; Demore, W. B.

    1994-01-01

    Rate constants for the reactions of OH with CH3Cl, CH2Cl2, CHCl3, and CH3Br have been measured by a relative rate technique in which the reaction rate of each compound was compared to that of HFC-152a (CH3CHF2) and (for CH2Cl2) HFC-161 (CH3CH2F). Using absolute rate constants for HFC-152a and HFC-161, which we have determined relative to those for CH4, CH3CCl3, and C2H6, temperature-dependent rate constants of both compounds were derived. The derived rate constant for CH3Br is in good agreement with recent absolute measurements. However, for the chloromethanes all the rate constants are lower at atmospheric temperatures than previously reported, especially for CH2Cl2, where the present rate constant is about a factor of 1.6 below the JPL 92-20 value. The new rate constant appears to resolve a discrepancy between the observed atmospheric concentrations and those calculated from the previous rate constant and estimated release rates.
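
    In the relative rate technique, both the target compound A and the reference compound B are consumed only by OH, so ln([A]0/[A]t) = (kA/kB) * ln([B]0/[B]t); the slope of one log-depletion against the other gives the rate-constant ratio. A minimal sketch with synthetic depletion data (illustrative numbers only):

```python
import numpy as np

def rate_constant_ratio(a0, a_t, b0, b_t):
    """Slope of ln(A0/A) vs ln(B0/B) across measurements gives kA/kB."""
    x = np.log(b0 / np.asarray(b_t))
    y = np.log(a0 / np.asarray(a_t))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic depletions (illustrative): true kA/kB = 0.6.
b_t = np.array([0.9, 0.7, 0.5, 0.3])
a_t = b_t ** 0.6  # consistent with a ratio of 0.6
print(f"kA/kB = {rate_constant_ratio(1.0, a_t, 1.0, b_t):.2f}")  # 0.60
```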

  18. Dissipative NEGF methodology to treat short range Coulomb interaction: Current through a 1D nanostructure.

    PubMed

    Martinez, Antonio; Barker, John R; Di Prieto, Riccardo

    2018-06-13

    A methodology describing Coulomb blockade in the non-equilibrium Green function formalism is presented. We carried out ballistic and dissipative simulations of transport through a 1D quantum dot using an Einstein phonon model. Inelastic phonons with different energies have been considered. The methodology incorporates the short-range Coulomb interaction between two electrons through the use of a two-particle Green's function. Unlike previous work, the quantum dot has spatial resolution, i.e., it is not just parameterized by the energy level and coupling constants of the dot. Our method intends to describe the effect of electron localization while maintaining an open boundary or extended wave function. The formalism conserves the current through the nanostructure. A simple 1D model is used to explain the increase of mobility in semi-crystalline polymers as a function of the electron concentration. The mechanism suggested is based on the lifting of energy levels into the transmission window as a result of the local electron-electron repulsion inside a crystalline domain. The results are aligned with recent experimental findings. Finally, as a proof of concept, we present a simulation of a low-temperature resonant structure showing the stability diagram in the Coulomb blockade regime. © 2018 IOP Publishing Ltd.

  19. Horizontal Gene Transfer and the History of Life

    PubMed Central

    Daubin, Vincent; Szöllősi, Gergely J.

    2016-01-01

    Microbes acquire DNA from a variety of sources. The last decades, which have seen the development of genome sequencing, have revealed that horizontal gene transfer has been a major evolutionary force that has constantly reshaped genomes throughout evolution. However, because the history of life must ultimately be deduced from gene phylogenies, the lack of methods to account for horizontal gene transfer has thrown into confusion the very concept of the tree of life. As a result, many questions remain open, but emerging methodological developments promise to use information conveyed by horizontal gene transfer that remains unexploited today. PMID:26801681

  20. Evaluation of kinesthetic-tactual displays using a critical tracking task

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.

    1977-01-01

    The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.
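
    The critical tracking task drives a first-order unstable plant, dx/dt = lambda(t)*x + u, while lambda increases until the operator loses control; the lambda at loss of control is the critical score. A toy simulation sketch follows; the gain ramp, the reaction delay, and the proportional "operator" are illustrative assumptions, not the study's apparatus.

```python
import numpy as np

def critical_lambda(gain=1.5, delay_steps=15, dt=0.01, ramp=0.02, x_fail=10.0):
    """Simulate dx/dt = lam*x + u with a delayed proportional 'operator';
    lam ramps up until |x| exceeds x_fail. Returns lam at loss of control."""
    x, lam, t = 0.1, 0.5, 0.0
    history = [0.0] * delay_steps            # reaction-delay buffer
    while abs(x) < x_fail:
        u = -gain * history.pop(0)           # delayed proportional correction
        history.append(x)
        # small persistent disturbance keeps the loop excited
        x += dt * (lam * x + u + 0.05 * np.cos(3.0 * t))
        lam += dt * ramp                     # instability grows over time
        t += dt
    return lam

print(f"critical lambda ~ {critical_lambda():.2f} 1/s")
```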

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: Yao.Fu@colorado.edu; Song, Jeong-Hoon, E-mail: JH.Song@colorado.edu

    Heat flux expressions are derived for multibody potential systems by extending the original Hardy's methodology and modifying Admal & Tadmor's formulas. The continuum thermomechanical quantities obtained from these two approaches are easy to compute from molecular dynamics (MD) results, and have been tested for a constant heat flux model in two distinctive systems: crystalline iron and polyethylene (PE) polymer. The convergence criteria and affecting parameters, i.e. spatial and temporal window size, and specific forms of localization function are found to be different between the two systems. The conservation of mass, momentum, and energy are discussed and validated within this atomistic–continuum bridging.

  2. Defect-induced change of temperature-dependent elastic constants in BCC iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, N.; Setyawan, W.; Zhang, S. H.

    2017-07-01

    The effects of radiation-induced defects (randomly distributed vacancies, voids, and interstitial dislocation loops) on the temperature-dependent elastic constants C11, C12, and C44 in BCC iron are studied with the molecular dynamics method. The elastic constants are found to decrease with increasing temperature for all cases containing different defects. The presence of vacancies, voids, or interstitial loops further decreases the elastic constants. For a given number of point defects, randomly distributed vacancies show the strongest effect compared to voids or interstitial loops. All these results are expected to provide useful information to combine with experimental results for further understanding of radiation damage.

  3. Time variation of fundamental constants in nonstandard cosmological models

    NASA Astrophysics Data System (ADS)

    Mosquera, M. E.; Civitarese, O.

    2017-10-01

    In this work we have studied the lithium problem in nonstandard cosmological models. In particular, using the public code alterbbn, we have included in the computation of the primordial light-nuclei abundances the effects of dark energy and dark entropy, along with the variation of the fine structure constant and the Higgs vacuum expectation value. In order to set constraints on the variation of the fundamental constants, we have compared our theoretical results with the available observational data. We have found that the lithium abundance is reduced for non-null variation of both constants at the 3σ level.

  4. Predicting elastic properties of β-HMX from first-principles calculations.

    PubMed

    Peng, Qing; Rahul; Wang, Guangyu; Liu, Gui-Rong; Grimme, Stefan; De, Suvranu

    2015-05-07

    We investigate the performance of van der Waals (vdW) functionals in predicting the elastic constants of β cyclotetramethylene tetranitramine (HMX) energetic molecular crystals using density functional theory (DFT) calculations. We confirm that the accuracy of the elastic constants is significantly improved using vdW corrections with environment-dependent C6 coefficients together with PBE and revised PBE exchange-correlation functionals. The elastic constants obtained using PBE-D3(0) calculations yield the most accurate mechanical response of β-HMX when compared with experimental stress-strain data. Our results suggest that PBE-D3 calculations are reliable in predicting the elastic constants of this material.

  5. Methodological Limitations of the Application of Expert Systems Methodology in Reading.

    ERIC Educational Resources Information Center

    Willson, Victor L.

    Methodological deficiencies inherent in expert-novice reading research make it impossible to draw inferences about curriculum change. First, comparisons of intact groups are often used as a basis for making causal inferences about how observed characteristics affect behaviors. While comparing different groups is not by itself a useless activity,…

  6. Using Design-Based Research in Gifted Education

    ERIC Educational Resources Information Center

    Jen, Enyi; Moon, Sidney; Samarapungavan, Ala

    2015-01-01

    Design-based research (DBR) is a new methodological framework that was developed in the context of the learning sciences; however, it has not been used very often in the field of gifted education. Compared with other methodologies, DBR is more process-oriented and context-sensitive. In this methodological brief, the authors introduce DBR and…

  7. "It's the Method, Stupid." Interrelations between Methodological and Theoretical Advances: The Example of Comparing Higher Education Systems Internationally

    ERIC Educational Resources Information Center

    Hoelscher, Michael

    2017-01-01

    This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…

  8. Methodology of Comparative Analysis of Public School Teachers' Continuing Professional Development in Great Britain, Canada and the USA

    ERIC Educational Resources Information Center

    Mukan, Nataliya; Kravets, Svitlana

    2015-01-01

    In the article the methodology of comparative analysis of public school teachers' continuing professional development (CPD) in Great Britain, Canada and the USA has been presented. The main objectives are defined as theoretical analysis of scientific and pedagogical literature, which highlights different aspects of the problem under research;…

  9. Adjoint-based constant-mass partial derivatives

    DOE PAGES

    Favorite, Jeffrey A.

    2017-09-01

    In transport theory, adjoint-based partial derivatives with respect to mass density are constant-volume derivatives. Likewise, adjoint-based partial derivatives with respect to surface locations (i.e., internal interface locations and the outer system boundary) are constant-density derivatives. This study derives the constant-mass partial derivative of a response with respect to an internal interface location or the outer system boundary and the constant-mass partial derivative of a response with respect to the mass density of a region. Numerical results are given for a multiregion two-dimensional (r-z) cylinder for three very different responses: the uncollided gamma-ray flux at an external detector point, k_eff of the system, and the total neutron leakage. Finally, results from the derived formulas compare extremely well with direct perturbation calculations.
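
    The flavor of the derivation can be captured compactly: holding the region mass M = ρV(r) fixed while a bounding surface at r moves forces a compensating density change, so the constant-mass derivative combines the two constant-parameter derivatives via the chain rule. The following is a sketch under that assumption, with notation of our own rather than the paper's:

```latex
% Constant-mass derivative of a response R with respect to a surface location r.
% Fixed mass M = \rho V(r) implies the compensating density change
% \left. d\rho/dr \right|_{M} = -\rho V'(r)/V(r), hence
\left.\frac{\partial R}{\partial r}\right|_{M}
  = \left.\frac{\partial R}{\partial r}\right|_{\rho}
  - \left.\frac{\partial R}{\partial \rho}\right|_{r}\,
    \frac{\rho\, V'(r)}{V(r)}
```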

  10. The Challenge of Timely, Responsive and Rigorous Ethics Review of Disaster Research: Views of Research Ethics Committee Members.

    PubMed

    Hunt, Matthew; Tansey, Catherine M; Anderson, James; Boulanger, Renaud F; Eckenwiler, Lisa; Pringle, John; Schwartz, Lisa

    2016-01-01

    Research conducted following natural disasters such as earthquakes, floods or hurricanes is crucial for improving relief interventions. Such research, however, poses ethical, methodological and logistical challenges for researchers. Oversight of disaster research also poses challenges for research ethics committees (RECs), in part due to the rapid turnaround needed to initiate research after a disaster. Currently, there is limited knowledge available about how RECs respond to and appraise disaster research. To address this knowledge gap, we investigated the experiences of REC members who had reviewed disaster research conducted in low- or middle-income countries. We used interpretive description methodology and conducted in-depth interviews with 15 respondents. Respondents were chairs, members, advisors, or coordinators from 13 RECs, including RECs affiliated with universities, governments, international organizations, a for-profit REC, and an ad hoc committee established during a disaster. Interviews were analyzed inductively using constant comparative techniques. Through this process, three elements were identified as characterizing effective and high-quality review: timeliness, responsiveness and rigorousness. To ensure timeliness, many RECs rely on adaptations of review procedures for urgent protocols. Respondents emphasized that responsive review requires awareness of and sensitivity to the particularities of disaster settings and disaster research. Rigorous review was linked with providing careful assessment of ethical considerations related to the research, as well as ensuring independence of the review process. Both the frequency of disasters and the conduct of disaster research are on the rise. Ensuring effective and high quality review of disaster research is crucial, yet challenges, including time pressures for urgent protocols, exist for achieving this goal. Adapting standard REC procedures may be necessary. However, steps should be taken to ensure that ethics review of disaster research remains diligent and thorough.

  11. The Challenge of Timely, Responsive and Rigorous Ethics Review of Disaster Research: Views of Research Ethics Committee Members

    PubMed Central

    Hunt, Matthew; Tansey, Catherine M.

    2016-01-01

    Background Research conducted following natural disasters such as earthquakes, floods or hurricanes is crucial for improving relief interventions. Such research, however, poses ethical, methodological and logistical challenges for researchers. Oversight of disaster research also poses challenges for research ethics committees (RECs), in part due to the rapid turnaround needed to initiate research after a disaster. Currently, there is limited knowledge available about how RECs respond to and appraise disaster research. To address this knowledge gap, we investigated the experiences of REC members who had reviewed disaster research conducted in low- or middle-income countries. Methods We used interpretive description methodology and conducted in-depth interviews with 15 respondents. Respondents were chairs, members, advisors, or coordinators from 13 RECs, including RECs affiliated with universities, governments, international organizations, a for-profit REC, and an ad hoc committee established during a disaster. Interviews were analyzed inductively using constant comparative techniques. Results Through this process, three elements were identified as characterizing effective and high-quality review: timeliness, responsiveness and rigorousness. To ensure timeliness, many RECs rely on adaptations of review procedures for urgent protocols. Respondents emphasized that responsive review requires awareness of and sensitivity to the particularities of disaster settings and disaster research. Rigorous review was linked with providing careful assessment of ethical considerations related to the research, as well as ensuring independence of the review process. Conclusion Both the frequency of disasters and the conduct of disaster research are on the rise. Ensuring effective and high quality review of disaster research is crucial, yet challenges, including time pressures for urgent protocols, exist for achieving this goal. Adapting standard REC procedures may be necessary. However, steps should be taken to ensure that ethics review of disaster research remains diligent and thorough. PMID:27327165

  12. Comparison of methodological quality of positive versus negative comparative studies published in Indian medical journals: a systematic review

    PubMed Central

    Charan, Jaykaran; Chaudhari, Mayur; Jackson, Ryan; Mhaskar, Rahul; Reljic, Tea; Kumar, Ambuj

    2015-01-01

    Objectives Published negative studies should have the same rigour of methodological quality as studies with positive findings. However, the methodological quality of negative versus positive studies is not known. The objective was to assess the reported methodological quality of positive versus negative studies published in Indian medical journals. Design A systematic review (SR) was performed of all comparative studies published in Indian medical journals with a clinical science focus and impact factor >1 between 2011 and 2013. The methodological quality of randomised controlled trials (RCTs) was assessed using the Cochrane risk of bias tool, and the Newcastle-Ottawa scale for observational studies. The results were considered positive if the primary outcome was statistically significant and negative otherwise. When the primary outcome was not specified, we used data on the first outcome reported in the history followed by the results section. Differences in various methodological quality domains between positive versus negative studies were assessed by Fisher's exact test. Results Seven journals with 259 comparative studies were included in this SR. 24% (63/259) were RCTs, 24% (63/259) cohort studies, and 49% (128/259) case–control studies. 53% (137/259) of studies explicitly reported the primary outcome. Five studies did not report sufficient data to enable us to determine if results were positive or negative. Statistical significance was determined by p value in 78.3% (199/254), CI in 2.8% (7/254), both p value and CI in 11.8% (30/254), and only descriptive in 6.3% (16/254) of studies. The overall methodological quality was poor and no statistically significant differences between reporting of methodological quality were detected between studies with positive versus negative findings. Conclusions There was no difference in the reported methodological quality of positive versus negative studies. However, the uneven reporting of positive versus negative studies (72% vs 28%) indicates a publication bias in Indian medical journals with an impact factor of >1. PMID:26109118

  13. Methodology to design a municipal solid waste pre-collection system. A case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallardo, A., E-mail: gallardo@uji.es; Carlos, M., E-mail: mcarlos@uji.es; Peris, M., E-mail: perism@uji.es

    Highlights: • MSW recovery starts at homes; therefore it is important to facilitate it for people. • Additionally, to optimize MSW collection, the pre-collection must be planned in advance. • A methodology to organize pre-collection considering several factors is presented. • The methodology has been verified by applying it to a middle-sized Spanish town. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists in defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to characterize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize MSW collection routes, including selective collection. The methodology has been successfully applied to a Spanish town to verify it.

  14. Major Upgrades to the AIRS Version-6 Water Vapor Profile Methodology

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2015-01-01

    This research is a continuation of part of what was shown at the last AIRS Science Team Meeting and the AIRS 2015 NetMeeting. AIRS Version 6 was finalized in late 2012 and is now operational. Version 6 contained many significant improvements in retrieval methodology compared to Version 5. Version 6 retrieval methodology used for the water vapor profile q(p) and ozone profile O3(p) retrievals is basically unchanged from Version 5, or even from Version 4. Subsequent research has made significant improvements in both water vapor and O3 profiles compared to Version 6.

  15. Instrumentation Methodology for Automobile Crash Testing

    DOT National Transportation Integrated Search

    1974-08-01

    Principal characteristics of existing data acquisition practices and instrumentation methodologies have been reviewed to identify differences which are responsible for difficulties in comparing and interpreting structural crash test data. Recommendat...

  16. BSM2 Plant-Wide Model construction and comparative analysis with other methodologies for integrated modelling.

    PubMed

    Grau, P; Vanrolleghem, P; Ayesa, E

    2007-01-01

    In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations approach proposed in this methodology does not require the development of specific transformers to interface unit process models and allows the construction of tailored models for a particular WWTP, guaranteeing mass and charge continuity for the whole model. The BSM2 PWM constructed as a case study is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines in a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared to other approaches for integrated modelling are verified in terms of flexibility and coherence.

  17. Depth optimal sorting networks resistant to k passive faults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piotrow, M.

    In this paper, we study the problem of constructing a sorting network that is tolerant to faults and whose running time (i.e. depth) is as small as possible. We consider the scenario of worst-case comparator faults and follow the model of passive comparator failure proposed by Yao and Yao, in which a faulty comparator outputs directly its inputs without comparison. Our main result is the first construction of an N-input, k-fault-tolerant sorting network that is of an asymptotically optimal depth Θ(log N + k). That improves over the recent result of Leighton and Ma, whose network is of depth O(log N + k log log N / log k). Actually, we present a fault-tolerant correction network that can be added after any N-input sorting network to correct its output in the presence of at most k faulty comparators. Since the depth of the network is O(log N + k) and the constants hidden behind the "O" notation are not big, the construction can be of practical use. Developing the techniques necessary to show the main result, we construct a fault-tolerant network for the insertion problem. As a by-product, we get an N-input, O(log N)-depth INSERT-network that is tolerant to random faults, thereby answering a question posed by Ma in his PhD thesis. The results are based on a new notion of constant delay comparator networks, that is, networks in which each register is used (compared) only in a period of time of a constant length. Copies of such networks can be put one after another with only a constant increase in depth per copy.

  18. Ratios of Vector and Pseudoscalar B Meson Decay Constants in the Light-Cone Quark Model

    NASA Astrophysics Data System (ADS)

    Dhiman, Nisha; Dahiya, Harleen

    2018-05-01

    We study the decay constants of pseudoscalar and vector B mesons in the framework of the light-cone quark model. We apply the variational method to the relativistic Hamiltonian with a Gaussian-type trial wave function to obtain the values of the scale parameter β. Then, with the help of known values of the constituent quark masses, we obtain numerical results for the decay constants f_P and f_V, respectively. We compare our numerical results with the existing experimental data.

  19. Antimicrobial Activity of 8-Quinolinols, Salicylic Acids, Hydroxynaphthoic Acids, and Salts of Selected Quinolinols with Selected Hydroxy-Acids

    PubMed Central

    Gershon, Herman; Parmegiani, Raulo

    1962-01-01

    Seventy-seven compounds were screened by the disc-plate method against strains of five bacteria and five fungi. A new constant was proposed to describe the antimicrobial activity of a compound in a defined system of organisms. This constant includes not only the inhibitory level of activity of the material but also the number of organisms inhibited. This constant, the antimicrobial spectrum index, was compared with the antimicrobial index of Albert. PMID:13898066

  20. Very high pressure liquid chromatography using core-shell particles: quantitative analysis of fast gradient separations without post-run times.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois A; Guiochon, Georges

    2014-01-17

    Five methods for controlling the mobile phase flow rate for gradient elution analyses using very high pressure liquid chromatography (VHPLC) were tested to determine thermal stability of the column during rapid gradient separations. To obtain rapid separations, instruments are operated at high flow rates and high inlet pressure leading to uneven thermal effects across columns and additional time needed to restore thermal equilibrium between successive analyses. The purpose of this study is to investigate means to minimize thermal instability and obtain reliable results by measuring the reproducibility of the results of six replicate gradient separations of a nine component RPLC standard mixture under various experimental conditions with no post-run times. Gradient separations under different conditions were performed: constant flow rates, two sets of constant pressure operation, programmed flow constant pressure operation, and conditions which theoretically should yield a constant net heat loss at the column's wall. The results show that using constant flow rates, programmed flow constant pressures, and constant heat loss at the column's wall all provide reproducible separations. However, performing separations using a high constant pressure with programmed flow reduces the analysis time by 16% compared to constant flow rate methods. For the constant flow rate, programmed flow constant pressure, and constant wall heat experiments no equilibration time (post-run time) was required to obtain highly reproducible data. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Alcohol and drug treatment outcome studies: new methodological review (2005-2010) and comparison with past reviews.

    PubMed

    Robinson, Sean M; Sobell, Linda Carter; Sobell, Mark B; Arcidiacono, Steven; Tzall, David

    2014-01-01

    Several methodological reviews of alcohol treatment outcome studies and one review of drug studies have been published over the past 40 years. Although past reviews demonstrated methodological improvements in alcohol studies, they also found continued deficiencies. The current review allows for an updated evaluation of the methodological rigor of alcohol and drug studies and, by utilizing inclusion criteria similar to previous reviews, it allows for a comparative review over time. In addition, this is the first review that compares the methodology of alcohol and drug treatment outcome studies published during the same time period. The methodology of 25 alcohol and 11 drug treatment outcome studies published from 2005 through 2010 that met the review's inclusion criteria was evaluated. The majority of the variables evaluated were used in prior reviews. The current review found that more alcohol and drug treatment outcome studies are now using continuous substance use measures and assessing problem severity. Although there have been methodological improvements over time, the current reviews differed little from their most recent past counterparts. Despite this finding, some areas, particularly the continued low reporting of demographic data, need strengthening. Improvement in the methodological rigor of alcohol and drug treatment outcome studies has occurred over time. The current review found few differences between alcohol and drug study methodologies, as well as few differences between the current review and the most recent past alcohol and drug reviews. © 2013 Elsevier Ltd. All rights reserved.

  2. Quality of systematic reviews in pediatric oncology--a systematic review.

    PubMed

    Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W; van Dalen, Elvira C; Kremer, Leontien C M

    2009-12-01

    To ensure evidence-based decision making in pediatric oncology, systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. We identified eligible systematic reviews through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. We included 117 systematic reviews: 99 published in regular journals and 18 Cochrane systematic reviews. The average methodological quality of the systematic reviews was low on all ten items, but the quality of Cochrane systematic reviews was significantly higher than that of systematic reviews published in regular journals. On a 1-7 scale, the median overall quality score for all systematic reviews was 2 (range 1-7), with a score of 1 (range 1-7) for systematic reviews in regular journals compared to 6 (range 3-7) for Cochrane systematic reviews (p<0.001). Most systematic reviews in the field of pediatric oncology seem to have serious methodological flaws leading to a high risk of bias. While Cochrane systematic reviews were of higher methodological quality than systematic reviews in regular journals, some of them also had methodological problems. Therefore, the methodology of each individual systematic review should be scrutinized before accepting its results.

  3. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  4. Spectrofluorimetric determination of stoichiometry and association constants of the complexes of harmane and harmine with beta-cyclodextrin and chemically modified beta-cyclodextrins.

    PubMed

    Martín, L; León, A; Olives, A I; Del Castillo, B; Martín, M A

    2003-06-13

    The association characteristics of the inclusion complexes of the beta-carboline alkaloids harmane and harmine with beta-cyclodextrin (beta-CD) and chemically modified beta-cyclodextrins such as hydroxypropyl-beta-cyclodextrin (HPbeta-CD), 2,3-di-O-methyl-beta-cyclodextrin (DMbeta-CD) and 2,3,6-tri-O-methyl-beta-cyclodextrin (TMbeta-CD) are described. The association constants vary from 112 for harmine/DMbeta-CD to 418 for harmane/HPbeta-CD. The magnitude of the interactions between the host and guest molecules depends on the chemical and geometrical characteristics of the guest molecules, and therefore the association constants vary for the different cyclodextrin complexes. Steric hindrance is greater in the case of harmine due to the presence of the methoxy group on the beta-carboline ring. The association obtained for the harmane complexes is stronger than that observed for the harmine complexes, except in the case of harmine/TMbeta-CD. Important differences in the association constants were observed depending on the experimental variable used in the calculations (the absolute value of the fluorescence intensity or the ratio between the fluorescence intensities corresponding to the neutral and cationic forms). When fluorescence intensity values were considered, the association constants were higher than when the ratio of the emission intensities for the cationic and neutral species was used. These differences are a consequence of the co-existence of acid-base equilibria in the ground and excited states together with the complexation equilibria. The existence of a proton transfer reaction in the excited states of harmane or harmine implies the need for an experimental dialysis procedure to separate the complexes from free harmane or harmine. Such methodology allows quantitative results for stoichiometry determinations to be obtained, which show the existence of both 1:1 and 1:2 beta-carboline alkaloid:CD complexes with different solubility properties.
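
    For a 1:1 complex, fluorescence titration data can be fitted to F([CD]) = (F0 + Finf*K*[CD]) / (1 + K*[CD]) to recover the association constant K. A hedged sketch with synthetic data (the constants are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_1to1(cd, f0, f_inf, k):
    """1:1 host-guest fluorescence model: weighted average of free and bound."""
    return (f0 + f_inf * k * cd) / (1.0 + k * cd)

# Synthetic titration (illustrative): K = 400 M^-1, F0 = 100, Finf = 350.
cd = np.linspace(0.0, 0.01, 12)  # cyclodextrin concentration, M
f_obs = binding_1to1(cd, 100.0, 350.0, 400.0)
(f0, f_inf, k), _ = curve_fit(binding_1to1, cd, f_obs, p0=[90.0, 300.0, 200.0])
print(f"K = {k:.0f} M^-1")  # ~400
```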

  5. Fuzzy inductive reasoning: a consolidated approach to data-driven construction of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Nebot, Àngela; Mugica, Francisco

    2012-10-01

    Fuzzy inductive reasoning (FIR) is a modelling and simulation methodology derived from the General Systems Problem Solver. It compares favourably with other soft computing methodologies, such as neural networks, genetic or neuro-fuzzy systems, and with hard computing methodologies, such as AR, ARIMA, or NARMAX, when it is used to predict future behaviour of different kinds of systems. This paper contains an overview of the FIR methodology, its historical background, and its evolution.

  6. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

    Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis also affecting the coronary, cerebral and renal arteries, and is associated with an increased risk of cardiovascular events. Many economic evaluations have been published for LEAD due to its clinical, social and economic importance. The aim of this systematic review was to assess the modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched until February 2013 via the OVID interface. The Cochrane Database of Systematic Reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the National Health Service Economic Evaluation Database (NHSEED) were also searched. The methodological quality of the included studies was assessed using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies, three models compared diagnostic tests, and two models compared a combination of diagnostic and therapeutic options for LEAD. The results of this systematic review revealed an acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  7. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for measurement results from a new type of portable device: the comparability of hourly PM10 concentration series with reference station measurements using statistical methods. Technical aspects of the new portable meters are presented. The emphasis is placed on a methodological concept for assessing the comparability of results using stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; the comparison of regularity should be carried out in three complementary fields of statistical modeling: time, frequency, and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence between the new devices' measurements and the reference results.

  8. Reflections on International Comparative Education Survey Methodology: A Case Study of the European Survey on Language Competences

    ERIC Educational Resources Information Center

    Ashton, Karen

    2016-01-01

    This paper reflects on the methodology used in international comparative education surveys by conducting a systematic review of the European Survey on Language Competences (ESLC). The ESLC was administered from February to March 2011, with final results released in June 2012. The survey tested approximately 55,000 students across 14 European…

  9. Comparing Methodologies for Evaluating Emergency Medical Services Ground Transport Access to Time-critical Emergency Services: A Case Study Using Trauma Center Care.

    PubMed

    Doumouras, Aristithes G; Gomez, David; Haas, Barbara; Boyes, Donald M; Nathens, Avery B

    2012-09-01

    The regionalization of medical services has resulted in improved outcomes and greater compliance with existing guidelines. For certain "time-critical" conditions intimately associated with emergency medicine, early intervention has demonstrated mortality benefits. For these conditions, then, appropriate triage within a regionalized system at first diagnosis is paramount, ideally occurring in the field by emergency medical services (EMS) personnel. Therefore, EMS ground transport access is an important metric in the ongoing evaluation of a regionalized care system for time-critical emergency services. To our knowledge, no studies have demonstrated how methodologies for calculating EMS ground transport access differ in their estimates of access over the same study area for the same resource. This study uses two methodologies to calculate EMS ground transport access to trauma center care in a single study area and critically evaluates the differences between them. Two methodologies were compared in their estimations of EMS ground transport access to trauma center care: a routing methodology (RM) and an as-the-crow-flies methodology (ACFM). These methodologies were adaptations of the only two that had previously been used in the literature to calculate EMS ground transport access to time-critical emergency services across the United States. The RM and ACFM were applied to the nine Level I and Level II trauma centers within the province of Ontario by creating trauma center catchment areas at 30, 45, 60, and 120 minutes and calculating the population and area encompassed by each catchment. Because the methodologies were identical for measuring air access, this study looks specifically at EMS ground transport access. Catchments for the province were created for each methodology at each time interval, and their populations and areas were significantly different at all time periods. Specifically, the RM calculated significantly larger populations at every time interval, while the ACFM calculated larger catchment areas. This trend is counterintuitive (a larger catchment area should encompass a larger population), and the disparity was greatest at the shortest time intervals (under 60 minutes). Through critical evaluation of the differences, the authors found that the ACFM can assign road access to areas with no roads and overestimates access in low-density areas compared with the RM, potentially affecting delivery-of-care decisions. Based on these results, the authors believe that future methodologies for calculating EMS ground transport access must incorporate a continuous and valid route through the road network and use travel speeds appropriate to the road segments traveled. Overall, as more complex models for calculating EMS ground transport access come into use, a standard methodology is needed against which they can be compared and improved. Based on these findings, the authors believe this standard should be the RM. © 2012 by the Society for Academic Emergency Medicine.
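
    To make the ACFM/RM contrast concrete, here is a minimal sketch for a single hypothetical origin point. The ACFM applies one assumed speed to the great-circle (haversine) distance; the RM is approximated by an assumed road detour factor and a road-appropriate speed. The coordinates, speeds, and 1.4 detour factor are illustrative assumptions, not values from the study.

    ```python
    # A minimal sketch contrasting the two access estimates for one origin:
    # the ACFM applies a single assumed speed to the great-circle distance,
    # while the RM is approximated here by an assumed 1.4x road detour factor
    # and a road-appropriate speed. All numeric values are illustrative.
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two (lat, lon) points."""
        r_earth = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r_earth * math.asin(math.sqrt(a))

    trauma_center = (43.657, -79.389)      # hypothetical coordinates
    origin = (44.25, -79.0)

    straight_km = haversine_km(*origin, *trauma_center)
    acfm_min = straight_km / 80.0 * 60.0           # ACFM: 80 km/h on a straight line
    rm_min = straight_km * 1.4 / 95.0 * 60.0       # RM proxy: longer route, faster roads

    for label, minutes in (("ACFM", acfm_min), ("RM", rm_min)):
        print(f"{label}: {minutes:5.1f} min -> in 60-min catchment: {minutes <= 60}")
    ```

    For this origin the ACFM places the point inside the 60-minute catchment while the routed estimate does not, which is exactly the kind of divergence the study documents.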

  10. Surgical Versus Nonsurgical Treatment for Midshaft Clavicle Fractures in Patients Aged 16 Years and Older: A Systematic Review, Meta-analysis, and Comparison of Randomized Controlled Trials and Observational Studies.

    PubMed

    Smeeing, Diederik P J; van der Ven, Denise J C; Hietbrink, Falco; Timmers, Tim K; van Heijl, Mark; Kruyt, Moyo C; Groenwold, Rolf H H; van der Meijden, Olivier A J; Houwert, Roderick M

    2017-07-01

    There is no consensus on the choice of treatment of midshaft clavicle fractures (MCFs). The aims of this systematic review and meta-analysis were (1) to compare fracture healing disorders and functional outcomes of surgical versus nonsurgical treatment of MCFs and (2) to compare effect estimates obtained from randomized controlled trials (RCTs) and observational studies. Systematic review and meta-analysis. The PubMed/MEDLINE, Embase, CENTRAL, and CINAHL databases were searched for both RCTs and observational studies. Using the MINORS instrument, all included studies were assessed on their methodological quality. The primary outcome was nonunion. Effects of surgical versus nonsurgical treatment were estimated using random-effects meta-analysis models. A total of 20 studies were included, of which 8 were RCTs and 12 were observational studies, together including 1760 patients. Results were similar across the different study designs. A meta-analysis of 19 studies revealed that nonunions were significantly less common after surgical treatment than after nonsurgical treatment (odds ratio [OR], 0.18 [95% CI, 0.10-0.33]). The risk of malunions did not differ between surgical and nonsurgical treatment (OR, 0.38 [95% CI, 0.12-1.19]). Both the long-term Disabilities of the Arm, Shoulder and Hand (DASH) and Constant-Murley scores favored surgical treatment (DASH: mean difference [MD], -2.04 [95% CI, -3.56 to -0.52]; Constant-Murley: MD, 3.23 [95% CI, 1.52 to 4.95]). No differences were observed regarding revision surgery (OR, 0.85 [95% CI, 0.42-1.73]). When only high-quality studies were included, both the number of malunions and the days to return to work showed significant differences in favor of surgical treatment (malunions: OR, 0.26 [95% CI, 0.07 to 0.92]; return to work: MD, -8.64 [95% CI, -16.22 to -1.05]). This meta-analysis of high-quality studies showed that surgical treatment of MCFs results in fewer nonunions, fewer malunions, and an accelerated return to work compared with nonsurgical treatment. A meta-analysis of surgical treatments need not be restricted to randomized trials, provided that the included observational studies are of high quality.
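
    For readers unfamiliar with the random-effects models used above, the following is a minimal sketch of DerSimonian-Laird pooling of log odds ratios. The 2x2 study counts are invented for illustration and do not reproduce the review's data.

    ```python
    # A minimal sketch of DerSimonian-Laird random-effects pooling of log odds
    # ratios, the kind of model used above. Study counts are invented.
    import math

    # (events_surgical, n_surgical, events_nonsurgical, n_nonsurgical)
    studies = [(2, 60, 9, 58), (1, 45, 7, 47), (3, 80, 12, 75)]

    log_ors, variances = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c
        log_ors.append(math.log((a * d) / (b * c)))
        variances.append(1/a + 1/b + 1/c + 1/d)          # variance of log OR

    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q_stat = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    c_term = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q_stat - (len(studies) - 1)) / c_term)  # between-study variance

    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    low, high = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled OR {math.exp(pooled):.2f} "
          f"[95% CI {math.exp(low):.2f}-{math.exp(high):.2f}]")
    ```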

  11. Estimations of global warming potentials from computational chemistry calculations for CH(2)F(2) and other fluorinated methyl species verified by comparison to experiment.

    PubMed

    Blowers, Paul; Hollingshead, Kyle

    2009-05-21

    In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find that our computational chemistry approach reproduces well all of the phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for the partition function calculations in transition-state theory, and infrared intensities needed for radiative forcing estimates, agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to results superior to all previous heat of reaction estimates and most barrier height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered rotor treatment, where appropriate, led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared with prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], whose estimates also compare favorably with experimental values.
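
    The final bookkeeping step of such a calculation, converting a radiative efficiency and an atmospheric lifetime into a GWP, reduces to a short formula. The sketch below shows it under the usual single-exponential decay assumption; all numeric inputs are illustrative values of roughly the right magnitude, not the paper's computed results.

    ```python
    # A minimal sketch of converting a radiative efficiency and lifetime into
    # a 100-year GWP, assuming single-exponential decay. Inputs are
    # illustrative, not the paper's values.
    import math

    def agwp_per_ppb(rad_eff, lifetime_yr, horizon_yr):
        """Time-integrated radiative forcing per ppb for exponential decay."""
        return rad_eff * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))

    re_hfc32, tau_hfc32 = 0.11, 5.2      # W m-2 ppb-1 and years (illustrative)
    re_co2 = 1.37e-5                     # W m-2 ppb-1 (illustrative)
    co2_integrated_yr = 52.4             # effective integrated airborne years over
                                         # 100 yr for CO2 (Bern-type response, assumed)

    agwp_hfc32 = agwp_per_ppb(re_hfc32, tau_hfc32, 100.0)
    agwp_co2 = re_co2 * co2_integrated_yr

    # Convert the per-ppb ratio to the conventional per-kg basis via molar masses.
    gwp_100 = (agwp_hfc32 / agwp_co2) * (44.01 / 52.02)
    print(f"illustrative 100-yr GWP for CH2F2: {gwp_100:.0f}")
    ```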

  12. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Implementations of the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps the two methods are comparable, but at larger time steps EE performs worse than Euler. We derive error bounds for both methods and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with computational efficiency identical to that of EE.
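
    The two update rules compared above are easy to state for a gating variable with first-order kinetics, dx/dt = (x_inf - x)/tau, and the sketch below implements both. Note that with x_inf and tau held constant, exponential Euler is exact by construction, so this toy favors it; the paper's finding that EE degrades at larger steps concerns the realistic setting with time-varying coefficients and voltage measurement error. All values are illustrative.

    ```python
    # A minimal sketch of the two first-order update rules for a gating
    # variable obeying dx/dt = (x_inf - x) / tau; x_inf and tau are held
    # constant here, so exponential Euler is exact by construction.
    import math

    def euler_step(x, x_inf, tau, dt):
        return x + dt * (x_inf - x) / tau

    def exp_euler_step(x, x_inf, tau, dt):
        # Exact when x_inf and tau are constant over the step.
        return x_inf + (x - x_inf) * math.exp(-dt / tau)

    x_inf, tau = 1.0, 5.0               # illustrative steady state, time constant (ms)
    for dt in (0.1, 1.0, 10.0):         # forward Euler oscillates for dt >= 2 * tau
        x_e = x_ee = 0.0
        t = 0.0
        while t < 20.0:
            x_e = euler_step(x_e, x_inf, tau, dt)
            x_ee = exp_euler_step(x_ee, x_inf, tau, dt)
            t += dt
        exact = x_inf * (1.0 - math.exp(-t / tau))
        print(f"dt={dt:4.1f} ms: Euler error {abs(x_e - exact):.2e}, "
              f"exp. Euler error {abs(x_ee - exact):.2e}")
    ```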

  13. Kinetic Monte Carlo simulations of the effect of the exchange control layer thickness in CoPtCrB/CoPtCrSiO granular media

    NASA Astrophysics Data System (ADS)

    Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.

    2018-05-01

    A hybrid Landau-Lifshitz-Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual-layer exchange coupled composite media. The calculation of the rate coefficients, and the difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed, and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual-layer CoPtCrB/CoPtCrSiO media, and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field, as well as the saturation field at sweep rates relevant to bit write speeds in magnetic recording, are also presented. The significance of this comparison between simulations and experiment, and the estimates of the material parameters obtained from it, are discussed in relation to optimizing the performance of magnetic storage media.
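
    A minimal sketch of one rejection-free kinetic Monte Carlo step helps show why low energy barriers are a problem: an Arrhenius rate with a small barrier dominates the total rate and drives the waiting time toward zero. The barriers, attempt frequency, and temperature below are illustrative, not the media parameters from the paper.

    ```python
    # A minimal sketch of one rejection-free kinetic Monte Carlo step over
    # Arrhenius-type grain-reversal rates. The 0.05 eV entry shows how a low
    # barrier dominates the total rate and forces very small waiting times.
    import math
    import random

    KB_EV = 8.617e-5                 # Boltzmann constant, eV/K
    F0 = 1e9                         # attempt frequency, 1/s (assumed)
    T = 300.0                        # temperature, K

    barriers_ev = [0.9, 1.1, 0.05]   # illustrative energy barriers for three grains
    rates = [F0 * math.exp(-eb / (KB_EV * T)) for eb in barriers_ev]
    total = sum(rates)

    # Select an event with probability proportional to its rate.
    u = random.random() * total
    cumulative = 0.0
    for grain, rate in enumerate(rates):
        cumulative += rate
        if u <= cumulative:
            break

    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(random.random()) / total
    print(f"grain {grain} reverses; clock advances by {dt:.3e} s")
    ```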

  14. Using decision tree models to depict primary care physicians CRC screening decision heuristics.

    PubMed

    Wackerbarth, Sarah B; Tarasenko, Yelena N; Curtis, Laurel A; Joyce, Jennifer M; Haist, Steven A

    2007-10-01

    The purpose of this study was to identify decision heuristics utilized by primary care physicians in formulating colorectal cancer screening recommendations. Qualitative research using in-depth semi-structured interviews. We interviewed 66 primary care internists and family physicians evenly drawn from academic and community practices. A majority of physicians were male, and almost all were white, non-Hispanic. Three researchers independently reviewed each transcript to determine the physician's decision criteria and developed decision trees. Final trees were developed by consensus. The constant comparative methodology was used to define the categories. Physicians were found to use 1 of 4 heuristics ("age 50," "age 50; if family history, then earlier," "age 50; if family history, then screen at age 40," or "age 50; if family history, then adjust relative to reference case") for the timing recommendation, and 1 of 5 heuristics ("fecal occult blood test [FOBT]," "colonoscopy," "if not colonoscopy, then...," "FOBT and another test," or "a choice between options") for the screening type decision. No connection was found between the timing and screening type heuristics. We found evidence of heuristic use. Further research is needed to determine the potential impact on quality of care.
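
    The timing heuristics lend themselves to being written directly as code; the sketch below encodes one of them ("age 50; if family history, then screen at age 40"). The function name and interface are invented for illustration.

    ```python
    # A minimal sketch expressing one timing heuristic from the study as
    # executable logic; the function name and interface are hypothetical.
    def recommended_screening_age(family_history: bool) -> int:
        """Start screening at 50, or at 40 when there is a family history."""
        return 40 if family_history else 50

    assert recommended_screening_age(False) == 50
    assert recommended_screening_age(True) == 40
    ```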

  15. Three-temperature plasma shock solutions with gray radiation diffusion

    DOE PAGES

    Johnson, Bryan M.; Klein, Richard I.

    2016-04-19

    The effects of radiation on the structure of shocks in a fully ionized plasma are investigated by solving the steady-state fluid equations for ions, electrons, and radiation. The electrons and ions are assumed to have the same bulk velocity but separate temperatures, and the radiation is modeled with the gray diffusion approximation. Both electron and ion conduction are included, as well as ion viscosity. When the material is optically thin, three-temperature behavior occurs. When the diffusive flux of radiation is important but radiation pressure is not, two-temperature behavior occurs, with the electrons strongly coupled to the radiation. Since the radiation heats the electrons on length scales that are much longer than the electron-ion Coulomb coupling length scale, these solutions resemble radiative shock solutions rather than plasma shock solutions that neglect radiation. When radiation pressure is important, all three components are strongly coupled. Results with constant values for the transport and coupling coefficients are compared to a full numerical simulation, with a good match between the two, demonstrating that steady shock solutions constitute a straightforward and comprehensive verification test methodology for multi-physics numerical algorithms.

  16. The Stigma of Hearing Loss

    PubMed Central

    Wallhagen, Margaret I.

    2010-01-01

    Purpose: To explore dimensions of stigma experienced by older adults with hearing loss and those with whom they frequently communicate, in order to target interventions promoting engagement and positive aging. Design and Methods: This longitudinal qualitative study conducted interviews over 1 year with dyads in which one partner had hearing loss. Participants were naive to hearing aids or had not worn them in the past year. Data were analyzed using grounded theory's constant comparative methodology. Results: Perceived stigma emerged as influencing decision-making processes at multiple points along the experiential continuum of hearing loss, such as initial acceptance of hearing loss, whether to be tested, the type of hearing aid selected, and when and where hearing aids were worn. Stigma was related to 3 interrelated experiences (alterations in self-perception, ageism, and vanity) and was influenced by dyadic relationships and external societal forces, such as health and hearing professionals and the media. Implications: Findings are discussed in relation to theoretical perspectives regarding stigma and ageism and suggest the need to destigmatize hearing loss by promoting its assessment and treatment as well as emphasizing the importance of remaining actively engaged to support positive physical and cognitive functioning. PMID:19592638

  17. Investigating the Water Vapor Component of the Greenhouse Effect from the Atmospheric InfraRed Sounder (AIRS)

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Barnet, C.; Sun, F.; Goldberg, M.

    2009-12-01

    We investigate the water vapor component of the greenhouse effect in the tropical region using data from the Atmospheric InfraRed Sounder (AIRS). Unlike previous studies, which have relied on the assumption of a constant lapse rate and performed coarse-layer or total-column sensitivity analyses, we exploit AIRS's high vertical resolution to measure the sensitivity of the greenhouse effect to water vapor along the vertical column. We employ a "partial radiative perturbation" methodology and discriminate between two different dynamic regimes, convective and non-convective. This analysis provides useful insights into the occurrence and strength of the water vapor greenhouse effect and its sensitivity to spatial variations of surface temperature. By comparison with the clear-sky computations conducted in previous work, we attempt to bound an estimate of the cloud contribution to the greenhouse effect. Our results compare well with the current literature, falling in the upper range of existing global circulation model estimates. We offer the results of this analysis as a useful reference to help discriminate among model simulations and improve our capability to make predictions about the future of our climate.
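
    The partial radiative perturbation bookkeeping can be sketched as a finite-difference loop: perturb water vapor one layer at a time and difference the outgoing flux. The olr() function below is an invented toy stand-in, not a radiative transfer model, and the profile and weights are arbitrary.

    ```python
    # A toy finite-difference version of the "partial radiative perturbation"
    # idea: perturb water vapor layer by layer and difference the outgoing
    # flux. olr() is a hypothetical stand-in, not real radiative transfer.
    import numpy as np

    def olr(q_profile, weights):
        """Toy outgoing longwave radiation: more column vapor, less OLR."""
        return 260.0 - 20.0 * np.log1p(weights @ q_profile)

    n_layers = 20
    q = np.linspace(2.0, 0.01, n_layers)          # toy specific humidity profile
    weights = np.linspace(1.5, 0.5, n_layers)     # arbitrary per-layer weights
    base = olr(q, weights)

    sensitivity = np.empty(n_layers)
    for k in range(n_layers):
        q_pert = q.copy()
        q_pert[k] *= 1.05                         # 5% perturbation in layer k only
        sensitivity[k] = olr(q_pert, weights) - base

    print("layer with largest greenhouse response:", int(np.argmin(sensitivity)))
    ```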

  18. Unconventional bearing capacity analysis and optimization of multicell box girders.

    PubMed

    Tepic, Jovan; Doroslovacki, Rade; Djelosevic, Mirko

    2014-01-01

    This study deals with unconventional bearing capacity analysis and the procedure of optimizing a two-cell box girder. A generalized model enabling local stress-strain analysis of multicell girders was developed based on the principle of cross-sectional decomposition. The applied methodology is verified using experimental data (Djelosevic et al., 2012) for traditionally formed box girders. The qualitative and quantitative evaluation of the results obtained for the two-cell box girder is carried out through comparative analysis using the finite element method (FEM) and the ANSYS v12 software. The deflection functions obtained by the analytical and numerical methods were found to be consistent, with a maximum deviation not exceeding 4%. Multicell box girders are rationally designed support structures characterized by much lower susceptibility of their cross-sectional elements to buckling and a higher specific capacity than traditionally formed box girders. The developed local stress model is applied to optimizing the cross section of a two-cell box girder. The authors point to the advantages of implementing the local stress model in the optimization process and conclude that the technological reserve of bearing capacity amounts to 20% at the same girder weight and constant load conditions.

  19. A Molecular Study of Microbe Transfer between Distant Environments

    PubMed Central

    Hooper, Sean D.; Raes, Jeroen; Foerstner, Konrad U.; Harrington, Eoghan D.; Dalevi, Daniel; Bork, Peer

    2008-01-01

    Background: Environments and their organic content are generally not static and isolated, but in a constant state of exchange and interaction with each other. Through physical or biological processes, organisms, especially microbes, may be transferred between environments whose characteristics may be quite different. The transferred microbes may not survive in their new environment, but their DNA will be deposited. In this study, we compare two environmental sequencing projects to find molecular evidence of transfer of microbes over vast geographical distances. Methodology: By studying synonymous nucleotide composition, oligomer frequency and orthology between predicted genes in metagenomics data from two environments, terrestrial and aquatic, and by correlating with phylogenetic mappings, we find that both environments are likely to contain trace amounts of microbes which have been far removed from their original habitat. We also suggest a bias in direction from soil to sea, which is consistent with the cycles of planetary wind and water. Conclusions: Our findings support the Baas-Becking hypothesis formulated in 1934, which states that due to dispersion and population sizes, microbes are likely to be found in widely disparate environments. Furthermore, the availability of genetic material from distant environments is a possible font of novel gene functions for lateral gene transfer. PMID:18612393
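
    The oligomer-frequency part of such an analysis reduces to comparing k-mer frequency vectors. Below is a minimal sketch using tetranucleotides and a Pearson correlation as a crude compositional-similarity signal; the randomly generated sequences stand in for real metagenomic reads.

    ```python
    # A minimal sketch of oligomer (tetranucleotide) frequency comparison
    # between two sequences; sequences are invented for illustration.
    from itertools import product
    import random

    def kmer_freqs(seq, k=4):
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
        total = max(1, sum(counts.values()))
        return [c / total for c in counts.values()]

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy)

    random.seed(1)
    soil = "".join(random.choices("ACGT", weights=[3, 2, 2, 3], k=5000))
    sea = "".join(random.choices("ACGT", weights=[3, 2, 2, 3], k=5000))
    print(f"tetranucleotide correlation: {pearson(kmer_freqs(soil), kmer_freqs(sea)):.3f}")
    ```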

  20. Ligand- and receptor-based docking with LiBELa

    NASA Astrophysics Data System (ADS)

    dos Santos Muniz, Heloisa; Nascimento, Alessandro S.

    2015-08-01

    Methodologies for molecular docking are constantly improving. The problem consists in finding an optimal interplay between computational cost and a satisfactory physical description of the ligand-receptor interaction. In pursuit of an advance over current methods, we developed a mixed docking approach combining ligand- and receptor-based strategies in a single docking engine, where three-dimensional descriptors for the shape and charge distribution of a reference ligand guide the initial placement of the docking molecule, and an interaction energy-based global minimization follows. This hybrid docking was evaluated with soft-core and force field potentials, taking into account ligand pose and scoring. Our approach was found to be competitive with purely receptor-based docking, yielding improved logAUC values when evaluated with DUD and DUD-E. Furthermore, the smoothed potential, as evaluated here, was not advantageous when ligand binding poses were compared with experimentally determined conformations. In conclusion, we show that docking with a combined ligand- and receptor-based strategy and a force field energy model results in good reproduction of binding poses and good enrichment of active molecules against decoys. This strategy is implemented in our tool, LiBELa, which is available to the scientific community.
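
    Soft-core potentials of the kind evaluated above are typically standard pair potentials with the short-range singularity smoothed away. The sketch below contrasts a 12-6 Lennard-Jones term with one illustrative soft-core variant; the delta-shifted r^6 form and its parameters are assumptions for illustration, not necessarily LiBELa's exact functional form.

    ```python
    # A minimal sketch of a soft-core Lennard-Jones variant: shifting r^6 by
    # delta^6 caps the short-range repulsion so that clashing poses are not
    # rejected outright. Functional form and parameters are illustrative.
    import numpy as np

    def lj(r, epsilon=0.2, sigma=3.4):
        """Standard 12-6 Lennard-Jones potential (kcal/mol, angstroms)."""
        sr6 = (sigma / r) ** 6
        return 4 * epsilon * (sr6 ** 2 - sr6)

    def soft_core_lj(r, epsilon=0.2, sigma=3.4, delta=1.5):
        """Soft-core variant: finite even as r approaches zero."""
        sr6 = sigma ** 6 / (r ** 6 + delta ** 6)
        return 4 * epsilon * (sr6 ** 2 - sr6)

    r = np.array([0.5, 1.0, 3.0, 3.8, 6.0])
    print("hard :", np.round(lj(r), 2))
    print("soft :", np.round(soft_core_lj(r), 2))
    ```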
