Sample records for methods provide consistent

  1. WEIGHT OF EVIDENCE IN ECOLOGICAL ASSESSMENT

    EPA Science Inventory

    This document provides guidance on methods for weighing ecological evidence using a standard framework consisting of three steps: assemble evidence, weigh evidence, and weigh the body of evidence. Use of the methods will improve the consistency and reliability of WoE-based asse...

  2. Safer Conception Methods and Counseling: Psychometric Evaluation of New Measures of Attitudes and Beliefs Among HIV Clients and Providers.

    PubMed

    Woldetsadik, Mahlet Atakilt; Goggin, Kathy; Staggs, Vincent S; Wanyenze, Rhoda K; Beyeza-Kashesya, Jolly; Mindry, Deborah; Finocchario-Kessler, Sarah; Khanakwa, Sarah; Wagner, Glenn J

    2016-06-01

    With data from 400 HIV clients with fertility intentions and 57 HIV providers in Uganda, we evaluated the psychometrics of new client and provider scales measuring constructs related to safer conception methods (SCM) and safer conception counseling (SCC). Several forms of validity (i.e., content, face, and construct validity) were examined using standard methods including exploratory and confirmatory factor analysis. Internal consistency was established using Cronbach's alpha. The final scales consisted of measures of attitudes towards use of SCM and delivery of SCC, including measures of self-efficacy and motivation to use SCM, and perceived community stigma towards childbearing. Most client and all provider measures had moderate to high internal consistency (alphas 0.60-0.94), most had convergent validity (associations with other SCM- or SCC-related measures), and client measures had divergent validity (poor associations with depression). These findings establish preliminary psychometric properties of these scales and should facilitate future studies of SCM and SCC.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and operates at real-time rates.
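    The greedy, error-driven refinement loop this abstract describes is compact enough to sketch. Below is a minimal Python illustration; the triangle representation, the `split` callback, and the error projection are assumptions made for the example, not the authors' data structures, and the neighbor-split propagation needed to keep the mesh crack-free is deliberately omitted.

```python
import heapq

def refine(roots, world_error, to_screen_error, split, budget):
    """Greedy top-down refinement: always split the triangle whose
    projected (screen-space) error is largest, until `budget` triangles.

    roots           -- triangle ids of the coarsest approximation
    world_error     -- dict: triangle id -> precomputed world-space error
    to_screen_error -- camera-dependent map: world error -> screen error
    split           -- callback: triangle id -> its two child triangle ids
    budget          -- target triangle count for the output mesh
    """
    # heapq is a min-heap, so negate errors to pop the worst triangle first.
    heap = [(-to_screen_error(world_error[t]), t) for t in roots]
    heapq.heapify(heap)
    mesh = set(roots)
    while heap and len(mesh) < budget:
        _, tri = heapq.heappop(heap)
        mesh.discard(tri)            # parent is replaced by its children,
        for child in split(tri):     # so each split adds one net triangle
            mesh.add(child)
            heapq.heappush(heap, (-to_screen_error(world_error[child]), child))
    return mesh
```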

  4. A proof of the DBRF-MEGN method, an algorithm for deducing minimum equivalent gene networks

    PubMed Central

    2011-01-01

    Background We previously developed the DBRF-MEGN (difference-based regulation finding-minimum equivalent gene network) method, which deduces the most parsimonious signed directed graphs (SDGs) consistent with expression profiles of single-gene deletion mutants. However, until the present study, we have not presented the details of the method's algorithm or a proof of the algorithm. Results We describe in detail the algorithm of the DBRF-MEGN method and prove that the algorithm deduces all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. Conclusions The DBRF-MEGN method provides all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. PMID:21699737

  5. Purine Inhibitors Of Protein Kinases, G Proteins And Polymerases

    DOEpatents

    Gray, Nathanael S.; Schultz, Peter; Kim, Sung-Hou; Meijer, Laurent

    2003-09-09

    The invention provides compounds having the structure ##STR1## where R¹ is a member selected from the group consisting of H and NH₂; R² is a member selected from the group consisting of H, CO₂H, OH and halogen; and R³ is a member selected from the group consisting of CO₂H, NH₂ and halogen. Also provided are methods of using the compounds and formulations containing the compounds.

  6. Qualitative Versus Quantitative Mammographic Breast Density Assessment: Applications for the US and Abroad

    PubMed Central

    Destounis, Stamatia; Arieno, Andrea; Morgan, Renee; Roberts, Christina; Chan, Ariane

    2017-01-01

    Mammographic breast density (MBD) has been proven to be an important risk factor for breast cancer and an important determinant of mammographic screening performance. The measurement of density has changed dramatically since its inception. Initial qualitative measurement methods have been found to have limited consistency between readers and with regard to breast cancer risk. Following the introduction of full-field digital mammography, more sophisticated measurement methodology is now possible. Automated computer-based density measurements can provide consistent, reproducible, and objective results. In this review paper, we describe various methods currently available to assess MBD, and provide a discussion on the clinical utility of such methods for breast cancer screening. PMID:28561776

  7. Configuration interaction singles natural orbitals: An orbital basis for an efficient and size intensive multireference description of electronic excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Yinan; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu; Hohenstein, Edward G.

    2015-01-14

    Multireference quantum chemical methods, such as the complete active space self-consistent field (CASSCF) method, have long been the state of the art for computing regions of potential energy surfaces (PESs) where complex, multiconfigurational wavefunctions are required, such as near conical intersections. Herein, we present a computationally efficient alternative to the widely used CASSCF method based on a complete active space configuration interaction (CASCI) expansion built from the state-averaged natural orbitals of configuration interaction singles calculations (CISNOs). This CISNO-CASCI approach is shown to predict vertical excitation energies of molecules with closed-shell ground states similar to those predicted by state averaged (SA)-CASSCF in many cases and to provide an excellent reference for a perturbative treatment of dynamic electron correlation. Absolute energies computed at the CISNO-CASCI level are found to be variationally superior, on average, to other CASCI methods. Unlike SA-CASSCF, CISNO-CASCI provides vertical excitation energies which are both size intensive and size consistent, thus suggesting that CISNO-CASCI would be preferable to SA-CASSCF for the study of systems with multiple excitable centers. The fact that SA-CASSCF and some other CASCI methods do not provide a size intensive/consistent description of excited states is attributed to changes in the orbitals that occur upon introduction of non-interacting subsystems. Finally, CISNO-CASCI is found to provide a suitable description of the PES surrounding a biradicaloid conical intersection in ethylene.

  8. Service and Methods Demonstration Program Annual Report

    DOT National Transportation Integrated Search

    1979-08-01

    The Urban Mass Transportation Administration (UMTA) Service and Methods Demonstration (SMD) Program was established in 1974 to provide a consistent and comprehensive framework within which innovative transportation management techniques and transit s...

  9. Service and Methods Demonstration Program Annual Report - Executive Summary

    DOT National Transportation Integrated Search

    1978-08-01

    The Urban Mass Transportation Administration (UMTA) Service and Methods Demonstration (SMD) Program was established in 1974 to provide a consistent and comprehensive framework within which innovative transportation management techniques and transit se...

  10. The Second Report on the State of the World's Animal Genetic Resources for Food and Agriculture, Part 4, The State of the Art: Box 4A4: A digital enumeration method for collecting phenotypic data for genome association

    USDA-ARS?s Scientific Manuscript database

    Consistent data across animal populations are required to inform genomic science aimed at finding important adaptive genetic variations. The ADAPTMap Digital Phenotype Collection–Prototype Method will yield a new procedure to provide consistent phenotypic data by digital enumeration of categorical ...

  11. Monitoring the quality consistency of Fufang Danshen Pills using micellar electrokinetic chromatography fingerprint coupled with prediction of antioxidant activity and chemometrics.

    PubMed

    Ji, Zhengchao; Sun, Wanyang; Sun, Guoxiang; Zhang, Jin

    2016-08-01

    A fast micellar electrokinetic chromatography fingerprint method combined with quantification was developed and validated to evaluate the quality of Fufang Danshen Pills, a traditional Chinese medicine used in the treatment of cardiovascular diseases. The tetrahedron optimization method was first used to optimize the background electrolyte solution. Subsequently, the fingerprint information amount index (I) served as an objective indicator for selecting the experimental conditions. In addition, a systematically quantified fingerprint method was constructed for evaluating the quality consistency of 20 batches of test samples obtained from the same drug manufacturer. The fingerprint analysis, combined with quantitative determination of two components, showed that the quality consistency of the test samples was quite good within the same commercial brand. Furthermore, partial least squares model analysis was used to explore the fingerprint-efficacy relationship between active components and antioxidant activity in vitro, which can be applied to assess the antioxidant activity of Fufang Danshen Pills and provide valuable medicinal information for quality control. The results illustrate that the present study provides a reliable and reasonable method for monitoring the quality consistency of Fufang Danshen Pills. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
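    For readers unfamiliar with the fingerprint-efficacy step described above, the sketch below shows the shape of such a partial least squares regression: fingerprint peak areas for each batch as predictors, measured antioxidant activity as the response. All numbers are synthetic stand-ins, and scikit-learn's PLSRegression is used in place of whatever chemometrics software the authors employed.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
peaks = rng.normal(size=(20, 15))     # 20 batches x 15 fingerprint peaks
# Synthetic "activity": driven mostly by the first three peaks, plus noise.
activity = peaks[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.1, 20)

pls = PLSRegression(n_components=3).fit(peaks, activity)
# Peaks with large absolute coefficients are the candidates most strongly
# linked to antioxidant activity.
print(np.round(pls.coef_.ravel(), 2))
```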

  12. Multilevel Modeling with Correlated Effects

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Frees, Edward W.

    2007-01-01

    When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
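    A textbook two-level model makes the point concrete (generic notation, not the authors'):

```latex
\[
y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j + \varepsilon_{ij},
\qquad i = 1,\dots,n_j, \quad j = 1,\dots,J .
\]
```

    Standard multilevel estimators require \(\mathbb{E}[u_j \mid \mathbf{x}_{ij}] = 0\); when an omitted cluster-level variable, measurement error, or simultaneity makes \(u_j\) correlated with \(\mathbf{x}_{ij}\), that assumption fails and the usual estimates of \(\boldsymbol{\beta}\) are inconsistent.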

  13. Mental constructs and the cognitive reconstruction of the Berlin wall.

    PubMed

    Tijus, C A; Santolini, A

    1996-07-01

    In this study of how to change people's conceptions of certain facts (i.e., the position of the Berlin Wall), a surprising psychological phenomenon was discovered. In the trial test, instead of designing a wall to enclose West Berlin, most people described and drew a short and straight wall that divided the city from north to south. Two methods were created, based on two general information-processing components involved in problem solving, to study how people might repair their misconceptions by themselves. The do-it-yourself method consisted of providing people with the task of thinking about how to build the wall and then drawing it, instead of just asking them to draw it. The distance-to-goal evaluation method consisted of asking the participants how the wall they had drawn would actually prevent passage from East Germany to West Berlin. The results showed that both methods had important effects in repairing misconceptions, but improvement in performance with the distance-to-goal method was less significant for those participants who were first provided the task of thinking about how to build the wall. These findings are consistent with the hypothesis that awareness of functional properties plays an important role in structuring and restructuring mental constructs.

  14. Method for Predicting Thermal Buckling in Rails

    DOT National Transportation Integrated Search

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  15. An automatic iterative decision-making method for intuitionistic fuzzy linguistic preference relations

    NASA Astrophysics Data System (ADS)

    Pei, Lidan; Jin, Feifei; Ni, Zhiwei; Chen, Huayou; Tao, Zhifu

    2017-10-01

    As a new preference structure, the intuitionistic fuzzy linguistic preference relation (IFLPR) was recently introduced to efficiently deal with situations in which the membership and non-membership are represented as linguistic terms. In this paper, we study the issues of additive consistency and the derivation of the intuitionistic fuzzy weight vector of an IFLPR. First, the new concepts of order consistency, additive consistency and weak transitivity for IFLPRs are introduced, followed by a discussion of the characterisation of additive consistent IFLPRs. Then, a parameterised transformation approach is investigated to convert the normalised intuitionistic fuzzy weight vector into additive consistent IFLPRs. After that, a linear optimisation model is established to derive the normalised intuitionistic fuzzy weights for IFLPRs, and a consistency index is defined to measure the deviation degree between an IFLPR and its additive consistent IFLPR. Furthermore, we develop an automatic iterative decision-making method to improve IFLPRs with unacceptable additive consistency until the adjusted IFLPRs are acceptably additive consistent, which helps the decision-maker obtain reasonable and reliable results. Finally, an illustrative example is provided to demonstrate the validity and applicability of the proposed method.

  16. Batch compositions for cordierite ceramics

    DOEpatents

    Hickman, David L.

    1994-07-26

    Ceramic products consisting principally of cordierite and a method for making them are provided, the method employing batches comprising a mineral component and a chemical component, the mineral component comprising clay and talc and the chemical component consisting essentially of a combination of the powdered oxides, hydroxides, or hydrous oxides of magnesium, aluminum and silicon. Ceramics made by extrusion and firing of the batches can exhibit low porosity, high strength and low thermal expansion coefficients.

  17. Apparatus and method for production of methanethiol

    DOEpatents

    Agarwal, Pradeep K.; Linjewile, Temi M.; Hull, Ashley S.; Chen, Zumao

    2006-02-07

    A method for the production of methyl mercaptan is provided. The method comprises providing raw feed gases consisting of methane and hydrogen sulfide, introducing the raw feed gases into a non-thermal pulsed plasma corona reactor, and reacting the raw feed gases within the non-thermal pulsed plasma corona reactor according to the reaction CH₄ + H₂S → CH₃SH + H₂. An apparatus for the production of methyl mercaptan using a non-thermal pulsed plasma corona reactor is also provided.

  18. Combined Heat and Power Protocol for Uniform Methods Project

    Science.gov Websites

    NREL developed a protocol that provides a … consistent with the scope and other protocols developed for the Uniform Methods Project (UMP).

  19. Off-surface infrared flow visualization

    NASA Technical Reports Server (NTRS)

    Manuel, Gregory S. (Inventor); Obara, Clifford J. (Inventor); Daryabeigi, Kamran (Inventor); Alderfer, David W. (Inventor)

    1993-01-01

    A method for visualizing off-surface flows is provided. The method consists of releasing a gas with infrared absorbing and emitting characteristics into a fluid flow and imaging the flow with an infrared imaging system. This method allows for visualization of off-surface fluid flow in-flight. The novelty of this method is found in providing an apparatus for flow visualization which is contained within the aircraft so as not to disrupt the airflow around the aircraft, is effective at various speeds and altitudes, and is longer-lasting than previous methods of flow visualization.

  20. Probability machines: consistent probability estimation using nonparametric learning machines.

    PubMed

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
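    The abstract's central idea translates directly into code: apply a learner that is consistent for nonparametric regression to a 0/1 response and read its predictions as probabilities. The sketch below uses scikit-learn and synthetic data rather than the R packages the authors reference; the dataset and settings are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A regression forest on the binary labels: leaf averages of 0/1 outcomes
# act as estimates of P(Y = 1 | x), not hard class votes.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
prob = rf.predict(X_te)   # values in [0, 1], interpretable as risks
print(prob[:5])
```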

  1. Methods for Improving Consistency between Statewide and Regional Planning Models.

    DOT National Transportation Integrated Search

    2017-12-01

    Given the difference in scope of statewide and MPO models, inconsistencies between the two levels of modelling are inevitable. There are, however, methods to reduce these inconsistencies. This research provides insight into the current practices of s...

  2. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), together with technical, radiological and statistical experts, developed a set of technical performance analysis methods, metrics and study designs that provide terminology, metrics and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted or combined. PMID:24919831
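    As one concrete example of the kind of metric such a framework standardizes (a common convention in the metrology literature, not a formula quoted from this document), the repeatability coefficient (RC) computed from test-retest replicates is

```latex
\[
\mathrm{RC} = 1.96\sqrt{2}\,\sigma_w \approx 2.77\,\sigma_w ,
\]
```

    where \(\sigma_w\) is the within-subject standard deviation; when the measurand is unchanged, the absolute difference between two repeat measurements on the same subject should exceed RC only about 5% of the time.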

  3. Method and apparatus for fabricating a composite structure consisting of a filamentary material in a metal matrix

    DOEpatents

    Banker, J.G.; Anderson, R.C.

    1975-10-21

    A method and apparatus are provided for preparing a composite structure consisting of filamentary material within a metal matrix. The method is practiced by the steps of confining the metal for forming the matrix in a first chamber, heating the confined metal to a temperature adequate to effect melting thereof, introducing a stream of inert gas into the chamber for pressurizing the atmosphere in the chamber to a pressure greater than atmospheric pressure, confining the filamentary material in a second chamber, heating the confined filamentary material to a temperature less than the melting temperature of the metal, evacuating the second chamber to provide an atmosphere therein at a reduced pressure, placing the second chamber in registry with the first chamber to provide for the forced flow of the molten metal into the second chamber to effect infiltration of the filamentary material with the molten metal, and thereafter cooling the metal-infiltrated filamentary material to form said composite structure.

  4. Transition of Premature Infants From Hospital to Home Life

    PubMed Central

    Lopez, Greta L.; Anderson, Kathryn Hoehn; Feutchinger, Johanna

    2013-01-01

    Purpose To conduct an integrative review of the literature on the transition of premature infants from the neonatal intensive care unit (NICU) to home. Method A literature search was performed in the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, and MEDLINE to identify studies on the transition of premature infants from hospital to home life. Results The search yielded seven articles that emphasized the need for home visits, child and family assessment methods, methods of keeping contact with health care providers, and educational and support groups, and described the nurse's role in the transition program. The strategy for easing the transition differed in each article. Conclusion Home visits by a nurse were a key component, providing education, support, and nursing care. A transition program therefore should provide parents of premature infants with home visits by a nurse or ongoing contact with a nurse (e.g., via videoconference). PMID:22763247

  5. STEM VQ Method, Using Scanning Transmission Electron Microscopy (STEM) for Accurate Virus Quantification

    DTIC Science & Technology

    2017-02-02

    Accurate virus quantification is sought, but a perfect method still eludes the scientific community. Electron microscopy (EM) … provides morphology data and counts all viral particles, including partial or noninfectious particles; however, EM methods … consistent, reproducible virus quantification method called Scanning Transmission Electron Microscopy - Virus Quantification (STEM-VQ), which simplifies …

  6. Molecular Simulation of the Phase Diagram of Methane Hydrate: Free Energy Calculations, Direct Coexistence Method, and Hyperparallel Tempering.

    PubMed

    Jin, Dongliang; Coasne, Benoit

    2017-10-24

    Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.

  7. ALLOY COATINGS AND METHOD OF APPLYING

    DOEpatents

    Eubank, L.D.; Boller, E.R.

    1958-08-26

    A method for providing uranium articles with a protective coating by a single dip-coating process is presented. The uranium article is dipped into a molten zinc bath containing a small percentage of aluminum. The resultant product is a uranium article covered with a thin undercoat consisting of a uranium-aluminum alloy with a small amount of zinc, and an outer layer consisting of zinc and aluminum. The article may be used as is, or aluminum sheathing may then be bonded to the aluminum-zinc outer layer.

  8. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C Reed [Westminster, CO

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  9. Introduction to Field Water-Quality Methods for the Collection of Metals - 2007 Project Summary

    USGS Publications Warehouse

    Allen, Monica L.

    2008-01-01

    The U.S. Geological Survey (USGS), Region VI of the U.S. Environmental Protection Agency (USEPA), and the Osage Nation presented three 3-day workshops, in June-August 2007, entitled "Introduction to Field Water-Quality Methods for the Collection of Metals." The purpose of the workshops was to provide instruction to tribes within USEPA Region VI on various USGS surface-water measurement methods and water-quality sampling protocols for the collection of surface-water samples for metals analysis. Workshop attendees included members from over 22 tribes and pueblos. USGS instructors came from Oklahoma, New Mexico, and Georgia. Workshops were held in eastern and south-central Oklahoma and New Mexico and covered many topics including presampling preparation, water-quality monitors, and sampling for metals in surface water. Attendees spent one full classroom day learning the field methods used by the USGS Water Resources Discipline and learning about the complexity of obtaining valid water-quality and quality-assurance data. Lectures included (1) a description of metal contamination sources in surface water; (2) introduction on how to select field sites, equipment, and laboratories for sample analysis; (3) collection of sediment in surface water; and (4) utilization of proper protocol and methodology for sampling metals in surface water. Attendees also were provided USGS sampling equipment for use during the field portion of the class so they had actual "hands-on" experience to take back to their own organizations. The final 2 days of the workshop consisted of field demonstrations of current USGS water-quality sample-collection methods. The hands-on training ensured that attendees were exposed to and experienced proper sampling procedures. Attendees learned integrated-flow techniques during sample collection, field-property documentation, and discharge measurements and calculations. They also used enclosed chambers for sample processing and collected quality-assurance samples to verify their techniques. Benefits of integrated water-quality sample-collection methods are varied. Tribal environmental programs now have the ability to collect data that are comparable across watersheds. The use of consistent sample collection, manipulation, and storage techniques will provide consistent quality data that will enhance the understanding of local water resources. The improved data quality also will help the USEPA better document the condition of the region's water. Ultimately, these workshops equipped tribes to use uniform sampling methods and to provide consistent quality data that are comparable across the region.

  10. Land management planning: a method of evaluating alternatives

    Treesearch

    Andres Weintraub; Richard Adams; Linda Yellin

    1982-01-01

    A method is described for developing and evaluating alternatives in land management planning. A structured set of 15 steps provides a framework for such an evaluation when multiple objectives and uncertainty must be considered in the planning process. The method is consistent with other processes used in organizational evaluation, and allows for the interaction of...

  11. Validation of a Sulfuric Acid Digestion Method for Inductively Coupled Plasma Mass Spectrometry Quantification of TiO2 Nanoparticles.

    PubMed

    Watkins, Preston S; Castellon, Benjamin T; Tseng, Chiyen; Wright, Moncie V; Matson, Cole W; Cobb, George P

    2018-04-13

    A consistent analytical method incorporating sulfuric acid (H₂SO₄) digestion and ICP-MS quantification has been developed for TiO₂ quantification in biotic and abiotic environmentally relevant matrices. Sample digestion in H₂SO₄ at 110°C provided consistent results without using hydrofluoric acid or microwave digestion. Analysis of seven replicate samples for four matrices on each of 3 days produced Ti recoveries of 97% ± 2.5%, 91% ± 4.0%, 94% ± 1.8%, and 73% ± 2.6% (mean ± standard deviation) from water, fish tissue, periphyton, and sediment, respectively. The method demonstrated consistent performance in analysis of water collected over a 1-month period.

  12. An Innovative Method for Obtaining Consistent Images and Quantification of Histochemically Stained Specimens

    PubMed Central

    Sedgewick, Gerald J.; Ericson, Marna

    2015-01-01

    Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568

  13. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
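    In generic notation (not the paper's), the two classical conditions named in the abstract can be written down for an order-two AR model. The model and its stability condition are

```latex
\[
x_{n+1} = a_1 x_n + a_2 x_{n-1} + \eta_{n+1},
\qquad \text{stable} \iff \text{both roots of } z^2 - a_1 z - a_2 = 0
\text{ lie inside the unit circle},
\]
```

    while order-two Adams–Bashforth consistency refers to matching the explicit two-step scheme for \(\dot{x} = f(x)\),

```latex
\[
x_{n+1} = x_n + \Delta t \left( \tfrac{3}{2} f(x_n) - \tfrac{1}{2} f(x_{n-1}) \right),
\]
```

    so the admissible coefficients must satisfy both sets of constraints at once; per the abstract, the algebraic method supplies a time step interval on which such coefficients are guaranteed to exist.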

  14. Discovery through maps: Exploring real-world applications of ecosystem services

    EPA Science Inventory

    Background/Question/Methods U.S. EPA’s EnviroAtlas provides a collection of interactive tools and resources for exploring ecosystem goods and services. The purpose of EnviroAtlas is to provide better access to consistently derived ecosystems and socio-economic data to facil...

  15. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  16. Human jagged polypeptide, encoding nucleic acids and methods of use

    DOEpatents

    Li, Linheng; Hood, Leroy

    2000-01-01

    The present invention provides an isolated polypeptide exhibiting substantially the same amino acid sequence as JAGGED, or an active fragment thereof, provided that the polypeptide does not have the amino acid sequence of SEQ ID NO:5 or SEQ ID NO:6. The invention further provides an isolated nucleic acid molecule containing a nucleotide sequence encoding substantially the same amino acid sequence as JAGGED, or an active fragment thereof, provided that the nucleotide sequence does not encode the amino acid sequence of SEQ ID NO:5 or SEQ ID NO:6. Also provided herein is a method of inhibiting differentiation of hematopoietic progenitor cells by contacting the progenitor cells with an isolated JAGGED polypeptide, or active fragment thereof. The invention additionally provides a method of diagnosing Alagille Syndrome in an individual. The method consists of detecting an Alagille Syndrome disease-associated mutation linked to a JAGGED locus.

  17. Methods of diagnosing alagille syndrome

    DOEpatents

    Li, Linheng; Hood, Leroy; Krantz, Ian D.; Spinner, Nancy B.

    2004-03-09

    The present invention provides an isolated polypeptide exhibiting substantially the same amino acid sequence as JAGGED, or an active fragment thereof, provided that the polypeptide does not have the amino acid sequence of SEQ ID NO:5 or SEQ ID NO:6. The invention further provides an isolated nucleic acid molecule containing a nucleotide sequence encoding substantially the same amino acid sequence as JAGGED, or an active fragment thereof, provided that the nucleotide sequence does not encode the amino acid sequence of SEQ ID NO:5 or SEQ ID NO:6. Also provided herein is a method of inhibiting differentiation of hematopoietic progenitor cells by contacting the progenitor cells with an isolated JAGGED polypeptide, or active fragment thereof. The invention additionally provides a method of diagnosing Alagille Syndrome in an individual. The method consists of detecting an Alagille Syndrome disease-associated mutation linked to a JAGGED locus.

  18. Lithium-air batteries, method for making lithium-air batteries

    DOEpatents

    Vajda, Stefan; Curtiss, Larry A.; Lu, Jun; Amine, Khalil; Tyo, Eric C.

    2016-11-15

    The invention provides a method for generating Li₂O₂ or composites thereof; the method mixes lithium ions with oxygen ions in the presence of a catalyst. The catalyst comprises a plurality of metal clusters, their alloys and mixtures, each cluster consisting of between 3 and 18 metal atoms. The invention also describes a lithium-air battery which uses a lithium metal anode and an opposing cathode. The cathode supports size-selected metal clusters, each consisting of between approximately 3 and approximately 18 metal atoms, and an electrolyte is positioned between the anode and the cathode.

  19. Electrochemical systems and methods using metal halide to form products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albrecht, Thomas A.; Solas, Dennis; Leclerc, Margarete K.

    There are provided electrochemical methods and systems to form one or more organic compounds or enantiomers thereof selected from the group consisting of substituted or unsubstituted dioxane, substituted or unsubstituted dioxolane, dichloroethylether, dichloromethyl methyl ether, dichloroethyl methyl ether, chloroform, carbon tetrachloride, phosgene, and combinations thereof.

  20. Method of waste stabilization via chemically bonded phosphate ceramics

    DOEpatents

    Wagh, Arun S.; Singh, Dileep; Jeong, Seung-Young

    1998-01-01

    A method for regulating the reaction temperature of a ceramic formulation process is provided comprising supplying a solution containing a monovalent alkali metal; mixing said solution with an oxide powder to create a binder; contacting said binder with bulk material to form a slurry; and allowing the slurry to cure. A highly crystalline waste form is also provided consisting of a binder containing potassium and waste substrate encapsulated by the binder.

  1. Method of waste stabilization via chemically bonded phosphate ceramics

    DOEpatents

    Wagh, A.S.; Singh, D.; Jeong, S.Y.

    1998-11-03

    A method for regulating the reaction temperature of a ceramic formulation process is provided comprising supplying a solution containing a monovalent alkali metal; mixing said solution with an oxide powder to create a binder; contacting said binder with bulk material to form a slurry; and allowing the slurry to cure. A highly crystalline waste form is also provided consisting of a binder containing potassium and waste substrate encapsulated by the binder. 3 figs.

  2. Sugarbeet root maggot resistance from a red globe-shaped beet (PI 179180)

    USDA-ARS?s Scientific Manuscript database

    Sugarbeet root maggot (Tetanops myopaeformis) is a major insect pest of sugarbeet (Beta vulgaris) in many North American production areas. Chemical insecticides have been the primary control method. Host-plant resistance that provides consistent reliable control would provide both an economical and ...

  3. 10 CFR 436.35 - Standard terms and conditions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION FEDERAL ENERGY MANAGEMENT AND PLANNING PROGRAMS Methods... acquisition of energy conservation measures; (4) Providing for an annual energy audit and identifying who shall conduct such an audit, consistent with § 436.37 of this subpart; and (5) Providing for a guarantee...

  4. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When complex networks realize synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper changes the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  5. Oncoprotein protein kinase

    DOEpatents

    Karin, Michael; Hibi, Masahiko; Lin, Anning

    2002-01-29

    The present invention provides an isolated polynucleotide encoding a c-Jun peptide consisting of about amino acid residues 33 to 79 as set forth in SEQ ID NO:10 or conservative variations thereof. The invention also provides a method for producing a peptide of SEQ ID NO:1 comprising (a) culturing a host cell containing a polynucleotide encoding a c-Jun peptide consisting of about amino acid residues 33 to 79 as set forth in SEQ ID NO:10 under conditions which allow expression of the polynucleotide; and (b) obtaining the peptide of SEQ ID NO:1.

  6. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  7. 26 CFR 1.460-5 - Cost allocation rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... rules. (a) Overview. This section prescribes methods of allocating costs to long-term contracts... section provides rules concerning consistency in method of allocating costs to long-term contracts. (b... paragraph (b)(2) of this section, a taxpayer must allocate costs to each long-term contract subject to the...

  8. 26 CFR 1.460-5 - Cost allocation rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... rules. (a) Overview. This section prescribes methods of allocating costs to long-term contracts... section provides rules concerning consistency in method of allocating costs to long-term contracts. (b... paragraph (b)(2) of this section, a taxpayer must allocate costs to each long-term contract subject to the...

  9. 26 CFR 1.460-5 - Cost allocation rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... rules. (a) Overview. This section prescribes methods of allocating costs to long-term contracts... section provides rules concerning consistency in method of allocating costs to long-term contracts. (b... paragraph (b)(2) of this section, a taxpayer must allocate costs to each long-term contract subject to the...

  10. 26 CFR 1.460-5 - Cost allocation rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... rules. (a) Overview. This section prescribes methods of allocating costs to long-term contracts... section provides rules concerning consistency in method of allocating costs to long-term contracts. (b... paragraph (b)(2) of this section, a taxpayer must allocate costs to each long-term contract subject to the...

  11. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    PubMed

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

    The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even if for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover most of the methods proposed in literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, that consists in a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO terms predictions starting from virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.
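    To make the consistency requirement concrete: the true path rule demands that a term's score never exceed any of its ancestors' scores. A minimal post-processing baseline (illustrative only, not the authors' two-step ensemble) raises each ancestor to the maximum over its descendants:

```python
def true_path_consistent(scores, children, roots):
    """Return scores in which every parent >= max of its children.

    scores   -- dict: ontology term -> flat prediction score
    children -- dict: term -> list of child terms in the ontology DAG
    roots    -- top-level terms to start the bottom-up propagation from
    """
    fixed = dict(scores)

    def visit(term):
        best = fixed.get(term, 0.0)
        for child in children.get(term, ()):  # children may be shared (DAG)
            best = max(best, visit(child))
        fixed[term] = best
        return best

    for root in roots:
        visit(root)
    return fixed

# Toy ontology: HP:A is the root, with children HP:B and HP:C.
children = {"HP:A": ["HP:B", "HP:C"], "HP:B": [], "HP:C": []}
scores = {"HP:A": 0.2, "HP:B": 0.7, "HP:C": 0.1}
print(true_path_consistent(scores, children, ["HP:A"]))  # HP:A raised to 0.7
```

    Any flat scores passed through a propagation step of this kind obey the hierarchy by construction, which is the property the paper's ensembles guarantee.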

  12. Removal of phosphate from greenhouse wastewater using hydrated lime.

    PubMed

    Dunets, C Siobhan; Zheng, Youbin

    2014-01-01

    Phosphate (P) contamination in nutrient-laden wastewater is currently a major topic of discussion in the North American greenhouse industry. Precipitation of P as calcium phosphate minerals using hydrated lime could provide a simple, inexpensive method for retrieval. A combination of batch experiments and chemical equilibrium modelling was used to confirm the viability of this P removal method and determine lime addition rates and pH requirements for greenhouse wastewater of varying nutrient compositions. Lime:P ratio (molar ratio of CaMg(OH)₄:PO₄‒P) provided a consistent parameter for estimating lime addition requirements regardless of initial P concentration, with a ratio of 1.5 providing around 99% removal of dissolved P. Optimal P removal occurred when lime addition increased the pH from 8.6 to 9.0, suggesting that pH monitoring during the P removal process could provide a simple method for ensuring consistent adherence to P removal standards. A Visual MINTEQ model, validated using experimental data, provided a means of predicting lime addition and pH requirements as influenced by changes in other parameters of the lime-wastewater system (e.g. calcium concentration, temperature, and initial wastewater pH). Hydrated lime addition did not contribute to the removal of macronutrient elements such as nitrate and ammonium, but did decrease the concentration of some micronutrients. This study provides basic guidance for greenhouse operators to use hydrated lime for phosphate removal from greenhouse wastewater.
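    The reported lime:P molar ratio implies a simple dose calculation, sketched below. The 20 mg/L influent concentration is a made-up example input; the molar masses are standard values and the 1.5 ratio is taken from the abstract.

```python
M_P = 30.97      # g/mol, phosphorus (PO4-P is reported as P)
M_LIME = 132.43  # g/mol, dolomitic hydrated lime CaMg(OH)4
RATIO = 1.5      # mol lime per mol PO4-P, per the abstract

p_mg_per_L = 20.0                        # example: 20 mg/L dissolved P
p_mmol_per_L = p_mg_per_L / M_P          # mmol/L of P
lime_mg_per_L = RATIO * p_mmol_per_L * M_LIME
print(f"{lime_mg_per_L:.0f} mg/L hydrated lime")   # ~128 mg/L
```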

  13. Including quality attributes in efficiency measures consistent with net benefit: creating incentives for evidence based medicine in practice.

    PubMed

    Eckermann, Simon; Coelli, Tim

    2013-01-01

    Evidence based medicine supports net benefit maximising therapies and strategies in processes of health technology assessment (HTA) for reimbursement and subsidy decisions internationally. However, translation of evidence based medicine to practice is impeded by efficiency measures such as cost per case-mix adjusted separation in hospitals, which ignore health effects of care. In this paper we identify a correspondence method that allows quality variables under control of providers to be incorporated in efficiency measures consistent with maximising net benefit. Including effects framed from a disutility bearing (utility reducing) perspective (e.g. mortality, morbidity or reduction in life years) as inputs and minimising quality inclusive costs on the cost-disutility plane is shown to enable efficiency measures consistent with maximising net benefit under a one to one correspondence. The method combines advantages of radial properties with an appropriate objective of maximising net benefit to overcome problems of inappropriate objectives implicit with alternative methods, whether specifying quality variables with utility bearing output (e.g. survival, reduction in morbidity or life years), hyperbolic or exogenous variables. This correspondence approach is illustrated in undertaking efficiency comparison at a clinical activity level for 45 Australian hospitals allowing for their costs and mortality rates per admission. Explicit coverage and comparability conditions of the underlying correspondence method are also shown to provide a robust framework for preventing cost-shifting and cream-skimming incentives, with appropriate qualification of analysis and support for data linkage and risk adjustment where these conditions are not satisfied. Comparison on the cost-disutility plane has previously been shown to have distinct advantages in comparing multiple strategies in HTA, which this paper naturally extends to a robust method and framework for comparing efficiency of health care providers in practice. Consequently, the proposed approach provides a missing link between HTA and practice, to allow active incentives for evidence based net benefit maximisation in practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012

  15. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.

  16. Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.

    PubMed

    Heislbetz, Sandra; Rauhut, Guntram

    2010-03-28

    A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.

  17. Validation of the Combined Comorbidity Index of Charlson and Elixhauser to Predict 30-Day Mortality Across ICD-9 and ICD-10.

    PubMed

    Simard, Marc; Sirois, Caroline; Candas, Bernard

    2018-05-01

    To validate and compare the performance of an International Classification of Diseases, tenth revision (ICD-10) version of a combined comorbidity index merging the conditions of the Charlson and Elixhauser measures against the individual measures in the prediction of 30-day mortality. To select a weight derivation method providing optimal performance across ICD-9 and ICD-10 coding systems. Using 2 adult population-based cohorts of patients with hospital admissions in ICD-9 (2005, n=337,367) and ICD-10 (2011, n=348,820), we validated a combined comorbidity index by predicting 30-day mortality with logistic regression. To assess the performance of the combined index and both individual measures, factors affecting index performance, such as population characteristics and weight derivation methods, were accounted for. We applied 3 scoring methods (Van Walraven, Schneeweiss, and Charlson) and determined which provides the best predictive values. The combined index [c-statistic: 0.853 (95% confidence interval, CI: 0.848-0.856)] performed better than the original Charlson [0.841 (95% CI, 0.835-0.844)] or Elixhauser [0.841 (95% CI, 0.837-0.844)] measures on the ICD-10 cohort. All weight derivation methods provided similarly high discrimination for the combined index (Van Walraven: 0.852; Schneeweiss: 0.851; Charlson: 0.849). Results were consistent across both coding systems. The combined index remains valid with both ICD-9 and ICD-10 coding systems, and the 3 weight derivation methods evaluated provided consistently high performance across those coding systems.
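
    The discrimination statistic reported above can be reproduced in outline as follows. This is a hedged sketch on simulated placeholder data (scores X, 30-day mortality y), not the study's cohorts: it fits a logistic regression on a comorbidity score and computes the c-statistic as the area under the ROC curve.

        # Hedged sketch: c-statistic of a comorbidity score for 30-day mortality.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.poisson(3, size=(1000, 1)).astype(float)             # hypothetical scores
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * X[:, 0] - 4))))  # hypothetical deaths

        model = LogisticRegression().fit(X, y)
        c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"c-statistic = {c_stat:.3f}")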

  18. Monitoring for airborne allergens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burge, H.A.

    1992-07-01

    Monitoring for allergens can provide some information on the kinds and levels of exposure experienced by local patient populations, provided volumetric methods are used for sample collection and the analysis is accurate and consistent. Such data can also be used to develop standards for the specific environment and to begin to develop predictive models. Comparing outdoor allergen aerosols between different monitoring sites requires identical collection and analysis methods and some kind of rational standard, whether arbitrary or based on recognized health effects. 32 references.

  19. Providing Behavioral Feedback to Students in an Alternative High School Setting

    ERIC Educational Resources Information Center

    Whitcomb, Sara A.; Hefter, Sheera; Barker, Elizabeth

    2016-01-01

    This column provides an example method for improving the consistency and quality of daily behavioral feedback provided to students in an alternative high school setting. Often, homeroom or advisory periods are prime points in the day for students to review their behavior from the previous day and set goals for a successful day to come. The method…

  20. Performance of local orbital basis sets in the self-consistent Sternheimer method for dielectric matrices of extended systems

    NASA Astrophysics Data System (ADS)

    Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.

    2012-09-01

    We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference planewave calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with planewave calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.

  1. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
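
    For readers unfamiliar with the estimator under study, the following is a minimal sketch of inverse probability weighting for the average treatment effect on simulated placeholder data; no measurement error is introduced here, and the paper's corrections are not shown.

        # Hedged sketch of the Horvitz-Thompson IPW estimator of the ATE.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        x = rng.normal(size=(n, 1))                      # confounder
        a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))  # treatment
        y = 2.0 * a + x[:, 0] + rng.normal(size=n)       # error-free outcome

        e = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]  # propensity
        ate = np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))
        print(f"IPW ATE estimate: {ate:.2f} (truth: 2.0)")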

  2. Forest Soil Disturbance Monitoring Protocol: Volume II: Supplementary methods, statistics, and data collection

    Treesearch

    Deborah S. Page-Dumroese; Ann M. Abbott; Thomas M. Rice

    2009-01-01

    Volume I and volume II of the Forest Soil Disturbance Monitoring Protocol (FSDMP) provide information for a wide range of users, including technicians, field crew leaders, private landowners, land managers, forest professionals, and researchers. Volume I: Rapid Assessment includes the basic methods for establishing forest soil monitoring transects and consistently...

  3. Of Mice and Meth: A New Media-Based Neuropsychopharmacology Lab to Teach Research Methods

    ERIC Educational Resources Information Center

    Hatch, Daniel L.; Zschau, Tony; Hays, Arthur; McAllister, Kristin; Harrison, Michelle; Cate, Kelly L.; Shanks, Ryan A.; Lloyd, Steven A.

    2014-01-01

    This article describes an innovative neuropsychopharmacology laboratory that can be incorporated into any research methods class. The lab consists of a set of interconnected modules centered on observations of methamphetamine-induced behavioral changes in mice and is designed to provide students with an opportunity to acquire basic skills…

  4. Examining the impact of harmonic correlation on vibrational frequencies calculated in localized coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson-Heine, Magnus W. D., E-mail: magnus.hansonheine@nottingham.ac.uk

    Carefully choosing a set of optimized coordinates for performing vibrational frequency calculations can significantly reduce the anharmonic correlation energy from the self-consistent field treatment of molecular vibrations. However, moving away from normal coordinates also introduces an additional source of correlation energy arising from mode-coupling at the harmonic level. The impact of this new component of the vibrational energy is examined for a range of molecules, and a method is proposed for correcting the resulting self-consistent field frequencies by adding the full coupling energy from connected pairs of harmonic and pseudoharmonic modes, termed vibrational self-consistent field (harmonic correlation). This approach is found to lift the vibrational degeneracies arising from coordinate optimization and provides better agreement with experimental and benchmark frequencies than uncorrected vibrational self-consistent field theory without relying on traditional correlated methods.

  5. Emergency First Responders' Experience with Colorimetric Detection Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandra L. Fox; Keith A. Daum; Carla J. Miller

    2007-10-01

    Nationwide, first responders from state and federal support teams respond to hazardous materials incidents, industrial chemical spills, and potential weapons of mass destruction (WMD) attacks. Although first responders have sophisticated chemical, biological, radiological, and explosive detectors available for assessment of the incident scene, simple colorimetric detectors have a role in response actions. The large number of colorimetric chemical detection methods available on the market can make the selection of the proper methods difficult. Although each detector has unique aspects to provide qualitative or quantitative data about the unknown chemicals present, not all detectors provide consistent, accurate, and reliable results. Included here, in a consumer-report-style format, we provide “boots on the ground” information directly from first responders about how well colorimetric chemical detection methods meet their needs in the field and how they procure these methods.

  6. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-Fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  7. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work, we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field for estimating the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  8. Study on the criteria for assessing skull-face correspondence in craniofacial superimposition.

    PubMed

    Ibáñez, Oscar; Valsecchi, Andrea; Cavalli, Fabio; Huete, María Isabel; Campomanes-Alvarez, Blanca Rosario; Campomanes-Alvarez, Carmen; Vicente, Ricardo; Navega, David; Ross, Ann; Wilkinson, Caroline; Jankauskas, Rimantas; Imaizumi, Kazuhiko; Hardiman, Rita; Jayaprakash, Paul Thomas; Ruiz, Elena; Molinero, Francisco; Lestón, Patricio; Veselovskaya, Elizaveta; Abramov, Alexey; Steyn, Maryna; Cardoso, Joao; Humpire, Daniel; Lusnig, Luca; Gibelli, Daniele; Mazzarelli, Debora; Gaudio, Daniel; Collini, Federica; Damas, Sergio

    2016-11-01

    Craniofacial superimposition has the potential to be used as an identification method when other traditional biological techniques are not applicable due to insufficient quality or absence of ante-mortem and post-mortem data. Despite having been used in many countries as a method of inclusion and exclusion for over a century, it lacks standards. Thus, the purpose of this research is to provide forensic practitioners with standard criteria for analysing skull-face relationships. Thirty-seven experts from 16 different institutions participated in this study, which consisted of evaluating 65 criteria for assessing skull-face anatomical consistency on a sample of 24 different skull-face superimpositions. An unbiased statistical analysis established the most objective and discriminative criteria. Results did not show strong associations; however, they provide important insights for addressing the lack of standards. In addition, a novel methodology for understanding and standardizing identification methods based on the observation of morphological patterns has been proposed. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Nanoparticles for heat transfer and thermal energy storage

    DOEpatents

    Singh, Dileep; Cingarapu, Sreeram; Timofeeva, Elena V.; Moravek, Michael

    2015-07-14

    An article of manufacture and method of preparation thereof. The article of manufacture and the method of making it include a eutectic salt solution suspension and a plurality of nanocrystalline phase-change material particles having a coating disposed thereon, the particles being capable of undergoing a phase change that increases thermal energy storage. In addition, other articles of manufacture can include a nanofluid additive comprised of nanometer-sized, copper-decorated graphene particles that provide enhanced thermal conductivity to heat transfer fluids.

  10. Contraceptive failure in the United States

    PubMed Central

    Trussell, James

    2013-01-01

    This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680

  11. TRAC-P1: an advanced best estimate computer program for PWR LOCA analysis. I. Methods, models, user information, and programming details

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-05-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced "best estimate" predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.

  12. Method of using deuterium-cluster foils for an intense pulsed neutron source

    DOEpatents

    Miley, George H.; Yang, Xiaoling

    2013-09-03

    A method is provided for producing neutrons, comprising: providing a converter foil comprising deuterium clusters; focusing a laser on the foil with power and energy sufficient to cause deuteron ions to separate from the foil; and striking a surface of a target with the deuteron ions from the converter foil with energy sufficient to cause neutron production by a reaction selected from the group consisting of D-D fusion, D-T fusion, D-metal nuclear spallation, and p-metal. A further method is provided for assembling a plurality of target assemblies for a target injector to be used in the previously mentioned manner. A further method is provided for producing neutrons, comprising: splitting a laser beam into a first beam and a second beam; striking a first surface of a target with the first beam, and an opposite second surface of the target with the second beam with energy sufficient to cause neutron production.

  13. A new idea for visualization of lesions distribution in mammogram based on CPD registration method.

    PubMed

    Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan

    2017-07-20

    Mammography is currently the most effective screening technique for breast cancer. Lesion distributions can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study breast lesion distributions conveniently, and we developed an automatic tool based on this idea which shows a visualization of lesion distributions in a standard mammogram. The approach consists of, first, establishing a lesion database to study; then extracting breast contours and matching different women's mammograms to a standard mammogram; and finally showing the lesion distribution in the standard mammogram and providing the distribution statistics. The crucial step in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift (CPD) method to match different women's mammograms. We tested the automatic tool on four mass datasets of 641 images. The distribution results shown by the tool were consistent with the results counted manually from the corresponding reports and mammograms. We also discuss the registration error, which was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool can provide lesion distribution results that are consistent with radiologists' findings simply and conveniently.

  14. An introduction to g methods.

    PubMed

    Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H

    2017-04-01

    Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
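
    As a companion to the worked example mentioned above, here is a hedged sketch of one g method, the parametric g-formula (standardization) for a single point treatment: fit an outcome model, then average its predictions with treatment set to 1 and to 0 for everyone. The data are simulated placeholders, not the paper's example.

        # Hedged sketch of the parametric g-formula for a point treatment.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 5000
        l = rng.normal(size=n)                        # confounder
        a = rng.binomial(1, 1 / (1 + np.exp(-l)))     # treatment
        y = 1.5 * a + l + rng.normal(size=n)          # outcome

        model = LinearRegression().fit(np.column_stack([a, l]), y)
        y1 = model.predict(np.column_stack([np.ones(n), l])).mean()   # all treated
        y0 = model.predict(np.column_stack([np.zeros(n), l])).mean()  # none treated
        print(f"g-formula mean difference: {y1 - y0:.2f} (truth: 1.5)")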

  15. Standardizing the care of detox patients to achieve quality outcomes.

    PubMed

    Becker, Kathy; Semrow, Sue

    2006-03-01

    Providing appropriate treatment for detoxification patients is both challenging and difficult because alcohol abuse and dependence are largely underestimated in the acute hospital setting. Alcohol withdrawal syndrome is treated not only by addictionologists on chemical dependency units, but also by primary care physicians in acute inpatient settings. The need for consistent inpatient treatment through the use of identified protocols can help provide safe and effective care. The need for consistent, inpatient medical-surgical detoxification treatment in our organization became apparent with the staff's identification of patient care concerns. Using an organizational approach, a multidisciplinary team was created to standardize the care of detoxification patients, beginning with patient admission and ending with discharge and referral for outpatient management. Standardization would ensure consistent assessment and intervention, and improve communication among the clinical team members. A protocol was developed for both the emergency department and the inpatient units. The goals of the team were to decrease the adverse events related to detoxification, such as seizures and aggression, and provide a consistent method of treatment for staff to follow.

  16. Comparison of heat-testing methodology.

    PubMed

    Bierma, Mark M; McClanahan, Scott; Baisden, Michael K; Bowles, Walter R

    2012-08-01

    Patients with irreversible pulpitis occasionally present with a chief complaint of sensitivity to heat. To appropriately diagnose the offending tooth, a variety of techniques have been developed to reproduce this chief complaint. Such techniques cause temperature increases that are potentially damaging to the pulp. Newer electronic instruments control the temperature of a heat-testing tip that is placed directly against a tooth. The aim of this study was to determine which method produced the most consistent and safe temperature increase within the pulp. This consistency facilitates the clinician's ability to differentiate between a normal pulp and irreversible pulpitis. Four operators applied the following methods to each of 4 extracted maxillary premolars (for a total of 16 trials per method): heated gutta-percha, heated ball burnisher, hot water, and a System B unit or Elements unit with a heat-testing tip. Each test was performed for 60 seconds, and the temperatures were recorded via a thermocouple in the pulp chamber. Analysis of the data was performed by using the intraclass correlation coefficient. The least consistent warming was found with hot water. The heat-testing tip also demonstrated greater consistency between operators compared with the other methods. Hot water and the heated ball burnisher caused temperature increases high enough to damage pulp tissue. The Elements unit with a heat-testing tip provides the most consistent warming of the dental pulp. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  17. On evaluating the robustness of spatial-proximity-based regionalization methods

    NASA Astrophysics Data System (ADS)

    Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles

    2016-08-01

    In the absence of streamflow data to calibrate a hydrological model, its parameters must be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfer hydrological information (typically calibrated parameter sets) from neighboring gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view of regionalization performance.
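
    The two robustness tests differ simply in how they thin the network around the target station; the sketch below is an illustrative rendering under stated assumptions (station names, ordering, and sampling fractions are invented), not the authors' code.

        # Hedged sketch: HRand sub-samples the network; HDes drops the
        # closest stations. `neighbors` is sorted nearest-first.
        import random

        neighbors = ["G1", "G2", "G3", "G4", "G5", "G6"]  # hypothetical gauges

        def hrand(stations, keep_fraction, seed=0):
            """Random hydrometrical reduction: keep a random subset."""
            k = max(1, round(keep_fraction * len(stations)))
            return sorted(random.Random(seed).sample(stations, k))

        def hdes(stations, n_ignored):
            """Hydrometrical desert: ignore the n closest gauged stations."""
            return stations[n_ignored:]

        print(hrand(neighbors, 0.5))  # a random half of the network
        print(hdes(neighbors, 2))     # ['G3', 'G4', 'G5', 'G6']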

  18. Using network screening methods to determine locations with specific safety issues: A design consistency case study.

    PubMed

    Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P

    2017-09-01

    The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues, accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements, while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise were identified. The disparity between SWiPs identified by the two SPFs was evident; 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
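
    For context, the empirical Bayes adjustment mentioned above typically blends the SPF prediction with the observed count using the negative binomial overdispersion parameter. The sketch below shows that standard Highway Safety Manual form with invented numbers, not the paper's fitted SPFs.

        # Hedged sketch of the standard EB adjustment for one segment.
        def eb_estimate(predicted, observed, k):
            """Blend SPF prediction and observed crashes; k = overdispersion."""
            w = 1.0 / (1.0 + k * predicted)   # weight on the SPF prediction
            return w * predicted + (1.0 - w) * observed

        # A segment predicted at 4 crashes per period but with 9 observed:
        print(round(eb_estimate(predicted=4.0, observed=9.0, k=0.3), 2))  # 6.73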

  19. Trip optimization system and method for a train

    DOEpatents

    Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung

    2017-08-15

    A system for operating a train having one or more locomotive consists, with each locomotive consist comprising one or more locomotives, the system including a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterization element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.

  20. Workshop on Survey Methods in Education Research: Facilitator's Guide and Resources. REL 2017-214

    ERIC Educational Resources Information Center

    Walston, Jill; Redford, Jeremy; Bhatt, Monica P.

    2017-01-01

    This Workshop on Survey Methods in Education Research tool consists of a facilitator guide and workshop handouts. The toolkit is intended for use by state or district education leaders and others who want to conduct training on developing and administering surveys. The facilitator guide provides materials related to various phases of the survey…

  1. Scaffolding Learning for Practitioner-Scholars: The Philosophy and Design of a Qualitative Research Methods Course

    ERIC Educational Resources Information Center

    Slayton, Julie; Samkian, Artineh

    2017-01-01

    We present our approach to a qualitative research methods course to prepare practitioner-scholars for their dissertation and independent research. We explain how an instructor's guide provides consistency and rigor, how in-class activities scaffold learning, and how faculty connect the content to students' out-of-school lives. We explain how…

  2. Developing Mathematical Knowledge for Teaching in a Methods Course: The Case of Function

    ERIC Educational Resources Information Center

    Steele, Michael D.; Hillen, Amy F.; Smith, Margaret S.

    2013-01-01

    This study describes teacher learning in a teaching experiment consisting of a content-focused methods course involving the mathematical knowledge for teaching function. Prospective and practicing teachers in the course showed growth in their ability to define function, to provide examples of functions and link them to the definition, in the…

  3. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... errors or omissions that occurred before the publication of these regulations. Any reasonable method used... February 24, 1994, will be considered proper, provided that the method is consistent with the rules of...

  4. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
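
    In its standard form (which this paper may refine), maximum entropy restoration selects the image that maximizes entropy subject to consistency with the blurred, noisy data. A common statement of the problem, with f the restored image, m a default model, d the measured data, H the blurring operator, and N the number of measurements, is:

        \max_{f \ge 0} \; S(f) = -\sum_j f_j \ln\frac{f_j}{m_j}
        \quad \text{subject to} \quad
        \chi^2(f) = \sum_k \frac{\bigl(d_k - (Hf)_k\bigr)^2}{\sigma_k^2} \le N .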

  5. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption, the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
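
    The estimation step of MR-Egger is an inverse-variance-weighted regression of the variant-outcome associations on the variant-exposure associations with an unconstrained intercept; a nonzero intercept indicates directional pleiotropy, and the slope is the causal estimate under InSIDE. The sketch below uses simulated summary statistics, not data from any cited study.

        # Hedged sketch of the MR-Egger regression on summary statistics.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n_snps = 30
        beta_x = rng.uniform(0.05, 0.2, n_snps)        # SNP-exposure effects
        se_y = np.full(n_snps, 0.02)                   # outcome standard errors
        beta_y = 0.5 * beta_x + rng.normal(0, se_y)    # SNP-outcome effects

        X = sm.add_constant(beta_x)                    # free intercept
        fit = sm.WLS(beta_y, X, weights=1 / se_y**2).fit()
        intercept, slope = fit.params
        print(f"pleiotropy intercept = {intercept:.3f}, causal slope = {slope:.3f}")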

  6. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).
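
    A GasNet-like layer count (six blocks of six layers, a pooling layer, and a fully-connected layer, totaling 38) can be sketched as below; the layer types, channel widths, and the assumption of 1-D electronic-nose sensor traces are guesses for illustration, not the authors' published architecture.

        # Hedged sketch of a 38-layer GasNet-style classifier (PyTorch).
        import torch
        import torch.nn as nn

        def conv_block(c_in, c_out):
            # Six layers per block: two Conv1d -> BatchNorm -> ReLU triplets.
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(c_out), nn.ReLU(),
                nn.Conv1d(c_out, c_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(c_out), nn.ReLU(),
            )

        class GasNetSketch(nn.Module):
            def __init__(self, n_sensors=16, n_classes=10):
                super().__init__()
                widths = [n_sensors, 32, 64, 64, 128, 128, 128]
                self.blocks = nn.Sequential(
                    *[conv_block(widths[i], widths[i + 1]) for i in range(6)]
                )
                self.pool = nn.AdaptiveAvgPool1d(1)   # pooling layer
                self.fc = nn.Linear(widths[-1], n_classes)

            def forward(self, x):                     # x: (batch, sensors, time)
                z = self.pool(self.blocks(x)).squeeze(-1)
                return self.fc(z)

        print(GasNetSketch()(torch.randn(4, 16, 128)).shape)  # (4, 10)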

  7. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  8. Nuclear reactor target assemblies, nuclear reactor configurations, and methods for producing isotopes, modifying materials within target material, and/or characterizing material within a target material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, James J.; Wall, Donald; Wittman, Richard S.

    Target assemblies are provided that can include a uranium-comprising annulus. The assemblies can include target material consisting essentially of non-uranium material within the volume of the annulus. Reactors are disclosed that can include one or more discrete zones configured to receive target material. At least one uranium-comprising annulus can be within one or more of the zones. Methods for producing isotopes within target material are also disclosed, with the methods including providing neutrons to target material within a uranium-comprising annulus. Methods for modifying materials within target material are disclosed, as are methods for characterizing material within a target material.

  9. A visual training tool for the Photoload sampling technique

    Treesearch

    Violet J. Holley; Robert E. Keane

    2010-01-01

    This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...

  10. OATYC Journal, Volume XX, Numbers 1-2, Fall 1995-Spring 1996.

    ERIC Educational Resources Information Center

    Houston, Linda, Ed.

    1996-01-01

    Published by the Ohio Association of Two Year Colleges, this journal provides a medium for sharing concepts, methods, and findings relevant to the classroom and an open forum for the discussion and review of problems. This volume consists of the fall 1995 and spring 1996 issues and provides the following articles: (1) "FOCUS: OMI College of…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierauf, Timothy; Kurtz, Sarah; Riley, Evan

    This paper provides a recommended method for evaluating the AC capacity of a photovoltaic (PV) generating station. It also presents companion guidance on setting the facility's capacity guarantee value. This is a principles-based approach that incorporates plant fundamental design parameters such as loss factors, module coefficients, and inverter constraints. This method has been used to prove contract guarantees for over 700 MW of installed projects. The method is transparent, and the results are deterministic. In contrast, current industry practices incorporate statistical regression, where the empirical coefficients may only characterize the collected data. Though these methods may work well when extrapolation is not required, there are other situations where the empirical coefficients may not adequately model actual performance. This proposed Fundamentals Approach method provides consistent results even where regression methods start to lose fidelity.

  12. Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames

    NASA Astrophysics Data System (ADS)

    Heye, Colin; Raman, Venkat

    2012-11-01

    A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as is well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.

  13. Measuring Workload Demand of Informatics Systems with the Clinical Case Demand Index

    PubMed Central

    Iyengar, M. Sriram; Rogith, Deevakar; Florez-Arango, Jose F

    2017-01-01

    Introduction: The increasing use of Health Information Technology (HIT) can add substantially to the workload of clinical providers. Current methods for assessing workload do not take into account the nature of clinical cases and the use of HIT tools while solving them. Methods: The Clinical Case Demand Index (CCDI), consisting of a summary score and a visual representation, was developed to meet this need. Consistency with current perceived workload measures was evaluated in a randomized controlled trial of a mobile health system. Results: The CCDI is significantly correlated with existing workload measures and inversely related to provider performance. Discussion: The CCDI combines subjective and objective characteristics of clinical cases along with cognitive and clinical dimensions. Applications include evaluation of HIT tools, clinician scheduling, and medical education. Conclusion: The CCDI supports comparative effectiveness research of HIT tools. In addition, the CCDI could have numerous applications, including training, clinical trials, the design of clinical workflows, and others. PMID:29854166

  14. Solar cells with perovskite-based light sensitization layers

    DOEpatents

    Kanatzidis, Mercouri G.; Chang, Robert P.H.; Stoumpos, Konstantinos; Lee, Byunghong

    2018-05-08

    Solar cells are provided which comprise an electron transporting layer and a light sensitizing layer of perovskite disposed over the surface of the electron transporting layer. The perovskite may have a formula selected from the group consisting of A2MX6, Z2MX6, or YMX6, wherein A is an alkali metal, M is a metal or a metalloid, X is a halide, Z is selected from the group consisting of a primary ammonium, an iminium, a secondary ammonium, a tertiary ammonium, and a quaternary ammonium, and Y has the formula Mb(L)3, wherein Mb is a transition metal in the 2+ oxidation state and L is an N--N neutral chelating ligand. Methods of making the solar cells are also provided, including methods based on electrospray deposition.

  15. Field Demonstrations of Five Geophysical Methods that Could Be Used to Characterize Deposits of Alluvial Aggregate

    USGS Publications Warehouse

    Ellefsen, K.J.; Burton, B.L.; Lucius, J.E.; Haines, S.S.; Fitterman, D.V.; Witty, J.A.; Carlson, D.; Milburn, B.; Langer, W.H.

    2007-01-01

    Personnel from the U.S. Geological Survey and Martin Marietta Aggregates, Inc., conducted field demonstrations of five different geophysical methods to show how these methods could be used to characterize deposits of alluvial aggregate. The methods were time-domain electromagnetic sounding, electrical resistivity profiling, S-wave reflection profiling, S-wave refraction profiling, and P-wave refraction profiling. All demonstrations were conducted at one site within a river valley in central Indiana, where the stratigraphy consisted of 1 to 2 meters of clay-rich soil, 20 to 35 meters of alluvial sand and gravel, 1 to 6 meters of clay, and multiple layers of limestone and dolomite bedrock. All geophysical methods, except time-domain electromagnetic sounding, provided information about the alluvial aggregate that was consistent with the known geology. Although time-domain electromagnetic sounding did not work well at this site, it has worked well at other sites with different geology. All of these geophysical methods complement traditional methods of geologic characterization such as drilling.

  16. Getter materials for cracking ammonia

    DOEpatents

    Boffito, Claudio; Baker, John D.

    1999-11-02

    A method is provided for cracking ammonia to produce hydrogen. The method includes the steps of passing ammonia over an ammonia-cracking catalyst which is an alloy including (1) alloys having the general formula Zr(1-x)TixM1M2, wherein M1 and M2 are selected independently from the group consisting of Cr, Mn, Fe, Co, and Ni, and x is between about 0.0 and about 1.0 inclusive; and (2) between about 20% and about 50% Al by weight. In another aspect, the method of the invention is used to provide methods for operating hydrogen-fueled internal combustion engines and hydrogen fuel cells. In still another aspect, the present invention provides a hydrogen-fueled internal combustion engine and a hydrogen fuel cell including the above-described ammonia-cracking catalyst.

  17. Slow-rotation dynamic SPECT with a temporal second derivative constraint.

    PubMed

    Humphries, T; Celler, A; Trummer, M

    2011-08-01

    Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc-DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, as well as in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving the consistency of TAC shapes within ROIs.
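
    The core of the d2EM constraint is that the discrete second derivative of a voxel's TAC may change sign at most once. The checker below illustrates that admissibility test on invented curves; it is not the authors' reconstruction code.

        # Hedged sketch: count concavity sign changes in a TAC.
        import numpy as np

        def concavity_sign_changes(tac):
            d2 = np.diff(tac, n=2)                  # discrete second derivative
            signs = np.sign(d2[d2 != 0])            # drop exactly-zero segments
            return int(np.sum(signs[1:] != signs[:-1]))

        smooth_tac = [1.0, 2.2, 3.0, 3.4, 3.3, 2.9, 2.4]  # uptake then washout
        noisy_tac = [1.0, 2.5, 1.8, 3.1, 2.0, 3.0, 1.5]
        print(concavity_sign_changes(smooth_tac) <= 1)    # True: admissible
        print(concavity_sign_changes(noisy_tac) <= 1)     # False: rejected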

  18. Measuring Ultrasonic Acoustic Velocity in a Thin Sheet of Graphite Epoxy Composite

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A method for measuring the acoustic velocity in a thin sheet of a graphite epoxy composite (GEC) material was investigated. This method uses two identical acoustic-emission (AE) sensors, one to transmit and one to receive. The delay time as a function of distance between sensors determines a bulk velocity. A lightweight fixture (balsa wood in the current implementation) provides a consistent method of positioning the sensors, thus providing multiple measurements of the time delay between sensors at different known distances. A linear fit to separation, x, versus delay time, t, will yield an estimate of the velocity from the slope of the line.
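
    The slope extraction amounts to an ordinary least-squares line through the (t, x) pairs; the sketch below shows the computation with invented delay-time readings, not measured GEC data.

        # Hedged sketch: bulk velocity from the slope of separation vs. delay.
        import numpy as np

        t = np.array([10e-6, 20e-6, 30e-6, 40e-6])  # delay times (s), hypothetical
        x = np.array([0.025, 0.051, 0.074, 0.100])  # separations (m), hypothetical

        slope, intercept = np.polyfit(t, x, 1)      # x = v*t + b
        print(f"estimated velocity: {slope:.0f} m/s")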

  19. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation

    PubMed Central

    Mander, Luke; Baker, Sarah J.; Belcher, Claire M.; Haselhorst, Derek S.; Rodriguez, Jacklyn; Thorn, Jessica L.; Tiwari, Shivangi; Urrego, Dunia H.; Wesseln, Cassandra J.; Punyasena, Surangi W.

    2014-01-01

    • Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias. PMID:25202649

  20. Methods for synthesizing semiconductor quality chalcopyrite crystals for nonlinear optical and radiation detection applications and the like

    DOEpatents

    Stowe, Ashley; Burger, Arnold

    2016-05-10

    A method for synthesizing I-III-VI2 compounds, including: melting a Group III element; adding a Group I element to the melted Group III element at a rate that allows the Group I and Group III elements to react, thereby providing a single phase I-III compound; and adding a Group VI element to the single phase I-III compound under heat, with mixing, and/or via vapor transport. The Group III element is melted at a temperature of between about 200 degrees C. and about 700 degrees C. Preferably, the Group I element consists of a neutron absorber and the Group III element consists of In or Ga. The Group VI element and the single phase I-III compound are heated to a temperature of between about 700 degrees C. and about 1000 degrees C. Preferably, the Group VI element consists of S, Se, or Te. Optionally, the method also includes doping with a Group IV element activator.

  1. Extracting Damping Ratio from Dynamic Data and Numerical Solutions

    NASA Technical Reports Server (NTRS)

    Casiano, M. J.

    2016-01-01

    There are many ways to extract damping parameters from data or models. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. Described are six methods of extracting damping from data: the half-power method, the logarithmic decrement (decay rate) method, an autocorrelation/power spectral density fitting method, a frequency response fitting method, a random decrement fitting method, and a newly developed half-quadratic gain method. Additionally, state-space models and finite element method modeling tools, such as COMSOL Multiphysics (COMSOL), provide a theoretical damping via complex frequency. Each method has its advantages, which are briefly noted. There are also likely many other advanced techniques for extracting damping within the operational modal analysis discipline, where the input excitation is unknown; however, the approaches discussed here are objective, direct, and can be implemented in a consistent manner.
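
    As a worked instance of one of the six data-driven methods listed, the logarithmic decrement method computes delta = ln(x0/xn)/n from two peaks n cycles apart and converts it to a damping ratio zeta = delta / sqrt(4*pi^2 + delta^2); the peak amplitudes below are illustrative, not taken from the memorandum.

        # Hedged sketch of the logarithmic decrement (decay rate) method.
        import math

        def damping_ratio(x0, xn, n_cycles):
            delta = math.log(x0 / xn) / n_cycles            # log decrement
            return delta / math.sqrt(4 * math.pi**2 + delta**2)

        # Free-decay peaks falling from 1.00 to 0.53 over 5 cycles:
        print(f"zeta = {damping_ratio(1.00, 0.53, 5):.4f}")  # about 0.02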

  2. Simulation of the effect of incline incident angle in DMD Maskless Lithography

    NASA Astrophysics Data System (ADS)

    Liang, L. W.; Zhou, J. Y.; Xiang, L. L.; Wang, B.; Wen, K. H.; Lei, L.

    2017-06-01

    The aim of this study is to provide a simulation method for investigating the intensity fluctuation caused by the inclined incident angle in DMD (digital micromirror device) maskless lithography. The simulation consists of eight main processes involving the simplification of the DMD aperture function and light propagation using the non-parallel angular spectrum method. These processes make possible a co-simulation in the spatial frequency domain that combines the microlens array and the DMD in the maskless lithography system. The simulation yields the spot shape and illumination distribution, two parameters that are crucial in determining the exposure dose in the existing maskless lithography system.
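
    For orientation, the parallel-plane angular spectrum method that the non-parallel variant generalizes propagates a field by filtering its spatial-frequency spectrum with a free-space transfer function. The sketch below shows only that standard building block (grid size, wavelength, and distance are invented); the paper's tilted-plane treatment is not reproduced here.

        # Hedged sketch of standard angular spectrum propagation.
        import numpy as np

        def angular_spectrum_propagate(field, dx, wavelength, z):
            """Propagate a sampled complex field a distance z between parallel planes."""
            fx = np.fft.fftfreq(field.shape[0], d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
            return np.fft.ifft2(np.fft.fft2(field) * H)

        out = angular_spectrum_propagate(np.ones((64, 64), complex),
                                         dx=1e-6, wavelength=405e-9, z=100e-6)
        print(abs(out).mean())   # a plane wave survives with unit magnitude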

  3. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.

  4. Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study

    NASA Technical Reports Server (NTRS)

    1988-01-01

    A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft is given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric, and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but they provide useful references for early estimation purposes.

  5. Conversion of fullerenes to diamonds

    DOEpatents

    Gruen, Dieter M.

    1995-01-01

    A method of forming synthetic diamond or diamond-like films on a substrate surface. The method involves the steps of providing a vapor selected from the group of fullerene molecules or an inert gas/fullerene molecule mixture, providing energy to the fullerene molecules, which consist of carbon-carbon bonds, so that the energized fullerene molecules break down to form fragments including C2 molecules, and depositing the energized fullerene molecules with C2 fragments onto the substrate, with further fragmentation occurring and forming a thickness of diamond or diamond-like film on the substrate surface.

  6. Design Appropriate Models Based on Intelligent Dimension in Fars Education Organization

    ERIC Educational Resources Information Center

    Goodarzi, Shahbaz; Fallah, Vahid; Saffarian, Saeid

    2016-01-01

    The purpose of this study is to determine the dimensions of smart schools in the Fars education system and provide a suitable model. The research method is a descriptive survey. The study population consisted of all school principals in Fars Province in the 2014-2015 academic year, numbering 1364. The sample size, determined using Cochran's method, was 302…

  7. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution on water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was achieved by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
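
    Mechanically, plugging an estimated autocorrelation kernel into SVR means passing a precomputed Gram matrix. The sketch below shows that wiring with an RBF stand-in for the paper's estimated kernels and synthetic space-time samples, not the Ecuadorian river data.

        # Hedged sketch of SVR with a precomputed Mercer kernel.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(4)
        X = rng.uniform(0, 10, size=(200, 3))              # (east, north, time)
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # synthetic variable

        K_train = rbf_kernel(X, X, gamma=0.5)              # placeholder kernel
        model = SVR(kernel="precomputed").fit(K_train, y)

        X_new = rng.uniform(0, 10, size=(5, 3))            # unsampled points
        print(model.predict(rbf_kernel(X_new, X, gamma=0.5)))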

  8. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution on water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was achieved by the SVR with Mercer’s kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  9. Aerial Refueling Test Methods

    DTIC Science & Technology

    2015-04-13

    and receiver optimal lighting configuration should be determined and evaluated in dusk, twilight, and full dark lunar illumination periods. Degraded...should consist of sunset to nautical twilight. These conditions provide poor illumination for visible cameras, but high for IR ones. Night conditions

  10. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, B.; Heestand, R.L.

    1982-08-31

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  11. Patient-Provider Concordance with Behavioral Change Goals Drives Measures of Motivational Interviewing Consistency

    PubMed Central

    Laws, M. Barton; Rose, Gary S.; Beach, Mary Catherine; Lee, Yoojin; Rogers, William S.; Velasco, Alyssa Bianca; Wilson, Ira B.

    2015-01-01

    Objective Motivational Interviewing (MI) consistent talk by a counselor is thought to produce “change talk” in clients. However, it is possible that client resistance to behavior change can produce MI inconsistent counselor behavior. Methods We applied a coding scheme which identifies all of the behavioral counseling about a given issue during a visit (“episodes”), assesses patient concordance with the behavioral goal, and labels providers’ counseling style as facilitative or directive, to a corpus of routine outpatient visits by people with HIV. Using a different data set of comparable encounters, we applied the concepts of episode and concordance, and coded using the Motivational Interviewing Treatment Integrity system. Results Patient concordance/discordance was not observed to change during any episode. Provider directiveness was strongly associated with patient discordance in the first study, and MI inconsistency was strongly associated with discordance in the second. Conclusion Observations that MI-consistent behavior by medical providers is associated with patient change talk or outcomes should be evaluated cautiously, as patient resistance may provoke MI-inconsistency. Practice Implications Counseling episodes in routine medical visits are typically too brief for client talk to evolve toward change. Providers with limited training may have particular difficulty maintaining MI consistency with resistant clients. PMID:25791372

  12. Italian National Forest Inventory: methods, state of the project, and future developments

    Treesearch

    Giovanni Tabacchi; Flora De Natale; Antonio Floris; Caterina Gagliano; Patrizia Gasparini; Gianfranco Scrinzi; Vittorio Tosi

    2007-01-01

    A primary objective of the Italian National Forest Inventory (NFI) is to provide information required by the Kyoto Protocol and the Ministerial Conference on the Protection of Forests in Europe in relation to sustainable forest management practices. For this reason, the second Italian NFI was aimed at providing data in a way that is consistent with the international...

  13. Comparing floral and isotopic paleoelevation estimates: Examples from the western United States

    NASA Astrophysics Data System (ADS)

    Hyland, E. G.; Huntington, K. W.; Sheldon, N. D.; Smith, S. Y.; Strömberg, C. A. E.

    2016-12-01

    Describing paleoelevations is crucial to understanding tectonic processes and deconvolving the effects of uplift and climate on environmental change in the past. Decades of work have gone into estimating past elevation from various proxy archives, particularly using modern relationships between elevation and temperature, floral assemblage compositions, or oxygen isotope values. While these methods have been used widely and refined through time, they are rarely applied in tandem; here we provide two examples from the western United States using new multiproxy methods: 1) combining clumped isotopes and macrofloral assemblages to estimate paleoelevations along the Colorado Plateau, and 2) combining oxygen isotopes and phytolith methods to estimate paleoelevations within the greater Yellowstone region. Clumped isotope measurements and refined floral coexistence methods from sites on the northern Colorado Plateau like Florissant and Creede (CO) consistently estimate low (<2 km) elevations through the Eocene/Oligocene, suggesting slower uplift and a south-north propagation of the plateau. Oxygen isotope measurements and C4 phytolith estimates from sites surrounding the Yellowstone hotspot consistently estimate moderate uplift (0.2-0.7 km) propagating along the hotspot track, suggesting migrating dynamic topography associated with the region. These examples provide support for the emerging practice of using multiproxy methods to estimate paleoelevations for important time periods, and can help integrate environmental and tectonic records of the past.
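
    As a minimal illustration of temperature-based paleoaltimetry, the simplest of the proxy relationships mentioned above, a lapse-rate conversion is sketched below. The temperatures and the lapse rate are hypothetical values, not results from the study, which combines clumped isotopes, floras, and phytoliths rather than a single-proxy estimate.

```python
def paleoelevation_km(t_sea_level_c, t_site_c, lapse_rate_c_per_km=5.9):
    """Elevation implied by proxy cooling relative to coeval sea level."""
    return (t_sea_level_c - t_site_c) / lapse_rate_c_per_km

print(paleoelevation_km(25.0, 14.0))   # ~1.9 km for these hypothetical inputs
```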

  14. Enhancing the control of force in putting by video game training.

    PubMed

    Fery, Y A; Ponserre, S

    2001-10-10

    Even though golf video games provide no proprioceptive afference from the actual putting movement, they may give sufficient substitutive visual cues to enhance force control in this skill. It was hypothesized that this usefulness requires two conditions, however: the video game must provide reliable demonstrations of actual putts, and the user must want to use the game to make progress in actual putting. Accordingly, a video game was selected on the basis of its fidelity to the real-world game. It allowed two different methods of adjusting the virtual player's putting force in order to hole a putt: an analogue method that consisted of focusing on the virtual player's movement, and a symbolic method that consisted of focusing on the movement of a gauge on a scale representing the virtual player's putting force. The participants had to use one of these methods, either with the intention of making progress in actual putting or, in a second condition, simply to enjoy the game. Results showed a positive transfer of video playing to actual putting skill for the learning group and also, to a lesser degree, for the enjoyment group, but only when they used the symbolic method. Results are discussed in the context of how vision may convey force cues in sports video games.

  15. An empirical method that separates irreversible stem radial growth from bark water content changes in trees: theory and case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick

    Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. Here, we employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. In conclusion, our novel method provides an improved understanding of the relative source–sink carbon dynamics of tree stems at a sub-daily time scale.
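
    A hedged sketch of the regression idea follows: the dendrometer trace is modeled as slow irreversible growth plus an elastic term proportional to a water-status proxy, and the elastic part is removed by ordinary least squares. The synthetic signals and the linear form are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 0.01)                   # days
deficit = np.maximum(np.sin(2 * np.pi * t), 0)   # daytime water-deficit proxy
radius = 5.0 * t - 8.0 * deficit + rng.normal(0, 0.3, t.size)  # trace (um)

# Ordinary least squares: radius ~ a + b*deficit + c*t
A = np.column_stack([np.ones_like(t), deficit, t])
coef, *_ = np.linalg.lstsq(A, radius, rcond=None)
growth_est = radius - coef[0] - coef[1] * deficit  # irreversible growth estimate
```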

  16. Concentration Measurements in a Cold Flow Model Annular Combustor Using Laser Induced Fluorescence

    NASA Technical Reports Server (NTRS)

    Morgan, Douglas C.

    1996-01-01

    A nonintrusive concentration measurement method is developed for determining the concentration distribution in a complex flow field. The measurement method consists of marking a liquid flow with a water-soluble fluorescent dye. The dye is excited by a two-dimensional sheet of laser light. The fluorescent intensity is shown to be proportional to the relative concentration level. The fluorescent field is recorded on a video cassette recorder through a video camera. The recorded images are analyzed with image processing hardware and software to obtain intensity levels. Mean and root mean square (rms) values are calculated from these intensity levels. The method is tested on a single round turbulent jet because previous concentration measurements have been made on this configuration by other investigators. The previous results were used as a comparison to qualify the current method. These comparisons showed that this method provides satisfactory results. The concentration measurement system was used to measure the concentrations in the complex flow field of a model gas turbine annular combustor. The model annular combustor consists of opposing primary jets and an annular jet which discharges perpendicular to the primary jets. The mixing between the different jet flows can be visualized from the calculated mean and rms profiles. Concentration field visualization images obtained from the processing provide further qualitative information about the flow field.
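
    The intensity-to-concentration step can be sketched as below: fluorescence intensity is treated as proportional to dye concentration, frames are normalized by a uniform-concentration reference image, and mean and rms maps are computed over the stack. All arrays are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((100, 64, 64))        # recorded fluorescence frames (a.u.)
reference = np.full((64, 64), 0.8)        # image of a uniformly mixed dye field

conc = frames / reference                 # relative concentration, per frame
mean_map = conc.mean(axis=0)              # mean concentration profile
rms_map = conc.std(axis=0)                # rms fluctuation profile
```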

  17. An empirical method that separates irreversible stem radial growth from bark water content changes in trees: theory and case studies.

    PubMed

    Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick; Hölttä, Teemu; Choat, Brendan; Meir, Patrick; O'Grady, Anthony; Tissue, David; Zweifel, Roman; Sevanto, Sanna; Pfautsch, Sebastian

    2017-02-01

    Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. We employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. Our novel method provides an improved understanding of the relative source-sink carbon dynamics of tree stems at a sub-daily time scale. © 2016 The Authors Plant, Cell & Environment Published by John Wiley & Sons Ltd.

  18. Research on the development of space target detecting system and three-dimensional reconstruction technology

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan

    2016-11-01

    With the development of space technology, the number of spacecraft and debris is increasing year by year. The demand for detection and identification of spacecraft is growing strongly, which supports the cataloguing, crash warning, and protection of aerospace vehicles. Most existing approaches to three-dimensional reconstruction rely on scattering-centre correlation based on the radar high-resolution range profile (HRRP). This paper proposes a novel method to reconstruct the three-dimensional scattering-centre structure of a target from a sequence of radar ISAR images, which mainly consists of three steps. The first is azimuth scaling of consecutive ISAR images based on the fractional Fourier transform (FrFT). The second is the extraction of scattering centres and matching between adjacent ISAR images using a grid method. Finally, according to the coordinate matrix of the scattering centres, the three-dimensional scattering-centre structure is reconstructed using an improved factorization method. The reconstructed three-dimensional structure is stable and intuitive, which provides a new way to improve the identification probability and reduce the complexity of the model matching library. A satellite model is reconstructed using the proposed method from four consecutive ISAR images. The simulation results show that the method achieves satisfactory consistency and accuracy.
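
    The factorization step can be illustrated with a classic Tomasi-Kanade-style rank-3 SVD of the stacked scattering-centre coordinates. The sketch below uses synthetic data and a generic affine model; it is not the paper's improved factorization or its ISAR projection geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_points = 4, 12
S_true = rng.normal(size=(3, n_points))          # 3-D scattering centres
W = np.vstack([rng.normal(size=(2, 3)) @ S_true  # 2-D coordinates per frame
               for _ in range(n_frames)])        # measurement matrix (2F x P)

W -= W.mean(axis=1, keepdims=True)               # register each coordinate row
U, s, Vt = np.linalg.svd(W, full_matrices=False)
S_hat = np.diag(np.sqrt(s[:3])) @ Vt[:3]         # structure, up to a 3x3 linear map
```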

  19. An empirical method that separates irreversible stem radial growth from bark water content changes in trees: theory and case studies

    DOE PAGES

    Mencuccini, Maurizio; Salmon, Yann; Mitchell, Patrick; ...

    2017-11-12

    Substantial uncertainty surrounds our knowledge of tree stem growth, with some of the most basic questions, such as when stem radial growth occurs through the daily cycle, still unanswered. Here, we employed high-resolution point dendrometers, sap flow sensors, and developed theory and statistical approaches, to devise a novel method separating irreversible radial growth from elastic tension-driven and elastic osmotically driven changes in bark water content. We tested this method using data from five case study species. Experimental manipulations, namely a field irrigation experiment on Scots pine and a stem girdling experiment on red forest gum trees, were used to validate the theory. Time courses of stem radial growth following irrigation and stem girdling were consistent with a-priori predictions. Patterns of stem radial growth varied across case studies, with growth occurring during the day and/or night, consistent with the available literature. Importantly, our approach provides a valuable alternative to existing methods, as it can be approximated by a simple empirical interpolation routine that derives irreversible radial growth using standard regression techniques. In conclusion, our novel method provides an improved understanding of the relative source–sink carbon dynamics of tree stems at a sub-daily time scale.

  20. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
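
    For illustration, the sketch below builds a weighted majority vote over the four classifier types named in the abstract using scikit-learn. The synthetic data and the vote weights are placeholders rather than the study's accelerometer features or tuned weights.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for extracted accelerometer features.
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
ensemble = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC(random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    voting="hard", weights=[1, 1, 2, 2])   # weights would be tuned per data set
ensemble.fit(X, y)
print(ensemble.score(X, y))
```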

  1. Comprehensive inventory of protein complexes in the Protein Data Bank from consistent classification of interfaces

    DOE PAGES

    Bordner, Andrew J.; Gorin, Andrey A.

    2008-05-12

    Protein-protein interactions are ubiquitous and essential for cellular processes. High-resolution X-ray crystallographic structures of protein complexes can elucidate the details of their function and provide a basis for many computational and experimental approaches. Here we demonstrate that existing annotations of protein complexes, including those provided by the Protein Data Bank (PDB) itself, contain a significant fraction of incorrect annotations. Results: We have developed a method for identifying protein complexes in the PDB X-ray structures by a four-step procedure: (1) comprehensively collecting all protein-protein interfaces; (2) clustering similar protein-protein interfaces together; (3) estimating the probability that each cluster is relevant based on a diverse set of properties; and (4) finally combining these scores for each entry in order to predict the complex structure. Unlike previous annotation methods, consistent prediction of complexes with identical or almost identical protein content is ensured. The resulting clusters of biologically relevant interfaces provide a reliable catalog of evolutionarily conserved protein-protein interactions.

  2. Collaborative voxel-based surgical virtual environments.

    PubMed

    Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan

    2008-01-01

    Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
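
    The smaller-update claim can be illustrated with a toy diff-based synchronization: only the voxels changed by a tool interaction are sent, not the full volume. The volume layout and the drilling operation below are invented for illustration; the paper's actual update protocol is not reproduced.

```python
import numpy as np

bone = np.full((64, 64, 64), 255, dtype=np.uint8)   # local copy of the C-VE
before = bone.copy()
bone[31, 31, 28:34] = 0                             # voxels removed by drilling

changed = np.argwhere(bone != before)               # 6 voxels, not 64**3
update = [(tuple(i), int(bone[tuple(i)])) for i in changed]  # payload to send
```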

  3. Accuracy and consistency of radiographic interpretation among clinical instructors using two viewing systems.

    PubMed

    Lanning, Sharon K; Best, Al M; Temple, Henry J; Richards, Philip S; Carey, Allison; McCauley, Laurie K

    2006-02-01

    Accurate and consistent radiographic interpretation among clinical instructors is needed for assessment of teaching, student performance, and patient care. The purpose of this investigation was to determine if the method of radiographic viewing affects accuracy and consistency of instructors' determinations of bone loss. Forty-one clinicians who provide instruction in a dental school clinical teaching program (including periodontists, general dentists, periodontal graduate students, and dental hygienists) quantified bone loss for up to twenty-five teeth into four descriptive categories using a view box for plain film viewing or a projection system for digitized image viewing. Ratings were compared to the correct category as determined by direct measurement using the Schei ruler. Agreement with the correct choice for the view box and projection system was 70.2 percent and 64.5 percent, respectively. The mean difference was better for a projection system due to small rater error by graduate students. Projection system ratings were slightly less consistent than view box ratings. Dental hygiene faculty ratings were the most consistent but least accurate. Although the projection system resulted in slightly reduced accuracy and consistency among instructors, training sessions utilizing a single method for projecting digitized radiographic images have their advantages and may positively influence dental education and patient care by enhancing accuracy and consistency of radiographic interpretation among instructors.

  4. Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.

    PubMed

    Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal

    2013-08-01

    Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.

  5. Screening-Level Ecological Risk Assessment Methods, Revision 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirenda, Richard J.

    2012-08-16

    This document provides guidance for screening-level assessments of potential adverse impacts to ecological resources from release of environmental contaminants at the Los Alamos National Laboratory (LANL or the Laboratory). The methods presented are based on two objectives, namely: to provide a basis for reaching consensus with regulators, managers, and other interested parties on how to conduct screening-level ecological risk investigations at the Laboratory; and to provide guidance for ecological risk assessors under the Environmental Programs (EP) Directorate. This guidance promotes consistency, rigor, and defensibility in ecological screening investigations and in reporting those investigation results. The purpose of the screening assessment is to provide information to the risk managers so informed risk-management decisions can be made. This document provides examples of recommendations and possible risk-management strategies.

  6. X-ray mask and method for providing same

    DOEpatents

    Morales, Alfredo M [Pleasanton, CA; Skala, Dawn M [Fremont, CA

    2004-09-28

    The present invention describes a method for fabricating an x-ray mask tool which can achieve pattern features having a lateral dimension of less than 1 micron. The process uses a thin photoresist and a standard lithographic mask to transfer a trace image pattern into the surface of a silicon wafer by exposing and developing the resist. The exposed portion of the silicon substrate is then anisotropically etched to provide an etched image of the trace image pattern consisting of a series of channels in the silicon having a high depth-to-width aspect ratio. These channels are then filled by depositing a metal such as gold to provide an inverse image of the trace image, thereby providing a robust x-ray mask tool.

  7. X-ray mask and method for providing same

    DOEpatents

    Morales, Alfredo M.; Skala, Dawn M.

    2002-01-01

    The present invention describes a method for fabricating an x-ray mask tool which can achieve pattern features having a lateral dimension of less than 1 micron. The process uses a thin photoresist and a standard lithographic mask to transfer a trace image pattern into the surface of a silicon wafer by exposing and developing the resist. The exposed portion of the silicon substrate is then anisotropically etched to provide an etched image of the trace image pattern consisting of a series of channels in the silicon having a high depth-to-width aspect ratio. These channels are then filled by depositing a metal such as gold to provide an inverse image of the trace image, thereby providing a robust x-ray mask tool.

  8. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    NASA Astrophysics Data System (ADS)

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

    A new three-step interferometric method with blind phase shifts to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed using a linear low-pass filter. Computer simulations and experiments retrieving a gauge block surface area and its areal surface roughness and waviness confirm the reliability of the proposed three-step method.
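
    The per-pixel phase-retrieval core can be sketched as follows, with the caveat that the phase shifts are assumed known here, whereas the paper's contribution is estimating them blindly from the interframe correlation. The signal model and shift values are illustrative.

```python
import numpy as np

d = np.array([0.0, 1.9, 3.7])               # phase shifts, assumed known here
phi_true = np.linspace(-np.pi, np.pi, 200)
I = 1.0 + 0.8 * np.cos(phi_true[None, :] + d[:, None])   # three interferograms

# Per pixel, I_k = a + b*cos(phi)*cos(d_k) - b*sin(phi)*sin(d_k): a 3x3 system.
M = np.column_stack([np.ones(3), np.cos(d), -np.sin(d)])
a, b_cos, b_sin = np.linalg.solve(M, I)
phi = np.arctan2(b_sin, b_cos)              # wrapped phase map
```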

  9. Pesticide Environmental Accounting: a method for assessing the external costs of individual pesticide applications.

    PubMed

    Leach, A W; Mumford, J D

    2008-01-01

    The Pesticide Environmental Accounting (PEA) tool provides a monetary estimate of environmental and health impacts per hectare-application for any pesticide. The model combines the Environmental Impact Quotient method and a methodology for absolute estimates of external pesticide costs in the UK, USA, and Germany. Many countries lack the resources for intensive assessments of external pesticide costs, so the model extrapolates the external costs of a pesticide in the UK, USA, and Germany to Mediterranean countries. Economic and policy applications include estimating impacts of pesticide reduction policies or benefits from technologies replacing pesticides, such as the sterile insect technique. The system integrates disparate data and approaches into a single logical method. The assumptions in the system provide transparency and consistency, but at the cost of some specificity and precision, a reasonable trade-off for a method that provides both comparative estimates of pesticide impacts and area-based assessments of absolute impacts.

  10. A new unconditionally stable and consistent quasi-analytical in-stream water quality solution scheme for CSTR-based water quality simulators

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy

    2017-06-01

    Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method, and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods, together with the R-factor as a consistency measure, turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations, whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
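
    The stability issue motivating the new scheme can be reproduced in a few lines: for a single CSTR with first-order decay, explicit Euler has growth factor 1 - lambda*dt and oscillates or diverges once dt exceeds 2/lambda, while the analytical solution is unconditionally smooth. The parameters below are illustrative, not from the paper.

```python
import numpy as np

# dC/dt = (Q/V)*(Cin - C) - k*C for one CSTR with first-order decay.
Q_over_V, k = 0.2, 2.0                    # flushing and decay rates, 1/day
Cin, C0 = 10.0, 0.0                       # inflow and initial concentration, mg/L
lam = Q_over_V + k
C_star = Q_over_V * Cin / lam             # steady-state concentration

def exact(t):
    return C_star + (C0 - C_star) * np.exp(-lam * t)

def euler(dt, n_steps):
    C, out = C0, [C0]
    for _ in range(n_steps):
        C += dt * (Q_over_V * (Cin - C) - k * C)
        out.append(C)
    return np.array(out)

# |1 - lam*dt| > 1 for dt > 2/lam (~0.91 day here), so a fixed daily step
# oscillates with growing amplitude; the exact solution cannot.
print(euler(1.0, 5))
print(exact(np.arange(6.0)))
```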

  11. Focused vs Broad In World War I: A Historical Comparison Of General Staff Officer Education At Pre War Leavenworth and Langres

    DTIC Science & Technology

    2016-05-26

    innovation, which stymied the students’ growth as reflective practitioners. The Langres Staff College students did not share a similar knowledge...Secondly, the Langres Staff College’s methods of instruction lacked innovation, which stymied the students’ growth as reflective practitioners. The...pre-war Leavenworth Staff College’s methods of instruction consisted of innovative methods, which provided students with more opportunities to

  12. COMPOSITION AND METHOD FOR COATING A CERAMIC BODY

    DOEpatents

    Blanchard, M.K.

    1958-11-01

    A method is presented for protecting a beryllium carbide-graphite body. The method consists in providing a ceramic coating which must contain at least one basic oxide component, such as CaO, at least one amphoteric oxide component, such as Al/sub 2/O/sub 3/, and at least one acidic oxide component, such as SiO/ sub 2/. Various specific formulations for this ceramic coating are given and the coating is applied by conventional ceramic techniques.

  13. System and method for air temperature control in an oxygen transport membrane based reactor

    DOEpatents

    Kelly, Sean M

    2016-09-27

    A system and method for air temperature control in an oxygen transport membrane based reactor is provided. The system and method involves introducing a specific quantity of cooling air or trim air in between stages in a multistage oxygen transport membrane based reactor or furnace to maintain generally consistent surface temperatures of the oxygen transport membrane elements and associated reactors. The associated reactors may include reforming reactors, boilers or process gas heaters.

  14. System and method for temperature control in an oxygen transport membrane based reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Sean M.

    A system and method for temperature control in an oxygen transport membrane based reactor is provided. The system and method involves introducing a specific quantity of cooling air or trim air in between stages in a multistage oxygen transport membrane based reactor or furnace to maintain generally consistent surface temperatures of the oxygen transport membrane elements and associated reactors. The associated reactors may include reforming reactors, boilers or process gas heaters.

  15. Secondary power-producing cell. [electrodes contain same two elements in different proportions

    DOEpatents

    Fischer, A.K.

    1971-10-26

    This cell consists of an anode and a cathode containing the same two elements in different proportions and an electrolyte which contains ions of the element which is to be transported through it. The electrodes consist of chromium, iron, lithium, sodium, cadmium, copper, or zinc and phosphorus, selenium, tellurium, sulfur, arsenic, or nitrogen. A method to heat the cathode in the regeneration cycle to transfer the electronegative component to the anode is provided. (RWR)

  16. GAMA/H-ATLAS: a meta-analysis of SFR indicators - comprehensive measures of the SFR-M* relation and cosmic star formation history at z < 0.4

    NASA Astrophysics Data System (ADS)

    Davies, L. J. M.; Driver, S. P.; Robotham, A. S. G.; Grootes, M. W.; Popescu, C. C.; Tuffs, R. J.; Hopkins, A.; Alpaslan, M.; Andrews, S. K.; Bland-Hawthorn, J.; Bremer, M. N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Croom, S.; da Cunha, E.; Dunne, L.; Lara-López, M. A.; Liske, J.; Loveday, J.; Moffett, A. J.; Owers, M.; Phillipps, S.; Sansom, A. E.; Taylor, E. N.; Michalowski, M. J.; Ibar, E.; Smith, M.; Bourne, N.

    2016-09-01

    We present a meta-analysis of star formation rate (SFR) indicators in the Galaxy And Mass Assembly (GAMA) survey, producing 12 different SFR metrics and determining the SFR-M* relation for each. We compare and contrast published methods to extract the SFR from each indicator, using a well-defined local sample of morphologically selected spiral galaxies, which excludes sources that potentially have large recent changes to their SFR. The different methods are found to yield SFR-M* relations with inconsistent slopes and normalizations, suggesting differences between calibration methods. The recovered SFR-M* relations also have a large range in scatter which, as SFRs of the targets may be considered constant over the different time-scales, suggests differences in the accuracy by which methods correct for attenuation in individual targets. We then recalibrate all SFR indicators to provide new, robust and consistent luminosity-to-SFR calibrations, finding that the most consistent slopes and normalizations of the SFR-M* relations are obtained when recalibrated using the radiation transfer method of Popescu et al. These new calibrations can be used to directly compare SFRs across different observations, epochs and galaxy populations. We then apply our calibrations to the GAMA II equatorial data set and explore the evolution of star formation in the local Universe. We determine the evolution of the normalization of the SFR-M* relation over 0 < z < 0.35, finding consistent trends with previous estimates at 0.3 < z < 1.2. We then provide the definitive z < 0.35 cosmic star formation history, SFR-M* relation and its evolution over the last 3 billion years.

  17. Metal oxide composite dosimeter method and material

    DOEpatents

    Miller, Steven D.

    1998-01-01

    The present invention is a method of measuring a radiation dose wherein a radiation-responsive material consisting essentially of metal oxide is first exposed to ionizing radiation. The metal oxide is then stimulated with light, thereby causing the radiation-responsive material to photoluminesce. Photons emitted from the metal oxide as a result of photoluminescence may be counted to provide a measure of the ionizing radiation.

  18. Mira variables: An informal review

    NASA Technical Reports Server (NTRS)

    Wing, R. F.

    1980-01-01

    The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric, and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.

  19. Systems and methods for coating conduit interior surfaces utilizing a thermal spray gun with extension arm

    DOEpatents

    Moore, Karen A.; Zatorski, Raymond A.

    2005-07-12

    Systems and methods for applying a coating to an interior surface of a conduit. In one embodiment, a spray gun configured to apply a coating is attached to an extension arm which may be inserted into the bore of a pipe. The spray gun may be a thermal spray gun adapted to apply a powder coating. An evacuation system may be used to provide a volume area of reduced air pressure for drawing overspray out of the pipe interior during coating. The extension arm as well as the spray gun may be cooled to maintain a consistent temperature in the system, allowing for more consistent coating.

  20. Methods for coating conduit interior surfaces utilizing a thermal spray gun with extension arm

    DOEpatents

    Moore, Karen A.; Zatorski, Raymond A.

    2007-10-02

    Systems and methods for applying a coating to an interior surface of a conduit. In one embodiment, a spray gun configured to apply a coating is attached to an extension arm which may be inserted into the bore of a pipe. The spray gun may be a thermal spray gun adapted to apply a powder coating. An evacuation system may be used to provide a volume area of reduced air pressure for drawing overspray out of the pipe interior during coating. The extension arm as well as the spray gun may be cooled to maintain a consistent temperature in the system, allowing for more consistent coating.

  1. Method of bonding metals to ceramics

    DOEpatents

    Maroni, Victor A.

    1992-01-01

    A method of forming a composite by providing a ceramic capable of having zero electrical resistance and complete diamagnetism at superconducting temperatures, bonding a thin layer of Ag, Au or alloys thereof with the ceramic. Thereafter, there is bonded a first metal to the ceramic surface at a temperature less than about 400.degree. C., and then a second metal is bonded to the first metal at a temperature less than about 400.degree. C. to form a composite wherein the first metal is selected from the class consisting of In, Ga, Sn, Bi, Zn, Cd, Pb, Ti and alloys thereof and wherein the second metal is selected from the class consisting of Al, Cu, Pb and Zn and alloys thereof.

  2. NETL CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanguinito, Sean M.; Goodman, Angela; Levine, Jonathan

    This user’s manual guides the use of the National Energy Technology Laboratory’s (NETL) CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) tool, which was developed to aid users screening saline formations for prospective CO2 storage resources. CO2-SCREEN applies U.S. Department of Energy (DOE) methods and equations for estimating prospective CO2 storage resources for saline formations. CO2-SCREEN was developed to be substantive and user-friendly. It also provides a consistent method for calculating prospective CO2 storage resources that allows for consistent comparison of results between different research efforts, such as the Regional Carbon Sequestration Partnerships (RCSP). CO2-SCREEN consists of an Excel spreadsheet containing geologic inputs and outputs, linked to a GoldSim Player model that calculates prospective CO2 storage resources via Monte Carlo simulation.
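
    The underlying DOE volumetric method, G = A * h * phi * rho_CO2 * E, lends itself to a compact Monte Carlo sketch of the kind CO2-SCREEN performs via GoldSim. The input distributions below are hypothetical placeholders, not values from the manual.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
A = rng.uniform(900e6, 1100e6, n)         # area, m^2
h = rng.triangular(40, 60, 90, n)         # net thickness, m
phi = rng.normal(0.18, 0.02, n)           # porosity, fraction
rho = rng.uniform(600, 750, n)            # CO2 density at depth, kg/m^3
E = rng.triangular(0.01, 0.02, 0.06, n)   # saline storage efficiency, fraction

G_mt = A * h * phi * rho * E / 1e9        # prospective resource, Mt CO2
print(np.percentile(G_mt, [10, 50, 90]))  # P10/P50/P90 summary
```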

  3. Analyzing qualitative data with computer software.

    PubMed Central

    Weitzman, E A

    1999-01-01

    OBJECTIVE: To provide health services researchers with an overview of the qualitative data analysis process and the role of software within it; to provide a principled approach to choosing among software packages to support qualitative data analysis; to alert researchers to the potential benefits and limitations of such software; and to provide an overview of the developments to be expected in the field in the near future. DATA SOURCES, STUDY DESIGN, METHODS: This article does not include reports of empirical research. CONCLUSIONS: Software for qualitative data analysis can benefit the researcher in terms of speed, consistency, rigor, and access to analytic methods not available by hand. Software, however, is not a replacement for methodological training. PMID:10591282

  4. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
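
    The variational force-matching step that the paper generalizes to quantum statistics can be illustrated classically: fit the coefficients of a trial pair-force basis to reference forces by least squares. The 1-D setting, the synthetic reference forces, and the basis choice below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.uniform(0.8, 2.0, 500)                 # CG pair distances (reduced units)
f_ref = 24 * (2 * r ** -13 - r ** -7) + 0.05 * rng.normal(size=r.size)

basis = np.column_stack([r ** -13, r ** -7, r ** -2])   # trial force basis
coef, *_ = np.linalg.lstsq(basis, f_ref, rcond=None)    # variational fit
f_cg = basis @ coef                            # matched CG pair force
```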

  5. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities.

    PubMed

    Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric

    2005-03-10

    Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
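
    The Monte Carlo Z-score at the heart of the CSHP can be sketched as: score the real pair, rescore against shuffled sequences, and standardize. The toy identity score below stands in for a real pairwise alignment score such as Smith-Waterman.

```python
import random

def score(a, b):
    """Toy identity score; a stand-in for a real alignment score."""
    return sum(x == y for x, y in zip(a, b))

def z_score(seq_a, seq_b, n_shuffles=500, seed=0):
    rng = random.Random(seed)
    s_real = score(seq_a, seq_b)
    shuffled = []
    for _ in range(n_shuffles):
        b = list(seq_b)
        rng.shuffle(b)                 # preserves composition and length
        shuffled.append(score(seq_a, b))
    mu = sum(shuffled) / n_shuffles
    sigma = (sum((s - mu) ** 2 for s in shuffled) / n_shuffles) ** 0.5
    return (s_real - mu) / sigma

print(z_score("MKVLITGAGQRIG", "MKVLISGAGQRIG"))
```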

  6. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists of selecting sites similar to the target site. We use clustering methods to select similar sites based on observable hydrogeological features. The second step is data assimilation; it consists of assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.
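
    The data-selection step can be sketched with an off-the-shelf clustering of sites by observable features; scikit-learn is used here as a stand-in for the authors' R implementation, and the feature columns are hypothetical (e.g., depth, grain size, aquifer type code). The donor sites found this way would then feed the Bayesian hierarchical model.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
site_features = rng.normal(size=(40, 3))   # 40 previously studied sites
target = rng.normal(size=(1, 3))           # the data-scarce target site

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(site_features)
donor_idx = np.where(km.labels_ == km.predict(target)[0])[0]
# Data from the donor_idx sites would be assimilated into the prior.
```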

  7. Facilitation Standards: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Hunter, Jennifer

    2017-01-01

    Online education is increasing as a solution to manage ever-increasing enrollment numbers at higher education institutions. Intentionally and thoughtfully constructed courses allow students to improve performance through practice and self-assessment, and instructors benefit from improving consistency in providing content and assessing process,…

  8. Being a Teacher of the Behavior Disordered.

    ERIC Educational Resources Information Center

    Dolce, Russell

    1984-01-01

    Draws on 13 years' experience teaching behavior-disordered students to identify effective teaching methods. Discusses the importance of honesty, fairness, consistency, persistence, intuition, praise, and humor in the teacher-student relationship. Provides eight guidelines for developing a workable and suitable classroom reward system. (JHZ)

  9. Evaluation of Wet Chemical ICP-AES Elemental Analysis Methods usingSimulated Hanford Waste Samples-Phase I Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Charles J.; Edwards, Thomas B.

    2005-04-30

    The wet chemistry digestion method development for providing process control elemental analyses of the Hanford Tank Waste Treatment and Immobilization Plant (WTP) Melter Feed Preparation Vessel (MFPV) samples is divided into two phases: Phase I consists of: (1) optimizing digestion methods as a precursor to elemental analyses by ICP-AES techniques; (2) selecting methods with the desired analytical reliability and speed to support the nine-hour or less turnaround time requirement of the WTP; and (3) providing baseline comparison to the laser ablation (LA) sample introduction technique for ICP-AES elemental analyses that is being developed at the Savannah River National Laboratory (SRNL). Phase II consists of: (1) Time-and-Motion study of the selected methods from Phase I with actual Hanford waste or waste simulants in shielded cell facilities to ensure that the methods can be performed remotely and maintain the desired characteristics; and (2) digestion of glass samples prepared from actual Hanford Waste tank sludge for providing comparative results to the LA Phase II study. Based on the Phase I testing discussed in this report, a tandem digestion approach consisting of sodium peroxide fusion digestions carried out in nickel crucibles and warm mixed-acid digestions carried out in plastic bottles has been selected for Time-and-Motion study in Phase II. SRNL experience with performing this analytical approach in laboratory hoods indicates that well-trained cell operator teams will be able to perform the tandem digestions in five hours or less. The selected approach will produce two sets of solutions for analysis by ICP-AES techniques. Four hours would then be allocated for performing the ICP-AES analyses and reporting results to meet the nine-hour or less turnaround time requirement. The tandem digestion approach will need to be performed in two separate shielded analytical cells by two separate cell operator teams in order to achieve the nine-hour or less turnaround time. Because of the simplicity of the warm mixed-acid method, a well-trained cell operator team may in time be able to perform both sets of digestions. However, having separate shielded cells for each of the methods is prudent to avoid overcrowding problems that would impede a minimal turnaround time.

  10. Shape and energy consistent pseudopotentials for correlated electron systems

    PubMed Central

    Needs, R. J.

    2017-01-01

    A method is developed for generating pseudopotentials for use in correlated-electron calculations. The paradigms of shape and energy consistency are combined and defined in terms of correlated-electron wave-functions. The resulting energy consistent correlated electron pseudopotentials (eCEPPs) are constructed for H, Li–F, Sc–Fe, and Cu. Their accuracy is quantified by comparing the relaxed molecular geometries and dissociation energies which they provide with all-electron results, with all quantities evaluated using coupled cluster singles, doubles, and triples calculations. Errors inherent in the pseudopotentials are also compared with those arising from a number of approximations commonly used with pseudopotentials. The eCEPPs provide a significant improvement in optimised geometries and dissociation energies for small molecules, with errors for the latter being an order of magnitude smaller than those for Hartree-Fock-based pseudopotentials available in the literature. Gaussian basis sets are optimised for use with these pseudopotentials. PMID:28571391

  11. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a Base research activity in this area for the NASA Langley Research Center. This report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of the use of such methods are explained. Particular research needs are listed.

  12. Comparison of Pressures Applied by Digital Tourniquets in the Emergency Department

    PubMed Central

    Lahham, Shadi; Tu, Khoa; Ni, Mickey; Tran, Viet; Lotfipour, Shahram; Anderson, Craig L.; Fox, J Christian

    2011-01-01

    Background: Digital tourniquets used in the emergency department have been scrutinized due to complications associated with their use, including neurovascular injury secondary to excessive tourniquet pressure and digital ischemia caused by a forgotten tourniquet. To minimize these risks, a conspicuous tourniquet that applies the least amount of pressure necessary to maintain hemostasis is recommended. Objective: To evaluate four commonly used tourniquet methods, the Penrose drain, the rolled glove, the Tourni-cot, and the T-Ring, to determine which applies the lowest pressure while consistently preventing digital perfusion. Methods: We measured the circumference of selected digits of 200 adult males and 200 adult females to determine the adult finger size range. We then measured the pressure applied to four representative finger sizes using a pressure monitor and assessed the ability of each method to prevent digital blood flow with a pulse oximeter. Results: We selected four representative finger sizes, 45mm, 65mm, 70mm, and 85mm, to test the different tourniquet methods. All methods consistently prevented digital perfusion. The highest pressure recorded for the Penrose drain was 727 mmHg, for the clamped rolled glove 439, for the unclamped rolled glove 267, and for the Tourni-cot 246, while the T-Ring had the lowest pressure, at 151 mmHg, and the least variable pressures of all methods. Conclusion: All tested methods provided adequate hemostasis. Only the Tourni-cot and T-Ring provided hemostasis at safe pressures across all digit sizes, with the T-Ring having a lower overall average pressure. PMID:21691536

  13. Method of Calibrating a Force Balance

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)

    2015-01-01

    A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
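
    A minimal numeric sketch of the concept: the known mass times the applied acceleration gives the expected force, and a linear fit maps raw balance readings onto it. The readings below are synthetic stand-ins for instrument output.

```python
import numpy as np

m = 2.5                                    # calibration mass, kg
accel = np.array([1.0, 2.0, 4.0, 8.0])     # applied accelerations, m/s^2
expected_force = m * accel                 # expected force, N
readings = 0.96 * expected_force + 0.3     # hypothetical raw balance output

gain, offset = np.polyfit(readings, expected_force, 1)
calibrated = gain * readings + offset      # now consistent with expected force
```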

  14. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  15. Container Surface Evaluation by Function Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    Container images are analyzed for specific surface features, such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method to determine the kind and size of the features present.
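
    The three steps named above can be sketched with a median-filter background estimate, a residual image, and a threshold. The filter size and the threshold are illustrative choices, not the report's settings, and the image is synthetic.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(7)
img = rng.normal(100.0, 2.0, (256, 256))   # synthetic container surface image
img[120:124, 60:64] -= 25.0                # synthetic pit

background = median_filter(img, size=31)   # smooth shape/curvature estimate
residual = img - background
features = np.abs(residual) > 5 * residual.std()  # candidate pits/cracks
```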

  16. High-resolution wave-theory-based ultrasound reflection imaging using the split-step fourier and globally optimized fourier finite-difference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
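
    A sketch of one extrapolation interval for a single temporal frequency may make the two phase-shift steps concrete. This is a textbook split-step Fourier step, not the patented implementation; the 1D lateral geometry and the clipping of evanescent components are simplifying assumptions.

    ```python
    import numpy as np

    def split_step_extrapolate(field, dx, dz, omega, c0, c_x):
        """One split-step Fourier continuation interval for one frequency.

        field : complex wavefield sampled along the lateral axis x
        c0    : reference sound speed; c_x : actual speed at each x sample
        """
        n = field.size
        kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        # Step 1: phase shift in the frequency-wavenumber domain for
        # propagation in the reference medium (evanescent parts clipped).
        kz_sq = (omega / c0) ** 2 - kx ** 2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        spec = np.fft.fft(field) * np.exp(1j * kz * dz)
        spec[kz_sq < 0.0] = 0.0
        field = np.fft.ifft(spec)
        # Step 2: phase shift in the frequency-space domain compensating for
        # the slowness perturbation of the heterogeneous medium.
        field *= np.exp(1j * omega * (1.0 / c_x - 1.0 / c0) * dz)
        return field
    ```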

  17. METHOD AND APPARATUS FOR MEASURING RADIATION

    DOEpatents

    Reeder, S.D.

    1962-04-17

    A chemical dosimeter for measuring the progress of a radiation-induced oxidation-reduction reaction is described. The dosimeter comprises a container filled with an aqueous chemical oxidation-reduction system which reacts quantitatively to the radiation. An anode of the group consisting of antimony and tungsten and a cathode of the group consisting of gold and platinum are inserted into the system. Means are provided to stir the system and a potential sensing device is connected across the anode and cathode to detect voltage changes. (AEC)

  18. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
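
    A toy illustration of the "get"/"set" accessors described above, which derive a missing attribute's value from related attributes on demand; the class and attributes are invented for illustration and are not drawn from the patent.

    ```python
    class PatientRecord:
        """Translation-library-style class: a "get" accessor derives a
        missing attribute from related ones instead of returning None."""

        def __init__(self, weight_kg=None, height_m=None, bmi=None):
            self._weight_kg = weight_kg
            self._height_m = height_m
            self._bmi = bmi

        def get_bmi(self):
            # Derive the value if it was not loaded from the source database.
            if self._bmi is None and self._weight_kg and self._height_m:
                self._bmi = self._weight_kg / self._height_m ** 2
            return self._bmi

        def set_bmi(self, value):
            self._bmi = value

    print(PatientRecord(weight_kg=70.0, height_m=1.75).get_bmi())  # derived
    ```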

  19. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
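
    A minimal sketch of CLV compensation in the spirit described: estimate the quiet-Sun profile as a robust radial median and divide it out to obtain a contrast image. The binning choices and the use of the median are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def clv_compensate(image, cx, cy, r_sun, nbins=100):
        """Remove the centre-to-limb variation from a full-disc image.

        The CLV is estimated as the median intensity in annular bins of
        normalized radius (the median is robust to plage/network pixels);
        each on-disc pixel is divided by the profile, giving a contrast
        image. Off-disc pixels remain NaN.
        """
        y, x = np.indices(image.shape)
        r = np.hypot(x - cx, y - cy) / r_sun
        contrast = np.full(image.shape, np.nan)
        edges = np.linspace(0.0, 1.0, nbins + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            ring = (r >= lo) & (r < hi)
            if ring.any():
                contrast[ring] = image[ring] / np.median(image[ring])
        return contrast
    ```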

  20. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
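
    A rough sketch of the first two stages (thickness layering and skeletonization) using SciPy and scikit-image; the distance-transform layering is a crude stand-in for the paper's layer separation, and the random-sampling segmentation step is not reproduced here.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.morphology import skeletonize

    def skeleton_points(binary_image, thickness_range):
        """Keep strokes whose local thickness falls in `thickness_range`
        (a crude stand-in for layer separation), then skeletonize.

        binary_image : 2D boolean array, True on ink pixels
        """
        # Local half-thickness from the distance transform of the ink pixels.
        dist = ndimage.distance_transform_edt(binary_image)
        lo, hi = thickness_range
        layer = binary_image & (dist * 2 >= lo) & (dist * 2 <= hi)
        return np.argwhere(skeletonize(layer))  # (row, col) skeleton pixels
    ```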

  1. Method of calibrating an interferometer and reducing its systematic noise

    NASA Technical Reports Server (NTRS)

    Hammer, Philip D. (Inventor)

    1997-01-01

    Methods of operation and data analysis for an interferometer so as to eliminate the errors contributed by non-responsive or unstable pixels, interpixel gain variations that drift over time, and spurious noise that would otherwise degrade the operation of the interferometer are disclosed. The methods provide for either online or post-processing calibration. The methods apply prescribed reversible transformations that exploit the physical properties of interferograms obtained from said interferometer to derive a calibration reference signal for subsequent treatment of said interferograms for interpixel gain variations. A self-consistent approach for treating bad pixels is incorporated into the methods.

  2. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    DOE PAGES

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; ...

    2017-01-03

    In this paper, we present a consistent implicit incompressible smoothed particle hydrodynamics (I²SPH) discretization of Navier–Stokes, Poisson–Boltzmann, and advection–diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I²SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. Lastly, the new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.

  3. Personality and Situation Predictors of Consistent Eating Patterns

    PubMed Central

    Vainik, Uku; Dubé, Laurette; Lu, Ji; Fellows, Lesley K.

    2015-01-01

    Introduction A consistent eating style might be beneficial to avoid overeating in a food-rich environment. Eating consistency entails maintaining a similar dietary pattern across different eating situations. This construct is relatively under-studied, but the available evidence suggests that eating consistency supports successful weight maintenance and decreases risk for metabolic syndrome and cardiovascular disease. Yet, personality and situation predictors of consistency have not been studied. Methods A community-based sample of 164 women completed various personality tests, and 139 of them also reported their eating behaviour 6 times/day over 10 observational days. We focused on observations with meals (breakfast, lunch, or dinner). The participants indicated if their momentary eating patterns were consistent with their own baseline eating patterns in terms of healthiness or size of the meal. Further, participants described various characteristics of each eating situation. Results Eating consistency was positively predicted by trait self-control. Eating consistency was undermined by eating in the evening, eating with others, eating away from home, having consumed alcohol and having undertaken physical exercise. Interactions emerged between personality traits and situations, including punishment sensitivity, restraint, physical activity and alcohol consumption. Conclusion Trait self-control and several eating situation variables were related to eating consistency. These findings provide a starting point for targeting interventions to improve consistency, suggesting that a focus on self-control skills, together with addressing contextual factors such as social situations and time of day, may be most promising. This work is a first step to provide people with the tools they need to maintain a consistently healthy lifestyle in a food-rich environment. PMID:26633707

  4. XENOPUS LAEVIS: A CULTURING AND REARING PROTOCOL

    EPA Science Inventory

    Xenopus laevis are used extensively here at MED-Duluth as a model for assessing development toxicity to xenobiotics. As a result, a culturing system has been developed that provides eggs and tadpoles of consistent high quality for use by researchers at the facility. The methods ...

  5. Developing Water Quality Critera for Suspended and Bedded Sediments-Illustrative Example Application.

    EPA Science Inventory

    The U. S. EPA's Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (SABS Framework) provides a consistent process, technical methods, and supporting materials to enable resource managers to develop ambient water quality criteria for one of the m...

  6. Improving FHWA's Ability to Assess Highway Infrastructure Health : National Meeting Report

    DOT National Transportation Integrated Search

    2011-12-08

    The FHWA in coordination with AASHTO conducted a study to define a consistent and reliable method to document infrastructure health with a focus on pavements and bridges on the Interstate System, and to develop a framework for tools that can provide ...

  7. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning

    PubMed Central

    Matsunaga, Yasuhiro

    2018-01-01

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137
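
    The starting point of such a pipeline, a maximum-likelihood Markov state model estimated from discretized trajectories, is compact to write down. A minimal sketch of that step only; the hidden-Markov refinement against FRET data is not attempted here.

    ```python
    import numpy as np

    def estimate_msm(dtrajs, n_states, lag=1):
        """Maximum-likelihood MSM from discrete trajectories.

        dtrajs : list of integer state sequences; lag : lag time in steps.
        Returns the row-stochastic transition matrix.
        """
        counts = np.zeros((n_states, n_states))
        for traj in dtrajs:
            for i, j in zip(traj[:-lag], traj[lag:]):
                counts[i, j] += 1.0
        # Row-normalize; empty rows become uniform to stay stochastic.
        rows = counts.sum(axis=1, keepdims=True)
        return np.where(rows > 0, counts / np.where(rows == 0, 1, rows),
                        1.0 / n_states)

    T = estimate_msm([[0, 0, 1, 2, 2, 1, 0], [2, 2, 1, 0, 0]], n_states=3)
    print(T)
    ```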

  8. Characterizing Esophageal Cancerous Cells at Different Stages Using the Dielectrophoretic Impedance Measurement Method in a Microchip.

    PubMed

    Wang, Hsiang-Chen; Nguyen, Ngoc-Viet; Lin, Rui-Yi; Jen, Chun-Ping

    2017-05-06

    Analysis of cancerous cells allows us to provide useful information for the early diagnosis of cancer and to monitor treatment progress. An approach based on electrical principles has recently become an attractive technique. This study presents a microdevice that utilizes a dielectrophoretic impedance measurement method for the identification of cancerous cells. The proposed biochip consists of circle-on-line microelectrodes that are patterned using a standard microfabrication process. A sample of various cell concentrations was introduced in an open-top microchamber. The target cells were collectively concentrated between the microelectrodes using dielectrophoresis manipulation, and their electrical impedance properties were also measured. Different stages of human esophageal squamous cell carcinoma lines could be distinguished. This result is consistent with findings using hyperspectral imaging technology. Moreover, Raman spectroscopy showed that the distinguishing characteristics change in response to the progression of cancer cell invasiveness. The device enables highly efficient cell collection and provides rapid, sensitive, and label-free electrical measurements of cancerous cells.

  9. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning.

    PubMed

    Matsunaga, Yasuhiro; Sugita, Yuji

    2018-05-03

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.

  10. Decision-Tree Program

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1994-01-01

    IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.

  11. Method for forming a thermocouple

    DOEpatents

    Metz, Hugh J.

    1979-01-01

    A method is provided for producing a fast-response, insulated-junction thermocouple having a uniform-diameter outer sheath in the region of the measuring junction. One step is added to the usual thermocouple fabrication process: the sheath is expanded following the insulation removal step. This makes it possible to swage the sheath back to the original diameter and compact the insulation to the desired high density in the final fabrication step.

  12. SEPARATION OF PLUTONYL IONS

    DOEpatents

    Connick, R.E.; McVey, Wm.H.

    1958-07-15

    A process is described for separating plutonyl ions from the acetate ions with which they are associated in certain carrier precipitation methods of concentrating plutonium. The method consists in adding alkaline earth metal ions and subsequently alkalizing the solution, causing formation of an alkaline earth plutonate precipitate. Barium hydroxide is used in a preferred embodiment since it provides the alkaline earth metal ion and alkalizes the solution in one step, forming insoluble barium plutonate.

  13. Use of Psychotherapy for Depression in Older Adults

    PubMed Central

    Wei, Wenhui; Sambamoorthi, Usha; Olfson, Mark; Walkup, James T.; Crystal, Stephen

    2010-01-01

    Objective The authors examine national patterns in psychotherapy for older adults with a diagnosis of depression and analyze correlates of psychotherapy use that is consistent with Agency for Health Care Policy and Research guidelines for duration of treatment. Method Linked Medicare claims and survey data from the 1992–1999 Medicare Current Beneficiary Survey were used. The data were merged with the Area Resource File to assess the effect of provider-supply influences on psychotherapy treatment. An episode-of-care framework approach was used to analyze psychotherapy use and treatment duration. Multiple logistic regression analysis was used to predict psychotherapy use and its consistency. Results The authors identified 2,025 episodes of depression treatment between 1992 and 1999. Overall, psychotherapy was used in 25% (N=474) of the episodes, with 68% of episodes with psychotherapy involving services received only from psychiatrists. (Percentages were weighted for the complex design of the Medicare Current Beneficiary Survey.) Use of psychotherapy was correlated with younger patient age, higher patient educational attainment, and availability of local psychotherapy providers. Among episodes in which psychotherapy was used, only a minority (33%, N=141) involved patients who remained in consistent treatment, defined as extending for at least two-thirds of the episode of depression. Availability of local providers was positively correlated with consistent psychotherapy use. In analyses with adjustment for provider-related factors, patients’ socioeconomic and demographic characteristics did not affect the odds of receiving consistent psychotherapy. Conclusions Use of psychotherapy remains uncommon among depressed older adults despite its widely acknowledged efficacy. Some of the disparities in psychotherapy utilization suggest supply-side barriers. Increasing the geographic availability of mental health care providers may be one way of increasing access to psychotherapy for depressed older adults. PMID:15800143

  14. Review and Extension of CO2-Based Methods to Determine Ventilation Rates with Application to School Classrooms

    PubMed Central

    Batterman, Stuart

    2017-01-01

    The ventilation rate (VR) is a key parameter affecting indoor environmental quality (IEQ) and the energy consumption of buildings. This paper reviews the use of CO2 as a “natural” tracer gas for estimating VRs, focusing on applications in school classrooms. It provides details and guidance for the steady-state, build-up, decay and transient mass balance methods. An extension to the build-up method and an analysis of the post-exercise recovery period that can increase CO2 generation rates are presented. Measurements in four mechanically-ventilated school buildings demonstrate the methods and highlight issues affecting their applicability. VRs during the school day fell below recommended minimum levels, and VRs during evening and early morning were on the order of 0.1 h⁻¹, reflecting shutdown of the ventilation systems. The transient mass balance method was the most flexible and advantageous method given the low air change rates and dynamic occupancy patterns observed in the classrooms. While the extension to the build-up method improved stability and consistency, the accuracy of this and the steady-state method may be limited. Decay-based methods did not reflect the VR during the school day due to heating, ventilation and air conditioning (HVAC) system shutdown. Since the number of occupants in classrooms changes over the day, the VR expressed on a per person basis (e.g., L·s⁻¹·person⁻¹) depends on the occupancy metric. If occupancy measurements can be obtained, then the transient mass balance method likely will provide the most consistent and accurate results among the CO2-based methods. Improved VR measurements can benefit many applications, including research examining the linkage between ventilation and health. PMID:28165398
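
    For the simplest of the reviewed estimators the arithmetic fits in a few lines. A sketch with invented numbers; the per-person generation rate and concentrations are assumptions, and the paper's preferred transient mass balance method is not reproduced here.

    ```python
    import math

    # Steady-state estimate: at equilibrium CO2 generation balances removal,
    # so Q = G / (C_in - C_out), with concentrations as volume fractions.
    n_occupants = 25
    G_per_person = 0.0043          # CO2 generation rate, L/s per person (assumed)
    C_in, C_out = 1400e-6, 420e-6  # indoor and outdoor CO2 (ppm -> fraction)

    Q = n_occupants * G_per_person / (C_in - C_out)  # outdoor air supply, L/s
    print(f"{Q / n_occupants:.1f} L/s per person")

    # Decay estimate: after occupancy ends, excess CO2 decays exponentially,
    # giving the air change rate a = ln(dC0 / dCt) / t (t in hours -> 1/h).
    dC0, dCt, t_hours = 1000e-6, 600e-6, 2.0   # excess CO2 above outdoors
    print(f"{math.log(dC0 / dCt) / t_hours:.2f} air changes per hour")
    ```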

  15. Review and Extension of CO₂-Based Methods to Determine Ventilation Rates with Application to School Classrooms.

    PubMed

    Batterman, Stuart

    2017-02-04

    The ventilation rate (VR) is a key parameter affecting indoor environmental quality (IEQ) and the energy consumption of buildings. This paper reviews the use of CO₂ as a "natural" tracer gas for estimating VRs, focusing on applications in school classrooms. It provides details and guidance for the steady-state, build-up, decay and transient mass balance methods. An extension to the build-up method and an analysis of the post-exercise recovery period that can increase CO₂ generation rates are presented. Measurements in four mechanically-ventilated school buildings demonstrate the methods and highlight issues affecting their applicability. VRs during the school day fell below recommended minimum levels, and VRs during evening and early morning were on the order of 0.1 h⁻¹, reflecting shutdown of the ventilation systems. The transient mass balance method was the most flexible and advantageous method given the low air change rates and dynamic occupancy patterns observed in the classrooms. While the extension to the build-up method improved stability and consistency, the accuracy of this and the steady-state method may be limited. Decay-based methods did not reflect the VR during the school day due to heating, ventilation and air conditioning (HVAC) system shutdown. Since the number of occupants in classrooms changes over the day, the VR expressed on a per person basis (e.g., L·s⁻¹·person⁻¹) depends on the occupancy metric. If occupancy measurements can be obtained, then the transient mass balance method likely will provide the most consistent and accurate results among the CO₂-based methods. Improved VR measurements can benefit many applications, including research examining the linkage between ventilation and health.

  16. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    PubMed

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives. A new algorithm for solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
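
    For orientation, the classical (crisp) AHP consistency check that these interval notions generalize can be computed directly from the principal eigenvalue. A sketch of Saaty's consistency ratio, which is standard background rather than the paper's approximation-consistency test.

    ```python
    import numpy as np

    def consistency_ratio(A):
        """Saaty consistency ratio for a crisp multiplicative reciprocal
        pairwise-comparison matrix: CI = (lambda_max - n)/(n - 1), CR = CI/RI."""
        n = A.shape[0]
        lam_max = np.linalg.eigvals(A).real.max()  # Perron root is real
        ci = (lam_max - n) / (n - 1)
        ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,  # Saaty's random indices
              7: 1.32, 8: 1.41, 9: 1.45}[n]
        return ci / ri

    A = np.array([[1.0,   3.0, 5.0],
                  [1/3.0, 1.0, 2.0],
                  [1/5.0, 1/2.0, 1.0]])
    print(consistency_ratio(A))  # < 0.1 is conventionally "acceptably consistent"
    ```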

  17. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    PubMed Central

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230

  18. Common methods for fecal sample storage in field studies yield consistent signatures of individual identity in microbiome sequencing data.

    PubMed

    Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny

    2016-08-16

    Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.

  19. The Function Biomedical Informatics Research Network Data Repository

    PubMed Central

    Keator, David B.; van Erp, Theo G.M.; Turner, Jessica A.; Glover, Gary H.; Mueller, Bryon A.; Liu, Thomas T.; Voyvodic, James T.; Rasmussen, Jerod; Calhoun, Vince D.; Lee, Hyo Jong; Toga, Arthur W.; McEwen, Sarah; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Bockholt, H. Jeremy; Gadde, Syam; Preda, Adrian; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.

    2015-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical datasets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 dataset consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 Tesla scanners. The FBIRN Phase 2 and Phase 3 datasets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN’s multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. PMID:26364863

  20. Ecological Systems Theory in "School Psychology Review"

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Warmbold-Brann, Kristy; Zaslofsky, Anne F.

    2015-01-01

    Ecological systems theory (EST) has been suggested as a framework to provide effective school psychology services, but previous reviews of research found questionable consistency between methods and the principles of EST. The current article reviewed 349 articles published in "School Psychology Review" (SPR) between 2006 and 2015 and…

  1. Solvent effects in time-dependent self-consistent field methods. II. Variational formulations and analytical gradients

    DOE PAGES

    Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.

    2015-08-06

    This study describes variational energy expressions and analytical excited state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solvent models are examined. Enforcing a variational ground state energy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited state energy expressions are then provided for the linear response and vertical excitation models and analytical gradients are formulated. Using semiempirical model chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performing microcanonical excited state molecular dynamics with p-nitroaniline.

  2. Consistent use of the standard model effective potential.

    PubMed

    Andreassen, Anders; Frost, William; Schwartz, Matthew D

    2014-12-12

    The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum when computed by the traditional method is strongly dependent on the gauge parameter. It moreover depends on the scale where the potential is calculated. We provide a consistent method for determining absolute stability independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds m_h^pole > (129.4 ± 2.3) GeV and m_t^pole < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.

  3. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of an SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
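
    The Random Walk Metropolis algorithm mentioned above is short enough to state in full. A generic sketch for an arbitrary log-posterior, not the authors' SMA-specific setup.

    ```python
    import numpy as np

    def random_walk_metropolis(log_post, theta0, n_samples, step=0.1, seed=0):
        """Random Walk Metropolis: Gaussian proposals accepted with
        probability min(1, posterior ratio)."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_samples, theta.size))
        for i in range(n_samples):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    # Example: sample a 2D standard normal "posterior".
    chain = random_walk_metropolis(lambda t: -0.5 * np.sum(t**2),
                                   theta0=[0.0, 0.0], n_samples=5000)
    print(chain.mean(axis=0), chain.std(axis=0))
    ```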

  4. Augmented reality & gesture-based architecture in games for the elderly.

    PubMed

    McCallum, Simon; Boletsis, Costas

    2013-01-01

    Serious games for health and, more specifically, for elderly people have developed rapidly in recent years. The recent popularization of novel interaction methods of consoles, such as the Nintendo Wii and Microsoft Kinect, has provided an opportunity for the elderly to engage in computer and video games. These interaction methods, however, still present various challenges for elderly users. To address these challenges, we propose an architecture consisting of Augmented Reality (as an output mechanism) combined with gesture-based devices (as an input method). The intention of this work is to provide a theoretical justification for using these technologies and to integrate them into an architecture, acting as a basis for potentially creating suitable interaction techniques for elderly players.

  5. Super-nodal methods for space-time kinetics

    NASA Astrophysics Data System (ADS)

    Mertyurek, Ugur

    The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis based on the relative residual in the 3-D few group diffusion equation at the fine mesh level is also introduced in this work.

  6. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Background: Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, formulating at the food level would help in implementation and in ensuring the feasibility of the suggested recommendations. Objective: To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. Methods: The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. Results: A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was infeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. Conclusions: The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
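
    A toy version of such a formulation, with the consistency requirement expressed as one extra linear constraint, can be set up with scipy.optimize.linprog. All ingredients, nutrient values, costs, and the starch limit below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy data: cost and nutrient content per 100 g of three ingredients
    # (maize flour, beans, zinc premix); every number here is invented.
    cost = np.array([0.05, 0.08, 0.30])
    A = np.array([[360.0, 340.0,   0.0],    # energy, kcal
                  [  9.0,  22.0,   0.0],    # protein, g
                  [  2.0,   3.0, 400.0]])   # zinc, mg
    needs = np.array([200.0, 5.5, 4.0])     # per-serving requirements

    # Consistency constraint (illustrative): cap the starch-rich flour so the
    # cooked porridge is not too thick, say at most 35 g maize per serving.
    res = linprog(
        c=cost,                                   # minimize ingredient cost
        A_ub=np.vstack([-A, [1.0, 0.0, 0.0]]),    # nutrients >= needs; maize cap
        b_ub=np.concatenate([-needs, [0.35]]),
        bounds=[(0, None)] * 3,                   # amounts in units of 100 g
    )
    print(res.x, res.fun)
    ```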

  7. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the associated sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  8. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
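
    A minimal sketch of the maximum-likelihood part with SciPy, on invented yearly maxima; the actual −Dst series and the bootstrap machinery are not reproduced.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical yearly maxima of -Dst (nT); the real values span 1957-2012.
    maxima = np.array([98., 120., 155., 89., 230., 310., 140., 180., 95., 260.])

    # Maximum-likelihood log-normal fit, location pinned at 0 as is usual
    # for strictly positive intensities.
    shape, loc, scale = stats.lognorm.fit(maxima, floc=0.0)

    # Probability that a given year's maximum exceeds the Carrington level;
    # multiplying by 100 gives a rough occurrence rate per century.
    p_exceed = stats.lognorm.sf(850.0, shape, loc, scale)
    print(100.0 * p_exceed, "events per century (rough)")
    ```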

  9. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756

  10. A high-order relativistic two-fluid electrodynamic scheme with consistent reconstruction of electromagnetic fields and a multidimensional Riemann solver for electromagnetism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsara, Dinshaw S., E-mail: dbalsara@nd.edu; Amano, Takanobu, E-mail: amano@eps.s.u-tokyo.ac.jp; Garain, Sudip, E-mail: sgarain@nd.edu

    In various astrophysics settings it is common to have a two-fluid relativistic plasma that interacts with the electromagnetic field. While it is common to ignore the displacement current in the ideal, classical magnetohydrodynamic limit, when the flows become relativistic this approximation is less than absolutely well-justified. In such a situation, it is more natural to consider a positively charged fluid made up of positrons or protons interacting with a negatively charged fluid made up of electrons. The two fluids interact collectively with the full set of Maxwell's equations. As a result, a solution strategy for that coupled system of equations is sought and found here. Our strategy extends to higher orders, providing increasing accuracy. The primary variables in the Maxwell solver are taken to be the facially-collocated components of the electric and magnetic fields. Consistent with such a collocation, three important innovations are reported here. The first two pertain to the Maxwell solver. In our first innovation, the magnetic field within each zone is reconstructed in a divergence-free fashion while the electric field within each zone is reconstructed in a form that is consistent with Gauss' law. In our second innovation, a multidimensionally upwinded strategy is presented which ensures that the magnetic field can be updated via a discrete interpretation of Faraday's law and the electric field can be updated via a discrete interpretation of the generalized Ampere's law. This multidimensional upwinding is achieved via a multidimensional Riemann solver. The multidimensional Riemann solver automatically provides edge-centered electric field components for the Stokes law-based update of the magnetic field. It also provides edge-centered magnetic field components for the Stokes law-based update of the electric field. The update strategy ensures that the electric field is always consistent with Gauss' law and the magnetic field is always divergence-free. This collocation also ensures that electromagnetic radiation that is propagating in a vacuum has both electric and magnetic fields that are exactly divergence-free. Coupled relativistic fluid dynamic equations are solved for the positively and negatively charged fluids. The fluids' numerical fluxes also provide a self-consistent current density for the update of the electric field. Our reconstruction strategy ensures that fluid velocities always remain sub-luminal. Our third innovation consists of an efficient design for several popular IMEX schemes so that they provide strong coupling between the finite-volume-based fluid solver and the electromagnetic fields at high order. This innovation makes it possible to efficiently utilize high order IMEX time update methods for stiff source terms in the update of high order finite-volume methods for hyperbolic conservation laws. We also show that this very general innovation should extend seamlessly to Runge–Kutta discontinuous Galerkin methods. The IMEX schemes enable us to use large CFL numbers even in the presence of stiff source terms. Several accuracy analyses are presented showing that our method meets its design accuracy in the MHD limit as well as in the limit of electromagnetic wave propagation. Several stringent test problems are also presented. We also present a relativistic version of the GEM problem, which shows that our algorithm can successfully adapt to challenging problems in high energy astrophysics.

  11. A high-order relativistic two-fluid electrodynamic scheme with consistent reconstruction of electromagnetic fields and a multidimensional Riemann solver for electromagnetism

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Amano, Takanobu; Garain, Sudip; Kim, Jinho

    2016-08-01

    In various astrophysics settings it is common to have a two-fluid relativistic plasma that interacts with the electromagnetic field. While it is common to ignore the displacement current in the ideal, classical magnetohydrodynamic limit, when the flows become relativistic this approximation is less than absolutely well-justified. In such a situation, it is more natural to consider a positively charged fluid made up of positrons or protons interacting with a negatively charged fluid made up of electrons. The two fluids interact collectively with the full set of Maxwell's equations. As a result, a solution strategy for that coupled system of equations is sought and found here. Our strategy extends to higher orders, providing increasing accuracy. The primary variables in the Maxwell solver are taken to be the facially-collocated components of the electric and magnetic fields. Consistent with such a collocation, three important innovations are reported here. The first two pertain to the Maxwell solver. In our first innovation, the magnetic field within each zone is reconstructed in a divergence-free fashion while the electric field within each zone is reconstructed in a form that is consistent with Gauss' law. In our second innovation, a multidimensionally upwinded strategy is presented which ensures that the magnetic field can be updated via a discrete interpretation of Faraday's law and the electric field can be updated via a discrete interpretation of the generalized Ampere's law. This multidimensional upwinding is achieved via a multidimensional Riemann solver. The multidimensional Riemann solver automatically provides edge-centered electric field components for the Stokes law-based update of the magnetic field. It also provides edge-centered magnetic field components for the Stokes law-based update of the electric field. The update strategy ensures that the electric field is always consistent with Gauss' law and the magnetic field is always divergence-free. This collocation also ensures that electromagnetic radiation that is propagating in a vacuum has both electric and magnetic fields that are exactly divergence-free. Coupled relativistic fluid dynamic equations are solved for the positively and negatively charged fluids. The fluids' numerical fluxes also provide a self-consistent current density for the update of the electric field. Our reconstruction strategy ensures that fluid velocities always remain sub-luminal. Our third innovation consists of an efficient design for several popular IMEX schemes so that they provide strong coupling between the finite-volume-based fluid solver and the electromagnetic fields at high order. This innovation makes it possible to efficiently utilize high order IMEX time update methods for stiff source terms in the update of high order finite-volume methods for hyperbolic conservation laws. We also show that this very general innovation should extend seamlessly to Runge-Kutta discontinuous Galerkin methods. The IMEX schemes enable us to use large CFL numbers even in the presence of stiff source terms. Several accuracy analyses are presented showing that our method meets its design accuracy in the MHD limit as well as in the limit of electromagnetic wave propagation. Several stringent test problems are also presented. We also present a relativistic version of the GEM problem, which shows that our algorithm can successfully adapt to challenging problems in high energy astrophysics.

  12. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
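
    One simple way to see what "thermodynamically feasible" means in practice: Wegscheider-type cycle conditions are linear in the logarithms of the rate constants, so infeasible estimates can be projected onto the feasible set. A sketch of that minimal-change projection, offered as a simplified stand-in for the paper's constrained-optimization TCMC method, not a reproduction of it.

    ```python
    import numpy as np

    def project_to_consistency(log_k, A):
        """Project fitted log rate constants onto the thermodynamically
        feasible set {x : A x = 0}, minimizing the Euclidean change.

        Each row of A encodes one Wegscheider cycle condition: forward
        log-constants enter with +1, reverse ones with -1 around the cycle.
        """
        x0 = np.asarray(log_k, dtype=float)
        correction = A.T @ np.linalg.solve(A @ A.T, A @ x0)
        return x0 - correction

    # Toy 3-species cycle A<->B<->C<->A with rate constants k1..k6:
    # consistency requires k1*k3*k5 == k2*k4*k6, one linear row in logs.
    A = np.array([[1.0, -1.0, 1.0, -1.0, 1.0, -1.0]])
    log_k = np.log([2.0, 1.0, 3.0, 1.5, 0.8, 0.9])
    print(np.exp(project_to_consistency(log_k, A)))
    ```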

  13. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    NASA Astrophysics Data System (ADS)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy, and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It consists of a discrete particle swarm optimization and a genetic algorithm, whose combination provides both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
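
    A compact sketch of the hybrid idea: standard PSO velocity and position updates for local search, plus a GA-style re-seeding of the worst particles for global search. The parameter values and the mutation scheme are generic choices, not the authors'.

    ```python
    import numpy as np

    def genetic_pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5,
                    c2=1.5, mutate_p=0.1, seed=0):
        """Minimize f over box `bounds` with a PSO/GA hybrid."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[pbest_f.argmin()].copy()
            # GA step: re-seed (mutate) the worst particles at random positions.
            worst = np.argsort(fx)[-max(1, int(mutate_p * n_particles)):]
            x[worst] = rng.uniform(lo, hi, (worst.size, lo.size))
            v[worst] = 0.0
        return g

    print(genetic_pso(lambda z: np.sum(z**2), bounds=[(-5, 5)] * 4))
    ```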

  14. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently conducted GenIMS study.
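
    A simplified sketch of multiple imputation for a left-censored covariate: fit a normal distribution by censored-data maximum likelihood, then draw censored entries from the truncated fit. This illustrates the general mechanism only, not the authors' AFT-specific procedure; entries recorded below the detection limit are assumed to be flagged by their value.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    def impute_below_dl(x, detection_limit, n_imputations=10, seed=0):
        """Multiply impute a biomarker left-censored at a known detection limit."""
        rng = np.random.default_rng(seed)
        censored = x < detection_limit   # assumed flagging convention
        obs = x[~censored]

        def neg_loglik(p):
            mu, log_sd = p
            sd = np.exp(log_sd)
            # Observed values contribute densities; censored values
            # contribute P(X < detection limit).
            ll = stats.norm.logpdf(obs, mu, sd).sum()
            ll += censored.sum() * stats.norm.logcdf(detection_limit, mu, sd)
            return -ll

        mu, log_sd = minimize(neg_loglik, [obs.mean(), np.log(obs.std())]).x
        sd = np.exp(log_sd)
        b = (detection_limit - mu) / sd  # standardized upper truncation point
        imputations = []
        for _ in range(n_imputations):
            xi = x.copy()
            xi[censored] = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                                               size=censored.sum(),
                                               random_state=rng)
            imputations.append(xi)
        return imputations  # fit the model to each; pool via Rubin's rules
    ```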

  15. Maintenance Training Equipment: Design Specification Based on Instructional System Development. Revision

    DTIC Science & Technology

    1984-12-01

    model provides a method for communicating a specific training equipment design to the procurement office after ISD analysis has established a... maintenance trainer has been identified. The model provides a method by which a training equipment design can be communicated to the System Project Office... ensure ease of development of procurement specifications and consistency between different documented designs. A completed application of this model

  16. Method and Characterization of Pyroelectric Coefficients for Determining Material Figures of Merit for Infrared (IR) Detectors

    DTIC Science & Technology

    2013-12-01

    and the signal is read as a photocurrent or at a photovoltaic p-n junction. These detectors can provide high sensitivity and fast refresh rates and... Alternative methods can be used to modulate the sample temperature directly; for example, by using modern Peltier devices and thermoelectric ...commercially-available hardware. The setup consists of three main components: (1) a temperature-regulated thermoelectric stage; (2) a high-sensitivity

  17. Fluid dynamic modeling of nano-thermite reactions

    NASA Astrophysics Data System (ADS)

    Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; Horie, Yasuyuki

    2014-03-01

    This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method provides for modeling reflections of the shock waves from the reactor walls that generates pressure-time fluctuations. The results of gas pressure prediction are consistent with the experimental evidence and estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding and flexible than self-similar solution, in particular it allows study of a shock wave in its early stage of reaction and allows the investigation of "slower" reactions, which may produce weaker shock waves. Moreover, numerical results indicate that peak pressure is not very sensitive to initial density and reaction time, providing that all the material reacts well before the shock wave arrives at the end of the reactor.
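
    The pressure-time fluctuations from wall reflections and the role of artificial viscosity can be illustrated with a deliberately simple 1-D Euler solver (not the authors' code). The Lax-Friedrichs flux used here supplies the numerical dissipation that smooths the shock discontinuity, and the geometry and initial values are invented.

```python
import numpy as np

def euler_reflect(nx=400, L=0.1, tmax=2e-4, gamma=1.4):
    dx = L / nx
    xc = np.arange(nx) * dx
    rho = np.where(xc < 0.01, 10.0, 1.2)   # dense reacted zone at the left end
    p = np.where(xc < 0.01, 5e6, 1e5)      # high-pressure source region
    u = np.zeros(nx)
    E = p / (gamma - 1) + 0.5 * rho * u**2
    wall_p, t = [], 0.0
    while t < tmax:
        c = np.sqrt(gamma * p / rho)
        dt = 0.4 * dx / np.max(np.abs(u) + c)          # CFL-limited time step
        U = np.array([rho, rho * u, E])
        F = np.array([rho * u, rho * u**2 + p, (E + p) * u])
        # Ghost cells enforce reflective (solid reactor wall) boundaries:
        # mirrored state with negated velocity, hence negated mass/energy flux.
        Ug = np.pad(U, ((0, 0), (1, 1)), mode="edge")
        Fg = np.pad(F, ((0, 0), (1, 1)), mode="edge")
        Ug[1, 0] *= -1; Ug[1, -1] *= -1
        Fg[0, 0] *= -1; Fg[2, 0] *= -1
        Fg[0, -1] *= -1; Fg[2, -1] *= -1
        # Lax-Friedrichs flux: the (dx/dt) term acts as artificial viscosity.
        Fh = 0.5 * (Fg[:, :-1] + Fg[:, 1:]) - 0.5 * dx / dt * (Ug[:, 1:] - Ug[:, :-1])
        U = U - dt / dx * (Fh[:, 1:] - Fh[:, :-1])
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        wall_p.append((t, p[-1]))                      # pressure trace at far wall
        t += dt
    return wall_p

print(euler_reflect()[-1])
```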

  18. Fluid dynamic modeling of nano-thermite reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martirosyan, Karen S., E-mail: karen.martirosyan@utb.edu; Zyskin, Maxim; Jenkins, Charles M.

    2014-03-14

    This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method provides for modeling reflections of the shock waves from the reactor walls that generates pressure-time fluctuations. The results of gas pressure prediction are consistent with the experimental evidence and estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding and flexible than self-similar solution, in particular it allows study of a shock wave in its early stage of reaction and allows the investigation of “slower” reactions, which may produce weaker shock waves. Moreover, numerical results indicate that peak pressure is not very sensitive to initial density and reaction time, providing that all the material reacts well before the shock wave arrives at the end of the reactor.

  19. A method for the geometric and densitometric standardization of intraoral radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duckworth, J.E.; Judy, P.F.; Goodson, J.M.

    1983-07-01

    The interpretation of dental radiographs for the diagnosis of periodontal disease conditions poses several difficulties. These include the inability to adequately reproduce the projection geometry and optical density of the exposures. In order to improve the ability to extract accurate quantitative information from a radiographic survey of periodontal status, a method was developed which provided for consistent reproduction of both geometric and densitometric exposure parameters. This technique employed vertical bitewing projections in holders customized to individual segments of the dentition. A copper stepwedge was designed to provide densitometric standardization, and wire markers were included to permit measurement of angular variation. In a series of 53 paired radiographs, measurement of alveolar crest heights was found to be reproducible within approximately 0.1 mm. This method provided a full mouth radiographic survey using seven films, each complete with internal standards suitable for computer-based image processing.

  20. A Structured Peer-Mentoring Method for Physical Activity Behavior Change Among Adolescents.

    PubMed

    Smith, Laureen H; Petosa, Rick L

    2016-10-01

    Despite national guidelines for regular physical activity, most adolescents are not physically active. Schools serve an estimated 60 million youth and provide an educational environment to meet the current physical activity guidelines. The obesity epidemic and chronic disease comorbidities associated with physical inactivity are not likely to be reversed without a strong contribution from local schools. This article describes how a structured peer-mentoring method provides a feasible, flexible, and tailored means to meet the current guidelines for best practice in a school setting. Structured peer mentoring using trained high school mentors to support behavior change in younger peers is an innovative method of meeting the School Health Guidelines to Promote Healthy Eating and Physical Activity. Through structured peer mentoring, adolescents are provided consistent social support in a caring and personalized manner. This support builds skills and competencies enhancing self-efficacy to sustain a lifetime of physical activity behavior. © The Author(s) 2016.

  1. A Structured Peer-Mentoring Method for Physical Activity Behavior Change Among Adolescents

    PubMed Central

    Smith, Laureen H.; Petosa, Rick L.

    2016-01-01

    Despite national guidelines for regular physical activity, most adolescents are not physically active. Schools serve an estimated 60 million youth and provide an educational environment to meet the current physical activity guidelines. The obesity epidemic and chronic disease comorbidities associated with physical inactivity are not likely to be reversed without a strong contribution from local schools. This article describes how a structured peer-mentoring method provides a feasible, flexible, and tailored means to meet the current guidelines for best practice in a school setting. Structured peer mentoring using trained high school mentors to support behavior change in younger peers is an innovative method of meeting the School Health Guidelines to Promote Healthy Eating and Physical Activity. Through structured peer mentoring, adolescents are provided consistent social support in a caring and personalized manner. This support builds skills and competencies enhancing self-efficacy to sustain a lifetime of physical activity behavior. PMID:27257081

  2. Assessment of systems for paying health care providers in Vietnam: implications for equity, efficiency and expanding effective health coverage.

    PubMed

    Phuong, Nguyen Khanh; Oanh, Tran Thi Mai; Phuong, Hoang Thi; Tien, Tran Van; Cashin, Cheryl

    2015-01-01

    Provider payment arrangements are currently a core concern for Vietnam's health sector and a key lever for expanding effective coverage and improving the efficiency and equity of the health system. This study describes how different provider payment systems are designed and implemented in practice across a sample of provinces and districts in Vietnam. Key informant interviews were conducted with over 100 health policy-makers, purchasers and providers using a structured interview guide. The results of the different payment methods were scored by respondents and assessed against a set of health system performance criteria. Overall, the public health insurance agency, Vietnam Social Security (VSS), is focused on managing expenditures through a complicated set of reimbursement policies and caps, but the incentives for providers are unclear and do not consistently support Vietnam's health system objectives. The results of this study are being used by the Ministry of Health and VSS to reform the provider payment systems to be more consistent with international definitions and good practices and to better support Vietnam's health system objectives.

  3. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N²) while full hierarchy searches can be as expensive as O(N³), making analysis highly inefficient for multiple data sets with N≳10⁴. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
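
    The core test underlying such searches is easy to state. The brute-force O(N²) reference below checks whether the two-body orbital energy of each pair is negative; it is a hedged sketch for illustration, not the companion tree algorithm itself.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def bound_pairs(m, r, v):
    """Return index pairs (i, j) whose two-body orbital energy is negative.

    m: (N,) masses; r, v: (N, 3) positions and velocities. This is the O(N^2)
    brute-force reference the paper compares against, not the tree method.
    """
    pairs = []
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            mu = m[i] * m[j] / (m[i] + m[j])           # reduced mass
            dv2 = np.sum((v[i] - v[j]) ** 2)
            dr = np.linalg.norm(r[i] - r[j])
            e = 0.5 * mu * dv2 - G * m[i] * m[j] / dr  # relative orbital energy
            if e < 0:
                pairs.append((i, j))
    return pairs
```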

  4. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it so that the motion dynamics are respected. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% in cost function evaluation.
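
    A hedged, single-joint sketch of the interpolate-then-correct idea follows; the function names, moment arms, and the least-squares "offline" stage are invented stand-ins for the paper's database and dynamics.

```python
import numpy as np
from scipy.interpolate import interp1d

moment_arms = np.array([0.03, 0.025, 0.04])        # one joint, three muscles (m)

def optimal_forces(torque):
    # Offline stage stand-in: minimum-norm force distribution for one torque.
    return moment_arms * torque / np.sum(moment_arms**2)

grid = np.linspace(-50.0, 50.0, 11)                # joint torques (N m)
table = np.stack([optimal_forces(t) for t in grid])
interp = interp1d(grid, table, axis=0)             # the precomputed database

def music_like(torque):
    f = interp(torque)                             # fast database interpolation
    achieved = moment_arms @ f
    return f * (torque / achieved)                 # correction: match the dynamics

print(music_like(12.3))
```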

  5. 48 CFR 16.305 - Cost-plus-award-fee contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES TYPES OF CONTRACTS Cost-Reimbursement Contracts 16.305 Cost-plus-award... consisting of (a) a base amount (which may be zero) fixed at inception of the contract and (b) an award amount, based upon a judgmental evaluation by the Government, sufficient to provide motivation for...

  6. A Hermeneutic Phenomenological Study of the Experience of Teacher-Learners

    ERIC Educational Resources Information Center

    Bechtel, Patricia L.

    2011-01-01

    Teachers who choose to participate in professional development to acquire new instructional methods or enhance their content knowledge are teacher-learners. For this study, data consisted of the reflective writings of 19 secondary mathematics and science teacher-learners in an 11-month professional development project. These writings provided a…

  7. A MATLAB-Aided Method for Teaching Calculus-Based Business Mathematics

    ERIC Educational Resources Information Center

    Liang, Jiajuan; Pan, William S. Y.

    2009-01-01

    MATLAB is a powerful package for numerical computation. MATLAB contains a rich pool of mathematical functions and provides flexible plotting functions for illustrating mathematical solutions. The course of calculus-based business mathematics consists of two major topics: 1) derivative and its applications in business; and 2) integration and its…

  8. Essays on Policy Evaluation with Endogenous Adoption

    ERIC Educational Resources Information Center

    Gentile, Elisabetta

    2011-01-01

    Over the last decade, experimental and quasi-experimental methods have been favored by researchers in empirical economics, as they provide unbiased causal estimates. However, when implementing a program, it is often not possible to randomly assign subjects to treatment, leading to a possible endogeneity bias. This dissertation consists of two…

  9. Is School Funding Fair? A National Report Card

    ERIC Educational Resources Information Center

    Baker, Bruce D.; Sciarra, David G.; Farrie, Danielle

    2010-01-01

    Building a more accurate, reliable and consistent method of analyzing how states fund public education starts with a critical question: What is fair school funding? In this report, "fair" school funding is defined as a state finance system that ensures equal educational opportunity by providing a sufficient level of funding distributed…

  10. How Integration Can Benefit Physical Education

    ERIC Educational Resources Information Center

    Wilson-Parish, Nichelle; Parish, Anthony

    2016-01-01

    One method for physical educators to increase their contact hours with their students is curricular integration, which consists of combining two or more subject areas with the goal of fostering enhanced learning in each subject area. This article provides an example of a possible integrated lesson plan involving physical education and art.

  11. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate…

  12. Improving Program Performance through Management Information. A Workbook.

    ERIC Educational Resources Information Center

    Bienia, Nancy

    Designed specifically for state and local managers and supervisors who plan, direct, and operate child support enforcement programs, this workbook provides a four-part, step-by-step process for identifying needed information and methods of using the information to operate an effective program. The process consists of: (1) determining what…

  13. Consistent initial conditions for the Saint-Venant equations in river network modeling

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Wei; Liu, Frank; Hodges, Ben R.

    2017-09-01

    Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
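
    A hedged sketch of the SSM idea: accumulate runoff down the network graph for initial flows, then invert a steady uniform-flow (Manning) relation for depths. The traversal uses the networkx library; the channel properties and the wide-rectangular simplification are invented for illustration.

```python
import math
import networkx as nx

def initial_conditions(network, runoff, n_manning=0.035, slope=1e-4, width=10.0):
    """network: DiGraph with edges pointing downstream; runoff in m^3/s per reach."""
    Q = {}
    for node in nx.topological_sort(network):      # upstream-to-downstream order
        Q[node] = runoff.get(node, 0.0) + sum(Q[u] for u in network.predecessors(node))
    # Steady uniform flow, wide rectangular channel: Q = (1/n) w h^(5/3) sqrt(S),
    # so the consistent initial depth is h = (Q n / (w sqrt(S)))^(3/5).
    h = {k: (q * n_manning / (width * math.sqrt(slope))) ** 0.6 for k, q in Q.items()}
    return Q, h

g = nx.DiGraph([("trib1", "main"), ("trib2", "main"), ("main", "outlet")])
Q, h = initial_conditions(g, {"trib1": 2.0, "trib2": 3.0, "main": 1.0})
print(Q["outlet"], round(h["main"], 3))            # 6.0 m^3/s at the outlet
```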

  14. Electrolyte treatment for aluminum reduction

    DOEpatents

    Brown, Craig W.; Brooks, Richard J.; Frizzle, Patrick B.; Juric, Drago D.

    2002-01-01

    A method of treating an electrolyte for use in the electrolytic reduction of alumina to aluminum employing an anode and a cathode, the alumina dissolved in the electrolyte, the treating improving wetting of the cathode with molten aluminum during electrolysis. The method comprises the steps of providing a molten electrolyte comprised of AlF3 and at least one salt selected from the group consisting of NaF, KF and LiF, and treating the electrolyte by providing therein 0.004 to 0.2 wt. % of a transition metal or transition metal compound for improved wettability of the cathode with molten aluminum during subsequent electrolysis to reduce alumina to aluminum.

  15. Energetic composites and method of providing chemical energy

    DOEpatents

    Danen, Wayne C.; Martin, Joe A.

    1997-01-01

    A method for providing chemical energy and energetic compositions of matter consisting of thin layers of substances which will exothermically react with one another. The layers of reactive substances are separated by thin layers of a buffer material which prevents the reactions from taking place until the desired time. The reactions are triggered by an external agent, such as mechanical stress or an electric spark. The compositions are known as metastable interstitial composites (MICs). This class of compositions includes materials which have not previously been capable of use as energetic materials. The speed and products of the reactions can be varied to suit the application.

  16. Energetic composites and method of providing chemical energy

    DOEpatents

    Danen, W.C.; Martin, J.A.

    1997-02-25

    A method is described for providing chemical energy and energetic compositions of matter consisting of thin layers of substances which will exothermically react with one another. The layers of reactive substances are separated by thin layers of a buffer material which prevents the reactions from taking place until the desired time. The reactions are triggered by an external agent, such as mechanical stress or an electric spark. The compositions are known as metastable interstitial composites (MICs). This class of compositions includes materials which have not previously been capable of use as energetic materials. The speed and products of the reactions can be varied to suit the application. 3 figs.

  17. Contribution of artificial intelligence to the knowledge of prognostic factors in Hodgkin's lymphoma.

    PubMed

    Buciński, Adam; Marszałł, Michał Piotr; Krysiński, Jerzy; Lemieszek, Andrzej; Załuski, Jerzy

    2010-07-01

    Hodgkin's lymphoma is one of the most curable malignancies and most patients achieve a lasting complete remission. In this study, artificial neural network (ANN) analysis was shown to provide significant factors with regard to 5-year recurrence after lymphoma treatment. Data from 114 patients treated for Hodgkin's disease were available for evaluation and comparison. A total of 31 variables were subjected to ANN analysis. The ANN approach as an advanced multivariate data processing method was shown to provide objective prognostic data. Some of these prognostic factors are consistent or even identical to the factors evaluated earlier by other statistical methods.

  18. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevailing method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
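
    For orientation, a baseline estimator of the spectral index (not the averaged wavelet coefficient method favored above) fits the log-log slope of a Welch periodogram, using the defining relation S(f) ~ 1/f^β; the synthetic-signal check is an invented example.

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(x, fs=1.0):
    f, s = welch(x, fs=fs, nperseg=min(len(x) // 4, 1024))
    mask = f > 0                                   # drop the DC bin
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(s[mask]), 1)
    return -slope                                  # S(f) ~ f^(-beta) => beta = -slope

# Quick self-check on synthetic 1/f^beta noise built in the Fourier domain.
rng = np.random.default_rng(1)
n, beta = 4096, 1.0
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2)                 # amplitude ~ f^(-beta/2)
phase = np.exp(2j * np.pi * rng.random(len(freqs)))
x = np.fft.irfft(amp * phase, n)
print(round(spectral_exponent(x), 2))              # should come out near 1.0
```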

  19. C-statistic fitting routines: User's manual and reference guide

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Farwana, Vida

    1991-01-01

    The computer program is discussed which can read several input files and provide a best set of values for the functions provided by the user, using either the C-statistic or the chi-squared statistic method. The program consists of one main routine and several functions and subroutines. Detailed descriptions of each function and subroutine are presented. A brief description of the C-statistic and the reason for its application is also presented.

  20. Imperial Valley's proposal to develop a guide for geothermal development within its county

    NASA Technical Reports Server (NTRS)

    Pierson, D. E.

    1974-01-01

    A plan to develop the geothermal resources of the Imperial Valley of California is presented. The plan consists of development policies and includes text and graphics setting forth the objectives, principles, standards, and proposals. The plan allows developers to know the goals of the surrounding community and provides a method for decision making to be used by county representatives. A summary impact statement for the geothermal development aspects is provided.

  1. MELTING AND PURIFICATION OF URANIUM

    DOEpatents

    Spedding, F.H.; Gray, C.F.

    1958-09-16

    A process is described for treating uranium ingots having inner metal portions and an outer oxide skin. The method consists in partially supporting such an ingot on the surface of a grid or pierced plate. A sufficient weight of uranium is provided so that when the mass becomes molten, the oxide skin bursts at the unsupported portions of its bottom surface, allowing molten uranium to flow through the burst skin and into a container provided below.

  2. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.

  3. Effects of phylogenetic reconstruction method on the robustness of species delimitation using single-locus data

    PubMed Central

    Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel

    2014-01-01

    Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577

  4. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
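
    A minimal sketch of the SRD calculation itself follows (the reference choice, data, and scoring are simplified; the paper's validation procedure and ties handling are omitted):

```python
import numpy as np
from scipy.stats import rankdata

def srd(scores):
    """scores: (n_samples, n_models) matrix of model outputs.

    Each model is scored by the summed absolute difference between its
    per-sample ranking and a reference ranking (here the row-wise average
    of all models, a common SRD reference choice)."""
    reference = scores.mean(axis=1)
    ref_rank = rankdata(reference)
    return {j: int(np.abs(rankdata(scores[:, j]) - ref_rank).sum())
            for j in range(scores.shape[1])}

rng = np.random.default_rng(2)
truth = rng.normal(size=30)
models = np.column_stack([truth + rng.normal(scale=s, size=30) for s in (0.1, 0.5, 2.0)])
print(srd(models))   # smaller SRD = closer to the consensus reference
```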

  5. Sediment Core Extrusion Method at Millimeter Resolution Using a Calibrated, Threaded-rod.

    PubMed

    Schwing, Patrick T; Romero, Isabel C; Larson, Rebekka A; O'Malley, Bryan J; Fridrik, Erika E; Goddard, Ethan A; Brooks, Gregg R; Hastings, David W; Rosenheim, Brad E; Hollander, David J; Grant, Guy; Mulhollan, Jim

    2016-08-17

    Aquatic sediment core subsampling is commonly performed at cm or half-cm resolution. Depending on the sedimentation rate and depositional environment, this resolution provides records at the annual to decadal scale, at best. An extrusion method, using a calibrated, threaded-rod is presented here, which allows for millimeter-scale subsampling of aquatic sediment cores of varying diameters. Millimeter scale subsampling allows for sub-annual to monthly analysis of the sedimentary record, an order of magnitude higher than typical sampling schemes. The extruder consists of a 2 m aluminum frame and base, two core tube clamps, a threaded-rod, and a 1 m piston. The sediment core is placed above the piston and clamped to the frame. An acrylic sampling collar is affixed to the upper 5 cm of the core tube and provides a platform from which to extract sub-samples. The piston is rotated around the threaded-rod at calibrated intervals and gently pushes the sediment out the top of the core tube. The sediment is then isolated into the sampling collar and placed into an appropriate sampling vessel (e.g., jar or bag). This method also preserves the unconsolidated samples (i.e., high pore water content) at the surface, providing a consistent sampling volume. This mm scale extrusion method was applied to cores collected in the northern Gulf of Mexico following the Deepwater Horizon submarine oil release. Evidence suggests that it is necessary to sample at the mm scale to fully characterize events that occur on the monthly time-scale for continental slope sediments.

  6. Sediment Core Extrusion Method at Millimeter Resolution Using a Calibrated, Threaded-rod

    PubMed Central

    Schwing, Patrick T.; Romero, Isabel C.; Larson, Rebekka A.; O'Malley, Bryan J.; Fridrik, Erika E.; Goddard, Ethan A.; Brooks, Gregg R.; Hastings, David W.; Rosenheim, Brad E.; Hollander, David J.; Grant, Guy; Mulhollan, Jim

    2016-01-01

    Aquatic sediment core subsampling is commonly performed at cm or half-cm resolution. Depending on the sedimentation rate and depositional environment, this resolution provides records at the annual to decadal scale, at best. An extrusion method, using a calibrated, threaded-rod is presented here, which allows for millimeter-scale subsampling of aquatic sediment cores of varying diameters. Millimeter scale subsampling allows for sub-annual to monthly analysis of the sedimentary record, an order of magnitude higher than typical sampling schemes. The extruder consists of a 2 m aluminum frame and base, two core tube clamps, a threaded-rod, and a 1 m piston. The sediment core is placed above the piston and clamped to the frame. An acrylic sampling collar is affixed to the upper 5 cm of the core tube and provides a platform from which to extract sub-samples. The piston is rotated around the threaded-rod at calibrated intervals and gently pushes the sediment out the top of the core tube. The sediment is then isolated into the sampling collar and placed into an appropriate sampling vessel (e.g., jar or bag). This method also preserves the unconsolidated samples (i.e., high pore water content) at the surface, providing a consistent sampling volume. This mm scale extrusion method was applied to cores collected in the northern Gulf of Mexico following the Deepwater Horizon submarine oil release. Evidence suggests that it is necessary to sample at the mm scale to fully characterize events that occur on the monthly time-scale for continental slope sediments. PMID:27585268

  7. Contaminant treatment method

    DOEpatents

    Shapiro, Andrew Philip; Thornton, Roy Fred; Salvo, Joseph James

    2003-01-01

    The present invention provides a method for treating contaminated media. The method comprises introducing remediating ions consisting essentially of ferrous ions, and being peroxide-free, in the contaminated media; applying a potential difference across the contaminated media to cause the remediating ions to migrate into contact with contaminants in the contaminated media; chemically degrading contaminants in the contaminated media by contact with the remediating ions; monitoring the contaminated media for degradation products of the contaminants; and controlling the step of applying the potential difference across the contaminated media in response to the step of monitoring.

  8. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
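
    The inversion idea can be sketched with a mixed dictionary of exponential and Gaussian kernels solved by non-negative least squares; the grids, noise level, and synthetic two-component signal below are invented, and the regularization a production SGE fit would need is omitted.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(1e-4, 0.1, 500)                    # acquisition times (s)
T2e = np.logspace(-3, -0.5, 40)                    # exponential (fluid) grid
T2g = np.logspace(-4, -2, 20)                      # Gaussian (solid-like) grid
K = np.hstack([np.exp(-t[:, None] / T2e),          # exponential decay kernels
               np.exp(-(t[:, None] / T2g) ** 2)])  # Gaussian decay kernels

# Synthetic signal: one fluid (exponential) and one solid (Gaussian) component.
y = 0.6 * np.exp(-t / 0.02) + 0.4 * np.exp(-(t / 5e-4) ** 2)
y += np.random.default_rng(3).normal(scale=0.005, size=t.size)

amp, resid = nnls(K, y)
fluid, solid = amp[:len(T2e)].sum(), amp[len(T2e):].sum()
print(round(fluid, 2), round(solid, 2))            # ~0.6 fluid, ~0.4 solid
```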

  9. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated by this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. As compared with existing variational approaches, although this PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach is shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Recovery of protactinium from molten fluoride nuclear fuel compositions

    DOEpatents

    Baes, C.F. Jr.; Bamberger, C.; Ross, R.G.

    1973-12-25

    A method is provided for separating protactinium from a molten fluoride salt composition consisting essentially of at least one alkali and alkaline earth metal fluoride and at least one soluble fluoride of uranium or thorium which comprises oxidizing the protactinium in said composition to the +5 oxidation state and contacting said composition with an oxide selected from the group consisting of an alkali metal oxide, an alkaline earth oxide, thorium oxide, and uranium oxide, and thereafter isolating the resultant insoluble protactinium oxide product from said composition. (Official Gazette)

  11. Consistent latent position estimation and vertex classification for random dot product graphs.

    PubMed

    Sussman, Daniel L; Tang, Minh; Priebe, Carey E

    2014-01-01

    In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
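
    A hedged sketch of that pipeline on a two-block random dot product graph follows; the embedding dimension, block positions, and train/test split are invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ase(adj, d):
    """Adjacency spectral embedding: top-d eigenpairs of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(4)
n = 400
labels = rng.integers(0, 2, n)
# Two-block RDPG: latent positions depend on the (hidden) class label.
X = np.where(labels[:, None] == 0, [0.8, 0.2], [0.2, 0.8])
P = np.clip(X @ X.T, 0, 1)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # symmetric, hollow adjacency

Z = ase(A, d=2)
knn = KNeighborsClassifier(n_neighbors=5).fit(Z[:200], labels[:200])
print(knn.score(Z[200:], labels[200:]))              # near 1.0 for separated blocks
```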

  12. Primary explosives

    DOEpatents

    Hiskey, Michael A [Los Alamos, NM; Huynh, My Hang V [Los Alamos, NM

    2011-01-25

    The present invention provides a compound of the formula (Cat)^+_z[M^++(5-nitro-1H-tetrazolato-N2)^-_x(H2O)_y] where x is 3 or 4, y is 2 or 3, x+y is 6, z is 1 or 2, and M^++ is selected from the group consisting of iron, cobalt, nickel, copper, zinc, chromium, and manganese, and (Cat)^+ is selected from the group consisting of ammonium, sodium, potassium, rubidium and cesium. A method of preparing the compound of that formula is also disclosed.

  13. Primary explosives

    DOEpatents

    Hiskey, Michael A [Los Alamos, NM; Huynh, My Hang V [Los Alamos, NM

    2009-03-03

    The present invention provides a compound of the formula (Cat)^+_z[M^++(5-nitro-1H-tetrazolato-N2)^-_x(H2O)_y] where x is 3 or 4, y is 2 or 3, x+y is 6, z is 1 or 2, and M^++ is selected from the group consisting of iron, cobalt, nickel, copper, zinc, chromium, and manganese, and (Cat)^+ is selected from the group consisting of ammonium, sodium, potassium, rubidium and cesium. A method of preparing the compound of that formula is also disclosed.

  14. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
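
    The perturbation step can be made concrete on a one-element bar (a hedged toy with invented load and material statistics, not the paper's formulation): the mean and variance of a random Young's modulus are propagated through u = F L / (E A) by a Taylor expansion about the mean.

```python
F, L, A = 1000.0, 2.0, 1e-4          # load (N), length (m), area (m^2); invented
mean_E, var_E = 200e9, (10e9) ** 2   # Young's modulus statistics (Pa); invented

u = F * L / (mean_E * A)             # mean response (first order)
dudE = -F * L / (mean_E**2 * A)      # sensitivity du/dE at the mean
var_u = dudE**2 * var_E              # first-order variance propagation
# Second-order mean correction: E[u] ~ u + 0.5 * d2u/dE2 * Var[E].
d2udE2 = 2 * F * L / (mean_E**3 * A)
u2 = u + 0.5 * d2udE2 * var_E
print(u, u2, var_u**0.5)             # mean, corrected mean, response std dev
```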

  15. A Data Driven Model for Predicting RNA-Protein Interactions based on Gradient Boosting Machine.

    PubMed

    Jain, Dharm Skandh; Gupte, Sanket Rajan; Aduri, Raviprasad

    2018-06-22

    RNA protein interactions (RPI) play a pivotal role in the regulation of various biological processes. Experimental validation of RPI has been time-consuming, paving the way for computational prediction methods. The major limiting factor of these methods has been the accuracy and confidence of the predictions, and our in-house experiments show that they fail to accurately predict RPI involving short RNA sequences such as TERRA RNA. Here, we present a data-driven model for RPI prediction using a gradient boosting classifier. Amino acids and nucleotides are classified based on the high-resolution structural data of RNA protein complexes. The minimum structural unit consisting of five residues is used as the descriptor. Comparative analysis of existing methods shows the consistently higher performance of our method irrespective of the length of RNA present in the RPI. The method has been successfully applied to map RPI networks involving both long noncoding RNA as well as TERRA RNA. The method is also shown to successfully predict RNA and protein hubs present in RPI networks of four different organisms. The robustness of this method will provide a way for predicting RPI networks of yet unknown interactions for both long noncoding RNA and microRNA.

  16. Self-consistent projection operator theory in nonlinear quantum optical systems: A case study on degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Degenfeld-Schonburg, Peter; Navarrete-Benlloch, Carlos; Hartmann, Michael J.

    2015-05-01

    Nonlinear quantum optical systems are of paramount relevance for modern quantum technologies, as well as for the study of dissipative phase transitions. Their nonlinear nature makes their theoretical study very challenging and hence they have always served as great motivation to develop new techniques for the analysis of open quantum systems. We apply the recently developed self-consistent projection operator theory to the degenerate optical parametric oscillator to exemplify its general applicability to quantum optical systems. We show that this theory provides an efficient method to calculate the full quantum state of each mode with a high degree of accuracy, even at the critical point. It is equally successful in describing both the stationary limit and the dynamics, including regions of the parameter space where the numerical integration of the full problem is significantly less efficient. We further develop a Gaussian approach consistent with our theory, which yields sensibly better results than the previous Gaussian methods developed for this system, most notably standard linearization techniques.

  17. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
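
    A minimal sketch of a block-iterative projection sweep with a summable perturbation term follows (Cimmino-style averaging; the acceleration and string-averaging machinery of the paper are omitted, and the test system is invented):

```python
import numpy as np

def block_projections(A, b, blocks, iters=200, eps=1e-3):
    x = np.zeros(A.shape[1])
    for k in range(iters):
        for rows in blocks:
            Ab, bb = A[rows], b[rows]
            # Average of orthogonal projections onto the block's hyperplanes.
            resid = (bb - Ab @ x) / np.sum(Ab**2, axis=1)
            x = x + (Ab * resid[:, None]).mean(axis=0)
        # Summable perturbation: magnitudes eps/(k+1)^2 have a finite sum, the
        # condition under which the paper proves convergence is preserved.
        x = x + eps / (k + 1) ** 2 * np.random.default_rng(k).normal(size=x.size)
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(60, 20))
x_true = rng.normal(size=20)
b = A @ x_true                                    # consistent linear system
blocks = [range(i, i + 15) for i in range(0, 60, 15)]
print(np.linalg.norm(block_projections(A, b, blocks) - x_true))
```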

  18. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    PubMed

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.

  19. The Great Lakes Hydrography Dataset: Consistent, binational ...

    EPA Pesticide Factsheets

    Ecosystem-based management of the Laurentian Great Lakes, which spans both the United States and Canada, is hampered by the lack of consistent binational watersheds for the entire Basin. Using comparable data sources and consistent methods we developed spatially equivalent watershed boundaries for the binational extent of the Basin to create the Great Lakes Hydrography Dataset (GLHD). The GLHD consists of 5,589 watersheds for the entire Basin, covering a total area of approximately 547,967 km2, or about twice the 247,003 km2 surface water area of the Great Lakes. The GLHD improves upon existing watershed efforts by delineating watersheds for the entire Basin using consistent methods; enhancing the precision of watershed delineation by using recently developed flow direction grids that have been hydrologically enforced and vetted by provincial and federal water resource agencies; and increasing the accuracy of watershed boundaries by enforcing embayments, delineating watersheds on islands, and delineating watersheds for all tributaries draining to connecting channels. In addition, the GLHD is packaged in a publically available geodatabase that includes synthetic stream networks, reach catchments, watershed boundaries, a broad set of attribute data for each tributary, and metadata documenting methodology. The GLHD provides a common set of watersheds and associated hydrography data for the Basin that will enhance binational efforts to protect and restore the Great

  20. Recognition of battery aging variations for LiFePO4 batteries in 2nd use applications combining incremental capacity analysis and statistical approaches

    NASA Astrophysics Data System (ADS)

    Jiang, Yan; Jiang, Jiuchun; Zhang, Caiping; Zhang, Weige; Gao, Yang; Guo, Qipei

    2017-08-01

    To assess the economic benefits of battery reuse, the consistency and aging characteristics of a retired LiFePO4 battery pack are studied in this paper. The consistency of battery modules is analyzed from the perspective of the capacity and the internal resistance. Test results indicate that battery module parameter dispersion increases along with battery aging. However, better capacity consistency among battery modules does not ensure better resistance consistency. The aging characteristics of the battery pack are then analyzed, and the main results are as follows: (1) Weibull and normal distributions are feasible for fitting the capacity and resistance distributions of battery modules, respectively; (2) SOC imbalance is the dominating factor in the capacity fading process of the battery pack; (3) by employing incremental capacity (IC) and IC peak area analysis, a consistency evaluation method representing the aging mechanism variations of the battery modules is proposed, and an accurate battery screening strategy is put forward. This study not only provides data support for evaluating the economic benefits of retired batteries but also presents a method to recognize battery aging variations, which is helpful for rapid evaluation and screening of retired batteries for 2nd use.
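
    The IC curve at the heart of such screening is just dQ/dV from a constant-current charge curve; the sketch below, with an invented voltage-capacity curve, shows the resampling and differentiation steps.

```python
import numpy as np

def incremental_capacity(voltage, capacity, bins=200):
    v = np.linspace(voltage.min(), voltage.max(), bins)
    q = np.interp(v, voltage, capacity)        # resample Q onto a uniform V grid
    return v, np.gradient(q, v)                # IC curve: dQ/dV

# Invented charge curve with a flat (plateau) region around 3.31 V.
q = np.linspace(0.0, 2.0, 500)                 # capacity (Ah)
v = 3.3 + 0.1 * (q - 1.0) ** 3 + 0.01 * q      # monotonic cell voltage (V)
vg, ic = incremental_capacity(v, q)
area = np.sum(0.5 * (ic[:-1] + ic[1:]) * np.diff(vg))   # equals total capacity
print(round(ic.max(), 1), round(area, 2))      # IC peak height and ~2.0 Ah area
```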

  1. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  2. Avoiding sexually transmitted diseases.

    PubMed

    Stone, K M

    1990-12-01

    As the spectrum of sexually transmitted diseases (STDs) has broadened to include many infections that are not readily cured, prevention of STDs has become more important than ever. Primary prevention methods include abstinence, careful selection of sexual partners, condoms, vaginal spermicides, and a vaccine for hepatitis B. Condoms will protect against STDs only if they are used consistently and correctly; vaginal spermicides may also reduce risk of certain STDs. Health care providers should routinely counsel women on methods to reduce risk of STDs.

  3. FLUXCOM - Overview and First Synthesis

    NASA Astrophysics Data System (ADS)

    Jung, M.; Ichii, K.; Tramontana, G.; Camps-Valls, G.; Schwalm, C. R.; Papale, D.; Reichstein, M.; Gans, F.; Weber, U.

    2015-12-01

    We present a community effort aiming at generating an ensemble of global gridded flux products by upscaling FLUXNET data using an array of different machine learning methods including regression/model tree ensembles, neural networks, and kernel machines. We produced products for gross primary production, terrestrial ecosystem respiration, net ecosystem exchange, latent heat, sensible heat, and net radiation for two experimental protocols: 1) at a high spatial and 8-daily temporal resolution (5 arc-minute) using only remote sensing based inputs for the MODIS era; 2) 30 year records of daily, 0.5 degree spatial resolution by incorporating meteorological driver data. Within each set-up, all machine learning methods were trained with the same input data for carbon and energy fluxes respectively. Sets of input driver variables were derived using an extensive formal variable selection exercise. The performance of the extrapolation capacities of the approaches is assessed with a fully internally consistent cross-validation. We perform cross-consistency checks of the gridded flux products with independent data streams from atmospheric inversions (NEE), sun-induced fluorescence (GPP), catchment water balances (LE, H), satellite products (Rn), and process-models. We analyze the uncertainties of the gridded flux products and for example provide a breakdown of the uncertainty of mean annual GPP originating from different machine learning methods, different climate input data sets, and different flux partitioning methods. The FLUXCOM archive will provide an unprecedented source of information for water, energy, and carbon cycle studies.

  4. Combined geophysical methods for mapping infiltration pathways at the Aurora Water Aquifer recharge and recovery site

    NASA Astrophysics Data System (ADS)

    Jasper, Cameron A.

    Although aquifer recharge and recovery systems are a sustainable, decentralized, low cost, and low energy approach for the reclamation, treatment, and storage of post- treatment wastewater, they can suffer from poor infiltration rates and the development of a near-surface clogging layer within infiltration ponds. One such aquifer recharge and recovery system, the Aurora Water site in Colorado, U.S.A, functions at about 25% of its predicted capacity to recharge floodplain deposits by flooding infiltration ponds with post-treatment wastewater extracted from river bank aquifers along the South Platte River. The underwater self-potential method was developed to survey self-potential signals at the ground surface in a flooded infiltration pond for mapping infiltration pathways. A method for using heat as a groundwater tracer within the infiltration pond used an array of in situ high-resolution temperature sensing probes. Both relatively positive and negative underwater self-potential anomalies are consistent with observed recovery well pumping rates and specific discharge estimates from temperature data. Results from electrical resistivity tomography and electromagnetics surveys provide consistent electrical conductivity distributions associated with sediment textures. A lab method was developed for resistivity tests of near-surface sediment samples. Forward numerical modeling synthesizes the geophysical information to best match observed self- potential anomalies and provide permeability distributions, which is important for effective aquifer recharge and recovery system design, and optimization strategy development.

  5. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.

  6. Electronic excitation and quenching of atoms at insulator surfaces

    NASA Technical Reports Server (NTRS)

    Swaminathan, P. K.; Garrett, Bruce C.; Murthy, C. S.

    1988-01-01

    A trajectory-based semiclassical method is used to study electronically inelastic collisions of gas atoms with insulator surfaces. The method provides for quantum-mechanical treatment of the internal electronic dynamics of a localized region involving the gas/surface collision, and a classical treatment of all the nuclear degrees of freedom (self-consistently and in terms of stochastic trajectories), and includes accurate simulation of the bath-temperature effects. The method is easy to implement and has a generality that holds promise for many practical applications. The problem of electronically inelastic dynamics is solved by computing a set of stochastic trajectories that on thermal averaging directly provide electronic transition probabilities at a given temperature. The theory is illustrated by a simple model of a two-state gas/surface interaction.

  7. An Open-Source Standard T-Wave Alternans Detector for Benchmarking.

    PubMed

    Khaustov, A; Nemati, S; Clifford, GD

    2008-09-14

    We describe an open source algorithm suite for T-Wave Alternans (TWA) detection and quantification. The software consists of Matlab implementations of the widely used Spectral Method and Modified Moving Average (MMA) method, with libraries to read both WFDB and ASCII data under Windows and Linux. The software suite can run in batch mode or with a provided graphical user interface to aid waveform exploration. Our software suite was calibrated using an open source TWA model, described in a partner paper [1] by Clifford and Sameni. For the PhysioNet/CinC Challenge 2008 we obtained a score of 0.881 for the Spectral Method and 0.400 for the MMA method. However, our objective was not to provide the best TWA detector, but rather a basis for detailed discussion of algorithms.
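
    As a rough sketch of the Spectral Method idea (not the suite's Matlab code), the toy below builds a series of per-beat T-wave amplitudes containing an alternating component, takes the power spectrum across beats, and compares the 0.5 cycles/beat bin against a nearby noise band; the amplitudes, noise level, and band edges are all assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_beats = 128
    # Per-beat T-wave amplitude (arbitrary units): an A-B-A-B alternating
    # component of amplitude 20 buried in beat-to-beat noise of SD 30.
    t_amp = 500 + 20 * (-1.0) ** np.arange(n_beats) + rng.normal(0, 30, n_beats)

    spectrum = np.abs(np.fft.rfft(t_amp - t_amp.mean())) ** 2 / n_beats
    freqs = np.fft.rfftfreq(n_beats, d=1.0)              # cycles per beat

    alt_power = spectrum[-1]                             # bin at 0.5 cycles/beat
    noise = spectrum[(freqs >= 0.43) & (freqs < 0.49)]   # reference noise band
    k_score = (alt_power - noise.mean()) / noise.std()
    v_alt = np.sqrt(max(alt_power - noise.mean(), 0.0) / n_beats)  # per-beat amplitude
    print(f"TWA ratio (k-score) = {k_score:.1f}, alternans amplitude ~ {v_alt:.1f}")
    ```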

  8. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking deformable structures in 3D ultrasound sequences. Our method consists in obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation regarding different ultrasound shortcomings including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Using the GOCE star trackers for validating the calibration of its accelerometers

    NASA Astrophysics Data System (ADS)

    Visser, P. N. A. M.

    2017-12-01

    A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations that was originally tested by an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 compared to the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
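
    The scale-factor/bias estimation can be pictured as a linear regression of raw accelerometer output against a reference series standing in for the star-tracker-derived signal; this least-squares toy (with invented noise levels, not the GOCE processing chain) recovers both parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical reference accelerations (e.g. derived from star-tracker
    # attitudes) and raw accelerometer output with unknown scale and bias.
    a_ref = rng.normal(0.0, 1e-6, 5000)                    # m/s^2
    scale_true, bias_true = 0.98, 2.0e-7
    a_raw = scale_true * a_ref + bias_true + rng.normal(0, 5e-8, a_ref.size)

    # Least-squares fit of a_raw ~ scale * a_ref + bias
    A = np.column_stack([a_ref, np.ones_like(a_ref)])
    (scale_hat, bias_hat), *_ = np.linalg.lstsq(A, a_raw, rcond=None)
    print(f"scale = {scale_hat:.4f} (true {scale_true}), "
          f"bias = {bias_hat:.2e} (true {bias_true:.2e})")
    ```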

  10. Co-precipitation of protein and polyester as a method to isolate high molecular weight DNA.

    PubMed

    Dixson, Jamie D

    2005-02-01

    DNA isolation is often the limiting step in genetic analysis using PCR and automated fragment analysis, due to low quality or purity of DNA, the need to determine and adjust DNA concentrations after isolation, etc. Several protocols have been developed which are either safe and provide good quality DNA, or hazardous and provide excellent quality DNA. In this brief communication I describe a new and rapid method of DNA isolation which employs the co-precipitation of protein and polyester, in the presence of acetone, to remove contaminating proteins from a lysed-tissue sample, thus leaving high quality pure DNA. The advantages of this method are increased safety over the phenol:chloroform and chaotropic salt methods and increased purity over the salting-out method. Since the concentrations of DNA isolated using this method are relatively consistent regardless of the amount of starting tissue (within limits), adjustments of DNA concentrations before use as templates in PCRs are not necessary.

  11. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches were explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.

  12. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; de Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
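
    A minimal sketch of the first analysis step described (a binomial GLM relating camera detections to nearby telemetry relocations), with simulated data rather than the fisher data; the sample size, effect sizes, and relocation counts are assumptions:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 400
    # Relocations within, say, 500 m of a camera during each sampling occasion,
    # and a detection (0/1) whose log-odds rise with that count (assumed link).
    relocs = rng.poisson(3, n)
    detected = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * relocs))))

    X = sm.add_constant(relocs.astype(float))
    fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()
    print(f"intercept = {fit.params[0]:+.2f}, slope = {fit.params[1]:+.2f} "
          "(slope near the assumed 0.5 indicates detections track space use)")
    ```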

  13. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Pujol, A.; Gaztañaga, E.

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
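
    At its core the measurement relates a galaxy density map to a lensing-derived mass map; the zero-lag toy below, with Gaussian stand-in fields and an invented noise level, shows how the cross/auto correlation ratio recovers a linear bias (the real analysis uses tomographic shear maps and mock catalogues):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # Gaussian stand-ins: a matter map delta_m and a galaxy map that traces it
    # with linear bias b plus shot-noise-like scatter (all values assumed).
    b_true = 1.3
    delta_m = rng.normal(0.0, 1.0, (256, 256))
    delta_g = b_true * delta_m + rng.normal(0.0, 0.5, delta_m.shape)

    # Zero-lag estimator: b = <delta_g delta_m> / <delta_m delta_m>
    b_hat = np.mean(delta_g * delta_m) / np.mean(delta_m * delta_m)
    print(f"recovered bias b = {b_hat:.3f} (true {b_true})")
    ```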

  14. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE PAGES

    Chang, C.; Pujol, A.; Gaztañaga, E.; ...

    2016-04-15

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  15. Second-order perturbation theory with a density matrix renormalization group self-consistent field reference function: theory and application to the study of chromium dimer.

    PubMed

    Kurashige, Yuki; Yanai, Takeshi

    2011-09-07

    We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with active spaces that are intractable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated in an application to the potential energy curve of the chromium dimer, one of the most demanding multireference systems, which requires the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using a large DMRG reference addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small active space consisting of 12 orbitals (3d4s), and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics.

  16. Education and training column: the learning collaborative.

    PubMed

    MacDonald-Wilson, Kim L; Nemec, Patricia B

    2015-03-01

    This column describes the key components of a learning collaborative, with examples from the experience of 1 organization. A learning collaborative is a method for management, learning, and improvement of products or processes, and is a useful approach to implementation of a new service design or approach. This description draws from published material on learning collaboratives and the authors' experiences. The learning collaborative approach offers an effective method to improve service provider skills, provide support, and structure environments to result in lasting change for people using behavioral health services. This approach is consistent with psychiatric rehabilitation principles and practices, and serves to increase the overall capacity of the mental health system by structuring a process for discovering and sharing knowledge and expertise across provider agencies. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  17. IUS Thrust Vector Control (TVC) servo system

    NASA Technical Reports Server (NTRS)

    Conner, G. E.

    1979-01-01

    The development of the IUS TVC servo system, which consists of four electrically redundant electromechanical actuators, four potentiometer assemblies, and two controllers providing movable nozzle control on both IUS solid rocket motors, is described. An overview is presented of the more severe IUS TVC servo system design requirements, the system and component designs, and test data acquired on a preliminary development unit. Attention is focused on the unique methods of sensing movable nozzle position and providing for redundant position locks.

  18. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  19. 48 CFR 16.405-2 - Cost-plus-award-fee contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES TYPES OF CONTRACTS Incentive Contracts 16.405-2 Cost-plus-award-fee... during performance and that is sufficient to provide motivation for excellence in the areas of cost... consisting of (1) a base amount fixed at inception of the contract, if applicable and at the discretion of...

  20. Total Quality Management Simplified.

    ERIC Educational Resources Information Center

    Arias, Pam

    1995-01-01

    Maintains that Total Quality Management (TQM) is one method that helps to monitor and improve the quality of child care. Lists four steps for a child-care center to design and implement its own TQM program. Suggests that quality assurance in child-care settings is an ongoing process, and that TQM programs help in providing consistent, high-quality…

  1. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  2. A Grounded Theory Study of the Relationship between E-Mail and Burnout

    ERIC Educational Resources Information Center

    Camargo, Marta Rocha

    2008-01-01

    Introduction: This study consisted of a qualitative investigation into the role of e-mail in work-related burnout among high technology employees working full time and on-site for Internet, hardware, and software companies. Method: Grounded theory methodology was used to provide a systemic approach in categorising, sorting, and analysing data…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korte, Andrew R

    This thesis presents efforts to improve the methodology of matrix-assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI) as a method for analysis of metabolites from plant tissue samples. The first chapter consists of a general introduction to the technique of MALDI-MSI, and the sixth and final chapter provides a brief summary and an outlook on future work.

  4. 12 CFR 715.8 - Requirements for verification of accounts and passbooks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... selection: (ii) A sample which is representative of the population from which it was selected; (iii) An equal chance of selecting each dollar in the population; (iv) Sufficient accounts in both number and... consistent with GAAS if such methods provide for: (i) Sufficient accounts in both number and scope on which...

  5. Peer Education from the Perspective of Peer Educators

    ERIC Educational Resources Information Center

    Karaca, Aysel; Akkus, Dilek; Sener, Dilek Konuk

    2018-01-01

    Peer educators (PEs) have a significant role in providing education on various health issues like smoking, alcohol, and other substance use. This study aimed to determine the experiences and opinions of PEs regarding a peer education program. Using the qualitative research method, data were collected from the study sample, which consisted of 23…

  6. OATYC Journal, Fall 1990-Spring 1991.

    ERIC Educational Resources Information Center

    Fullen, Jim, Ed.

    1991-01-01

    Published by the Ohio Association of Two-Year Colleges, the "OATYC Journal" is designed to provide a medium for sharing concepts, methods, and findings relevant to the classroom, and an open forum for the discussion and review of problems. This 16th volume of the journal, consisting of the fall 1990 and spring 1991 issues, contains the…

  7. 75 FR 33317 - Request for Information (RFI) on the National Institutes of Health Plan To Develop the Genetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-11

    ... consistent with recommendations of the HHS Secretary's Advisory Committee on Genetics, Health, and Society... molecular basis, including, for example, information about what the test detects and what methods the test... and providing information on the molecular basis of genetic tests, such as detailed information about...

  8. Staying Healthy in Child Care: Preventing Infectious Diseases in Child Care.

    ERIC Educational Resources Information Center

    Thomson, Beth, Ed.

    This guide provides explanations of control methods for infection and diseases in child care with an emphasis on prevention and health. The guide consists of two parts. The first part covers the following topics on preventing illness in children: how infections spread; handwashing; separation into age groups; nappy changing and toileting; cleaning…

  9. Teachers as Mentors: Models for Promoting Achievement with Disadvantaged and Underrepresented Students by Creating Community

    ERIC Educational Resources Information Center

    Ayalon, Aram

    2011-01-01

    The book describes two similar and successful models of youth mentoring used by two acclaimed urban high schools that have consistently achieved exceptional graduation rates. Providing a detailed description of their methods--based upon extensive observation, and interviews with teachers, students, administrators, and parents--this book makes a…

  10. Daddy's Gone to Colorado: Male-Staffed Child Care for Father-Absent Boys.

    ERIC Educational Resources Information Center

    Brody, Steve

    1978-01-01

    The article presents the goals, methods, and case examples of The Nutury, a predominantly male-staffed child care center serving single-parent children. The primary goal is to provide consistent relationships with men for children without a male model in their home. Clinical observations reveal positive life-styles and attitudes. (LPG)

  11. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. © 2009 Wiley-Liss, Inc.
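
    A deliberately simplified toy of the propensity-score idea (the stratifying label is treated as observable when fitting the score here, whereas the published method infers structure; all frequencies and effect sizes are invented):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 2000
    stratum = rng.binomial(1, 0.5, n)                       # subpopulation label
    markers = rng.binomial(2, 0.3 + 0.3 * stratum[:, None], (n, 20))
    env = rng.normal(stratum, 1.0)                          # a non-genetic factor
    snp = rng.binomial(2, 0.2 + 0.4 * stratum)              # test SNP
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * stratum))))  # disease

    # Stage 1: propensity score from genetic markers plus non-genetic factors
    ps_X = sm.add_constant(np.column_stack([markers, env]))
    ps = sm.GLM(stratum, ps_X, family=sm.families.Binomial()).fit().fittedvalues

    # Stage 2: SNP-disease association, naive vs. adjusted for the score
    naive = sm.GLM(y, sm.add_constant(snp.astype(float)),
                   family=sm.families.Binomial()).fit()
    adj = sm.GLM(y, sm.add_constant(np.column_stack([snp, ps])),
                 family=sm.families.Binomial()).fit()
    print(f"naive SNP log-OR: {naive.params[1]:+.2f} (confounded by stratification)")
    print(f"adjusted SNP log-OR: {adj.params[1]:+.2f} (should shrink toward 0)")
    ```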

  12. Preprocessed cumulative reconstructor with domain decomposition: a fast wavefront reconstruction method for pyramid wavefront sensor.

    PubMed

    Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny

    2013-04-20

    We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor for SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.

  13. Measuring acetabular component position on lateral radiographs - ischio-lateral method.

    PubMed

    Pulos, Nicholas; Tiberi III, John V; Schmalzried, Thomas P

    2011-01-01

    The standard method for the evaluation of arthritis and postoperative assessment of arthroplasty treatment is observation and measurement from plain films, using the film edge for orientation. More recently, an anatomical landmark, the ischial tuberosity, has come into use as the orientation for evaluation; this is called the ischio-lateral method. This study provides a first report to the literature on acetabular component measurement using a skeletal reference with lateral radiographs. Postoperative radiographs of 52 hips, with at least three true lateral radiographs taken at different time periods, were analyzed. Component position was measured with the historical method (using the film edge for orientation) and with the new ischio-lateral method. The mean standard deviation (SD) for the historical approach was 3.7° and for the ischio-lateral method, 2.2° (p < 0.001). With the historical method, 19 (36.5%) hips had a SD greater than ± 4°, compared to six hips (11.5%) with the ischio-lateral method. By using a skeletal reference, the ischio-lateral method provides a more consistent measurement of acetabular component position. The high intra-class correlation coefficients for both intra- and inter-observer reliability indicate that the angle measured with this simple method, which employs no further technology, increased time, or cost, is consistent and reproducible for multiple observers.

  14. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE PAGES

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...

    2015-03-20

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  15. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.

    PubMed

    Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C

    2015-04-27

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  16. HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander

    HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. We tested the core; it is well-documented and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.

  17. The Function Biomedical Informatics Research Network Data Repository.

    PubMed

    Keator, David B; van Erp, Theo G M; Turner, Jessica A; Glover, Gary H; Mueller, Bryon A; Liu, Thomas T; Voyvodic, James T; Rasmussen, Jerod; Calhoun, Vince D; Lee, Hyo Jong; Toga, Arthur W; McEwen, Sarah; Ford, Judith M; Mathalon, Daniel H; Diaz, Michele; O'Leary, Daniel S; Jeremy Bockholt, H; Gadde, Syam; Preda, Adrian; Wible, Cynthia G; Stern, Hal S; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G

    2016-01-01

    The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical data sets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 data set consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 T scanners. The FBIRN Phase 2 and Phase 3 data sets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN's multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An evaluation of tyramide signal amplification and archived fixed and frozen tissue in microarray gene expression analysis

    PubMed Central

    Karsten, Stanislav L.; Van Deerlin, Vivianna M. D.; Sabatti, Chiara; Gill, Lisa H.; Geschwind, Daniel H.

    2002-01-01

    Archival formalin-fixed, paraffin-embedded and ethanol-fixed tissues represent a potentially invaluable resource for gene expression analysis, as they are the most widely available material for studies of human disease. Little data are available evaluating whether RNA obtained from fixed (archival) tissues could produce reliable and reproducible microarray expression data. Here we compare the use of RNA isolated from human archival tissues fixed in ethanol and formalin to frozen tissue in cDNA microarray experiments. Since an additional factor that can limit the utility of archival tissue is the often small quantities available, we also evaluate the use of the tyramide signal amplification method (TSA), which allows the use of small amounts of RNA. Detailed analysis indicates that TSA provides a consistent and reproducible signal amplification method for cDNA microarray analysis, across both arrays and the genes tested. Analysis of this method also highlights the importance of performing non-linear channel normalization and dye switching. Furthermore, archived, fixed specimens can perform well, but not surprisingly, produce more variable results than frozen tissues. Consistent results are more easily obtainable using ethanol-fixed tissues, whereas formalin-fixed tissue does not typically provide a useful substrate for cDNA synthesis and labeling. PMID:11788730

  19. pE-DB: a database of structural ensembles of intrinsically disordered and of unfolded proteins.

    PubMed

    Varadi, Mihaly; Kosol, Simone; Lebrun, Pierre; Valentini, Erica; Blackledge, Martin; Dunker, A Keith; Felli, Isabella C; Forman-Kay, Julie D; Kriwacki, Richard W; Pierattelli, Roberta; Sussman, Joel; Svergun, Dmitri I; Uversky, Vladimir N; Vendruscolo, Michele; Wishart, David; Wright, Peter E; Tompa, Peter

    2014-01-01

    The goal of pE-DB (http://pedb.vib.be) is to serve as an openly accessible database for the deposition of structural ensembles of intrinsically disordered proteins (IDPs) and of denatured proteins based on nuclear magnetic resonance spectroscopy, small-angle X-ray scattering and other data measured in solution. Owing to the inherent flexibility of IDPs, solution techniques are particularly appropriate for characterizing their biophysical properties, and structural ensembles in agreement with these data provide a convenient tool for describing the underlying conformational sampling. Database entries consist of (i) primary experimental data with descriptions of the acquisition methods and algorithms used for the ensemble calculations, and (ii) the structural ensembles consistent with these data, provided as a set of models in Protein Data Bank format. pE-DB is open for submissions from the community, and is intended as a forum for disseminating the structural ensembles and the methodologies used to generate them. While the need to represent IDP structures is clear, methods for determining and evaluating the structural ensembles are still evolving. The availability of the pE-DB database is expected to promote the development of new modeling methods and lead to a better understanding of how function arises from disordered states.

  20. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
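
    statsmodels exposes GEE with pluggable working correlation structures, which makes the premise behind the hybrid easy to see: point estimates stay consistent across working correlations while standard errors differ. The simulation below is illustrative only, and the empirical-likelihood combination step itself is not implemented:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    # 100 subjects x 4 visits with an exchangeable within-subject correlation.
    n_sub, n_vis = 100, 4
    subj = np.repeat(np.arange(n_sub), n_vis)
    x = rng.normal(size=n_sub * n_vis)
    u = np.repeat(rng.normal(0.0, 0.7, n_sub), n_vis)   # shared subject effect
    y = 1.0 + 0.5 * x + u + rng.normal(0.0, 1.0, n_sub * n_vis)
    df = pd.DataFrame({"y": y, "x": x, "subj": subj})

    for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
        fit = sm.GEE.from_formula("y ~ x", groups="subj", data=df,
                                  cov_struct=cov).fit()
        print(f"{type(cov).__name__:12s} beta_x = {fit.params['x']:.3f} "
              f"(SE {fit.bse['x']:.3f})")
    ```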

  1. Method For Selective Catalytic Reduction Of Nitrogen Oxides

    DOEpatents

    Mowery-Evans, Deborah L.; Gardner, Timothy J.; McLaughlin, Linda I.

    2005-02-15

    A method for catalytically reducing nitrogen oxide compounds (NOx, defined as nitric oxide, NO, plus nitrogen dioxide, NO2) in a gas by a material comprising a base metal consisting essentially of CuO and Mn, and oxides of Mn, on an activated metal hydrous metal oxide support, such as HMO:Si. A promoter, such as tungsten oxide or molybdenum oxide, can be added and has been shown to increase conversion efficiency. This method provides good conversion of NOx to N2, good selectivity, good durability, resistance to SO2 aging and low toxicity compared with methods utilizing vanadia-based catalysts.

  2. Method for selective catalytic reduction of nitrogen oxides

    DOEpatents

    Mowery-Evans, Deborah L [Broomfield, CO; Gardner, Timothy J [Albuquerque, NM; McLaughlin, Linda I [Albuquerque, NM

    2005-02-15

    A method for catalytically reducing nitrogen oxide compounds (NOx, defined as nitric oxide, NO, plus nitrogen dioxide, NO2) in a gas by a material comprising a base metal consisting essentially of CuO and Mn, and oxides of Mn, on an activated metal hydrous metal oxide support, such as HMO:Si. A promoter, such as tungsten oxide or molybdenum oxide, can be added and has been shown to increase conversion efficiency. This method provides good conversion of NOx to N2, good selectivity, good durability, resistance to SO2 aging and low toxicity compared with methods utilizing vanadia-based catalysts.

  3. Development of a new ferulic acid certified reference material for use in clinical chemistry and pharmaceutical analysis.

    PubMed

    Yang, Dezhi; Wang, Fengfeng; Zhang, Li; Gong, Ningbo; Lv, Yang

    2015-05-01

    This study compares the results of three certified methods, namely differential scanning calorimetry (DSC), the mass balance (MB) method and coulometric titrimetry (CT), in the purity assessment of ferulic acid certified reference material (CRM). Purity and expanded uncertainty as determined by the three methods were respectively 99.81%, 0.16%; 99.79%, 0.16%; and 99.81%, 0.26%, with, in all cases, a coverage factor (k) of 2 (P=95%). The purity results are consistent, indicating that the combination of DSC, the MB method and CT provides a confident assessment of the purity of suitable CRMs like ferulic acid.

  4. Expanding contraceptive options for PMTCT clients: a mixed methods implementation study in Cape Town, South Africa

    PubMed Central

    2014-01-01

    Background: Clients of prevention of mother-to-child transmission (PMTCT) services in South Africa who use contraception following childbirth rely primarily on short-acting methods like condoms, pills, and injectables, even when they desire no future pregnancies. Evidence is needed on strategies for expanding contraceptive options for postpartum PMTCT clients to include long-acting and permanent methods. Methods: We examined the process of expanding contraceptive options in five health centers in Cape Town providing services to HIV-positive women. Maternal/child health service providers received training and coaching to strengthen contraceptive counseling for postpartum women, including PMTCT clients. Training and supplies were introduced to strengthen intrauterine device (IUD) services, and referral mechanisms for female sterilization were reinforced. We conducted interviews with separate samples of postpartum PMTCT clients (265 pre-intervention and 266 post-intervention) to assess knowledge and behaviors regarding postpartum contraception. The process of implementing the intervention was evaluated through systematic documentation and interpretation using an intervention tracking tool. In-depth interviews with providers who participated in study-sponsored training were conducted to assess their attitudes toward and experiences with promoting voluntary contraceptive services to HIV-positive clients. Results: Following the intervention, 6% of interviewed PMTCT clients had the desired knowledge about the IUD and 23% had the desired knowledge about female sterilization. At both pre- and post-intervention, 7% of clients were sterilized and IUD use was negligible; by comparison, 75% of clients used injectables. Intervention tracking and in-depth interviews with providers revealed intervention shortcomings and health system constraints explaining the failure to produce intended effects. Conclusions: The intervention failed to improve PMTCT clients’ knowledge about the IUD and sterilization or to increase use of those methods. To address the family planning needs of postpartum PMTCT clients in a way that is consistent with their fertility desires, services must expand the range of contraceptive options to include long-acting and permanent methods. In turn, to ensure consistent access to high quality family planning services that are effectively linked to HIV services, attention must also be focused on resolving underlying health system constraints weakening health service delivery more generally. PMID:24410922

  5. Applications of algebraic topology to compatible spatial discretizations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochev, Pavel Blagoveston; Hyman, James M.

    We provide a common framework for compatible discretizations using algebraic topology to guide our analysis. The main concept is the natural inner product on cochains, which induces a combinatorial Hodge theory. The framework comprises mutually consistent operations of differentiation and integration, has a discrete Stokes theorem, and preserves the invariants of the DeRham cohomology groups. The latter allows for an elementary calculation of the kernel of the discrete Laplacian. Our framework provides an abstraction that includes examples of compatible finite element, finite volume and finite difference methods. We describe how these methods result from the choice of a reconstruction operator and when they are equivalent.
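
    The "elementary calculation of the kernel of the discrete Laplacian" can be reproduced on a toy complex: taking an oriented incidence matrix D as the discrete derivative on 0-cochains, L = DᵀD is the Laplacian and dim ker L equals the number of connected components (the 0th Betti number). The 4-node cycle below is an arbitrary example, not from the report:

    ```python
    import numpy as np

    # Oriented edge-node incidence matrix of a 4-node cycle: each row is an
    # edge, acting as a discrete derivative (gradient) on node values.
    D = np.array([
        [-1,  1,  0,  0],
        [ 0, -1,  1,  0],
        [ 0,  0, -1,  1],
        [ 1,  0,  0, -1],
    ], dtype=float)

    L = D.T @ D                      # discrete Laplacian on 0-cochains
    eigvals = np.linalg.eigvalsh(L)
    # The kernel dimension reproduces the 0th Betti number (one component here).
    print("Laplacian eigenvalues:", np.round(eigvals, 6))
    print("kernel dimension:", int(np.sum(np.isclose(eigvals, 0.0))))
    ```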

  6. Parallel macromolecular delivery and biochemical/electrochemical interface to cells employing nanostructures

    DOEpatents

    McKnight, Timothy E; Melechko, Anatoli V; Griffin, Guy D; Guillorn, Michael A; Merkulov, Vladimir L; Simpson, Michael L

    2015-03-31

    Systems and methods are described for parallel macromolecular delivery and biochemical/electrochemical interface to whole cells employing carbon nanostructures including nanofibers and nanotubes. A method includes providing a first material on at least a first portion of a first surface of a first tip of a first elongated carbon nanostructure; providing a second material on at least a second portion of a second surface of a second tip of a second elongated carbon nanostructure, the second elongated carbon nanostructure coupled to, and substantially parallel to, the first elongated carbon nanostructure; and penetrating a boundary of a biological sample with at least one member selected from the group consisting of the first tip and the second tip.

  7. An approach to optimize the batch mixing process for improving the quality consistency of the products made from traditional Chinese medicines*

    PubMed Central

    Yan, Bin-jun; Qu, Hai-bin

    2013-01-01

    The efficacy of traditional Chinese medicine (TCM) is based on the combined effects of its constituents. Variation in chemical composition between batches of TCM has always been the deterring factor in achieving consistency in efficacy. The batch mixing process can significantly reduce the batch-to-batch quality variation in TCM extracts by mixing them in a well-designed proportion. However, reducing the quality variation without sacrificing too much of the production efficiency is one of the challenges. Accordingly, an innovative and practical batch mixing method aimed at providing acceptable efficiency for industrial production of TCM products is proposed in this work, which uses a minimum number of batches of extracts to meet the content limits. The important factors affecting the utilization ratio of the extracts (URE) were studied by simulations. The results have shown that URE was affected by the correlation between the contents of constituents, and URE decreased with the increase in the number of targets and the relative standard deviations of the contents. URE could be increased by increasing the number of storage tanks. The results have provided a reference for designing the batch mixing process. The proposed method has possible application value in reducing the quality variation in TCM and providing acceptable production efficiency simultaneously. PMID:24190450

  8. An approach to optimize the batch mixing process for improving the quality consistency of the products made from traditional Chinese medicines.

    PubMed

    Yan, Bin-jun; Qu, Hai-bin

    2013-11-01

    The efficacy of traditional Chinese medicine (TCM) is based on the combined effects of its constituents. Variation in chemical composition between batches of TCM has always been the deterring factor in achieving consistency in efficacy. The batch mixing process can significantly reduce the batch-to-batch quality variation in TCM extracts by mixing them in a well-designed proportion. However, reducing the quality variation without sacrificing too much of the production efficiency is one of the challenges. Accordingly, an innovative and practical batch mixing method aimed at providing acceptable efficiency for industrial production of TCM products is proposed in this work, which uses a minimum number of batches of extracts to meet the content limits. The important factors affecting the utilization ratio of the extracts (URE) were studied by simulations. The results have shown that URE was affected by the correlation between the contents of constituents, and URE decreased with the increase in the number of targets and the relative standard deviations of the contents. URE could be increased by increasing the number of storage tanks. The results have provided a reference for designing the batch mixing process. The proposed method has possible application value in reducing the quality variation in TCM and providing acceptable production efficiency simultaneously.
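
    A brute-force sketch of the selection problem underneath the proposed process: given measured marker contents for a pool of extract batches, find the fewest batches whose equal-proportion blend falls inside the content limits (the utilization ratio then follows from how many batches are consumed). The contents, limits, and equal-proportion blending rule are assumptions for illustration:

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(7)
    # Hypothetical pool: 8 extract batches x 3 marker constituents (mg/g),
    # with target windows the blend must fall inside.
    contents = rng.normal([10.0, 5.0, 2.0], [1.0, 0.6, 0.3], size=(8, 3))
    low = np.array([9.5, 4.7, 1.85])
    high = np.array([10.5, 5.3, 2.15])

    def smallest_conforming_mix(contents, low, high):
        # Search subsets by increasing size; return the first conforming blend.
        for k in range(1, len(contents) + 1):
            for idx in itertools.combinations(range(len(contents)), k):
                blend = contents[list(idx)].mean(axis=0)
                if np.all((blend >= low) & (blend <= high)):
                    return idx, blend
        return None, None

    idx, blend = smallest_conforming_mix(contents, low, high)
    if idx is None:
        print("no conforming blend exists in this pool")
    else:
        print("batches used:", idx, "blend:", np.round(blend, 2))
    ```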

  9. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  10. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
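
    The residual test can be sketched with an off-the-shelf GP: if the model is adequate, the reconstructed residual function should be statistically indistinguishable from zero along its uncertainty band. The kernel, noise level, and sine-curve "model" below are placeholders, not the paper's CMB setup:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(6)
    x = np.linspace(0.0, 10.0, 200)[:, None]
    model_best_fit = np.sin(x).ravel()            # stand-in for a best-fit model
    data = model_best_fit + rng.normal(0.0, 0.1, x.shape[0])
    residuals = data - model_best_fit             # consistent case: pure noise

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01),
        normalize_y=True)
    gp.fit(x, residuals)
    mean, std = gp.predict(x, return_std=True)
    # For a consistent model, |mean|/std should stay small everywhere; large
    # localized values would flag structure the model fails to capture.
    print(f"max |mean|/std of reconstructed residuals: {np.max(np.abs(mean) / std):.2f}")
    ```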

  11. Deformations of vector-scalar models

    NASA Astrophysics Data System (ADS)

    Barnich, Glenn; Boulanger, Nicolas; Henneaux, Marc; Julia, Bernard; Lekeu, Victor; Ranjbar, Arash

    2018-02-01

    Abelian vector fields non-minimally coupled to uncharged scalar fields arise in many contexts. We investigate here through algebraic methods their consistent deformations ("gaugings"), i.e., the deformations that preserve the number (but not necessarily the form or the algebra) of the gauge symmetries. Infinitesimal consistent deformations are given by the BRST cohomology classes at ghost number zero. We parametrize explicitly these classes in terms of various types of global symmetries and corresponding Noether currents through the characteristic cohomology related to antifields and equations of motion. The analysis applies to all ghost numbers and not just ghost number zero. We also provide a systematic discussion of the linear and quadratic constraints on these parameters that follow from higher-order consistency. Our work is relevant to the gaugings of extended supergravities.
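
    The order-by-order constraints mentioned here follow from expanding the solution of the master equation in the deformation parameter; schematically, in antibracket notation (a standard textbook expansion rather than anything specific to this paper):

    ```latex
    S = S_0 + g\,S_1 + g^2 S_2 + \mathcal{O}(g^3), \qquad (S,S) = 0
    \quad\Longrightarrow\quad
    \begin{aligned}
    (S_0,S_0) &= 0 && \text{free master equation,}\\
    (S_0,S_1) &= 0 && \text{$S_1$ is a BRST cocycle: infinitesimal deformation,}\\
    2\,(S_0,S_2) + (S_1,S_1) &= 0 && \text{quadratic constraint on first-order data.}
    \end{aligned}
    ```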

  12. Ion binding compounds, radionuclide complexes, methods of making radionuclide complexes, methods of extracting radionuclides, and methods of delivering radionuclides to target locations

    DOEpatents

    Chen, Xiaoyuan; Wai, Chien M.; Fisher, Darrell R.

    2000-01-01

    The invention pertains to compounds for binding lanthanide ions and actinide ions. The invention further pertains to compounds for binding radionuclides, and to methods of making radionuclide complexes. Also, the invention pertains to methods of extracting radionuclides. Additionally, the invention pertains to methods of delivering radionuclides to target locations. In one aspect, the invention includes a compound comprising: a) a calix[n]arene group, wherein n is an integer greater than 3, the calix[n]arene group comprising an upper rim and a lower rim; b) at least one ionizable group attached to the lower rim; and c) an ion selected from the group consisting of lanthanide and actinide elements bound to the ionizable group. In another aspect, the invention includes a method of extracting a radionuclide, comprising: a) providing a sample comprising a radionuclide; b) providing a calix[n]arene compound in contact with the sample, wherein n is an integer greater than 3; and c) extracting radionuclide from the sample into the calix[n]arene compound. In yet another aspect, the invention includes a method of delivering a radionuclide to a target location, comprising: a) providing a calix[n]arene compound, wherein n is an integer greater than 3, the calix[n]arene compound comprising at least one ionizable group; b) providing a radionuclide bound to the calix[n]arene compound; and c) providing an antibody attached to the calix[n]arene compound, the antibody being specific for a material found at the target location.

  13. JPEG2000 encoding with perceptual distortion control.

    PubMed

    Liu, Zhen; Karam, Lina J; Watson, Andrew B

    2006-07-01

    In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
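
    One way to picture the "spatial and spectral summation" of quantization errors is Minkowski pooling of sensitivity-weighted errors, first within each subband and then across subbands; the exponents, sensitivities, and error statistics below are illustrative assumptions, not the paper's calibrated vision-model values:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    beta_s, beta_f = 2.4, 2.0            # spatial / spectral pooling exponents
    # Quantization errors in three hypothetical subbands, with per-subband
    # visual sensitivities (coarser subbands assumed less visible here).
    errors = [rng.normal(0.0, s, 1024) for s in (0.5, 0.8, 1.2)]
    sens = [1.0, 0.6, 0.3]

    per_band = [np.sum(np.abs(w * e) ** beta_s) ** (1.0 / beta_s)
                for w, e in zip(sens, errors)]
    D = np.sum(np.array(per_band) ** beta_f) ** (1.0 / beta_f)
    print(f"pooled perceptual distortion: {D:.2f}")
    ```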

  14. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
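
    A minimal Bayesian sketch in the spirit described: encode Stage 1 process-design knowledge as a Beta prior on the probability that a batch conforms, then find the smallest number of consecutive passing PPQ batches giving the desired posterior assurance. The prior, target, and assurance level are assumed for illustration, not taken from the paper:

    ```python
    from scipy import stats

    a, b = 19.0, 1.0                    # Beta prior from Stage 1 (mean 0.95)
    p_target, assurance = 0.90, 0.95    # want P(p >= 0.90 | data) >= 0.95

    for n in range(1, 25):
        # Posterior after observing n conforming PPQ batches and no failures
        posterior = stats.beta(a + n, b)
        prob = 1.0 - posterior.cdf(p_target)
        if prob >= assurance:
            print(f"{n} consecutive passing batches give "
                  f"P(p >= {p_target}) = {prob:.3f} >= {assurance}")
            break
    ```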

  15. Precipitating Condensation Clouds in Substellar Atmospheres

    NASA Technical Reports Server (NTRS)

    Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)

    2000-01-01

    We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.
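
    The diffusion/precipitation balance can be caricatured as a one-dimensional march in altitude in which an f_rain-style efficiency parameter drains condensate in excess of saturation; every number below is invented and the thermodynamics is frozen, so this shows only the structure of the balance, not the paper's cloud model:

    ```python
    import numpy as np

    # Balance K dq/dz = -f_rain * w * q_c, with q_c the condensate in excess
    # of a (here constant) saturation mixing ratio. All values are assumed.
    f_rain = 2.0        # precipitation-efficiency parameter (dimensionless)
    K, w = 1.0e5, 1.0   # eddy diffusivity [cm^2/s], convective velocity [cm/s]
    dz = 1.0e4          # layer thickness [cm]
    q_sat = 1.0e-3      # saturation mixing ratio (frozen for simplicity)

    q = 2.0e-3          # total mixing ratio at cloud base
    for _ in range(40): # march upward layer by layer
        q_c = max(q - q_sat, 0.0)
        q += dz * (-f_rain * w * q_c / K)
    print(f"mixing ratio relaxes toward saturation: 2.0e-03 -> {q:.3e}")
    ```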

  16. Formal Assurance Certifiable Tooling Strategy Final Report

    NASA Technical Reports Server (NTRS)

    Bush, Eric; Oglesby, David; Bhatt, Devesh; Murugesan, Anitha; Engstrom, Eric; Mueller, Joe; Pelican, Michael

    2017-01-01

    This is the Final Report of a research project to investigate issues and provide guidance for the qualification of formal methods tools under the DO-330 qualification process. It consisted of three major subtasks spread over two years: 1) an assessment of theoretical soundness issues that may affect qualification for three categories of formal methods tools, 2) a case study simulating the DO-330 qualification of two actual tool sets, and 3) an investigation of risk mitigation strategies that might be applied to chains of such formal methods tools in order to increase confidence in their certification of airborne software.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.; Engebrecht-Metzger, C.; Horowitz, S.

    As Building America (BA) has grown to include a large and diverse cross-section of the home building and retrofit industries, it has become more important to develop accurate, consistent analysis techniques to measure progress towards the program's goals. The House Simulation Protocol (HSP) document provides guidance to program partners and managers so they can compare energy savings for new construction and retrofit projects. The HSP provides the program with analysis methods that are proven to be effective and reliable in investigating the energy use of advanced energy systems and of entire houses.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.; Engebrecht, C. Metzger; Horowitz, S.

    As Building America has grown to include a large and diverse cross-section of the home building and retrofit industries, it has become more important to develop accurate, consistent analysis techniques to measure progress towards the program's goals. The House Simulation Protocol (HSP) document provides guidance to program partners and managers so they can compare energy savings for new construction and retrofit projects. The HSP provides the program with analysis methods that are proven to be effective and reliable in investigating the energy use of advanced energy systems and of entire houses.

  19. Review of pharmacological therapy for tinnitus.

    PubMed

    Patterson, Matthew B; Balough, Ben J

    2006-01-01

    This article provides a review of studies investigating the pharmacological treatment of tinnitus. Tinnitus continues to be a significant and costly health problem without a uniformly accepted treatment. A wide variety of studies exploring prescription, supplement, and vitamin therapies are assessed for efficacy of treatment and for establishing consistencies in symptom definition, assessment, and outcome measures. This review reveals no compelling evidence suggesting the efficacy of any pharmacological agent in the treatment of tinnitus. Analysis of prior investigations provides insight to appropriate methods for future work, which are outlined.

  20. Defense Small Business Innovation Research Program (SBIR). Volume 3. Air Force Abstracts of Phase 1 Awards

    DTIC Science & Technology

    1990-01-01

    There will be a continuing need for a sensitive, rapid, and economical testing procedure capable of detecting defects and providing feedback for quality...solutions. The DKF method provides optimal or near-optimal accuracy, reduces processing burden, and improves fault tolerance. The DKF/MMAE (DMAE) techniques...devices for B-SiC is to be able to consistently produce intrinsic films with very low defects and to develop Schottky and ohmic contact materials that will

  1. MediaTracker system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, D. M.; Strittmatter, R. B.; Abeyta, J. D.

    2004-01-01

    The initial objectives of this effort were to provide a hardware and software platform that can address the requirements for the accountability of classified removable electronic media and vault access logging. The MediaTracker system software assists classified media custodians in managing vault access logging and media tracking to prevent the inadvertent violation of rules or policies for access to a restricted area and for the movement and use of tracked items. The MediaTracker system includes the software tools to track and account for high-consequence security assets and high-value items. The overall benefits include: (1) real-time access to the disposition of all Classified Removable Electronic Media (CREM), (2) streamlined security procedures and requirements, (3) removal of ambiguity and managerial inconsistencies, (4) prevention of incidents that can and should be prevented, (5) alignment with the DOE's initiative to achieve improvements in security and facility operations through technology deployment, and (6) enhanced individual responsibility by providing a consistent method of dealing with daily responsibilities. In response to initiatives to enhance the control of classified removable electronic media (CREM), the MediaTracker software suite was developed, piloted and implemented at the Los Alamos National Laboratory beginning in July 2000. The MediaTracker software suite assists in the accountability and tracking of CREM and other high-value assets. One component of the MediaTracker software suite provides a Laboratory-approved media tracking system. Using commercial touch screen and bar code technology, the MediaTracker (MT) component of the MediaTracker software suite provides an efficient and effective means to meet current Laboratory requirements and provides new engineered controls to help assure compliance with those requirements. It also establishes a computer infrastructure at vault entrances for vault access logging, and can accommodate several methods of positive identification including smart cards and biometrics. Currently, three mechanisms provide added security for accountability and tracking purposes. The first is a portable, hand-held inventory scanner, which allows the custodian to physically track items that are not accessible within a particular area. The second is a radio frequency identification (RFID) monitoring portal, which tracks and logs in a database all activity of tagged items that pass through the portals. The third is electronic tagging of a flash memory device for automated inventory of CREM in storage. By modifying this USB device, the user is provided with added assurance, limiting the ability to obtain the data from any other computer.

  2. Applying lessons from commercial aviation safety and operations to resuscitation.

    PubMed

    Ornato, Joseph P; Peberdy, Mary Ann

    2014-02-01

    Both commercial aviation and resuscitation are complex activities in which team members must respond to unexpected emergencies in a consistent, high quality manner. Lives are at stake in both activities and the two disciplines have similar leadership structures, standard setting processes, training methods, and operational tools. Commercial aviation crews operate with remarkable consistency and safety, while resuscitation team performance and outcomes are highly variable. This commentary provides the perspective of two physician-pilots showing how commercial aviation training, operations, and safety principles can be adapted to resuscitation team training and performance. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  4. The Effect of Pinyin Input Experience on the Link Between Semantic and Phonology of Chinese Character in Digital Writing.

    PubMed

    Chen, Jingjun; Luo, Rong; Liu, Huashan

    2017-08-01

    With the development of ICT, digital writing is becoming much more common in people's lives. Unlike English words, which are entered by keying alphabetic letters directly, Chinese characters are typically entered by typing phonetic (Pinyin) letters and then selecting the intended glyph from the candidates provided by the input-method software. Because this process does not require users to produce the orthographic spelling themselves, it differs from the traditional model of written language production based on handwriting. Much of the research in this domain has found that using the Pinyin input method is beneficial to Chinese character recognition, but only a small part has explored the effects of an individual's Pinyin input experience on the Chinese character production process. We ask whether using a Pinyin input method strengthens the semantic-phonology linkage or the semantic-orthography linkage in the Chinese character mental lexicon. Recording the reaction times (RT) and accuracy of participants completing semantic-syllable and semantic-glyph consistency judgments, we found that the accuracy of semantic-syllable consistency judgments was higher, and RT shorter, in the high Pinyin input experience group than in the low-experience group. There were no significant differences in semantic-glyph consistency judgments between the two groups. We conclude that using the Pinyin input method in Chinese digital writing can strengthen the semantic-phonology linkage without weakening the semantic-orthography linkage in the mental lexicon, which means that the Pinyin input method is beneficial to lexical processing involved in Chinese cognition.

  5. An assessment of the risk of foreign animal disease introduction into the United States of America through garbage from Alaskan cruise ships.

    PubMed

    McElvaine, M D; McDowell, R M; Fite, R W; Miller, L

    1993-12-01

    The United States Department of Agriculture, Animal and Plant Health Inspection Service (USDA-APHIS) has been exploring methods of quantitative risk assessment to support decision-making, provide risk management options and identify research needs. With current changes in world trade, regulatory decisions must have a scientific basis which is transparent, consistent, documentable and defensible. These quantitative risk assessment methods are described in an accompanying paper in this issue. In the present article, the authors provide an illustration by presenting an application of these methods. Prior to proposing changes in regulations, USDA officials requested an assessment of the risk of introduction of foreign animal disease to the United States of America through garbage from Alaskan cruise ships. The risk assessment team used a combination of quantitative and qualitative methods to evaluate this question. Quantitative risk assessment methods were used to estimate the amount of materials of foreign origin being sent to Alaskan landfills. This application of quantitative risk assessment illustrates the flexibility of the methods in addressing specific questions. By applying these methods, specific areas were identified where more scientific information and research were needed. Even with limited information, the risk assessment provided APHIS management with a scientific basis for a regulatory decision.

  6. Young women's consistency of contraceptive use – Does depression or stress matter?

    PubMed Central

    Moreau, Caroline; Trussell, James; Barber, Jennifer

    2013-01-01

    Background: We prospectively examined the influence of young women's depression and stress symptoms on their weekly consistency of contraceptive method use. Study Design: Women ages 18-20 years (n=689) participating in a longitudinal cohort study completed weekly journals assessing reproductive, relationship and health characteristics. We used data through 12 months of follow-up (n=8,877 journals) to examine relationships between baseline depression (CES-D) and stress (PSS-10) symptoms and consistency of contraceptive method use with sexual activity each week. We analyzed data with random effects multinomial logistic regression. Results: Consistent contraceptive use (72% of weeks) was 10-15 percentage points lower among women with moderate/severe baseline depression and stress symptoms than those without symptoms (p-values<0.001). Controlling for covariates, women with depression and stress symptoms had 47% and 69% reduced odds of contraceptive consistency each week relative to those without symptoms, respectively (OR 0.53, CI 0.31-0.91 and OR 0.31, CI 0.18-0.52). Stress predicted inconsistent use of oral contraceptives (OR 0.27, CI 0.12-0.58), condoms (OR 0.40, CI 0.23-0.69) and withdrawal (OR 0.12, CI 0.03-0.50). Conclusion: Women with depression and stress symptoms appear to be at increased risk for user-related contraceptive failures, especially for the most commonly used methods. Implications: Our study has shown that young women with elevated depression and stress symptoms appear to be at risk for inconsistent contraceptive use patterns, especially for the most common methods that require greater user effort and diligence. Based upon these findings, clinicians should consider women's psychological and emotional status when helping patients with contraceptive decision-making and management. User-dependent contraceptive method efficacy is important to address in education and counseling sessions, and women with stress or depression may be ideal candidates for long-acting reversible methods, which offer highly effective options with less user-related burden. Ongoing research will provide a greater understanding of how young women's dynamic mental health symptoms impact family planning behaviors and outcomes over time. PMID:23850075

  7. Young women's consistency of contraceptive use--does depression or stress matter?

    PubMed

    Stidham Hall, Kelli; Moreau, Caroline; Trussell, James; Barber, Jennifer

    2013-11-01

    We prospectively examined the influence of young women's depression and stress symptoms on their weekly consistency of contraceptive method use. Women ages 18-20 years (n = 689) participating in a longitudinal cohort study completed weekly journals assessing reproductive, relationship and health characteristics. We used data through 12 months of follow-up (n = 8877 journals) to examine relationships between baseline depression (CES-D) and stress (PSS-10) symptoms and consistency of contraceptive method use with sexual activity each week. We analyzed data with random effects multivariable logistic regression. Consistent contraceptive use (72% of weeks) was 10-15 percentage points lower among women with moderate/severe baseline depression and stress symptoms than those without symptoms (p < .001). Controlling for covariates, women with depression and stress symptoms had 47% and 69% reduced odds of contraceptive consistency each week relative to those without symptoms, respectively (OR 0.53, CI 0.31-0.91 and OR 0.31, CI 0.18-0.52). Stress predicted inconsistent use of oral contraceptives (OR 0.27, CI 0.12-0.58), condoms (OR 0.40, CI 0.23-0.69) and withdrawal (OR 0.12, CI 0.03-0.50). Women with depression and stress symptoms appear to be at increased risk for user-related contraceptive failures, especially for the most commonly used methods. Our study has shown that young women with elevated depression and stress symptoms appear to be at risk for inconsistent contraceptive use patterns, especially for the most common methods that require greater user effort and diligence. Based upon these findings, clinicians should consider women's psychological and emotional status when helping patients with contraceptive decision-making and management. User-dependent contraceptive method efficacy is important to address in education and counseling sessions, and women with stress or depression may be ideal candidates for long-acting reversible methods, which offer highly effective options with less user-related burden. Ongoing research will provide a greater understanding of how young women's dynamic mental health symptoms impact family planning behaviors and outcomes over time. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Bisulfite-independent analysis of CpG island methylation enables genome-scale stratification of single cells

    PubMed Central

    Han, Lin; Wu, Hua-Jun; Zhu, Haiying; Kim, Kun-Yong; Marjani, Sadie L.; Riester, Markus; Euskirchen, Ghia; Zi, Xiaoyuan; Yang, Jennifer; Han, Jasper; Snyder, Michael; Park, In-Hyun; Irizarry, Rafael; Weissman, Sherman M.

    2017-01-01

    Conventional DNA bisulfite sequencing has been extended to the single-cell level, but the coverage consistency is insufficient for parallel comparison. Here we report a novel method for genome-wide CpG island (CGI) methylation sequencing for single cells (scCGI-seq), combining methylation-sensitive restriction enzyme digestion and multiple displacement amplification for selective detection of methylated CGIs. We applied this method to analyzing single cells from two types of hematopoietic cells, K562 and GM12878, and small populations of fibroblasts and induced pluripotent stem cells. The method detected 21 798 CGIs (76% of all CGIs) per cell, and the number of CGIs consistently detected from all 16 profiled single cells was 20 864 (72.7%), with 12 961 promoters covered. This coverage represents a substantial improvement over results obtained using single-cell reduced representation bisulfite sequencing, with a 66-fold increase in the fraction of consistently profiled CGIs across individual cells. Single cells of the same type were more similar to each other than to other types, but also displayed epigenetic heterogeneity. The method was further validated by comparing the CpG methylation pattern, the methylation profiles of CGIs/promoters and repeat regions, and 41 classes of known regulatory markers to the ENCODE data. Although not every minor methylation difference between cells is detectable, scCGI-seq provides a solid tool for unsupervised stratification of a heterogeneous cell population. PMID:28126923

  9. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes.

    PubMed

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-21

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making their computation possible using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
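
    The fluctuation route mentioned above rests on a standard formula; as a minimal sketch (not the authors' code), the area compressibility modulus K_A can be estimated from a time series of the undulation-corrected membrane area as K_A = k_B T <A> / (<A^2> - <A>^2):

        import numpy as np

        KB = 1.380649e-23  # Boltzmann constant, J/K

        def area_compressibility(area_series_m2, temperature_k=300.0):
            # area_series_m2: instantaneous membrane areas (m^2), assumed here
            # to be the undulation-corrected (CU) areas, not projected areas.
            a = np.asarray(area_series_m2, dtype=float)
            return KB * temperature_k * a.mean() / a.var()  # K_A in N/m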

  10. Density of ocular components of the bovine eye.

    PubMed

    Su, Xiao; Vesco, Christina; Fleming, Jacquelyn; Choh, Vivian

    2009-10-01

    Density is essential for acoustic characterization of tissues and provides a basic input for ultrasound backscatter and absorption models. Despite the existence of extensive compilations of acoustic properties, neither unified data on ocular density nor comparisons of the densities between all ocular components can be found. This study was undertaken to determine the mass density of all the ocular components of the bovine eye. Liquid components were measured through the mass/volume ratio, whereas solid tissues were measured with two different densitometry techniques based on Archimedes' principle. The first method determines the density by measuring the dry and wet weight of the tissues. The second method consists of immersing the tissues in sucrose solutions of varying densities and observing their buoyancy. Although the mean densities for all tissues were found to be within 0.02 g/cm³ by both methods, only the sucrose solution method offered a consistent relative order for all measured ocular components, as well as a considerably smaller standard deviation (a maximum standard deviation of 0.004 g/cm³, for cornea). The lens was found to be the densest component, followed by the sclera, cornea, choroid, retina, aqueous, and vitreous humors. The consistent results of the sucrose solution tests suggest that ocular mass density is a physical property that depends more on the compositional and structural characteristics of the tissue than on population variability.

  11. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    NASA Astrophysics Data System (ADS)

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-07-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making their computation possible using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.

  12. UNCLES: method for the identification of genes differentially consistently co-expressed in a specific subset of datasets.

    PubMed

    Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K

    2015-06-04

    Collective analysis of the increasingly emerging gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, and to identify the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth knowledge of synthetic data and the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods, and of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets, as it produces focused clusters that conform well to known biological facts. Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.

  13. Consistent estimate of ocean warming, land ice melt and sea level rise from Observations

    NASA Astrophysics Data System (ADS)

    Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel

    2016-04-01

    Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo for depths above 2000 m and oceanic cruises below) and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise such as the deep ocean contribution. Numerous previous studies have addressed this question by summing up the different contributions to sea level rise and comparing the total to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists in correcting GRACE solutions over the ocean (essentially corrections of stripes and leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic OTE estimates. We check that the resulting GRACE-corrected solutions are consistent with original GRACE estimates of the geoid spherical harmonic coefficients within error bars, and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and the GIA models and propose best estimates.
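
    The budget-closure idea reduces to simple arithmetic on globally averaged series; the sketch below is illustrative only, with hypothetical inputs in millimetres of sea-level equivalent, and derives the ocean-mass component that GRACE should match from altimetric GMSL and steric (OTE) estimates:

        import numpy as np

        # Hypothetical monthly global-mean series, mm of sea-level equivalent.
        gmsl_altimetry = np.array([0.0, 0.3, 0.7, 1.0, 1.4])  # total sea level
        steric_ote     = np.array([0.0, 0.1, 0.3, 0.4, 0.5])  # thermal expansion
        grace_mass     = np.array([0.0, 0.2, 0.5, 0.7, 0.8])  # observed ocean mass

        # Sea level budget: GMSL = steric + mass, so the implied mass term is:
        implied_mass = gmsl_altimetry - steric_ote

        # Residual used to assess/correct the GRACE solution over the ocean.
        print(implied_mass - grace_mass)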

  14. Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.

    PubMed

    Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan

    2017-01-01

    Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
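
    As a concrete illustration of one pipeline stage (pattern pruning), the sketch below keeps only maximal sequential patterns, i.e. frequent patterns not contained as subsequences of any longer frequent pattern. The pattern list and the subsequence test are generic illustrations, not the authors' implementation:

        def is_subsequence(short, long):
            # True if `short` occurs in `long` in order (not necessarily contiguous).
            it = iter(long)
            return all(event in it for event in short)

        def maximal_patterns(frequent):
            # Prune frequent sequential patterns down to the maximal ones.
            return [p for p in frequent
                    if not any(len(q) > len(p) and is_subsequence(p, q)
                               for q in frequent)]

        patterns = [("home", "search"), ("home", "search", "cart"),
                    ("search", "cart"), ("home", "cart", "checkout")]
        print(maximal_patterns(patterns))
        # [('home', 'search', 'cart'), ('home', 'cart', 'checkout')]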

  15. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  16. Mangifera Indica leaf-assisted biosynthesis of well-dispersed silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Philip, Daizy

    2011-01-01

    The use of various parts of plants for the synthesis of nanoparticles is considered a green technology as it does not involve any harmful chemicals. The present study reports a facile and rapid biosynthesis of well-dispersed silver nanoparticles. The method developed is environmentally friendly and allows the reduction to be accelerated by changing the temperature and pH of the reaction mixture consisting of aqueous AgNO3 and Mangifera Indica leaf extract. At a pH of 8, the colloid consists of well-dispersed triangular, hexagonal and nearly spherical nanoparticles of size ~20 nm. The UV-vis spectrum of the silver nanoparticles shows surface plasmon resonance (SPR) at 439 nm. The synthesized nanocrystals were characterized using transmission electron microscopy (TEM), X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. Water-soluble organics present in the leaf are responsible for the reduction of silver ions. This green method provides fast synthesis, comparable to chemical methods, and can be used in areas such as cosmetics, foods and medical applications.

  17. Induced electric currents in the Alaska oil pipeline measured by gradient, fluxgate, and SQUID magnetometers

    NASA Technical Reports Server (NTRS)

    Campbell, W. H.; Zimmerman, J. E.

    1979-01-01

    The field gradient method for observing the electric currents in the Alaska pipeline provided consistent values for both the fluxgate and SQUID methods of observation. These currents were linearly related to the regularly measured electric and magnetic field changes. Determinations of pipeline current were consistent with values obtained by a direct-connection, current-shunt technique at a pipeline site about 9.6 km away. The gradient method has the distinct advantage of portability and buried-pipe capability. Field gradients due to the pipe magnetization, geological features, or ionospheric source currents do not seem to contribute a measurable error to such pipe current determination. The SQUID gradiometer is inherently sensitive enough to detect very small currents in a linear conductor at 10 meters, or conversely, to detect currents of one ampere or more at relatively great distances. It is fairly straightforward to achieve imbalance less than one part in ten thousand, and with extreme care, one part in one million or better.
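
    The physics behind the gradient method is the field of a long straight conductor, B(r) = mu0 * I / (2 * pi * r); measuring the field at two lateral distances lets one solve for the pipe current. The sketch below is a hedged illustration with made-up geometry and field values, not the instrument processing actually used:

        import math

        MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

        def line_current_from_gradient(b_near_t, b_far_t, r_near_m, r_far_m):
            # Infer the current (A) in a long straight pipe from fields (T)
            # measured at two lateral distances: B(r) = MU0 * I / (2*pi*r).
            delta_b = b_near_t - b_far_t
            geometry = (MU0 / (2 * math.pi)) * (1.0 / r_near_m - 1.0 / r_far_m)
            return delta_b / geometry

        # Example: sensors 10 m and 11 m from the pipe (hypothetical values).
        print(line_current_from_gradient(2.0e-7, 1.818e-7, 10.0, 11.0))  # ~10 A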

  18. A novel method for the identification of inorganic and organic gunshot residue particles of lead-free ammunitions from the hands of shooters using scanning laser ablation-ICPMS and Raman micro-spectroscopy.

    PubMed

    Abrego, Zuriñe; Grijalba, Nagore; Unceta, Nora; Maguregui, Maite; Sanchez, Alicia; Fernández-Isla, Alberto; Goicolea, M Aranzazu; Barrio, Ramón J

    2014-12-07

    A method based on scanning laser ablation and inductively coupled plasma-mass spectrometry (SLA-ICPMS) and Raman micro-spectroscopy for the detection and identification of compounds consistent with gunshot residue particles (GSR) has been developed. The method has been applied to the characterization of particles resulting from the discharge of firearms using lead-free ammunition. Modified tape lifts were used to collect the inorganic and organic residues from skin surfaces in a single sample. Using SLA-ICPMS, aggregates related to the composition of the ammunition, such as Cu-Zn-Sn, Zr-Sr, Cu-Zn, Al-Ti, or Al-Sr-Zr were detected, but this composition is only consistent with GSR from lead-free ammunitions. Additional evidence was provided by micro-Raman spectroscopy, which identified the characteristic organic groups of the particles as centralite, diphenylamine or their nitrated derivatives, which are indicative of GSR.

  19. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation we developed LFQbench, an R package to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  20. Construction of high-rise building with underground parking in Moscow

    NASA Astrophysics Data System (ADS)

    Ilyichev, Vyacheslav; Nikiforova, Nadezhda; Konnov, Artem

    2018-03-01

    This paper presents results of scientific support for the construction of a unique residential building 108 m high, with a one-storey underground part under the high-rise section and a 3-storey underground parking garage connected by an underground passage. On-site soils included anthropogenic soil, soft to stiff clayey soils, and saturated sands of varied grain coarseness. A design for the retaining structure and support system for the high-rise part excavation was developed, involving the installation of steel pipes and struts. Construction of the adjacent 3-storey underground parking garage by the "Moscow method" is described in the paper. This method involves implementation of a retaining wall consisting of prefabricated panels, truss structures (used as struts) and reinforced concrete slabs. The design and construction technology are also provided for foundations consisting of bored piles 800 mm in diameter joined by a slab, with a base widening diameter of 1500 mm. Results of static and dynamic load testing (ELDY method) are considered. Geotechnical monitoring data on the settlement of adjacent buildings and utility systems caused by construction of the presented high-rise building were compared with numerical modelling results and with predicted and permissible values.

  1. The Development and Implementation of a Model for Evaluating Clinical Specialty Education Programs.

    ERIC Educational Resources Information Center

    McLean, James E.; And Others

    A new method for evaluating cancer education programs, using an external/internal evaluation team, is outlined. The internal program staff are required to collect the data, arrange for a site visit, provide access to personnel, and make available other information requested by the evaluators. The external team consists of a dentist with oncological…

  2. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT XX, CUMMINS DIESEL ENGINE, MAINTENANCE SUMMARY.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Vocational and Technical Education.

    This module of a 30-module course is designed to provide a summary of the reasons and procedures for diesel engine maintenance. Topics are what engine break-in means, engine break-in, torquing bearings (template method), and the need for maintenance. The module consists of a self-instructional branch programed training film "Cummins Diesel Engine…

  3. A Scoping Review of Scoping Reviews: Advancing the Approach and Enhancing the Consistency

    ERIC Educational Resources Information Center

    Pham, Mai T.; Rajic, Andrijana; Greig, Judy D.; Sargeant, Jan M.; Papadopoulos, Andrew; McEwen, Scott A.

    2014-01-01

    Background: The scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature. Methods: A…

  4. Three-dimensional tracking solar energy concentrator and method for making same

    NASA Technical Reports Server (NTRS)

    Miller, C. G.; Pohl, J. G. (Inventor)

    1977-01-01

    A three-dimensional tracking solar energy concentrator, consisting of a stretched aluminized polymeric membrane supported by a hoop, is presented. The system is sturdy enough to withstand expected windage forces and precipitation. It can provide the high-temperature output needed by central station power plants for power production in the multi-megawatt range.

  5. A Mixed-Methods Evaluation of Social Work Learning Outcomes in Interprofessional Training with Medicine and Pharmacy Students

    ERIC Educational Resources Information Center

    Wharton, Tracy; Burg, Mary Ann

    2017-01-01

    Social work has moved firmly into a need for partnership training models, as our newest Educational Policy and Accreditation Standards explicitly call for interprofessional education (IPE). Although IPE is not a new model, we have not been consistently involved in training partnerships. Three professional schools formed partnerships to provide IPE…

  6. Risk terminology primer: Basic principles and a glossary for the wildland fire management community

    Treesearch

    Matthew P. Thompson; Tom Zimmerman; Dan Mindar; Mary Taber

    2016-01-01

    Risk management is being increasingly promoted as an appropriate method for addressing wildland fire management challenges. However, a lack of a common understanding of risk concepts and terminology is hindering effective application. In response, this General Technical Report provides a set of clear, consistent, understandable, and usable definitions for terms...

  7. The Newspaper in the Classroom: Suggestions for Using Your Newspaper in Classrooms of Junior and Senior High Schools.

    ERIC Educational Resources Information Center

    Copley Newspapers, San Diego, CA. Dept. of Education.

    Consisting of the combined findings of recent Newspaper in the Classroom Workshops and methods already successfully used in the schools in areas where Copley newspapers are published, this booklet provides techniques for using the newspaper in the following subject areas: social studies, United States history, United States government, world…

  8. Linking Soils and Down Woody Material Inventories for Cohesive Assessments of Ecosystem Carbon Pools

    Treesearch

    Katherine P. O' Neill; Christopher Woodall; Michael Amacher; Geoffrey Holden

    2005-01-01

    The Soils and Down Woody Materials (DWM) indicators collected by the Forest Inventory and Analysis program provide the only data available for nationally consistent monitoring of carbon storage in soils, the forest floor, and down woody debris. However, these indicators were developed and implemented separately, resulting in field methods and compilation procedures...

  9. Design and Implementation of a Multi-Strategy, Collegewide Program of Evaluation and Planning: The Mercy College Self-Study Project.

    ERIC Educational Resources Information Center

    Kraetzer, Mary C.; And Others

    The rationale, strategies, and methods of The Mercy College Self-Study Project are considered, and evaluation instruments are provided. This program of institutional evaluation and planning was initiated in 1980 and consists of: standardized surveys, a 10-year longitudinal (panel) study, and academic department self-studies. Questionnaires…

  10. HOT PRESSING WITH A TEMPERATURE GRADIENT

    DOEpatents

    Hausner, H.H.

    1958-05-20

    A method is described for producing powder metal compacts with a high length-to-width ratio, which are of substantially uniform density. The process consists of arranging a heating coil around the die and providing a temperature gradient along the length of the die, with the highest temperature at the point of the compact farthest away from the ram or plunger.

  11. Neutron and gamma radiation shielding material, structure, and process of making structure

    DOEpatents

    Hondorp, Hugh L.

    1984-01-01

    The present invention is directed to a novel neutron and gamma radiation shielding material consisting of 95 to 97 percent by weight SiO2 and 5 to 3 percent by weight sodium silicate. In addition, the method of using this composition to provide a continuous neutron and gamma radiation shielding structure is disclosed.

  12. Dreaming in the Classroom: Practices, Methods, and Resources in Dream Education. SUNY Series in Dream Studies

    ERIC Educational Resources Information Center

    King, Philip; Bulkeley, Kelly; Welt, Bernard

    2011-01-01

    "Dreaming in the Classroom" provides teachers from virtually all fields with a uniquely informative guidebook for introducing their students to the universal human phenomenon of dreaming. Although dreaming may not be held in high esteem in mainstream Western society, students at all education levels consistently enjoy learning about…

  13. Addressing the Missing Instructional Data Problem: Using a Teacher Log to Document Tier 1 Instruction

    ERIC Educational Resources Information Center

    Kurz, Alexander; Elliott, Stephen N.; Roach, Andrew T.

    2015-01-01

    Response-to-intervention (RTI) systems posit that Tier 1 consists of high-quality general classroom instruction using evidence-based methods to address the needs of most students. However, data on the extent to which general education teachers provide such instruction are rarely collected. This missing instructional data problem may result in RTI…

  14. Calculating CMMI-Based ROI: Why, When, What, and How?

    DTIC Science & Technology

    2007-03-01

    flows are discounted using either: • the company's weighted average cost of capital (WACC) • a "hurdle rate" consisting of the company's cost of...Provides a "one number" method of comparing projects that is independent of the company's WACC or hurdle rate. Cons: • No way to know the dollar magnitude of

  15. Evidence-based nursing-sensitive indicators for patients hospitalized with depression in Thailand.

    PubMed

    Thapinta, Darawan; Anders, Robert L; Mahatnirunkul, Suwat; Srikosai, Soontaree

    2010-12-01

    The aim of this study was to develop and validate nursing-sensitive indicators for patients hospitalized with depression in Thailand. The initial draft, consisting of 12 categories with 37 subcategories, was then evaluated by experts in the US and Thailand. Hospital records were then utilized to evaluate the feasibility and efficacy of the indicators. The finalized instrument consisted of 11 categories with 43 items with a validity of .98 and internal consistency of .88. This is the first set of indicators developed to evaluate nursing-sensitivity for patients hospitalized with a diagnosis of depression in Thailand. Having nursing indicators for depressed patients provides nurses with concrete tools to evaluate their work with depressed patients, allowing these staff to assess their work in a very specific, methodical, and consistent manner. When problems are discovered, both the staff and administration can work to address these issues through training, procedural changes, and departmental shifts.

  16. Ambiguous taxa: Effects on the characterization and interpretation of invertebrate assemblages

    USGS Publications Warehouse

    Cuffney, T.F.; Bilger, Michael D.; Haigler, A.M.

    2007-01-01

    Damaged and immature specimens often result in macroinvertebrate data that contain ambiguous parent-child pairs (i.e., abundances associated with multiple related levels of the taxonomic hierarchy such as Baetis pluto and the associated ambiguous parent Baetis sp.). The choice of method used to resolve ambiguous parent-child pairs may have a very large effect on the characterization of invertebrate assemblages and the interpretation of responses to environmental change because very large proportions of taxa richness (73-78%) and abundance (79-91%) can be associated with ambiguous parents. To address this issue, we examined 16 variations of 4 basic methods for resolving ambiguous taxa: RPKC (remove parent, keep child), MCWP (merge child with parent), RPMC (remove parent or merge child with parent depending on their abundances), and DPAC (distribute parents among children). The choice of method strongly affected assemblage structure, assemblage characteristics (e.g., metrics), and the ability to detect responses along environmental (urbanization) gradients. All methods except MCWP produced acceptable results when used consistently within a study. However, the assemblage characteristics (e.g., values of assemblage metrics) differed widely depending on the method used, and data should not be combined unless the methods used to resolve ambiguous taxa are well documented and are known to be comparable. The suitability of the methods was evaluated and compared on the basis of 13 criteria that considered conservation of taxa richness and abundance, consistency among samples, methods, and studies, and effects on the interpretation of the data. Methods RPMC and DPAC had the highest suitability scores regardless of whether ambiguous taxa were resolved for each sample separately or for a group of samples. Method MCWP gave consistently poor results. Methods MCWP and DPAC approximate the use of family-level identifications and operational taxonomic units (OTU), respectively. Our results suggest that restricting identifications to the family level is not a good method of resolving ambiguous taxa, whereas generating OTUs works well provided that documentation issues are addressed. © 2007 by The North American Benthological Society.
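
    Of the four strategies, DPAC (distribute parents among children) is the most algorithmic; the sketch below shows one plausible proportional-allocation implementation for a single ambiguous parent-child group. The data structure and the example counts are illustrative, not taken from the paper:

        def distribute_parent(parent_count, child_counts):
            # DPAC-style resolution: allocate an ambiguous parent's abundance
            # among its identified children in proportion to child abundances.
            total = sum(child_counts.values())
            if total == 0:
                return dict(child_counts)  # no children to receive the parent
            return {taxon: count + parent_count * count / total
                    for taxon, count in child_counts.items()}

        # 'Baetis sp.' is the ambiguous parent of two identified species.
        print(distribute_parent(10, {"Baetis pluto": 30, "Baetis flavistriga": 10}))
        # {'Baetis pluto': 37.5, 'Baetis flavistriga': 12.5}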

  17. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times, but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses, because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  18. Energy-Based Metrics for Arthroscopic Skills Assessment.

    PubMed

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
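
    As an illustration of the evaluation protocol described above (not the authors' code), the sketch below runs a leave-one-subject-out cross-validation of an SVM on normalized energy-based features; the feature values, labels, and subject IDs are synthetic placeholders:

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_trials, n_metrics = 52, 4                 # e.g., 26 subjects x 2 trials
        X = rng.normal(size=(n_trials, n_metrics))  # normalized energy metrics
        y = np.repeat([0, 1], n_trials // 2)        # 0 = novice, 1 = expert
        subjects = np.repeat(np.arange(26), 2)      # subject ID per trial

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(clf, X, y, groups=subjects,
                                 cv=LeaveOneGroupOut())
        print(scores.mean())  # average held-out-subject accuracy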

  19. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, following a power law in the frequency spectrum, S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
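
    A minimal version of the most common estimator family — fitting a line to the log-log power spectrum — can be sketched as follows; the Welch parameters and the synthetic test signal are illustrative assumptions, not the paper's configuration:

        import numpy as np
        from scipy.signal import welch

        def spectral_exponent(x, fs=1.0):
            # Estimate beta in S(f) ~ 1/f**beta by a log-log linear fit
            # to the Welch periodogram (DC bin excluded).
            f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
            f, pxx = f[1:], pxx[1:]  # drop f = 0
            slope, _ = np.polyfit(np.log10(f), np.log10(pxx), 1)
            return -slope

        # White noise has a flat spectrum, so beta should be near 0.
        rng = np.random.default_rng(1)
        print(spectral_exponent(rng.normal(size=4096)))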

  20. A Novel Estimator for the Rate of Information Transfer by Continuous Signals

    PubMed Central

    Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko

    2011-01-01

    The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562
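
    The paper's novel estimator is not reproduced here; for orientation, the classical coherence-based Gaussian-channel bound that such methods improve upon can be computed as below. The signals, sampling rate, and Welch segment length are illustrative assumptions:

        import numpy as np
        from scipy.signal import coherence

        def gaussian_channel_rate(x, y, fs):
            # Classical coherence-based estimate (bits/s) of the information
            # rate between input x and output y under Gaussian assumptions:
            # R = -integral over f of log2(1 - Cxy(f)) df.
            f, cxy = coherence(x, y, fs=fs, nperseg=1024)
            cxy = np.clip(cxy, 0.0, 1.0 - 1e-12)  # numerical safety
            return -np.trapz(np.log2(1.0 - cxy), f)

        fs = 1000.0
        rng = np.random.default_rng(2)
        x = rng.normal(size=100_000)
        y = x + rng.normal(size=x.size)            # noisy copy of the input
        print(gaussian_channel_rate(x, y, fs))     # bits per second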

  1. Mixed-methods designs in mental health services research: a review.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Chamberlain, Patricia; Hurlburt, Michael S; Landsverk, John

    2011-03-01

    Despite increased calls for use of mixed-methods designs in mental health services research, how and why such methods are being used and whether there are any consistent patterns that might indicate a consensus about how such methods can and should be used are unclear. Use of mixed methods was examined in 50 peer-reviewed journal articles found by searching PubMed Central and 60 National Institutes of Health (NIH)-funded projects found by searching the CRISP database over five years (2005-2009). Studies were coded for aims and the rationale, structure, function, and process for using mixed methods. A notable increase was observed in articles published and grants funded over the study period. However, most did not provide an explicit rationale for using mixed methods, and 74% gave priority to use of quantitative methods. Mixed methods were used to accomplish five distinct types of study aims (assess needs for services, examine existing services, develop new or adapt existing services, evaluate services in randomized controlled trials, and examine service implementation), with three categories of rationale, seven structural arrangements based on timing and weighting of methods, five functions of mixed methods, and three ways of linking quantitative and qualitative data. Each study aim was associated with a specific pattern of use of mixed methods, and four common patterns were identified. These studies offer guidance for continued progress in integrating qualitative and quantitative methods in mental health services research consistent with efforts by NIH and other funding agencies to promote their use.

  2. Habitat Use and Trophic Structure in a Highly Migratory Predatory Fish Identified with Geochemical Proxies in Scales

    NASA Astrophysics Data System (ADS)

    Seeley, M.; Walther, B. D.

    2016-02-01

    Atlantic tarpon, Megalops atlanticus, are highly migratory euryhaline predators that occupy different habitats throughout ontogeny. Specifically, Atlantic tarpon are known to inhabit oligohaline waters, although the frequency and duration of movements across estuarine gradients into these waters are relatively unknown. This species supports an industry worth over two billion dollars within the Gulf of Mexico and is currently listed as vulnerable by the International Union for the Conservation of Nature (IUCN). A new non-lethal method for reconstructing migrations across estuaries relies on trace element and stable isotope compositions of growth increments in scales. We analyzed Atlantic tarpon scales from the Texas coast to validate this method, using inductively coupled plasma mass spectrometry (ICP-MS) for trace elements and isotope ratio mass spectrometry (IR-MS) for stable isotope ratios. Multiple scales were taken from each fish to confirm the consistency of elemental uptake within an individual. Results show that scale Ba:Ca, Sr:Ca and δ13C are effective proxies for salinity, while enrichments in δ15N are consistent with known ontogenetic trophic shifts. In addition, chemical transects across multiple scales from the same individual were highly consistent, suggesting that any non-regenerated scale removed from a fish can provide an equivalent time series. Continuous life history profiles were obtained via laser ablation transects of scale cross-sections to quantify trace element concentrations from the core (youngest increments) to the edge (oldest increments). Stable isotope and trace element results together indicate that behavior is highly variable between individuals, with some but not all fish transiting estuarine gradients into oligohaline waters. Our findings will provide novel opportunities to investigate alternative non-lethal methods to monitor fish migrations across chemical gradients.

  3. Upper mantle anisotropy from long-period P polarization

    NASA Astrophysics Data System (ADS)

    Schulte-Pelkum, Vera; Masters, Guy; Shearer, Peter M.

    2001-10-01

    We introduce a method to infer upper mantle azimuthal anisotropy from the polarization, i.e., the direction of particle motion, of teleseismic long-period P onsets. The horizontal polarization of the initial P particle motion can deviate by >10° from the great circle azimuth from station to source despite a high degree of linearity of motion. Recent global isotropic three-dimensional mantle models predict effects that are an order of magnitude smaller than our observations. Stations within regional distances of each other show consistent azimuthal deviation patterns, while the deviations seem to be independent of source depth and near-source structure. We demonstrate that despite this receiver-side spatial coherence, our polarization data cannot be fit by a large-scale joint inversion for whole mantle structure. However, they can be reproduced by azimuthal anisotropy in the upper mantle and crust. Modeling with an anisotropic reflectivity code provides bounds on the magnitude and depth range of the anisotropy manifested in our data. Our method senses anisotropy within one wavelength (250 km) under the receiver. We compare our inferred fast directions of anisotropy to those obtained from Pn travel times and SKS splitting. The results of the comparison are consistent with azimuthal anisotropy situated in the uppermost mantle, with SKS results deviating from Pn and Ppol in some regions with probable additional deeper anisotropy. Generally, our fast directions are consistent with anisotropic alignment due to lithospheric deformation in tectonically active regions and to absolute plate motion in shield areas. Our data provide valuable additional constraints in regions where discrepancies between results from different methods exist since the effect we observe is local rather than cumulative as in the case of travel time anisotropy and shear wave splitting. Additionally, our measurements allow us to identify stations with incorrectly oriented horizontal components.

  4. Characterization of Vertical Ozonesonde Measurements in Equatorial Regions Utilizing the Cooperative Enterprise SHADOZ

    NASA Technical Reports Server (NTRS)

    Schmidlin, F. J.; Thompson, A. M.; Holdren, D. H.; Northam, E. T.; Witte, J. C.; Oltmans, S. J.; Hoegger, B.; Levrat, G. M.; Kirchhoff, V.

    2000-01-01

    Vertical ozone profiles between the Equator and 10 S latitude available from the Southern Hemisphere Additional Ozonesondes (SHADOZ) program provide consistent ozone data sets from up to 10 sounding locations. SHADOZ, designed to provide independent ozone profiles in the tropics for the evaluation of satellite ozone data and models, has made available over 600 soundings over the period 1998-1999. These observations provide an ideal database for the detailed description of ozone and afford differential comparison between sites. TOMS total ozone, when compared with correlative integrated total ozone overburden from the sondes, is found to be negatively biased when using the classical constant mixing ratio procedure to determine residual ozone. On the other hand, the climatological method proposed by McPeters and Labow appears to give consistent results but is positively biased. The more than two-year series of measurements was also subjected to harmonic analysis to examine data cycles. These will be discussed as well.

  5. Impulse response method for characterization of echogenic liposomes.

    PubMed

    Raymond, Jason L; Luan, Ying; van Rooij, Tom; Kooiman, Klazina; Huang, Shao-Ling; McPherson, David D; Versluis, Michel; de Jong, Nico; Holland, Christy K

    2015-04-01

    An optical characterization method is presented based on the use of the impulse response to characterize the damping imparted by the shell of an air-filled ultrasound contrast agent (UCA). The interfacial shell viscosity was estimated based on the unforced decaying response of individual echogenic liposomes (ELIP) exposed to a broadband acoustic impulse excitation. Radius versus time response was measured optically based on recordings acquired using an ultra-high-speed camera. The method provided an efficient approach that enabled statistical measurements on 106 individual ELIP. A decrease in shell viscosity, from 2.1 × 10^-8 to 2.5 × 10^-9 kg/s, was observed with increasing dilatation rate, from 0.5 × 10^6 to 1 × 10^7 s^-1. This nonlinear behavior has been reported in other studies of lipid-shelled UCAs and is consistent with rheological shear-thinning. The measured shell viscosity for the ELIP formulation used in this study [κs = (2.1 ± 1.0) × 10^-8 kg/s] was in quantitative agreement with previously reported values on a population of ELIP and is consistent with other lipid-shelled UCAs. The acoustic response of ELIP therefore is similar to that of other lipid-shelled UCAs despite loading with air instead of perfluorocarbon gas. The methods described here can provide an accurate estimate of the shell viscosity and damping for individual UCA microbubbles.

  6. Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation

    NASA Astrophysics Data System (ADS)

    Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas

    2018-02-01

    Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solve high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D-spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Less than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
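
    For readers unfamiliar with the EnKF, the sketch below implements one perturbed-observation analysis step on a generic state ensemble. It is a schematic of the update only, not the authors' mantle-convection assimilation code, and all dimensions and values are arbitrary.

      # One EnKF analysis step (perturbed-observation form).
      import numpy as np

      def enkf_update(E, H, y, R, rng):
          """E: (n_state, n_ens) ensemble; H: obs operator; y: obs; R: obs covariance."""
          n_obs, n_ens = len(y), E.shape[1]
          Y = H @ E                                      # forecast observations
          A = E - E.mean(axis=1, keepdims=True)          # state anomalies
          Ya = Y - Y.mean(axis=1, keepdims=True)         # observation anomalies
          Pyy = Ya @ Ya.T / (n_ens - 1) + R              # innovation covariance
          Pxy = A @ Ya.T / (n_ens - 1)                   # state-obs cross covariance
          K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
          y_pert = y[:, None] + rng.multivariate_normal(
              np.zeros(n_obs), R, size=n_ens).T          # perturbed observations
          return E + K @ (y_pert - Y)

      rng = np.random.default_rng(3)
      E = rng.normal(size=(10, 300))      # 300 members, the order used in the study
      H = np.eye(2, 10)                   # observe the first two state components
      E_a = enkf_update(E, H, np.array([1.0, -0.5]), 0.1 * np.eye(2), rng)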

  7. Methods matter: considering locomotory mode and respirometry technique when estimating metabolic rates of fishes

    PubMed Central

    Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.

    2016-01-01

    Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471

  8. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.

  9. Too much ado about instrumental variable approach: is the cure worse than the disease?

    PubMed

    Baser, Onur

    2009-01-01

    To review the efficacy of instrumental variable (IV) models in addressing a variety of assumption violations to ensure standard ordinary least squares (OLS) estimates are consistent. IV models gained popularity in outcomes research because of their ability to consistently estimate the average causal effects even in the presence of unmeasured confounding. However, in order for this consistent estimation to be achieved, several conditions must hold. In this article, we provide an overview of the IV approach, examine possible tests to check the prerequisite conditions, and illustrate how weak instruments may produce inconsistent and inefficient results. We use two IVs and apply Shea's partial R-square method, the Anderson canonical correlation, and Cragg-Donald tests to check for weak instruments. Hall-Peixe tests are applied to see if any of these instruments are redundant in the analysis. A total of 14,952 asthma patients from the MarketScan Commercial Claims and Encounters Database were examined in this study. Patient health care was provided under a variety of fee-for-service, fully capitated, and partially capitated health plans, including preferred provider organizations, point of service plans, indemnity plans, and health maintenance organizations. We used controller-reliever copay ratio and physician practice/prescribing patterns as instruments. We demonstrated that the former was a weak and redundant instrument producing inconsistent and inefficient estimates of the effect of treatment. The results were worse than the results from standard regression analysis. Despite the obvious benefit of IV models, the method should not be used blindly. Several strong conditions are required for these models to work, and each of them should be tested. Otherwise, the bias and precision of the results may be statistically worse than those achieved by simply using standard OLS.
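
    The core mechanics are easy to reproduce. The sketch below runs a manual two-stage least squares with a first-stage F statistic, the kind of weak-instrument check the paper argues is essential. The data are synthetic; none of the variable names correspond to the MarketScan variables.

      # Manual 2SLS with a first-stage F test for instrument strength.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000
      z = rng.normal(size=n)                     # instrument
      u = rng.normal(size=n)                     # unmeasured confounder
      treat = 0.05 * z + u + rng.normal(size=n)  # deliberately weak first stage
      cost = 1.0 * treat + 2.0 * u + rng.normal(size=n)

      Z = np.column_stack([np.ones(n), z])
      X = np.column_stack([np.ones(n), treat])

      # First stage: regress the endogenous regressor on the instrument.
      g, *_ = np.linalg.lstsq(Z, treat, rcond=None)
      fitted = Z @ g
      ssr_u = np.sum((treat - fitted) ** 2)
      ssr_r = np.sum((treat - treat.mean()) ** 2)
      F = (ssr_r - ssr_u) / (ssr_u / (n - 2))    # rule of thumb: F < 10 => weak
      print(f"first-stage F = {F:.1f}")

      # Second stage: replace the regressor with its first-stage fit.
      X2 = np.column_stack([np.ones(n), fitted])
      beta_iv, *_ = np.linalg.lstsq(X2, cost, rcond=None)
      beta_ols, *_ = np.linalg.lstsq(X, cost, rcond=None)
      # OLS is biased upward (~2.0); the weak-instrument IV estimate is unstable.
      print(f"IV {beta_iv[1]:.2f} vs OLS {beta_ols[1]:.2f} (true effect 1.0)")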

  10. A software platform for continuum modeling of ion channels based on unstructured mesh

    NASA Astrophysics Data System (ADS)

    Tu, B.; Bai, S. Y.; Chen, M. X.; Xie, Y.; Zhang, L. B.; Lu, B. Z.

    2014-01-01

    Most traditional continuum molecular modeling adopted finite difference or finite volume methods, which were based on a structured mesh (grid). Unstructured meshes were only occasionally used, but an increasing number of applications are emerging in molecular simulations. To facilitate the continuum modeling of biomolecular systems based on unstructured meshes, we are developing a software platform with tools which are particularly beneficial to those approaches. This work describes the software system specifically for the simulation of a typical, complex molecular procedure: ion transport through a three-dimensional channel system that consists of a protein and a membrane. The platform contains three parts: a meshing tool chain for ion channel systems, a parallel finite element solver for the Poisson-Nernst-Planck equations describing the electrodiffusion process of ion transport, and a visualization program for continuum molecular modeling. The meshing tool chain in the platform, which consists of a set of mesh generation tools, is able to generate high-quality surface and volume meshes for ion channel systems. The parallel finite element solver in our platform is based on the parallel adaptive finite element package PHG, which was developed by one of the authors [1]. As a featured component of the platform, a new visualization program, VCMM, has specifically been developed for continuum molecular modeling with an emphasis on providing useful facilities for unstructured mesh-based methods and for their output analysis and visualization. VCMM provides a graphical user interface and consists of three modules: a molecular module, a meshing module and a numerical module. A demonstration of the platform is provided with a study of two real proteins, the connexin 26 and hemolysin ion channels.

  11. Continuation of research into software for space operations support, volume 1

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.; Killough, Ronnie; Martin, Nancy L.

    1990-01-01

    A prototype workstation executive called the Hardware Independent Software Development Environment (HISDE) was developed. Software technologies relevant to workstation executives were researched and evaluated, and HISDE was used as a test bed for prototyping efforts. New X Windows software concepts and technology were introduced into workstation executives and related applications. The four research efforts performed were: (1) research into the usability and efficiency of Motif (an X Windows-based graphical user interface), which consisted of converting the existing Athena-widget-based HISDE user interface to Motif, demonstrating the usability of Motif and providing insight into the level of effort required to translate an application from one widget set to another; (2) prototyping of a real-time data display widget, which consisted of researching methods for, and prototyping the selected method of, displaying textual values efficiently; (3) an X Windows performance evaluation, which consisted of a series of performance measurements demonstrating the ability of low-level X Windows to display textual information; (4) conversion of the Display Manager to X Windows/Motif, the application used by NASA for data display during operational mode.

  12. Anatomical contouring variability in thoracic organs at risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCall, Ross, E-mail: rmccall86@gmail.com; MacLennan, Grayden; Taylor, Matthew

    2016-01-01

    The purpose of this study was to determine whether contouring thoracic organs at risk was consistent among medical dosimetrists and to identify how trends in dosimetrists' education and experience affected contouring accuracy. Qualitative and quantitative methods were used to contextualize the raw data that were obtained. A total of 3 different computed tomography (CT) data sets were provided to medical dosimetrists (N = 13) across 5 different institutions. The medical dosimetrists were directed to contour the lungs, heart, spinal cord, and esophagus. The medical dosimetrists were instructed to contour in line with their institutional standards and were allowed to use any contouring tool or technique that they would traditionally use. The contours from each medical dosimetrist were evaluated against "gold standard" contours drawn and validated by 2 radiation oncology physicians. The dosimetrist-derived contours were evaluated against the gold standard using both a Dice coefficient method and a penalty-based metric scoring system. A short survey was also completed by each medical dosimetrist to evaluate their individual contouring experience. There was no significant variation in the contouring consistency of the lungs and spinal cord. Intradosimetrist contouring was consistent for those who contoured the esophagus and heart correctly; however, medical dosimetrists with a poor metric score showed erratic and inconsistent methods of contouring.
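
    The Dice coefficient used for scoring is simple to compute from binary masks; a minimal sketch with synthetic masks (not the study's CT data):

      # Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
      import numpy as np

      def dice(mask_a, mask_b):
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      gold = np.zeros((64, 64), bool); gold[20:40, 20:40] = True   # "gold standard"
      test = np.zeros((64, 64), bool); test[22:42, 20:40] = True   # shifted contour
      print(f"Dice = {dice(gold, test):.3f}")   # 0.900 for this two-row shift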

  13. Seven-Year Clinical Surveillance Program Demonstrates Consistent MARD Accuracy Performance of a Blood Glucose Test Strip.

    PubMed

    Setford, Steven; Grady, Mike; Mackintosh, Stephen; Donald, Robert; Levy, Brian

    2018-05-01

    MARD (mean absolute relative difference) is increasingly used to describe the performance of glucose monitoring systems, providing a single-value quantitative measure of accuracy and allowing comparisons between different monitoring systems. This study reports MARDs for the OneTouch Verio® glucose meter clinical data set of 80 258 data points (671 individual batches) gathered as part of a 7.5-year self-surveillance program. Methods: Test strips were routinely sampled from randomly selected manufacturer's production batches and sent to one of 3 clinic sites for clinical accuracy assessment using fresh capillary blood from patients with diabetes, using both the meter system and a standard laboratory reference instrument. Evaluation of the distribution of strip batch MARD yielded a mean value of 5.05% (range: 3.68-6.43% at ±1.96 standard deviations from the mean). The overall MARD for all clinic data points (N = 80 258) was also 5.05%, while a mean bias of 1.28 was recorded. MARD by glucose level was found to be consistent, yielding a maximum value of 4.81% at higher glucose (≥100 mg/dL) and a mean absolute difference (MAD) of 5.60 mg/dL at low glucose (<100 mg/dL). MARD by year of manufacture varied from 4.67% to 5.42%, indicating consistent accuracy performance over the surveillance period. This 7.5-year surveillance program showed that the meter system exhibits consistently low MARD by batch, glucose level, and year, indicating close agreement with established reference methods while exhibiting lower MARD values than continuous glucose monitoring (CGM) systems, and providing users with confidence in performance when transitioning to each new strip batch.
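
    MARD as used here is the mean of |meter − reference|/reference expressed in percent, with MAD (mean absolute difference, in mg/dL) used at low glucose. A minimal sketch with invented readings:

      # MARD in percent; MAD in mg/dL for the low-glucose range.
      import numpy as np

      def mard(meter, reference):
          return 100.0 * np.mean(np.abs(meter - reference) / reference)

      def mad(meter, reference):
          return np.mean(np.abs(meter - reference))

      ref = np.array([72.0, 110.0, 145.0, 210.0, 290.0])   # lab reference, mg/dL
      met = np.array([75.0, 104.0, 151.0, 201.0, 302.0])   # strip readings, mg/dL
      print(f"MARD = {mard(met, ref):.2f}%")
      low = ref < 100
      print(f"MAD (<100 mg/dL) = {mad(met[low], ref[low]):.2f} mg/dL")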

  14. The Average Star Formation Histories of Galaxies in Dark Matter Halos from z = 0-8

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter S.; Wechsler, Risa H.; Conroy, Charlie

    2013-06-01

    We present a robust method to constrain average galaxy star formation rates (SFRs), star formation histories (SFHs), and the intracluster light (ICL) as a function of halo mass. Our results are consistent with observed galaxy stellar mass functions, specific star formation rates (SSFRs), and cosmic star formation rates (CSFRs) from z = 0 to z = 8. We consider the effects of a wide range of uncertainties on our results, including those affecting stellar masses, SFRs, and the halo mass function at the heart of our analysis. As they are relevant to our method, we also present new calibrations of the dark matter halo mass function, halo mass accretion histories, and halo-subhalo merger rates out to z = 8. We also provide new compilations of CSFRs and SSFRs; more recent measurements are now consistent with the buildup of the cosmic stellar mass density at all redshifts. Implications of our work include: halos near 10^12 M⊙ are the most efficient at forming stars at all redshifts, the baryon conversion efficiency of massive halos drops markedly after z ~ 2.5 (consistent with theories of cold-mode accretion), the ICL for massive galaxies is expected to be significant out to at least z ~ 1-1.5, and dwarf galaxies at low redshifts have higher stellar mass to halo mass ratios than previous expectations and form later than in most theoretical models. Finally, we provide new fitting formulae for SFHs that are more accurate than the standard declining tau model. Our approach places a wide variety of observations relating to the SFH of galaxies into a self-consistent framework based on the modern understanding of structure formation in ΛCDM. Constraints on the stellar mass-halo mass relationship and SFRs are available for download online.

  15. Bacteriophage vehicles for phage display: biology, mechanism, and application.

    PubMed

    Ebrahimizadeh, Walead; Rajabibazl, Masoumeh

    2014-08-01

    The phage display technique is a powerful tool for the selection of various biological agents. This technique allows the construction of large libraries from the antibody repertoire of different hosts and provides a fast and high-throughput selection method. Specific antibodies can be isolated based on distinctive characteristics from a library consisting of millions of members. These features have made phage display technology a preferred method for antibody selection and engineering. There are several phage display methods available, and each has its unique merits and applications. Selection of an appropriate display technique requires basic knowledge of the available methods and their mechanisms. In this review, we describe different phage display techniques, available bacteriophage vehicles, and their mechanisms.

  16. Comparing Study Populations of Men Who Have Sex with Men: Evaluating Consistency Within Repeat Studies and Across Studies in the Seattle Area Using Different Recruitment Methodologies

    PubMed Central

    Burt, Richard D.; Oster, Alexandra M.; Golden, Mathew R.; Thiede, Hanne

    2013-01-01

    There is no gold standard for recruiting unbiased samples of men who have sex with men (MSM). To assess differing recruitment methods, we compared Seattle-area MSM samples from: venue-day-time sampling-based National HIV Behavioral Surveillance (NHBS) surveys in 2008 and 2011, random-digit-dialed (RDD) surveys in 2003 and 2006, and STD clinic patient data 2001–2011. We compared sociodemographics, sexual and drug-associated behavior, and HIV status and testing. There was generally good consistency between the two NHBS surveys and within STD clinic data across time. NHBS participants reported higher levels of drug-associated and lower levels of sexual risk than STD clinic patients. RDD participants differed from the other study populations in sociodemographics and some risk behaviors. While neither NHBS nor the STD clinic study populations may be representative of all MSM, both appear to provide consistent samples of MSM subpopulations across time that can provide useful information to guide HIV prevention. PMID:23900958

  17. ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya; Rosenberg, Michael I.; Eley, Charles

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.

  18. Methods and kits for predicting a response to an erythropoietic agent

    DOEpatents

    Merchant, Michael L.; Klein, Jon B.; Brier, Michael E.; Gaweda, Adam E.

    2015-06-16

    Methods for predicting a response to an erythropoietic agent in a subject include providing a biological sample from the subject, and determining an amount in the sample of at least one peptide selected from the group consisting of SEQ ID NOS: 1-17. If there is a measurable difference in the amount of the at least one peptide in the sample, when compared to a control level of the same peptide, the subject is then predicted to have a good response or a poor response to the erythropoietic agent. Kits for predicting a response to an erythropoietic agent are further provided and include one or more antibodies, or fragments thereof, that specifically recognize a peptide of SEQ ID NOS: 1-17.

  19. Database recovery using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid-recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  20. Recovery issues in databases using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  1. Performance evaluation of redundant disk array support for transaction recovery

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1991-01-01

    Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large scale data systems requiring high availability. Here, we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
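
    All three records above rest on the same parity mechanism: the XOR of the surviving blocks reconstructs a lost one. The sketch below shows that core operation only; the twin page bookkeeping that keeps transaction commit processing fast is omitted, and the block count and size are arbitrary.

      # Parity reconstruction after a single-disk failure (RAID-style XOR).
      import numpy as np

      rng = np.random.default_rng(5)
      blocks = [rng.integers(0, 256, 4096, dtype=np.uint8) for _ in range(4)]
      parity = np.bitwise_xor.reduce(blocks)     # stored on the parity disk

      lost = 2                                   # disk 2 fails
      survivors = [b for i, b in enumerate(blocks) if i != lost]
      recovered = np.bitwise_xor.reduce(survivors + [parity])
      assert np.array_equal(recovered, blocks[lost])   # lost block restored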

  2. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    NASA Technical Reports Server (NTRS)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/ approaches/applications, and software systems.

  3. Phylogenetic rooting using minimal ancestor deviation.

    PubMed

    Tria, Fernando Domingues Kümmel; Landan, Giddy; Dagan, Tal

    2017-06-19

    Ancestor-descendent relations play a cardinal role in evolutionary theory. Those relations are determined by rooting phylogenetic trees. Existing rooting methods are hampered by evolutionary rate heterogeneity or the unavailability of auxiliary phylogenetic information. Here we present a rooting approach, the minimal ancestor deviation (MAD) method, which accommodates heterotachy by using all pairwise topological and metric information in unrooted trees. We demonstrate the performance of the method, in comparison to existing rooting methods, by the analysis of phylogenies from eukaryotes and prokaryotes. MAD correctly recovers the known root of eukaryotes and uncovers evidence for the origin of cyanobacteria in the ocean. MAD is more robust and consistent than existing methods, provides measures of the root inference quality and is applicable to any tree with branch lengths.

  4. Creating and Supporting a Mixed Methods Health Services Research Team

    PubMed Central

    Bowers, Barbara; Cohen, Lauren W; Elliot, Amy E; Grabowski, David C; Fishman, Nancy W; Sharkey, Siobhan S; Zimmerman, Sheryl; Horn, Susan D; Kemper, Peter

    2013-01-01

    Objective. To use the experience from a health services research evaluation to provide guidance in team development for mixed methods research. Methods. The Research Initiative Valuing Eldercare (THRIVE) team was organized by the Robert Wood Johnson Foundation to evaluate The Green House nursing home culture change program. This article describes the development of the research team and provides insights into how funders might engage with mixed methods research teams to maximize the value of the team. Results. Like many mixed methods collaborations, the THRIVE team consisted of researchers from diverse disciplines, embracing diverse methodologies, and operating under a framework of nonhierarchical, shared leadership that required new collaborations, engagement, and commitment in the context of finite resources. Strategies to overcome these potential obstacles and achieve success included implementation of a Coordinating Center, dedicated time for planning and collaborating across researchers and methodologies, funded support for in-person meetings, and creative optimization of resources. Conclusions. Challenges are inevitably present in the formation and operation of effective mixed methods research teams. However, funders and research teams can implement strategies to promote success. PMID:24138774

  5. The retrospective binning method improves the consistency of phase binning in respiratory-gated PET/CT

    NASA Astrophysics Data System (ADS)

    Didierlaurent, D.; Ribes, S.; Batatia, H.; Jaudet, C.; Dierickx, L. O.; Zerdoud, S.; Brillouet, S.; Caselles, O.; Courbon, F.

    2012-12-01

    This study assesses the accuracy of prospective phase-gated PET/CT data binning and presents a retrospective data binning method that improves image quality and consistency. Respiratory signals from 17 patients who underwent 4D PET/CT were analysed to evaluate the reproducibility of temporal triggers used for the standard phase-based gating method. Breathing signals were reprocessed to implement retrospective PET data binning. The mean and standard deviation of time lags between automatic triggers provided by the Real-time Position Management (RPM, Varian) gating device and inhalation peaks derived from respiratory curves were computed for each patient. The total number of respiratory cycles available for 4D PET/CT according to the binning mode (prospective versus retrospective) was compared. The maximum standardized uptake value (SUVmax), biological tumour volume (BTV) and tumour trajectory measures were determined from the PET/CT images of five patients. Compared to retrospective binning (RB), the prospective gating approach led to (i) a significant loss of breathing cycles (15%) and (ii) inconsistent data binning due to the temporal dispersion of triggers (396 ms on average). Consequently, tumour characterization could be impacted. In retrospective mode, SUVmax was up to 27% higher, while no significant difference appeared in BTV. In addition, prospective mode gave an inconsistent spatial location of the tumour throughout the bins. Improved consistency with breathing patterns and greater motion amplitude of the tumour centroid were observed with retrospective mode. The detection of tumour motion and trajectory was also improved for small temporal dispersion of triggers. This study shows that the binning mode can have a significant impact on 4D PET images. The consistency of triggers with breathing signals should be checked before clinical use of gated PET/CT images, and our RB method improves 4D PET/CT image quantification.
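
    A minimal sketch of retrospective phase binning: each event receives a phase from its position between the two surrounding inhalation peaks, so bin edges follow the measured breathing signal rather than prospective triggers. The peak times, event times, and bin count are illustrative assumptions.

      # Assign each event a respiratory phase bin from surrounding peaks.
      import numpy as np

      def phase_bin(t_events, t_peaks, n_bins=8):
          """Return a bin index in [0, n_bins) for each event time."""
          idx = np.searchsorted(t_peaks, t_events) - 1     # preceding peak
          idx = np.clip(idx, 0, len(t_peaks) - 2)
          cycle = t_peaks[idx + 1] - t_peaks[idx]          # local cycle length
          phase = (t_events - t_peaks[idx]) / cycle        # 0..1 within cycle
          return np.minimum((phase * n_bins).astype(int), n_bins - 1)

      t_peaks = np.array([0.0, 4.1, 8.0, 12.2])            # irregular breathing
      t_events = np.array([1.0, 3.9, 9.0, 11.9])
      print(phase_bin(t_events, t_peaks))                  # -> [1 7 1 7]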

  6. Context-specific metabolic networks are consistent with experiments.

    PubMed

    Becker, Scott A; Palsson, Bernhard O

    2008-05-16

    Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
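
    The GIMME idea can be phrased as two linear programs: first find the maximal objective flux, then require a fraction of it while minimizing expression-weighted flux through below-threshold reactions. The toy network below (irreversible reactions, invented expression values) is a sketch of that logic, not the published implementation.

      # Two-step LP sketch of the GIMME approach on a toy network.
      import numpy as np
      from scipy.optimize import linprog

      S = np.array([[ 1, -1, -1,  0],     # metabolite A: made by r1, used by r2, r3
                    [ 0,  1,  1, -1]])    # metabolite B: made by r2, r3, used by r4
      c_obj = np.array([0, 0, 0, 1.0])    # reaction 4 is the objective (e.g. biomass)
      bounds = [(0, 10)] * 4              # irreversible fluxes

      # Step 1: maximal achievable objective flux.
      opt = linprog(-c_obj, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      v_max = -opt.fun

      # Step 2: keep >= 90% of it while minimizing flux through reactions
      # whose expression falls below the threshold (here, threshold = 5).
      expr = np.array([8.0, 2.0, 9.0, 7.0])       # invented expression data
      w = np.maximum(5.0 - expr, 0.0)             # penalty only when below
      res = linprog(w, A_ub=-c_obj[None, :], b_ub=[-0.9 * v_max],
                    A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("fluxes:", np.round(res.x, 2), "inconsistency:", round(res.fun, 2))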

  7. Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting

    NASA Astrophysics Data System (ADS)

    Nanzad, Bolorchimeg

    This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality-of-fit correspondence with historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as the conventional multicyclic Hubbert model, and consecutive cycles are connected by a transition. The transition describes the shift from one cycle to the next as a weighted co-addition of the two neighboring cycles and is determined by three parameters: transition year, transition width, and a gamma weighting parameter. The cycle-jumping method provides a superior model compared to the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically by explicitly considering the form of the transitions between production cycles, better capturing the effects of technological transitions and socioeconomic factors on historical resource production.
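
    For reference, conventional single-cycle Hubbert fitting is a nonlinear least-squares problem. The sketch below fits the logistic-derivative form to a synthetic annual production series; the thesis's coal data are not reproduced here and all parameter values are invented.

      # Fit a single-cycle Hubbert curve with nonlinear least squares.
      import numpy as np
      from scipy.optimize import curve_fit

      def hubbert(t, urr, k, t_peak):
          """Annual production: ultimate recovery urr, steepness k, peak year t_peak."""
          e = np.exp(-k * (t - t_peak))
          return urr * k * e / (1.0 + e) ** 2

      t = np.arange(1900, 2020)
      rng = np.random.default_rng(6)
      truth = hubbert(t, 1000.0, 0.08, 1995.0)
      obs = truth * rng.lognormal(0.0, 0.05, t.size)     # multiplicative noise

      p0 = [obs.sum() * 2, 0.05, t[np.argmax(obs)]]      # crude initial guess
      (urr, k, t_peak), _ = curve_fit(hubbert, t, obs, p0=p0, maxfev=20000)
      print(f"URR ~ {urr:.0f}, peak year ~ {t_peak:.0f}")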

  8. A comparison of the Sensititre® MYCOTB panel and the agar proportion method for the susceptibility testing of Mycobacterium tuberculosis.

    PubMed

    Abuali, M M; Katariwala, R; LaBombardi, V J

    2012-05-01

    The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days in order to produce the results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method to the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods and 26 challenge strains of blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus, guiding the therapeutic decision-making process. Furthermore, the microwell plate format without the need for instrumentation will allow its use in resource-poor settings.

  9. The Revival of Research Circles: Meeting the Needs of Modern Aging and the Third Age

    ERIC Educational Resources Information Center

    Ostlund, Britt

    2008-01-01

    This article provides evidence that it is worthwhile to reconsider the traditional research circle method as a means of involving people in the third age in fulfilling their needs to participate in learning activities and make their voices heard. The findings are based on three cases of research circles consistently driven by the interests of the…

  10. Longitudinal Examination of Aggression and Study Skills from Middle to High School: Implications for Dropout Prevention

    ERIC Educational Resources Information Center

    Orpinas, Pamela; Raczynski, Katherine; Hsieh, Hsien-Lin; Nahapetyan, Lusine; Horne, Arthur M.

    2018-01-01

    Background: High school completion provides health and economic benefits. The purpose of this study is to describe dropout rates based on longitudinal trajectories of aggression and study skills using teacher ratings. Methods: The sample consisted of 620 randomly selected sixth graders. Every year from Grade 6 to 12, a teacher completed a…

  11. Embedded Diagnostic/Prognostic Reasoning and Information Continuity for Improved Avionics Maintenance

    DTIC Science & Technology

    2006-01-01

    enabling technologies such as built-in-test, advanced health monitoring algorithms, reliability and component aging models, prognostics methods, and... deployment and acceptance. This framework and vision is consistent with the onboard PHM (Prognostic and Health Management) as well as advanced... monitored. In addition to the prognostic forecasting capabilities provided by monitoring system power, multiple confounding errors by electronic

  12. Puerto Ricans in California: A Staff Report of the Western Regional Office, United States Commission on Civil Rights.

    ERIC Educational Resources Information Center

    Montez, Philip; Pilla, Thomas V.

    This study was undertaken to provide insight into the circumstances of California's Puerto Ricans who are only now surfacing as a distinct Latino bloc within the State's larger Hispanic population. Research methods consisted of a demographic analysis of Puerto Ricans in California and interviews with community representatives and public officials…

  13. Users guide for noble fir bough cruiser.

    Treesearch

    Roger D. Fight; Keith A. Blatner; Roger C. Chapman; William E. Schlosser

    2005-01-01

    The bough cruiser spreadsheet was developed to provide a method for cruising noble fir (Abies procera Rehd.) stands to estimate the weight of boughs that might be harvested. No boughs are cut as part of the cruise process. The approach is based on a two-stage sample. The first stage consists of fixed-radius plots that are used to estimate the...

  14. School Accounting, Budgeting and Finance Challenges. Programs to Help Your School District Improve Its Accounting Procedures, Budgeting Methods and Financial Reporting.

    ERIC Educational Resources Information Center

    Stolberg, Charles G., Ed.

    To help improve school district financial management, the Association of School Business Officials at its 1980 annual meeting held a special session consisting of 20 "mini-workshops" about successful, field-proven practices in school budgeting, accounting, auditing, and other financial tasks. This document provides summaries of the…

  15. Exploring Cultural Predictors of Military Intervention Success

    DTIC Science & Technology

    2015-04-01

    research employed a sequential, mixed-method analysis consisting of a quantitative ex post facto analysis of United Nations (UN) interventions... research. Results: In spite of the many assumptions and limitations forced upon the research by its ex post facto design, it nonetheless provided some... post facto exploration of predictors of military intervention success. As such, the research examined pre- and post-intervention

  16. Heat Bonding of Irradiated Ethylene Vinyl Acetate

    NASA Technical Reports Server (NTRS)

    Slack, D. H.

    1986-01-01

    Reliable method now available for joining parts of this difficult-to-bond material. Heating fixture encircles ethylene vinyl acetate multiple-socket part, providing heat to it and to tubes inserted in it. Fixtures specially designed to match parts to be bonded. Tube-and-socket bonds made with this technique subjected to tensile tests. Bond strengths of 50 percent that of base material obtained consistently.

  17. Improving pedagogic competence using an e-learning approach for pre-service mathematics teachers

    NASA Astrophysics Data System (ADS)

    Retnowati, E.; Murdiyani, N. M.; Marsigit; Sugiman; Mahmudi, A.

    2018-03-01

    This article reports a classroom action research study aimed at improving students' pedagogic competence during a course named Methods of Mathematics Instruction. An asynchronous e-learning approach was provided as supplementary material to the main lecture. This e-learning consisted of selected references and educational website addresses and also facilitated online discussions about various methods of mathematics instruction. The subjects were twenty-six pre-service teachers in the Department of Mathematics Education, Yogyakarta State University, Indonesia; the course was conducted by the researchers. The research completed three cycles, each consisting of plan-action-reflection. Through observation, documentation, and interviews, it was concluded that asynchronous e-learning may be used to improve pedagogic competence when direct instruction is also applied in the classroom. Direct instruction in this study provided review, explanation, schemes, and examples that students could use to select relevant resources in the e-learning portal. Moreover, pedagogic competence improved after students completed assignments to identify aspects of pedagogic instruction, either by analyzing videos in the e-learning course or by simulating instruction in the classroom with direct commentary. Supporting factors were enthusiasm, discipline, and the interactions among students and lecturer built throughout the lectures.

  18. Self-Consistent Optimization of Excited States within Density-Functional Tight-Binding.

    PubMed

    Kowalczyk, Tim; Le, Khoa; Irle, Stephan

    2016-01-12

    We present an implementation of energies and gradients for the ΔDFTB method, an analogue of Δ-self-consistent-field density functional theory (ΔSCF) within density-functional tight-binding, for the lowest singlet excited state of closed-shell molecules. Benchmarks of ΔDFTB excitation energies, optimized geometries, Stokes shifts, and vibrational frequencies reveal that ΔDFTB provides a qualitatively correct description of changes in molecular geometries and vibrational frequencies due to excited-state relaxation. The accuracy of ΔDFTB Stokes shifts is comparable to that of ΔSCF-DFT, and ΔDFTB performs similarly to ΔSCF with the PBE functional for vertical excitation energies of larger chromophores where the need for efficient excited-state methods is most urgent. We provide some justification for the use of an excited-state reference density in the DFTB expansion of the electronic energy and demonstrate that ΔDFTB preserves many of the properties of its parent ΔSCF approach. This implementation fills an important gap in the extended framework of DFTB, where access to excited states has been limited to the time-dependent linear-response approach, and affords access to rapid exploration of a valuable class of excited-state potential energy surfaces.

  19. Analysis of academic programs: comparing nursing and other university majors in the application of a quality, potential and cost model.

    PubMed

    Booker, Kathy; Hilgenberg, Cheryl

    2010-01-01

    Nursing is often considered expensive in the cost analysis of academic programs. Yet nursing programs have the power to attract many students, and the national nursing shortage has resulted in a high demand for nurses. Methods to systematically assess programs across an entire university academic division are often dissimilar in technique and outcome. At a small, private, Midwestern university, a model for comprehensive program assessment, titled the Quality, Potential and Cost (QPC) model, was developed and applied to each major offered at the university through the collaborative effort of directors, chairs, deans, and the vice president for academic affairs. The QPC model provides a means of equalizing data so that single measures (such as cost) are not viewed in isolation. It also provides a common language to ensure that all academic leaders at an institution apply consistent methods for assessment of individual programs. The application of the QPC model allowed for consistent, fair assessments and the ability to allocate resources to programs according to strategic direction. In this article, the application of the QPC model to School of Nursing majors and other selected university majors will be illustrated.

  20. Noise Attenuation Performance of a Helmholtz Resonator Array Consisting of Several Periodic Parts

    PubMed Central

    Wu, Dizi; Zhang, Nan; Mak, Cheuk Ming; Cai, Chenzhi

    2017-01-01

    The acoustic performance of a ducted Helmholtz resonator (HR) system is analyzed theoretically and numerically. A periodic HR array can provide a wider noise attenuation band due to the coupling of Bragg reflection and the HRs' resonance. However, the transmission loss achieved by a periodic HR array depends mainly on the number of HRs, which is restricted by the available space along the longitudinal direction of the duct. The full length of the duct is sometimes unavailable for HR installation in practical applications; only several sections of the duct may be available. This paper therefore concentrates on the acoustic performance of an HR array consisting of several periodic parts. The transfer matrix method and Bragg theory are used to investigate wave propagation in the duct. The theoretical predictions show good agreement with Finite Element Method (FEM) simulation results. The present study provides a practical approach to noise control in ventilation ductwork systems, exploiting the advantages of periodicity when the available installation length for HRs is limited. PMID:28471383
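
    A sketch of the transfer matrix calculation named above: 2x2 matrices for duct segments and side-branch HRs are multiplied to obtain the transmission loss of a chain of periodic cells. All geometry values are illustrative, not the paper's.

      # Transmission loss of a duct with side-branch Helmholtz resonators (TMM).
      import numpy as np

      c0, rho = 343.0, 1.2                  # speed of sound (m/s), air density (kg/m^3)
      S_duct = 0.01                         # duct cross-section, m^2

      def duct(L, k):                       # straight duct segment of length L
          return np.array([[np.cos(k*L), 1j*rho*c0/S_duct*np.sin(k*L)],
                           [1j*S_duct/(rho*c0)*np.sin(k*L), np.cos(k*L)]])

      def hr_branch(f, V=4e-4, Ln=0.05, Sn=2e-4):   # side-branch HR (lossless)
          w = 2*np.pi*f
          Z = 1j*(w*rho*Ln/Sn - rho*c0**2/(w*V))    # neck inertance - cavity stiffness
          return np.array([[1, 0], [1/Z, 1]])

      freqs = np.linspace(20, 1000, 500)
      TL = []
      for f in freqs:
          k = 2*np.pi*f/c0
          T = np.eye(2)
          for _ in range(4):                        # 4 periodic cells, 0.3 m spacing
              T = T @ duct(0.3, k) @ hr_branch(f)
          A, B, C, D = T.ravel()
          TL.append(20*np.log10(0.5*abs(A + B*S_duct/(rho*c0)
                                        + C*rho*c0/S_duct + D)))
      print(f"peak TL ~ {max(TL):.0f} dB near {freqs[int(np.argmax(TL))]:.0f} Hz")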

  1. Modelling Coastal Cliff Recession Based on the GIM-DDD Method

    NASA Astrophysics Data System (ADS)

    Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an

    2018-04-01

    The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.

  2. Using the theory of reasoned action to explain physician intention to prescribe emergency contraception.

    PubMed

    Sable, Marjorie R; Schwartz, Lisa R; Kelly, Patricia J; Lisbon, Eleanor; Hall, Matthew A

    2006-03-01

    Although research has examined providers' knowledge, attitudes and prescribing behaviors with regard to emergency contraception, none has used a theory-based approach to understanding the interplay of these factors. A cross-sectional survey of 96 faculty physicians from one Southern and three Midwestern universities was conducted in 2004 to assess factors associated with intention to prescribe emergency contraception. The theory of reasoned action guided the study hypotheses and survey design. Correlation and regression analyses were used to examine the data. Only 42% of respondents strongly intended to prescribe emergency contraception for teenagers, but 65-77% intended to do so for all other specified groups (women who ask for the method, who have had a method problem, who have experienced rape or incest, and who have had unprotected sex). Consistent with the theory of reasoned action, high intention to prescribe emergency contraception was associated with positive attitudes toward doing so and with the perception that specific colleagues or professional groups support prescribing it; however, the perception of support by colleagues or professional groups in general did not predict intention. Also consistent with the theory, physicians' knowledge about emergency contraception and their demographic characteristics were not significant. Interventions to encourage physicians to provide emergency contraception should take into account their attitudes toward the method and the components of those attitudes.

  3. [Construction of a psychological aging scale for healthy people].

    PubMed

    Lin, Fei; Long, Yao; Zeng, Ni; Wu, Lei; Huang, Helang

    2017-04-28

    To construct a psychological aging scale for healthy people, and to provide a tool and indexes for the scientific evaluation of aging.
 Methods: Age-related psychological items were collected through literature screening and expert interviews. The importance, feasibility and degree of authority of the psychological index system were graded through two rounds of the Delphi method. Using the analytic hierarchy process, the weights of the dimensions and items were determined. Internal consistency reliability, correlation and exploratory factor analyses were performed to evaluate the reliability and validity of the scale.
 Results: Over two rounds of the Delphi method, 17 experts offered the following results: the coefficient of expert authority was 0.88±0.06, and the coordination coefficients for importance and feasibility in the second round were 0.456 (P<0.01) and 0.666 (P<0.01), respectively, indicating good consistency. The psychological aging scale for healthy people included 4 dimensions: cognitive function, emotion, personality and motivation, with weight coefficients of 0.338, 0.250, 0.166 and 0.258, respectively. The Cronbach's α coefficient for the scale was 0.822, the reliability was 0.817, the content validity index (CVI) was 0.847, and the cumulative contribution rate of the 5 factors was 51.42%.
 Conclusion: The psychological aging scale is satisfactory and can provide a reference for the evaluation of aging. The indicators are representative and well recognized.
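
    For readers unfamiliar with the internal-consistency statistic reported above, the following minimal sketch computes Cronbach's α from the textbook formula on synthetic item scores; it is not the authors' code or data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 6 respondents x 4 items on a 5-point scale, loosely correlated
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(6, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(6, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```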

  4. Calculated electronic, transport, and related properties of zinc blende boron arsenide (zb-BAs)

    DOE PAGES

    Nwigboji, Ifeanyi H.; Malozovsky, Yuriy; Franklin, Lashounda; ...

    2016-10-11

    Here, we present the results from ab-initio, self-consistent density functional theory (DFT) calculations of electronic, transport, and bulk properties of zinc blende boron arsenide. We utilized the local density approximation potential of Ceperley and Alder, as parameterized by Vosko and his group, the linear combination of Gaussian orbitals formalism, and the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF), in carrying out our completely self-consistent calculations. With this method, the results of our calculations have the full, physical content of DFT. Our results include electronic energy bands, densities of states, effective masses, and the bulk modulus. Our calculated, indirect band gap of 1.48 eV, from Γ to a conduction band minimum close to X, for the room temperature lattice constant of 4.777 Å, is in excellent agreement with the experimental value of 1.46 ± 0.02 eV. We thoroughly explain the reasons for the excellent agreement between our findings and the corresponding experimental ones. This work provides a confirmation of the capability of DFT to describe accurately the properties of materials, provided the computations adhere strictly to the conditions of validity of DFT, as done by the BZW-EF method.

  5. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967

  6. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    PubMed

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
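
    As background to both records above, the standard binomial LQAS decision rule (which the abstract notes suffices when clustering can be ignored) is easy to check numerically; the sample size, decision value, and coverage thresholds below are hypothetical.

```python
from scipy.stats import binom

def lqas_risks(n, d, p_hi, p_lo):
    """Misclassification risks for the binomial LQAS rule: 'accept' the lot
    when more than d of the n sampled units are positive.
    alpha = P(reject | true coverage p_hi), beta = P(accept | true coverage p_lo)."""
    alpha = binom.cdf(d, n, p_hi)       # too few successes despite high coverage
    beta = 1.0 - binom.cdf(d, n, p_lo)  # enough successes despite low coverage
    return alpha, beta

# Hypothetical vaccination-coverage design: n=19, decision value d=12,
# upper threshold 80%, lower threshold 50%
a, b = lqas_risks(19, 12, 0.80, 0.50)
print(f"alpha = {a:.3f}, beta = {b:.3f}")
```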

  7. General introduction for the “National Field Manual for the Collection of Water-Quality Data”

    USGS Publications Warehouse

    ,

    2018-02-28

    Background As part of its mission, the U.S. Geological Survey (USGS) collects data to assess the quality of our Nation’s water resources. A high degree of reliability and standardization of these data is paramount to fulfilling this mission. Documentation of nationally accepted methods used by USGS personnel serves to maintain consistency and technical quality in data-collection activities. “The National Field Manual for the Collection of Water-Quality Data” (NFM) provides documented guidelines and protocols for USGS field personnel who collect water-quality data. The NFM provides detailed, comprehensive, and citable procedures for monitoring the quality of surface water and groundwater. Topics in the NFM include (1) methods and protocols for sampling water resources, (2) methods for processing samples for analysis of water quality, (3) methods for measuring field parameters, and (4) specialized procedures, such as sampling water for low levels of mercury and organic wastewater chemicals, measuring biological indicators, and sampling bottom sediment for chemistry. Personnel who collect water-quality data for national USGS programs and projects, including projects supported by USGS cooperative programs, are mandated to use protocols provided in the NFM per USGS Office of Water Quality Technical Memorandum 2002.13. Formal training, for example, as provided in the USGS class, “Field Water-Quality Methods for Groundwater and Surface Water,” and field apprenticeships supplement the guidance provided in the NFM and ensure that the data collected are high quality, accurate, and scientifically defensible.

  8. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Ehwang; Gao, Yuqian; Wu, Chaochao

    Here, mass spectrometry (MS) based targeted proteomic methods such as selected reaction monitoring (SRM) are becoming the method of choice for preclinical verification of candidate protein biomarkers. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) of the National Cancer Institute has investigated the standardization and analytical validation of the SRM assays and demonstrated robust analytical performance on different instruments across different laboratories. An Assay Portal has also been established by CPTAC to provide the research community a resource consisting of a large set of targeted MS-based assays, and a depository to share assays publicly, provided that the assays meet the guidelines proposed by CPTAC. Herein, we report 98 SRM assays covering 70 candidate protein biomarkers previously reported as associated with ovarian cancer that have been thoroughly characterized according to the CPTAC Assay Characterization Guidance Document. The experiments, methods and results for characterizing these SRM assays for their MS response, repeatability, selectivity, stability, and reproducible detection of endogenous analytes are described in detail.

  10. Method and apparatus for drying web

    DOEpatents

    Orloff, David I.; Kloth, Gerald R.; Rudemiller, Gary R.

    1992-01-01

    The present invention is directed to a method and apparatus for drying a web of paper utilizing impulse drying techniques. In the method of the invention for drying a paper web, the paper web is transported through a pair of rolls wherein at least one of the rolls has been heated to an elevated temperature. The heated roll is provided with a surface having a low thermal diffusivity of less than about 1×10⁻⁶ m²/s. The surface material of the roll is preferably prepared from a material selected from the group consisting of ceramics, polymers, glass, inorganic plastics, composite materials and cermets. The heated roll may be constructed entirely from the material having a low thermal diffusivity, or the roll may be formed from metal, such as steel or aluminum, or other suitable material which is provided with a surface layer of a material having a low thermal diffusivity.

  11. Detection of the Dinozoans Pfiesteria piscicida and P. shumwayae: a review of detection methods and geographic distribution.

    PubMed

    Rublee, Parke A; Remington, David L; Schaefer, Eric F; Marshall, Michael M

    2005-01-01

    Molecular methods, including conventional PCR, real-time PCR, denaturing gradient gel electrophoresis, fluorescent fragment detection PCR, and fluorescent in situ hybridization, have all been developed for use in identifying and studying the distribution of the toxic dinoflagellates Pfiesteria piscicida and P. shumwayae. Application of the methods has demonstrated a worldwide distribution of both species and provided insight into their environmental tolerance range and temporal changes in distribution. Genetic variability among geographic locations generally appears low in rDNA genes, and detection of the organisms in ballast water is consistent with rapid dispersal or high gene flow among populations, but additional sequence data are needed to verify this hypothesis. The rapid development and application of these tools serves as a model for study of other microbial taxa and provides a basis for future development of tools that can simultaneously detect multiple targets.

  12. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). Especially, areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbings. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance in bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated fast and accurate approach for bleb sequence extraction.
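
    The paper's exact score function is not reproduced here; the sketch below is a hedged stand-in built, as described above, from the change of bleb area and orientation between consecutive frames. The functional form and weights are invented for illustration.

```python
import numpy as np

def bleb_score(area_prev, area_curr, theta_prev, theta_curr,
               w_area=0.7, w_theta=0.3):
    """Illustrative frame-to-frame plausibility score in [0, 1]: penalizes
    large relative area changes and large orientation changes (degrees)."""
    d_area = abs(area_curr - area_prev) / max(area_prev, 1e-9)
    d_theta = abs((theta_curr - theta_prev + 90.0) % 180.0 - 90.0) / 90.0
    return w_area * np.exp(-d_area) + w_theta * (1.0 - d_theta)

# A slowly expanding bleb scores high; an implausible jump scores low.
print(bleb_score(100.0, 110.0, 30.0, 35.0))   # ~0.92
print(bleb_score(100.0, 300.0, 30.0, 120.0))  # ~0.09
```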

  13. A community effort to assess and improve drug sensitivity prediction algorithms

    PubMed Central

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2015-01-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods. PMID:24880487

  14. A community effort to assess and improve drug sensitivity prediction algorithms.

    PubMed

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2014-12-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods.

  15. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
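
    The authors derive their scheme from the master equations of a discrete time random walk (DTRW); as a hedged illustration of the underlying stochastic process (not the finished numerical scheme), the Monte Carlo below simulates a DTRW with Sibuya-distributed waiting times, whose mean-squared displacement grows subdiffusively as t^alpha.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.6                     # anomalous exponent, 0 < alpha < 1
n_particles, n_steps = 20_000, 2_000

pos = np.zeros(n_particles, dtype=int)
clock = np.ones(n_particles, dtype=int)   # steps since each particle last jumped

for _ in range(n_steps):
    # Sibuya waiting time: hazard of jumping at the k-th step is alpha/k
    jump = rng.random(n_particles) < alpha / clock
    pos += jump * rng.choice((-1, 1), size=n_particles)
    clock = np.where(jump, 1, clock + 1)

msd = (pos.astype(float) ** 2).mean()
print(f"MSD after {n_steps} steps: {msd:.1f} (subdiffusive growth ~ t^{alpha})")
```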

  16. Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification

    NASA Astrophysics Data System (ADS)

    Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona

    2016-08-01

    Collecting epidemic disease data for analysis is a labour-intensive, time-consuming and expensive process, leaving only sparse sample data available for developing prediction models. To address this sparse-data issue, we present novel Incremental Transductive methods that circumvent part of the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models, for labelling data. The results show that, using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
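
    A minimal sketch of one confidence-thresholded self-training loop in the spirit of the transductive labelling described above; the classifier, confidence threshold, and toy data are assumptions, not the paper's Liberal and Strict Training Approaches.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence=0.9, max_rounds=10):
    """Iteratively promote unlabelled points whose predicted class probability
    exceeds `confidence` into the training set, then refit."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        clf = LogisticRegression().fit(X, y)
        proba = clf.predict_proba(pool)
        take = proba.max(axis=1) >= confidence
        if not take.any():
            break
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, proba[take].argmax(axis=1)])  # classes are 0/1
        pool = pool[~take]
    return LogisticRegression().fit(X, y), len(pool)

# Toy data: two Gaussian blobs standing in for vector-density classes
rng = np.random.default_rng(5)
X0, X1 = rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))
X_lab = np.vstack([X0[:3], X1[:3]]); y_lab = np.array([0]*3 + [1]*3)
model, still_unlabelled = self_train(X_lab, y_lab, np.vstack([X0[3:], X1[3:]]))
print("points left unlabelled:", still_unlabelled)
```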

  17. [Determinants of strategic management of a health center].

    PubMed

    Huard, Pierre; Schaller, Philippe

    2014-01-01

    The article highlights the value of a strategic approach for the development of a primary care health centre. The method is adapted from corporate strategy: (i) analysis of the situation of the health centre and the obstacles to its development; (ii) selection of the relations on which the strategy can be built; (iii) elaboration of a system of interventions to create a cumulative development process; and (iv) illustration of the method by application to a case. The example illustrates the principles and method and highlights the importance of interpretations and choices in the elaboration of a strategy, which is therefore always a unique construction. The strategic approach provides a framework that (i) offers a subject for discussion and negotiation among members of the health centre, (ii) strengthens the consistency of structural decisions, and (iii) helps the health centre overcome obstacles and initiate a development process.

  18. Improving access to health care for chronic hepatitis B among migrant Chinese populations: A systematic mixed methods review of barriers and enablers.

    PubMed

    Vedio, A; Liu, E Z H; Lee, A C K; Salway, S

    2017-07-01

    Migrant Chinese populations in Western countries have a high prevalence of chronic hepatitis B but often experience poor access to health care and late diagnosis. This systematic review aimed to identify obstacles and supports to timely and appropriate health service use among these populations. Systematic searches resulted in 48 relevant studies published between 1996 and 2015. Data extraction and synthesis were informed by models of healthcare access that highlight the interplay of patient, provider and health system factors. There was strong consistent evidence of low levels of knowledge among patients and community members; but interventions that were primarily focused on increasing knowledge had only modest positive effects on testing and/or vaccination. There was strong consistent evidence that Chinese migrants tend to misunderstand the need for health care for hepatitis B and have low satisfaction with services. Stigma was consistently associated with hepatitis B, and there was weak but consistent evidence of stigma acting as a barrier to care. However, available evidence on the effects of providing culturally appropriate services for hepatitis B on increasing uptake is limited. There was strong consistent evidence that health professionals miss opportunities for testing and vaccination. Practitioner education interventions may be important, but evidence of effectiveness is limited. A simple prompt in patient records for primary care physicians improved the uptake of testing, and a dedicated service increased targeted vaccination coverage for newborns. Further development and more rigorous evaluation of more holistic approaches that address patient, provider and system obstacles are needed. © 2017 The Authors. Journal of Viral Hepatitis Published by John Wiley & Sons Ltd.

  19. Retrieval of exoplanet emission spectra with HyDRA

    NASA Astrophysics Data System (ADS)

    Gandhi, Siddharth; Madhusudhan, Nikku

    2018-02-01

    Thermal emission spectra of exoplanets provide constraints on the chemical compositions, pressure-temperature (P-T) profiles, and energy transport in exoplanetary atmospheres. Accurate inferences of these properties rely on the robustness of the atmospheric retrieval methods employed. While extant retrieval codes have provided significant constraints on molecular abundances and temperature profiles in several exoplanetary atmospheres, the constraints on their deviations from thermal and chemical equilibria have yet to be fully explored. Our present work is a step in this direction. We report HyDRA, a disequilibrium retrieval framework for thermal emission spectra of exoplanetary atmospheres. The retrieval code uses the standard architecture of a parametric atmospheric model coupled with Bayesian statistical inference using the Nested Sampling algorithm. For a given dataset, the retrieved compositions and P-T profiles are used in tandem with the GENESIS self-consistent atmospheric model to constrain layer-by-layer deviations from chemical and radiative-convective equilibrium in the observable atmosphere. We demonstrate HyDRA on the Hot Jupiter WASP-43b with a high-precision emission spectrum. We retrieve an H2O mixing ratio of log(H2O) = -3.54^{+0.82}_{-0.52}, consistent with previous studies. We detect H2O and a combined CO/CO2 at 8-sigma significance. We find the dayside P-T profile to be consistent with radiative-convective equilibrium within the 1-sigma limits and with low day-night redistribution, consistent with previous studies. The derived compositions are also consistent with thermochemical equilibrium for the corresponding distribution of P-T profiles. In the era of high precision and high resolution emission spectroscopy, HyDRA provides a path to retrieve disequilibrium phenomena in exoplanetary atmospheres.

  20. Method of measuring the mass flow rate of a substance entering a cocurrent fluid stream

    DOEpatents

    Cochran, Jr., Henry D.

    1978-04-11

    This invention relates to an improved method of monitoring the mass flow rate of a substance entering a cocurrent fluid stream. The method basically consists of heating equal sections of the fluid stream above and below the point of entry of the substance to be monitored, then measuring and comparing the resulting change in temperature of the sections. Advantage is taken of the difference in thermal characteristics of the fluid and the substance to be measured to correlate temperature differences in the sections above and below the substance feed point, providing an indication of the mass flow rate of the substance.
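
    Read as an idealized energy balance, the comparison of the two heated sections yields the substance mass flow rate directly; the sketch below makes that reading explicit. The numbers are hypothetical, heat losses are neglected, and the patent itself correlates the temperature differences more generally.

```python
def substance_mass_flow(P_heater, dT_upstream, dT_downstream, c_substance):
    """Idealized reading of the two-section method.

    Upstream of the entry point, only the carrier fluid is heated:
        P = m_f * c_f * dT_upstream        ->  m_f * c_f = P / dT_upstream
    Downstream, fluid and substance are heated together:
        P = (m_f * c_f + m_s * c_s) * dT_downstream
    Solving for the substance mass flow rate m_s:
    """
    heat_capacity_flow_fluid = P_heater / dT_upstream
    heat_capacity_flow_total = P_heater / dT_downstream
    return (heat_capacity_flow_total - heat_capacity_flow_fluid) / c_substance

# 2 kW heaters, 10 K rise upstream, 8 K downstream, c_s = 800 J/(kg K)
print(f"m_s = {substance_mass_flow(2000.0, 10.0, 8.0, 800.0):.4f} kg/s")
```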

  1. Dual initiation strip charge apparatus and methods for making and implementing the same

    DOEpatents

    Jakaboski, Juan-Carlos [Albuquerque, NM; Todd,; Steven, N [Rio Rancho, NM; Polisar, Stephen [Albuquerque, NM; Hughs, Chance [Tijeras, NM

    2011-03-22

    A Dual Initiation Strip Charge (DISC) apparatus is initiated by a single initiation source and detonates a strip of explosive charge at two separate contacts. The reflections of explosively induced stresses meet, creating a fracture that breaches a target along a generally single fracture contour and producing generally fragment-free scattering and no spallation. Methods for making and implementing a DISC apparatus provide numerous advantages over previous methods of creating explosive charges by utilizing steps for rapid prototyping; by implementing efficient steps and designs for metering consistent, repeatable, and controlled amounts of high explosive; and by utilizing readily available materials.

  2. Supersonic pressure measurements and comparison of theory to experiment for an arrow-wing configuration

    NASA Technical Reports Server (NTRS)

    Manro, M. E.

    1976-01-01

    A wind tunnel test of an arrow-wing-body configuration consisting of flat and twisted wings, as well as leading- and trailing-edge control surface deflections, was conducted at Mach numbers from 1.54 to 2.50 to provide an experimental pressure data base for comparison with theoretical methods. Theory-to-experiment comparisons of detailed pressure distributions were made using a state-of-the-art inviscid flow, constant-pressure-panel method. Emphasis was on conditions under which this theory is valid for both flat and twisted wings.

  3. Exploring the dynamics of balance data — movement variability in terms of drift and diffusion

    NASA Astrophysics Data System (ADS)

    Gottschall, Julia; Peinke, Joachim; Lippens, Volker; Nagel, Volker

    2009-02-01

    We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis gives not only self-consistent results but also provides a plausible model for the reconstruction of balance dynamics.
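
    A minimal sketch of the general technique named above: estimating drift and diffusion coefficients from a time series via conditional moments of the increments (the first two Kramers-Moyal coefficients), checked on synthetic Ornstein-Uhlenbeck data rather than balance-board recordings.

```python
import numpy as np

def drift_diffusion(x, dt, n_bins=30, min_count=10):
    """Bin the state x and estimate D1(x) = <dx>/dt and D2(x) = <dx^2>/(2 dt)."""
    dx, xc = np.diff(x), x[:-1]
    edges = np.linspace(xc.min(), xc.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1, D2 = np.full(n_bins, np.nan), np.full(n_bins, np.nan)
    for i in range(n_bins):
        m = (xc >= edges[i]) & (xc < edges[i + 1])
        if m.sum() >= min_count:
            D1[i] = dx[m].mean() / dt
            D2[i] = (dx[m] ** 2).mean() / (2.0 * dt)
    return centers, D1, D2

# Synthetic Ornstein-Uhlenbeck process: true drift -x, true diffusion 0.5
rng = np.random.default_rng(2)
dt, n = 1e-3, 200_000
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + np.sqrt(2 * 0.5 * dt) * rng.standard_normal()

centers, D1, D2 = drift_diffusion(x, dt)
ok = ~np.isnan(D1)
print("fitted drift slope ~", np.polyfit(centers[ok], D1[ok], 1)[0])  # ~ -1
print("mean diffusion    ~", np.nanmean(D2))                          # ~ 0.5
```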

  4. There is no silver bullet--a guide to low-level data transforms and normalisation methods for microarray data.

    PubMed

    Kreil, David P; Russell, Roslin R

    2005-03-01

    To overcome random experimental variation, even for simple screens, data from multiple microarrays have to be combined. There are, however, systematic differences between arrays, and any bias remaining after experimental measures to ensure consistency needs to be controlled for. It is often difficult to make the right choice of data transformation and normalisation methods to achieve this end. In this tutorial paper we review the problem and a selection of solutions, explaining the basic principles behind normalisation procedures and providing guidance for their application.
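
    As one concrete example from the class of normalisation methods this tutorial surveys, here is a minimal quantile-normalization sketch: every array (column) is forced to share the same reference distribution. It is offered as a common technique, not as the tutorial's recommendation.

```python
import numpy as np

def quantile_normalize(X):
    """Map each column's values onto the mean of the sorted columns,
    preserving within-column ranks (ties broken by original order)."""
    ranks = X.argsort(axis=0).argsort(axis=0)       # rank of each value per column
    reference = np.sort(X, axis=0).mean(axis=1)     # shared target distribution
    return reference[ranks]

# Three toy 'arrays' with different scales and offsets
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(X))
```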

  5. Progress in performance enhancement methods for capacitive silicon resonators

    NASA Astrophysics Data System (ADS)

    Van Toan, Nguyen; Ono, Takahito

    2017-11-01

    In this paper, we review the progress in recent studies on the performance enhancement methods for capacitive silicon resonators. We provide information on various fabrication technologies and design considerations that can be employed to improve the performance of capacitive silicon resonators, including low motional resistance, small insertion loss, and high quality factor (Q). This paper contains an overview of device structures and working principles, fabrication technologies consisting of hermetic packaging, deep reactive-ion etching and neutral beam etching, and design considerations including mechanically coupled, movable electrode structures and piezoresistive heat engines.

  6. Chapter 10 Human Oocyte Vitrification.

    PubMed

    Rienzi, Laura; Cobo, Ana; Ubaldi, Filippo Maria

    2017-01-01

    The discovery and widespread application of successful cryopreservation methods for MII-phase oocytes were among the greatest successes in human reproduction during the past decade. Although considerable improvements in traditional slow-rate freezing were also achieved, the real breakthrough was the result of the introduction of vitrification. Here we describe the method that is most commonly applied for this purpose: it provides consistent survival and in vitro developmental rates, results in pregnancy and birth rates comparable to those achievable with fresh oocytes, and does not result in a higher incidence of gynecological or postnatal complications.

  7. Transonic pressure measurements and comparison of theory to experiment for three arrow-wing configurations

    NASA Technical Reports Server (NTRS)

    Manro, M. E.

    1982-01-01

    Wind tunnel tests of arrow-wing body configurations consisting of flat, twisted, and cambered twisted wings, as well as a variety of leading and trailing edge control surface deflections, were conducted at Mach numbers from 0.4 to 1.05 to provide an experimental pressure data base for comparison with theoretical methods. Theory to experiment comparisons of detailed pressure distributions were made using state of the art attached flow methods. Conditions under which these theories are valid for these wings are presented.

  8. Quantum simulation of dissipative processes without reservoir engineering

    DOE PAGES

    Di Candia, R.; Pedernales, J. S.; del Campo, A.; ...

    2015-05-29

    We present a quantum algorithm to simulate general finite dimensional Lindblad master equations without the requirement of engineering the system-environment interactions. The proposed method is able to simulate both Markovian and non-Markovian quantum dynamics. It consists in the quantum computation of the dissipative corrections to the unitary evolution of the system of interest, via the reconstruction of the response functions associated with the Lindblad operators. Our approach is equally applicable to dynamics generated by effectively non-Hermitian Hamiltonians. We confirm the quality of our method by providing specific error bounds that quantify its accuracy.
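
    For orientation only: the record above proposes a quantum algorithm, but the target dynamics can be stated classically. The sketch below integrates a single-qubit Lindblad master equation with explicit Euler steps as a reference for what such simulations compute; the Hamiltonian and decay rate are arbitrary choices.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X (coherent drive)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus (decay |1> -> |0>)
H, L = 0.5 * sx, np.sqrt(0.2) * sm

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}"""
    LdL = L.conj().T @ L
    return (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T
            - 0.5 * (LdL @ rho + rho @ LdL))

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the ground state
dt = 1e-3
for _ in range(5000):                            # plain explicit Euler stepping
    rho = rho + dt * lindblad_rhs(rho)

print("trace:", rho.trace().real, " excited population:", rho[1, 1].real)
```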

  9. A transaction assessment method for allocation of transmission services

    NASA Astrophysics Data System (ADS)

    Banunarayanan, Venkatasubramaniam

    The purpose of this research is to develop transaction assessment methods for allocating transmission services that are provided by an area/utility to power transactions. Transmission services are the services needed to deliver, or provide the capacity to deliver, real and reactive power from one or more supply points to one or more delivery points. As the number of transactions increases rapidly in the emerging deregulated environment, accurate quantification of the transmission services an area/utility provides to accommodate a transaction is becoming important, because appropriate pricing schemes can then be developed to compensate the parties that provide these services. The allocation methods developed are based on the "Fair Resource Allocation Principle", and they determine for each transaction the following: the flowpath of the transaction (both real and reactive power components), the generator reactive power support from each area/utility, and the real power loss support from each area/utility. Further, allocation methods for distributing the cost of relieving congestion on transmission lines caused by transactions are also developed. The main feature of the proposed methods is representation of the actual usage of transmission services by the transactions. The proposed methods are tested extensively on a variety of systems. The allocation methods developed in this thesis for the allocation of transmission services to transactions are not only useful in studying the impact of transactions on a transmission system in a multi-transaction case, but are indeed necessary to meet the criteria set forth by FERC with regard to pricing based on actual usage. The "consistency" of the proposed allocation methods has also been investigated and tested.

  10. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  11. The Influence of Unsteadiness on the Analysis of Pressure Gain Combustion Devices

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Tom

    2013-01-01

    Pressure gain combustion (PGC) has been the object of scientific study for over a century due to its promise of improved thermodynamic efficiency. In many recent application concepts PGC is utilized as a component in an otherwise continuous, normally steady flow system, such as a gas turbine or ram jet engine. However, PGC is inherently unsteady. Failure to account for the effects of this periodic unsteadiness can lead to misunderstanding and errors in performance calculations. This paper seeks to provide some clarity by presenting a consistent method of thermodynamic cycle analysis for a device utilizing PGC technology. The incorporation of the unsteady PGC process into the conservation equations for a continuous flow device is presented. Most importantly, the appropriate method for computing the conservation of momentum is presented. It will be shown that proper, consistent analysis of cyclic conservation principles produces representative performance predictions.

  12. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE PAGES

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    2017-11-21

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  13. On the use of the generalized SPRT method in the equivalent hard sphere approximation for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Noguere, Gilles; Archier, Pascal; Bouland, Olivier; Capote, Roberto; Jean, Cyrille De Saint; Kopecky, Stefan; Schillebeeckx, Peter; Sirakov, Ivan; Tamagno, Pierre

    2017-09-01

    A consistent description of the neutron cross sections from thermal energy up to the MeV region is challenging. One of the first steps consists in optimizing the optical model parameters using average resonance parameters, such as the neutron strength functions. They can be derived from a statistical analysis of the resolved resonance parameters, or calculated with the generalized form of the SPRT method by using scattering matrix elements provided by optical model calculations. One of the difficulties is to establish the contributions of the direct and compound nucleus reactions. This problem was solved by using a slightly modified average R-Matrix formula with an equivalent hard sphere radius deduced from the phase shift originating from the potential. The performances of the proposed formalism are illustrated with results obtained for the 238U+n nuclear systems.

  14. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
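
    A hedged sketch of the maximum-likelihood step on synthetic storm maxima (the study's actual -Dst series is not reproduced here): fit a log-normal by maximum likelihood, then convert a tail probability into an expected events-per-century figure.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for yearly -Dst storm maxima [nT], one value per year
rng = np.random.default_rng(3)
years = 56                                   # cf. 1957-2012
dst_max = rng.lognormal(mean=np.log(120.0), sigma=0.6, size=years)

# Maximum-likelihood log-normal fit with the location fixed at zero
shape, loc, scale = stats.lognorm.fit(dst_max, floc=0)

# Tail probability of a Carrington-class storm and its per-century rate
p_exceed = stats.lognorm.sf(850.0, shape, loc, scale)
events_per_year = len(dst_max) / years       # event rate in the toy data
print(f"P(-Dst > 850 nT per event) = {p_exceed:.2e}")
print(f"expected Carrington-class events per century = "
      f"{100 * events_per_year * p_exceed:.3f}")
```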

  15. Aerogel and xerogel composites for use as carbon anodes

    DOEpatents

    Cooper, John F.; Tillotson, Thomas M.; Hrubesh, Lawrence W.

    2010-10-12

    A method for forming a reinforced rigid anode monolith and fuel, and the product of such method. The method includes providing a solution of organic aerogel or xerogel precursors including at least one of a phenolic resin, phenol (hydroxybenzene), resorcinol (1,3-dihydroxybenzene), or catechol (1,2-dihydroxybenzene); at least one aldehyde compound selected from the group consisting of formaldehyde, acetaldehyde, and furfuraldehyde; and an alkali carbonate or phosphoric acid catalyst; adding internal reinforcement materials comprising carbon to said precursor solution to form a precursor mixture; gelling said precursor mixture to form a composite gel; drying said composite gel; and pyrolyzing said composite gel to form a wettable aerogel/carbon composite or a wettable xerogel/carbon composite, wherein said composites comprise chars and said internal reinforcement materials, and wherein said composite is suitable for use as an anode, with the chars being fuel capable of being combusted in a molten salt electrochemical fuel cell in the range from 500 °C to 800 °C to produce electrical energy. Additional methods and systems/compositions are also provided.

  16. An E-plane analysis of aperture-matched horn antennas using the moment method and the uniform geometrical theory of diffraction

    NASA Technical Reports Server (NTRS)

    Heedy, D. J.; Burnside, W. D.

    1984-01-01

    The moment method and the uniform geometrical theory of diffraction are utilized to obtain two separate solutions for the E-plane field pattern of an aperture-matched horn antenna. This particular horn antenna consists of a standard pyramidal horn with the following modifications: a rolled edge section attached to the aperture edges and a curved throat section. The resulting geometry provides significantly better performance in terms of the pattern, impedance, and frequency characteristics than normally obtainable. The moment method is used to calculate the E-plane pattern and VSWR of the antenna. However, at higher frequencies, large amounts of computation time are required. The uniform geometrical theory of diffraction provides a quick and efficient high-frequency solution for the E-plane field pattern. In fact, the uniform geometrical theory of diffraction may be used to initially design the antenna; then, the moment method may be applied to fine tune the design. This procedure has been successfully applied to a compact range feed design.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walz-Flannigan, A; Lucas, J; Buchanan, K

    Purpose: Manual technique selection in radiography is needed for imaging situations where proper positioning for AEC is difficult, for patients with prostheses, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast for various kV and tissue thicknesses is needed to create manual technique charts, and is relevant for physicists involved in technique chart optimization. Guidance on technique combinations, and the rules-of-thumb for providing consistent image signal still in use today, are based on optical-density measurements of screen-film combinations and older-generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools on modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure calibrated signal change for CR and DR (with grid) at various beam energies. Tube current values were calculated that would yield a consistent image signal response. Data were fitted to provide sufficient granularity of detail to compose a technique-scale chart. Tissue thickness was approximated as equivalent to 80% of water depth. Results: We created updated technique-scale charts, providing mAs and kV combinations that achieve consistent signal for CR and DR for various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled size-based manual technique charts. Conclusion: Relative scaling of mAs and kV for constant signal (i.e., the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors for relative (not absolute) changes in mAs with varying kV. As demonstrated, these detailed and otherwise hard-to-find technique scales are useful tools for manual chart optimization.
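
    A minimal sketch of how such a technique scale might be built from calibration data: fit a power-law signal model for one tissue thickness, then solve for the mAs that holds the receptor signal constant as kV changes. The calibration numbers and fitted exponent below are invented, standing in for the phantom measurements the abstract describes.

```python
import numpy as np

# Hypothetical calibration for one water depth: detector signal per mAs vs kV
kv = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0])
signal_per_mas = np.array([0.8, 1.6, 2.9, 4.8, 7.4, 10.8, 15.1])

# Fit the common power-law model  signal/mAs = a * kV^n  in log-log space
n = np.polyfit(np.log(kv), np.log(signal_per_mas), 1)[0]

def mas_for_constant_signal(kv_new, kv_ref=80.0, mas_ref=10.0):
    """mAs at kv_new matching the receptor signal of (kv_ref, mas_ref)."""
    return mas_ref * (kv_ref / kv_new) ** n

print(f"fitted exponent n = {n:.2f}")
for k in (60, 80, 100, 120):
    print(f"{k:>4d} kV -> {mas_for_constant_signal(k):6.1f} mAs")
```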

  18. Prevalence of Dental Caries Among Primary School Children of India – A Cross-Sectional Study

    PubMed Central

    Hiremath, Anand; Ankola, Anil V; Hebbal, Mamata; Mohandoss, Suganya; Pastay, Pratibha

    2016-01-01

    Introduction In India, the trend indicates an increase in oral health problems, especially dental caries, which has been consistently increasing both in prevalence and in severity. Children of all age groups are affected by dental caries. It becomes imperative to collect data on the prevalence of dental caries and treatment needs in order to provide preventive care. Aim To assess the prevalence of dental caries and the treatment needs of 6-11-year-old Indian school children. Materials and Methods This was a cross-sectional study. The sampling frame consisted of 6-11-year-old primary school children. The study sample consisted of 13,200 children selected from 10 talukas of Belagavi District, Karnataka, India. Clinical examination for dmft and DMFT was carried out in the school premises by five teams, each consisting of one faculty member, three postgraduate students and five interns from the KLE VK Institute of Dental Sciences, Belagavi, Karnataka, India. The examiners were trained and calibrated by the principal investigator. Statistical analysis was done using the Chi-square and t-tests. Results The overall caries prevalence was 78.9%, mean dmft was 2.97±2.62 and mean DMFT was 0.17±0.53. The decayed-teeth component was the principal component in both the dmft and DMFT indices. The mean dmft in boys was higher than in girls, and the difference was statistically significant (p<0.05). Conclusion This study provided baseline data, using which treatment was provided to all the children screened. The children were provided treatment at the camp site/dental hospital/satellite centers and primary health care centers according to the facilities available. PMID:27891457

  19. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
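
    A minimal sketch of the grid-point standard-score test described above, with synthetic fields standing in for CESM-POP output. The thresholds are hypothetical; the real POP-ECT tool (pyCECT) treats ensemble size and composition with considerably more care.

```python
import numpy as np

def pop_ect_like(ensemble, new_run, z_threshold=3.0, max_fail_frac=0.05):
    """Flag a run whose field differs too much from the ensemble.

    ensemble: (n_members, ny, nx) accepted simulations of one field
    new_run:  (ny, nx) field from the simulation under test
    A point 'fails' when its standard score exceeds z_threshold; the run
    fails when the fraction of failing points exceeds max_fail_frac.
    """
    mu = ensemble.mean(axis=0)
    sigma = ensemble.std(axis=0, ddof=1)
    z = np.abs(new_run - mu) / np.where(sigma > 0, sigma, np.inf)
    fail_frac = float((z > z_threshold).mean())
    return fail_frac <= max_fail_frac, fail_frac

# Toy check: a perturbed-but-consistent new run should pass
rng = np.random.default_rng(4)
ens = rng.normal(15.0, 0.5, size=(30, 64, 128))       # e.g., an SST-like field
ok, frac = pop_ect_like(ens, rng.normal(15.0, 0.5, size=(64, 128)))
print(ok, f"{frac:.3%} of grid points exceed the threshold")
```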

  20. A geochemical atlas of South Carolina--an example using data from the National Geochemical Survey

    USGS Publications Warehouse

    Sutphin, David M.

    2005-01-01

    National Geochemical Survey data from stream-sediment and soil samples, which have been analyzed using consistent methods, were used to create maps, graphs, and tables that were assembled in a consistent atlas format that characterizes the distribution of major and trace chemical elements in South Carolina. Distribution patterns of the elements in South Carolina may assist mineral exploration, agriculture, waste-disposal-siting issues, health, environmental, and other studies. This atlas is an example of how data from the National Geochemical Survey may be used to identify general or regional patterns of elemental occurrences and to provide a snapshot of element concentration in smaller areas.

  1. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward-Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally of O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
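
    For orientation, the core CADIS relations can be sketched in a few lines: the biased source is q_hat = q*phi_adj/R and the weight-window centers are w = R/phi_adj, where R = sum(q*phi_adj*V) estimates the detector response. The adjoint flux below is a made-up 1-D stand-in for a deterministic (e.g., Denovo) solution.

```python
import numpy as np

def cadis_parameters(source, adjoint_flux, cell_volumes):
    """CADIS-style biased source pdf and weight-window centers.

    All arrays are 1-D over space(-energy) cells; inputs are hypothetical.
    """
    response = np.sum(source * adjoint_flux * cell_volumes)   # R = <q, phi_adj>
    q_biased = source * adjoint_flux * cell_volumes / response
    target_weights = response / adjoint_flux                  # w = R / phi_adj
    return q_biased, target_weights

# Toy 1-D problem: uniform source, importance rising toward the detector
phi_adj = np.exp(np.linspace(-4.0, 0.0, 10))
q_hat, w = cadis_parameters(np.ones(10), phi_adj, np.ones(10))
print("biased source pdf:", np.round(q_hat, 3))
print("weight centers:   ", np.round(w, 3))
```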

  2. Fast detection and characterization of organic and inorganic gunshot residues on the hands of suspects by CMV-GC-MS and LIBS.

    PubMed

    Tarifa, Anamary; Almirall, José R

    2015-05-01

    A rapid method for the characterization of both organic and inorganic components of gunshot residues (GSR) is proposed as an alternative tool to facilitate the identification of a suspected shooter. In this study, two fast screening methods were developed and optimized for the detection of organic compounds and inorganic components indicative of GSR presence on the hands of shooters and non-shooters. The proposed methods consist of headspace extraction of volatile organic compounds using a capillary microextraction of volatiles (CMV) device, previously reported as a high-efficiency sampler, followed by detection by GC-MS. This novel sampling technique has the potential to yield fast results (<2 min sampling) and high sensitivity, capable of detecting 3 ng of diphenylamine (DPA) and 8 ng of nitroglycerine (NG). Direct analysis of the headspace of over 50 swabs collected from the hands of suspected shooters (and non-shooters) provides information regarding VOCs present on their hands. In addition, a fast laser induced breakdown spectroscopy (LIBS) screening method for the detection of the inorganic components indicative of the presence of GSR (Sb, Pb and Ba) is described. The sampling method for the inorganics consists of liquid extraction of the target elements from the same cotton swabs (previously analyzed for VOCs) and an additional 30 swab samples, followed by spiking 1 μL of the extract solution onto a Teflon disk for analysis by LIBS. Advantages of LIBS include fast analysis (~12 s per sample) and high selectivity and sensitivity, with expected LODs of 0.1-18 ng for each of the target elements after sampling. The analytical performance of the LIBS method is also compared to previously reported methods (inductively coupled plasma-optical emission spectroscopy). The combination of fast CMV sampling, unambiguous organic compound identification with GC-MS and fast LIBS analysis provides the basis for a new comprehensive screening method for GSR. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Low cost composite manufacturing utilizing intelligent pultrusion and resin transfer molding (IPRTM)

    NASA Astrophysics Data System (ADS)

    Bradley, James E.; Wysocki, Tadeusz S., Jr.

    1993-02-01

    This article describes an innovative method for the economical manufacturing of large, intricately-shaped tubular composite parts. Proprietary intelligent process control techniques are combined with standard pultrusion and RTM methodologies to provide high part throughput, performance, and quality while substantially reducing scrap, rework costs, and labor requirements. On-line process monitoring and control is achieved through a smart tooling interface consisting of modular zone tiles installed on part-specific die assemblies. Real-time archiving of process run parameters provides enhanced SPC and SQC capabilities.

  4. Liquid phase low temperature method for production of methanol from synthesis gas and catalyst formulations therefor

    DOEpatents

    Mahajan, Devinder

    2005-07-26

    The invention provides a homogeneous catalyst for the production of methanol from purified synthesis gas at low temperature and low pressure which includes a transition metal capable of forming transition metal complexes with coordinating ligands and an alkoxide, the catalyst dissolved in a methanol solvent system, provided the transition metal complex is not a transition metal carbonyl. The coordinating ligands can be selected from the group consisting of N-donor ligands, P-donor ligands, O-donor ligands, C-donor ligands, halogens and mixtures thereof.

  5. Discrete Roughness Effects on Shuttle Orbiter at Mach 6

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Hamilton, H. Harris, II

    2002-01-01

    Discrete roughness boundary layer transition results on a Shuttle Orbiter model in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed with new boundary layer calculations to provide consistency for comparison to other published results. The experimental results were previously obtained utilizing the phosphor thermography system to monitor the status of the boundary layer via global heat transfer images of the Orbiter windward surface. The size and location of discrete roughness elements were systematically varied along the centerline of the 0.0075-scale model at an angle of attack of 40 deg and the boundary layer response recorded. Various correlative approaches were attempted, with the roughness transition correlations based on edge properties providing the most reliable results. When a consistent computational method is used to compute edge conditions, transition datasets for different configurations at several angles of attack have been shown to collapse to a well-behaved correlation.

  6. Method and apparatus for cutting and abrading with sublimable particles

    DOEpatents

    Bingham, D.N.

    1995-10-10

    A gas delivery system provides a first gas as a liquid under extreme pressure and as a gas under intermediate pressure. Another gas delivery system provides a second gas under moderate pressure. The second gas is selected to solidify at a temperature at or above the temperature of the liquefied gas. A nozzle assembly connected to the gas delivery systems produces a stream containing a liquid component, a solid component, and a gas component. The liquid component of the stream consists of a high velocity jet of the liquefied first gas. The high velocity jet is surrounded by a particle sheath that consists of solid particles of the second gas which solidifies in the nozzle upon contact with the liquefied gas of the high velocity jet. The gas component of the stream is a high velocity flow of the first gas that encircles the particle sheath, forming an outer jacket. 6 figs.

  7. Method and apparatus for cutting and abrading with sublimable particles

    DOEpatents

    Bingham, Dennis N.

    1995-01-01

    A gas delivery system provides a first gas as a liquid under extreme pressure and as a gas under intermediate pressure. Another gas delivery system provides a second gas under moderate pressure. The second gas is selected to solidify at a temperature at or above the temperature of the liquefied gas. A nozzle assembly connected to the gas delivery systems produces a stream containing a liquid component, a solid component, and a gas component. The liquid component of the stream consists of a high velocity jet of the liquefied first gas. The high velocity jet is surrounded by a particle sheath that consists of solid particles of the second gas which solidifies in the nozzle upon contact with the liquefied gas of the high velocity jet. The gas component of the stream is a high velocity flow of the first gas that encircles the particle sheath, forming an outer jacket.

  8. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
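
    As a rough illustration of the contrast drawn above, the sketch below estimates outcome probabilities with a random forest and with logistic regression on synthetic data. scikit-learn is assumed; the data, model settings, and Brier-score comparison are illustrative choices, not the authors' simulation design.

    ```python
    # Sketch: probability estimation with a random forest vs. logistic
    # regression on synthetic data (illustrative, not the paper's setup).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import brier_score_loss

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # predict_proba gives per-class probability estimates for each individual
    for name, model in [("random forest", rf), ("logistic regression", lr)]:
        p = model.predict_proba(X_te)[:, 1]
        print(f"{name}: Brier score = {brier_score_loss(y_te, p):.4f}")
    ```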

  9. MR Imaging Anatomy in Neurodegeneration: A Robust Volumetric Parcellation Method of the Frontal Lobe Gyri with Quantitative Validation in Patients with Dementia

    PubMed Central

    Iordanova, B.; Rosenbaum, D.; Norman, D.; Weiner, M.; Studholme, C.

    2007-01-01

    BACKGROUND AND PURPOSE: Brain volumetry is widely used for evaluating tissue degeneration; however, the parcellation methods are rarely validated and use arbitrary planes to mark boundaries of brain regions. The goal of this study was to develop, validate, and apply an MR imaging tracing method for the parcellation of 3 major gyri of the frontal lobe, which uses only local landmarks intrinsic to the structures of interest, without the need for global reorientation or the use of dividing planes or lines. METHODS: Studies were performed on 25 subjects—healthy controls and subjects diagnosed with Lewy body dementia and Alzheimer disease—with significant variation in the underlying gyral anatomy and state of atrophy. The protocol was evaluated by using multiple observers tracing scans of subjects diagnosed with neurodegenerative disease and those aging normally, and the results were compared by spatial overlap agreement. To confirm the results, observers marked the same locations in different brains. We illustrated the variabilities of the key boundaries that pose the greatest challenge to defining consistent parcellations across subjects. RESULTS: The resulting gyral volumes were evaluated, and their consistency across raters was used as an additional assessment of the validity of our marking method. The agreement on a scale of 0–1 was found to be 0.83 spatial and 0.90 volumetric for the same rater and 0.85 spatial and 0.90 volumetric for 2 different raters. The results revealed that the protocol remained consistent across different neurodegenerative conditions. CONCLUSION: Our method provides a simple and reliable way for the volumetric evaluation of frontal lobe neurodegeneration and can be used as a resource for larger comparative studies as well as a validation procedure of automated algorithms. PMID:16971629

  10. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting.The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram.To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar.In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
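
    A minimal numerical sketch of the pipeline described above: estimate the joint density of (signed distance, dose) from training voxels, form the conditional dose distribution, marginalize over the new patient's feature samples, and integrate to a DVH. The synthetic data and the use of scipy's gaussian_kde are illustrative assumptions, not the authors' implementation.

    ```python
    # Sketch of probabilistic DVH prediction via density estimation.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Training voxels: (signed distance, dose) pairs pooled over prior plans
    dist = rng.uniform(-20, 40, 5000)
    dose = np.clip(60 * np.exp(-np.maximum(dist, 0) / 15)
                   + rng.normal(0, 2, 5000), 0, None)
    joint = gaussian_kde(np.vstack([dist, dose]))      # p(distance, dose)

    new_dist = rng.uniform(-20, 40, 200)               # new patient's feature samples
    dose_grid = np.linspace(0, 70, 141)

    # Evaluate p(dose | distance) on a grid, then average over the new
    # patient's feature distribution (Monte Carlo over voxels)
    pts = np.vstack([np.repeat(new_dist, len(dose_grid)),
                     np.tile(dose_grid, len(new_dist))])
    cond = joint(pts).reshape(len(new_dist), len(dose_grid))
    cond /= cond.sum(axis=1, keepdims=True)            # normalize each conditional
    p_dose = cond.mean(axis=0)

    # DVH(d) = fraction of volume receiving at least dose d
    dvh = 1.0 - np.cumsum(p_dose) / p_dose.sum()
    ```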

  11. Development of Mobile Electronic Health Records Application in a Secondary General Hospital in Korea

    PubMed Central

    Park, Min Ah; Hong, Eunseok; Kim, Sunhyu; Ahn, Ryeok; Hong, Jungseok; Song, Seungyeol; Kim, Tak; Kim, Jeongkeun; Yeo, Seongwoon

    2013-01-01

    Objectives The recent evolution of mobile devices has opened new possibilities of providing strongly integrated mobile services in healthcare. The objective of this paper is to describe the decision driver, development, and implementation of an integrated mobile Electronic Health Record (EHR) application at Ulsan University Hospital. This application helps healthcare providers view patients' medical records and information without a stationary computer workstation. Methods We developed an integrated mobile application prototype that aimed to improve the mobility and usability of healthcare providers during their daily medical activities. The Android and iOS platforms were used to create the mobile EHR application. The first working version was completed in 5 months and required 1,080 development hours. Results The mobile EHR application provides patient vital signs, patient data, text communication, and integrated EHR. The application allows our healthcare providers to know the status of patients within and outside the hospital environment. The application provides a consistent user environment on several compatible Android and iOS devices. A group of 10 beta testers has consistently used and maintained our copy of the application, suggesting user acceptance. Conclusions We are developing the integrated mobile EHR application with the goals of implementing an environment that is user-friendly, implementing a patient-centered system, and increasing the hospital's competitiveness. PMID:24523996

  12. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors

    PubMed Central

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-01-01

    Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
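
    For orientation, the sketch below implements classical scanline dynamic-programming stereo matching, the baseline family this adaptive method extends. The absolute-difference cost and the smoothness penalty are illustrative choices; this is not the authors' semantic-edge algorithm.

    ```python
    # Sketch: scanline dynamic-programming disparity estimation (baseline).
    import numpy as np

    def dp_scanline_disparity(left_row, right_row, max_disp=16, penalty=5.0):
        """Optimal disparity assignment for one scanline via DP."""
        n = len(left_row)
        cost = np.full((n, max_disp + 1), np.inf)
        back = np.zeros((n, max_disp + 1), dtype=int)
        # matching cost: absolute intensity difference at each candidate disparity
        for d in range(max_disp + 1):
            valid = np.arange(n) - d >= 0
            c = np.abs(left_row - np.roll(right_row, d))
            cost[valid, d] = c[valid]
        # accumulate along the scanline with a disparity-smoothness penalty
        acc = cost.copy()
        for x in range(1, n):
            for d in range(max_disp + 1):
                prev = acc[x - 1] + penalty * np.abs(np.arange(max_disp + 1) - d)
                back[x, d] = int(np.argmin(prev))
                acc[x, d] = cost[x, d] + prev[back[x, d]]
        # backtrack the optimal disparity path
        disp = np.zeros(n, dtype=int)
        disp[-1] = int(np.argmin(acc[-1]))
        for x in range(n - 2, -1, -1):
            disp[x] = back[x + 1, disp[x + 1]]
        return disp
    ```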

  13. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    PubMed

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons.

  14. Bisulfite-independent analysis of CpG island methylation enables genome-scale stratification of single cells.

    PubMed

    Han, Lin; Wu, Hua-Jun; Zhu, Haiying; Kim, Kun-Yong; Marjani, Sadie L; Riester, Markus; Euskirchen, Ghia; Zi, Xiaoyuan; Yang, Jennifer; Han, Jasper; Snyder, Michael; Park, In-Hyun; Irizarry, Rafael; Weissman, Sherman M; Michor, Franziska; Fan, Rong; Pan, Xinghua

    2017-06-02

    Conventional DNA bisulfite sequencing has been extended to single cell level, but the coverage consistency is insufficient for parallel comparison. Here we report a novel method for genome-wide CpG island (CGI) methylation sequencing for single cells (scCGI-seq), combining methylation-sensitive restriction enzyme digestion and multiple displacement amplification for selective detection of methylated CGIs. We applied this method to analyzing single cells from two types of hematopoietic cells, K562 and GM12878 and small populations of fibroblasts and induced pluripotent stem cells. The method detected 21 798 CGIs (76% of all CGIs) per cell, and the number of CGIs consistently detected from all 16 profiled single cells was 20 864 (72.7%), with 12 961 promoters covered. This coverage represents a substantial improvement over results obtained using single cell reduced representation bisulfite sequencing, with a 66-fold increase in the fraction of consistently profiled CGIs across individual cells. Single cells of the same type were more similar to each other than to other types, but also displayed epigenetic heterogeneity. The method was further validated by comparing the CpG methylation pattern, methylation profile of CGIs/promoters and repeat regions and 41 classes of known regulatory markers to the ENCODE data. Although not every minor methylation difference between cells is detectable, scCGI-seq provides a solid tool for unsupervised stratification of a heterogeneous cell population. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacón, Enrique, E-mail: echacon@icmm.csic.es; Tarazona, Pedro, E-mail: pedro.tarazona@uam.es; Bresme, Fernando, E-mail: f.bresme@imperial.ac.uk

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making possible their computation using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse grained MARTINI force-field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
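
    Once an area time series is available, the compressibility modulus follows from the standard fluctuation formula K_A = k_B T <A> / var(A). The sketch below applies it to a synthetic series; in the paper the area would come from the CU-mode analysis, which is not reproduced here.

    ```python
    # Sketch: area compressibility modulus from area fluctuations
    # (synthetic area series; the CU-mode analysis itself is not shown).
    import numpy as np

    kB = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                  # temperature, K

    rng = np.random.default_rng(1)
    area_series = rng.normal(4.0e-17, 1.0e-18, 10000)   # membrane area, m^2

    mean_A = area_series.mean()
    var_A = area_series.var(ddof=1)
    K_A = kB * T * mean_A / var_A                       # N/m (i.e., J/m^2)
    print(f"K_A = {K_A:.3e} N/m")
    ```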

  16. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer type, and the average relative point-wise difference was about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
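
    The two predictive features named above can be computed directly from segmentation masks. The sketch below derives a signed distance map via Euclidean distance transforms and an opening angle per OAR voxel; the binary-mask input, the distance-transform route, and the in-plane angle convention are illustrative assumptions.

    ```python
    # Sketch: per-voxel predictive features (signed distance, opening angle).
    import numpy as np
    from scipy import ndimage

    def oar_features(target_mask, oar_mask, spacing=(1.0, 1.0, 1.0)):
        """target_mask, oar_mask: boolean 3D arrays; spacing: voxel size."""
        # signed distance to target boundary: negative inside, positive outside
        d_out = ndimage.distance_transform_edt(~target_mask, sampling=spacing)
        d_in = ndimage.distance_transform_edt(target_mask, sampling=spacing)
        signed_dist = np.where(target_mask, -d_in, d_out)

        # opening angle of each OAR voxel w.r.t. the coordinate origin
        idx = np.argwhere(oar_mask) * np.asarray(spacing)
        angles = np.arctan2(idx[:, 1], idx[:, 0])    # in-plane angle, illustrative

        return signed_dist[oar_mask], angles
    ```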

  17. Total testosterone quantitative measurement in serum by LC-MS/MS

    PubMed Central

    Wang, Yuesong; Gay, Gabrielle D.; Botelho, Julianne Cook; Caudill, Samuel P.; Vesper, Hubert W.

    2016-01-01

    Reliable measurement of total testosterone is essential for the diagnosis, treatment and prevention of a number of hormone-related diseases affecting adults and children. A mass spectrometric method for testosterone determination in human serum was carefully developed and thoroughly validated. Total testosterone from 100 μL serum is released from proteins with acidic buffer and isolated by two serial liquid–liquid extraction steps. The first extraction step isolates the lipid fractions from an acidic buffer solution using ethyl acetate and hexane. The organic phase is dried down and reconstituted in a basic buffer solution. The second extraction step removes the phospholipids and other components by hexane extraction. Liquid chromatography–isotopic dilution tandem mass spectrometry is used to quantify the total testosterone. The sample preparation is automatically conducted in a liquid-handling system with 96-deepwell plates. The method limit of detection is 9.71 pmol/L (0.280 ng/dL) and the method average percent bias is not significantly different from reference methods. The performance of this method has proven to be consistent with the method precision over a 2-year period ranging from 3.7 to 4.8% for quality control pools at the concentrations 0.527, 7.90 and 30.7 nmol/L (15.2, 228, and 886 ng/dL), respectively. This method provides consistently high accuracy and excellent precision for testosterone determination in human serum across all clinical relevant concentrations. PMID:24960363

  18. Team-based assessment of professional behavior in medical students

    PubMed Central

    RAEE, HOJAT; AMINI, MITRA; MOMEN NASAB, AMENEH; MALEK POUR, ABDOLRASOUL; JAFARI, MOHAMMAD MORAD

    2014-01-01

    Introduction: Self and peer assessment provides important information about the individual’s performance and behavior in all aspects of their professional work environment. The aim of this study is to evaluate the professional behavior and performance of medical students in the form of team-based assessment. Methods: In a cross-sectional study, 100 medical students in the 7th year of education were randomly selected and enrolled; for each student five questionnaires were filled out, including one self-assessment, two peer assessments and two resident assessments. The scoring system of the questionnaires was based on a seven-point Likert scale. After filling out the questions in the questionnaire, numerical data and written comments provided to the students were collected, analyzed and discussed. Internal consistency (Cronbach’s alpha) of the questionnaires was assessed. A p<0.05 was considered significant. Results: Internal consistency was acceptable (Cronbach’s alpha 0.83). Interviews revealed that the majority of students and assessors interviewed found the method acceptable. The range of scores was 1-6 (Mean±SD=4.39±0.57) for the residents' assessment, 2-6 (Mean±SD=4.49±0.53) for peer assessment, and 3-7 (Mean±SD=5.04±0.32) for self-assessment. There was a significant difference between self-assessment and other methods of assessment. Conclusions: This study demonstrates that a team-based assessment is an acceptable and feasible method for peer and self-assessment of medical students’ learning in a clinical clerkship, and has some advantages over traditional assessment methods. Further studies are needed to focus on the strengths and weaknesses. PMID:25512933
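
    The internal-consistency statistic used here, Cronbach's alpha, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch on synthetic Likert data follows; the data are illustrative, not the study's responses.

    ```python
    # Sketch: Cronbach's alpha for a (respondents x items) Likert score matrix.
    import numpy as np

    def cronbach_alpha(scores):
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                       # number of items
        item_vars = scores.var(axis=0, ddof=1)    # per-item variance
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    base = rng.integers(3, 8, size=(100, 1))                    # common trait
    scores = np.clip(base + rng.integers(-1, 2, (100, 10)), 1, 7)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```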

  19. a Weighted Closed-Form Solution for Rgb-D Data Registration

    NASA Astrophysics Data System (ADS)

    Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.

    2016-06-01

    Existing methods for 3D indoor mapping of RGB-D data are predominantly point-based and feature-based. In most cases the iterative closest point (ICP) algorithm and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step, does not need good initial estimates, and substantially decreases the demand for computer resources in contrast to iterative methods. Our method first exploits RGB information. We employed the scale invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we used the random sample consensus (RANSAC) algorithm jointly with a statistical dispersion measure called the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. Loop closure consists of recognizing when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the trajectory length.
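
    To make the "weighted closed-form" idea concrete, the sketch below solves the same weighted least-squares registration problem with the SVD-based (Kabsch/Umeyama) closed form rather than the dual-number quaternion formulation the paper uses; the two minimize the same cost. Weights and point sets are assumed inputs.

    ```python
    # Sketch: weighted closed-form rigid registration via SVD (Kabsch-style),
    # a stand-in for the paper's dual-number quaternion solution.
    import numpy as np

    def weighted_rigid_transform(src, dst, w):
        """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
        w = w / w.sum()
        mu_s = (w[:, None] * src).sum(axis=0)
        mu_d = (w[:, None] * dst).sum(axis=0)
        H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        return R, t
    ```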

  20. Galaxy bias from the Dark Energy Survey Science Verification data: combining galaxy density maps and weak lensing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Pujol, A.; Gaztañaga, E.

    We measure the redshift evolution of galaxy bias from a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ∼116 deg² area of the Dark Energy Survey (DES) Science Verification data. This method was first developed in Amara et al. (2012) and later re-examined in a companion paper (Pujol et al., in prep) with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for a magnitude-limited galaxy sample. We find the galaxy bias and 1σ error bars in 4 photometric redshift bins to be 1.33±0.18 (z=0.2-0.4), 1.19±0.23 (z=0.4-0.6), 0.99±0.36 (z=0.6-0.8), and 1.66±0.56 (z=0.8-1.0). These measurements are consistent at the 1-2σ level with measurements on the same dataset using galaxy clustering and cross-correlation of galaxies with CMB lensing. In addition, our method provides the only σ_8-independent constraint among the three. We forward-model the main observational effects using mock galaxy catalogs by including shape noise, photo-z errors and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Furthermore, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  1. Methodological considerations of the GRADE method.

    PubMed

    Malmivaara, Antti

    2015-02-01

    The GRADE method (Grading of Recommendations, Assessment, Development, and Evaluation) provides a tool for rating the quality of evidence for systematic reviews and clinical guidelines. This article aims to analyse conceptually how well grounded the GRADE method is, and to suggest improvements. The eight criteria for rating the quality of evidence as proposed by GRADE are here analysed in terms of each criterion's potential to provide valid information for grading evidence. Secondly, the GRADE method of allocating weights and summarizing the values of the criteria is considered. It is concluded that three GRADE criteria have an appropriate conceptual basis to be used as indicators of confidence in research evidence in systematic reviews: internal validity of a study, consistency of the findings, and publication bias. In network meta-analyses, the indirectness of evidence may also be considered. It is here proposed that the grade for the internal validity of a study could in some instances justifiably decrease the overall grade by three grades (e.g. from high to very low), instead of the decrease of up to two grades suggested by the GRADE method.

  2. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
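
    One step described above, combining manual segmentations from multiple experts, can be illustrated with a simple confidence-weighted vote over binary masks. This is a stand-in for the paper's fusion method; the masks, confidences, and threshold are illustrative.

    ```python
    # Sketch: fusing expert segmentation masks by confidence-weighted voting.
    import numpy as np

    def fuse_expert_masks(masks, confidences, threshold=0.5):
        """masks: list of binary arrays; confidences: per-expert weights."""
        masks = np.stack([m.astype(float) for m in masks])
        w = np.asarray(confidences, dtype=float)
        w = w / w.sum()
        consensus = np.tensordot(w, masks, axes=1)   # weighted vote per pixel
        return consensus >= threshold

    rng = np.random.default_rng(0)
    experts = [rng.random((64, 64)) > 0.5 for _ in range(3)]
    fused = fuse_expert_masks(experts, confidences=[0.9, 0.7, 0.8])
    ```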

  3. Creating and supporting a mixed methods health services research team.

    PubMed

    Bowers, Barbara; Cohen, Lauren W; Elliot, Amy E; Grabowski, David C; Fishman, Nancy W; Sharkey, Siobhan S; Zimmerman, Sheryl; Horn, Susan D; Kemper, Peter

    2013-12-01

    To use the experience from a health services research evaluation to provide guidance in team development for mixed methods research. The Research Initiative Valuing Eldercare (THRIVE) team was organized by the Robert Wood Johnson Foundation to evaluate The Green House nursing home culture change program. This article describes the development of the research team and provides insights into how funders might engage with mixed methods research teams to maximize the value of the team. Like many mixed methods collaborations, the THRIVE team consisted of researchers from diverse disciplines, embracing diverse methodologies, and operating under a framework of nonhierarchical, shared leadership that required new collaborations, engagement, and commitment in the context of finite resources. Strategies to overcome these potential obstacles and achieve success included implementation of a Coordinating Center, dedicated time for planning and collaborating across researchers and methodologies, funded support for in-person meetings, and creative optimization of resources. Challenges are inevitably present in the formation and operation of effective mixed methods research teams. However, funders and research teams can implement strategies to promote success. © Health Research and Educational Trust.

  4. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    PubMed

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
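
    A toy version of the PCA/ICA identification idea follows: unmix a set of measured mixture spectra into independent components, then match each component against a reference library by correlation, using the best score as a crude reliability factor. The spectra, library, and scoring are illustrative assumptions, not the authors' pipeline.

    ```python
    # Sketch: ICA unmixing plus library matching for spectra (synthetic data).
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 500)
    library = {                                  # illustrative reference spectra
        "pigment_A": np.exp(-((x - 0.3) / 0.02) ** 2),
        "pigment_B": np.exp(-((x - 0.7) / 0.03) ** 2),
    }
    S = np.array(list(library.values()))
    A = rng.random((5, 2))                       # 5 mixture measurements
    X = A @ S + 0.01 * rng.normal(size=(5, 500))

    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(X.T).T        # recovered source spectra

    for comp in components:
        scores = {name: abs(np.corrcoef(comp, ref)[0, 1])
                  for name, ref in library.items()}
        best = max(scores, key=scores.get)
        print(f"matched {best} (reliability ~ {scores[best]:.2f})")
    ```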

  5. An algorithm for separation of mixed sparse and Gaussian sources

    PubMed Central

    Akkalkotkar, Ameya

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition. PMID:28414814

  6. An algorithm for separation of mixed sparse and Gaussian sources.

    PubMed

    Akkalkotkar, Ameya; Brown, Kevin Scott

    2017-01-01

    Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition.
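
    The reproducibility-ranking idea can be sketched as follows: run ICA on random subsamples of the data matrix and score each component by how reliably it reappears across runs, with stable components treated as nongaussian sources. This is an illustrative implementation of the general idea, not the authors' code.

    ```python
    # Sketch: ranking ICA components by reproducibility across subsamples.
    import numpy as np
    from sklearn.decomposition import FastICA

    def reproducibility_scores(X, n_components, n_runs=10, frac=0.8, seed=0):
        """X: (n_samples, n_features) data matrix."""
        rng = np.random.default_rng(seed)
        ref = FastICA(n_components=n_components, random_state=seed).fit(X).components_
        scores = np.zeros(n_components)
        for r in range(n_runs):
            rows = rng.choice(X.shape[0], int(frac * X.shape[0]), replace=False)
            comp = FastICA(n_components=n_components,
                           random_state=seed + r + 1).fit(X[rows]).components_
            # match each reference component to its best counterpart (sign-free)
            corr = np.abs(np.corrcoef(ref, comp)[:n_components, n_components:])
            scores += corr.max(axis=1)
        return scores / n_runs   # near 1: stable (nongaussian); low: unstable
    ```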

  7. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which was superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, data correlation analyzer, scanner IFOV, and random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  8. An Evaluation of Active Learning Causal Discovery Methods for Reverse-Engineering Local Causal Pathways of Gene Regulation

    PubMed Central

    Ma, Sisi; Kemmeren, Patrick; Aliferis, Constantin F.; Statnikov, Alexander

    2016-01-01

    Reverse-engineering of causal pathways that implicate diseases and vital cellular functions is a fundamental problem in biomedicine. Discovery of the local causal pathway of a target variable (that consists of its direct causes and direct effects) is essential for effective intervention and can facilitate accurate diagnosis and prognosis. Recent research has provided several active learning methods that can leverage passively observed high-throughput data to draft causal pathways and then refine the inferred relations with a limited number of experiments. The current study provides a comprehensive evaluation of the performance of active learning methods for local causal pathway discovery in real biological data. Specifically, 54 active learning methods/variants from 3 families of algorithms were applied for local causal pathways reconstruction of gene regulation for 5 transcription factors in S. cerevisiae. Four aspects of the methods’ performance were assessed, including adjacency discovery quality, edge orientation accuracy, complete pathway discovery quality, and experimental cost. The results of this study show that some methods provide significant performance benefits over others and therefore should be routinely used for local causal pathway discovery tasks. This study also demonstrates the feasibility of local causal pathway reconstruction in real biological systems with significant quality and low experimental cost. PMID:26939894

  9. An evaluation of fossil tip-dating versus node-age calibrations in tetraodontiform fishes (Teleostei: Percomorphaceae).

    PubMed

    Arcila, Dahiana; Alexander Pyron, R; Tyler, James C; Ortí, Guillermo; Betancur-R, Ricardo

    2015-01-01

    Time-calibrated phylogenies based on molecular data provide a framework for comparative studies. Calibration methods to combine fossil information with molecular phylogenies are, however, under active development, often generating disagreement about the best way to incorporate paleontological data into these analyses. This study provides an empirical comparison of the most widely used approach based on node-dating priors for relaxed clocks implemented in the programs BEAST and MrBayes, with two recently proposed improvements: one using a new fossilized birth-death process model for node dating (implemented in the program DPPDiv), and the other using a total-evidence or tip-dating method (implemented in MrBayes and BEAST). These methods are applied herein to tetraodontiform fishes, a diverse group of living and extinct taxa that features one of the most extensive fossil records among teleosts. Previous estimates of time-calibrated phylogenies of tetraodontiforms using node-dating methods reported disparate estimates for their age of origin, ranging from the late Jurassic to the early Paleocene (ca. 150–59 Ma). We analyzed a comprehensive dataset with 16 loci and 210 morphological characters, including 131 taxa (95 extant and 36 fossil species) representing all families of fossil and extant tetraodontiforms, under different molecular clock calibration approaches. Results from node-dating methods produced consistently younger ages than the tip-dating approaches. The older ages inferred by tip dating imply an unlikely early-late Jurassic (ca. 185–119 Ma) origin for this order and the existence of extended ghost lineages in their fossil record. Node-based methods, by contrast, produce time estimates that are more consistent with the stratigraphic record, suggesting a late Cretaceous (ca. 86–96 Ma) origin. We show that the precision of clade age estimates using tip dating increases with the number of fossils analyzed and with the proximity of fossil taxa to the node under assessment. This study suggests that current implementations of tip dating may overestimate ages of divergence in calibrated phylogenies. It also provides a comprehensive phylogenetic framework for tetraodontiform systematics and future comparative studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Modified RNA-seq method for microbial community and diversity analysis using rRNA in different types of environmental samples

    PubMed Central

    Yan, Yong-Wei; Zou, Bin; Zhu, Ting; Hozzein, Wael N.

    2017-01-01

    RNA-seq-based SSU (small subunit) rRNA (ribosomal RNA) analysis has provided a better understanding of potentially active microbial community within environments. However, for RNA-seq library construction, high quantities of purified RNA are typically required. We propose a modified RNA-seq method for SSU rRNA-based microbial community analysis that depends on the direct ligation of a 5’ adaptor to RNA before reverse-transcription. The method requires only a low-input quantity of RNA (10–100 ng) and does not require a DNA removal step. The method was initially tested on three mock communities synthesized with enriched SSU rRNA of archaeal, bacterial and fungal isolates at different ratios, and was subsequently used for environmental samples of high or low biomass. For high-biomass salt-marsh sediments, enriched SSU rRNA and total nucleic acid-derived RNA-seq datasets revealed highly consistent community compositions for all of the SSU rRNA sequences, and as much as 46.4%–59.5% of 16S rRNA sequences were suitable for OTU (operational taxonomic unit)-based community and diversity analyses with complete coverage of V1-V2 regions. OTU-based community structures for the two datasets were also highly consistent with those determined by all of the 16S rRNA reads. For low-biomass samples, total nucleic acid-derived RNA-seq datasets were analyzed, and highly active bacterial taxa were also identified by the OTU-based method, notably including members of the previously underestimated genus Nitrospira and phylum Acidobacteria in tap water, members of the phylum Actinobacteria on a shower curtain, and members of the phylum Cyanobacteria on leaf surfaces. More than half of the bacterial 16S rRNA sequences covered the complete region of primer 8F, and non-coverage rates as high as 38.7% were obtained for phylum-unclassified sequences, providing many opportunities to identify novel bacterial taxa. This modified RNA-seq method will provide a better snapshot of diverse microbial communities, most notably by OTU-based analysis, even for communities from low-biomass samples. PMID:29016661

  11. Model-assisted template extraction SRAF application to contact holes patterns in high-end flash memory device fabrication

    NASA Astrophysics Data System (ADS)

    Seoud, Ahmed; Kim, Juhwan; Ma, Yuansheng; Jayaram, Srividya; Hong, Le; Chae, Gyu-Yeol; Lee, Jeong-Woo; Park, Dae-Jin; Yune, Hyoung-Soon; Oh, Se-Young; Park, Chan-Ha

    2018-03-01

    Sub-resolution assist feature (SRAF) insertion techniques have been effectively used for a long time now to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the objectives of applications and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be of concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, and is based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe provides optimum lithographic quality in relation to various manufacturing aspects in a very short time, compared to traditional methods of rule optimization. Experiments were done using memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches, based on lithographic process window quality, runtime performance, and geometric output consistency.
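
    The decision-tree-learning step mentioned above can be pictured as learning placement rules from the output of a model-based SRAF engine. The sketch below trains a shallow tree on synthetic geometric features and prints the learned rules; the feature set, labels, and thresholds are all illustrative assumptions, not the MATE implementation.

    ```python
    # Sketch: learning SRAF placement rules with a decision tree
    # (synthetic features/labels standing in for model-based SRAF output).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Candidate sites described by, e.g., distance to nearest main feature (nm),
    # local pattern density, and gap width (nm) -- synthetic values
    X = rng.random((1000, 3)) * [400.0, 1.0, 300.0]
    # Labels: 1 where the model-based engine inserted an assist feature
    y = ((X[:, 0] > 120) & (X[:, 1] < 0.5)).astype(int)

    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    # The learned tree plays the role of the extracted template/rule set
    print(export_text(tree, feature_names=["dist_to_main", "density", "gap_width"]))
    ```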

  12. Methods of refining natural oils and methods of producing fuel compositions

    DOEpatents

    Firth, Bruce E; Kirk, Sharon E; Gavaskar, Vasudeo S

    2015-11-04

    A method of refining a natural oil includes: (a) providing a feedstock that includes a natural oil; (b) reacting the feedstock in the presence of a metathesis catalyst to form a metathesized product that includes olefins and esters; (c) passivating residual metathesis catalyst with an agent selected from the group consisting of phosphorous acid, phosphinic acid, and a combination thereof; (d) separating the olefins in the metathesized product from the esters in the metathesized product; and (e) transesterifying the esters in the presence of an alcohol to form a transesterified product and/or hydrogenating the olefins to form a fully or partially saturated hydrogenated product. Methods for suppressing isomerization of olefin metathesis products produced in a metathesis reaction, and methods of producing fuel compositions are described.

  13. A simplified focusing and astigmatism correction method for a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Lu, Yihua; Zhang, Xianmin; Li, Hai

    2018-01-01

    Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
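
    The two steps translate naturally into code: threshold the FFT magnitude of the image, then fit an ellipse to the surviving points, here via second-order moments, so that elongation and orientation of the ellipse indicate astigmatism and its stretching direction. The thresholding scheme, moment-based fit, and synthetic image are illustrative, not the authors' exact procedure.

    ```python
    # Sketch: FFT thresholding plus moment-based ellipse fit for an SEM image.
    import numpy as np

    def fft_ellipse(image, keep_fraction=0.01):
        mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        thr = np.quantile(mag, 1 - keep_fraction)        # keep strongest bins
        ys, xs = np.nonzero(mag >= thr)
        ys = ys - ys.mean()
        xs = xs - xs.mean()
        cov = np.cov(np.vstack([xs, ys]))                # second-order moments
        evals, evecs = np.linalg.eigh(cov)
        a, b = np.sqrt(evals[::-1])                      # major/minor axes
        angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
        return a, b, angle                               # a/b >> 1 -> astigmatism

    rng = np.random.default_rng(0)
    a, b, angle = fft_ellipse(rng.random((256, 256)))
    print(f"elongation = {a / b:.2f}, stretch direction = {angle:.1f} deg")
    ```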

  14. Modeling techniques for quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
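
    The finite difference route mentioned above reduces the 1D Schrödinger equation to a matrix eigenvalue problem. The sketch below solves it for a single quantum well with a constant effective mass; the well depth, width, and GaAs-like parameters are illustrative, and the Poisson coupling is omitted.

    ```python
    # Sketch: quantized subband states from a 3-point finite-difference
    # discretization of the 1D Schrödinger equation (single well, constant mass).
    import numpy as np

    hbar = 1.054571817e-34                 # J s
    eV = 1.602176634e-19                   # J
    m_eff = 0.067 * 9.1093837015e-31       # GaAs-like effective mass, kg

    n, L = 500, 30e-9
    z = np.linspace(0, L, n)
    dz = z[1] - z[0]
    V = np.where(np.abs(z - L / 2) < 5e-9, 0.0, 0.3 * eV)   # 0.3 eV deep well

    # H = -(hbar^2 / 2m) d^2/dz^2 + V via the standard 3-point stencil
    t = hbar**2 / (2 * m_eff * dz**2)
    H = (np.diag(2 * t + V)
         - np.diag(t * np.ones(n - 1), 1)
         - np.diag(t * np.ones(n - 1), -1))
    E, psi = np.linalg.eigh(H)
    print("lowest subband energies (eV):", E[:3] / eV)
    ```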

  15. Team-based assessment of professional behavior in medical students.

    PubMed

    Raee, Hojat; Amini, Mitra; Momen Nasab, Ameneh; Malek Pour, Abdolrasoul; Jafari, Mohammad Morad

    2014-07-01

    Self and peer assessment provides important information about the individual's performance and behavior in all aspects of their professional work environment. The aim of this study is to evaluate the professional behavior and performance of medical students in the form of team-based assessment. In a cross-sectional study, 100 medical students in the 7th year of education were randomly selected and enrolled; for each student five questionnaires were filled out, including one self-assessment, two peer assessments and two resident assessments. The scoring system of the questionnaires was based on a seven-point Likert scale. After filling out the questions in the questionnaire, numerical data and written comments provided to the students were collected, analyzed and discussed. Internal consistency (Cronbach's alpha) of the questionnaires was assessed. A p<0.05 was considered significant. Internal consistency was acceptable (Cronbach's alpha 0.83). Interviews revealed that the majority of students and assessors interviewed found the method acceptable. The range of scores was 1-6 (Mean±SD=4.39±0.57) for the residents' assessment, 2-6 (Mean±SD=4.49±0.53) for peer assessment, and 3-7 (Mean±SD=5.04±0.32) for self-assessment. There was a significant difference between self-assessment and other methods of assessment. This study demonstrates that a team-based assessment is an acceptable and feasible method for peer and self-assessment of medical students' learning in a clinical clerkship, and has some advantages over traditional assessment methods. Further studies are needed to focus on the strengths and weaknesses.

  16. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  17. Association between physician compensation methods and delivery of guideline-concordant STD care: is there a link?

    PubMed

    Pourat, Nadereh; Rice, Thomas; Tai-Seale, Ming; Bolan, Gail; Nihalani, Jas

    2005-07-01

    To examine the association between primary care physician (PCP) reimbursement and delivery of sexually transmitted disease (STD) services. Cross-sectional sample of PCPs contracted with Medicaid managed care organizations in 2002 in 8 California counties with the highest rates of Medicaid enrollment and chlamydia cases. The association between physician reimbursement methods and physician practices in delivery of STD services was examined in multiple logistic regression models, controlling for a number of potential confounders. Evidence of an association between reimbursement based on management of utilization and the PCP practice of providing chlamydia drugs for the partner's treatment was most apparent. In adjusted analyses, physicians reimbursed with capitation and a financial incentive for management of utilization (odds ratio [OR] = 1.63) or salary and a financial incentive for management of utilization (OR = 2.63) were more likely than those reimbursed under other methods to prescribe chlamydia drugs for the partner. However, PCPs least often reported they annually screened females aged 15-19 years for chlamydia (OR = 0.63) if reimbursed under salary and a financial incentive for productivity, or screened females aged 20-25 years (OR = 0.43) if reimbursed under salary and a financial incentive for financial performance. Some physician reimbursement methods may influence care delivery, but reimbursement is not consistently associated with how physicians deliver STD care. Interventions to encourage physicians to consistently provide guideline-concordant care despite conflicting financial incentives can maintain quality of care. In addition, incentives that may improve guideline-concordant care should be strengthened.

  18. Modeling techniques for quantum cascade lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
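    As a pointer to what the simplest of the surveyed techniques involves, the sketch below solves the one-dimensional time-independent Schrödinger equation for a single quantum well by finite differences. A real QCL calculation would use the actual multi-quantum-well potential, a position-dependent effective mass, and self-consistent Poisson coupling; the well width, barrier height, and effective mass here are generic illustrative values.

    ```python
    import numpy as np

    # Finite-difference eigensolve of -hbar^2/(2 m*) psi'' + V psi = E psi
    # for a single GaAs-like quantum well (illustrative, not a QCL design).
    hbar = 1.054571817e-34       # [J s]
    m0 = 9.1093837015e-31        # electron mass [kg]
    meff = 0.067 * m0            # GaAs effective mass (assumed constant)
    eV = 1.602176634e-19

    L, N = 40e-9, 1000           # simulation box [m], grid points
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    V = np.where(np.abs(x) < 5e-9, 0.0, 0.3 * eV)   # 10 nm well, 0.3 eV barriers

    # Tridiagonal Hamiltonian: kinetic term via central differences.
    t = hbar**2 / (2 * meff * dx**2)
    H = (np.diag(2 * t + V)
         - np.diag(t * np.ones(N - 1), 1)
         - np.diag(t * np.ones(N - 1), -1))
    E, psi = np.linalg.eigh(H)

    print("lowest eigenenergies [eV]:", E[:3] / eV)
    ```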

  19. First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals

    NASA Astrophysics Data System (ADS)

    Tadano, Terumasa; Tsuneyuki, Shinji

    2018-04-01

    We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.
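    The essence of the self-consistent phonon idea can be seen in a one-mode toy model: the effective frequency depends on the thermal mean-square displacement, which in turn depends on the frequency, so one iterates to a fixed point. The sketch below uses a hypothetical quartic force constant and mass and is only a caricature of the first-principles machinery described above.

    ```python
    import numpy as np

    # Toy self-consistent phonon loop for one quartic-anharmonic mode:
    #   V(x) = 1/2 m w0^2 x^2 + 1/4 q x^4
    # First-order SCP: w^2 = w0^2 + 3 q <x^2> / m, with the thermal
    # displacement <x^2> = hbar/(2 m w) * coth(hbar w / (2 kB T)).
    # All parameter values are illustrative, not taken from the paper.
    hbar, kB = 1.054571817e-34, 1.380649e-23
    m = 1.6e-26             # roughly an oxygen mass [kg]
    w0 = 2 * np.pi * 1e12   # 1 THz harmonic frequency [rad/s]
    q = 5e18                # quartic force constant [J/m^4] (hypothetical)

    def scp_frequency(T, tol=1e-10, max_iter=200):
        w = w0
        for _ in range(max_iter):
            x2 = hbar / (2 * m * w) / np.tanh(hbar * w / (2 * kB * T))
            w_new = np.sqrt(w0**2 + 3 * q * x2 / m)
            if abs(w_new - w) < tol * w:
                return w_new
            w = w_new
        return w

    for T in (100, 300, 600):
        print(T, "K ->", scp_frequency(T) / (2 * np.pi * 1e12), "THz")
    ```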

  20. Adaptive identification of vessel's added moments of inertia with program motion

    NASA Astrophysics Data System (ADS)

    Alyshev, A. S.; Melnikov, V. G.

    2018-05-01

    In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods, a description of the proposed method and the experimental stand, the test procedure, the calculation formulas, and the experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reaction wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.
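    The paper's energy method with programmed servo motions is more elaborate than can be shown here, but the torsional-oscillation relation it builds on is simple; the sketch below recovers an inertia from measured periods using the textbook formula T = 2π√(I/k). The stiffness, rack inertia, and period values are hypothetical.

    ```python
    import numpy as np

    # Textbook torsional-oscillation relation underlying the rack experiment:
    # T = 2*pi*sqrt(I/k). All numbers below are hypothetical.
    k_torsion = 2.5        # torsion stiffness [N m / rad] (assumed calibrated)
    I_rack = 0.012         # inertia of the empty rack [kg m^2] (hypothetical)

    periods = np.array([1.92, 1.94, 1.93, 1.95])   # measured periods [s]
    T = periods.mean()

    I_total = k_torsion * T**2 / (4.0 * np.pi**2)  # total oscillating inertia
    I_model = I_total - I_rack                     # ship model's share
    print(f"ship-model inertia: {I_model:.4f} kg m^2")
    ```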

  1. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  2. Application of a New Ensemble Conserving Quantum Dynamics Simulation Algorithm to Liquid para-Hydrogen and ortho-Deuterium

    DOE PAGES

    Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar; ...

    2015-06-30

    Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  3. Comparison of two methods for estimating base flow in selected reaches of the South Platte River, Colorado

    USGS Publications Warehouse

    Capesius, Joseph P.; Arnold, L. Rick

    2012-01-01

    The Mass Balance results were so variable from day to day and month to month that they appeared suspect with respect to the concept of groundwater flow as being gradual and slow. This large variability is likely the result of many factors, which could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to the travel time of flow within a reach. The Pilot Point time-series results were much less variable than the Mass Balance results, and extreme values were effectively constrained. Less day-to-day variability, smaller-magnitude extreme values, and smoother transitions in the base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow as being gradual and slow. The Pilot Point method thus provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.

  4. Application of a New Ensemble Conserving Quantum Dynamics Simulation Algorithm to Liquid para-Hydrogen and ortho-Deuterium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar

    Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  5. Application of a new ensemble conserving quantum dynamics simulation algorithm to liquid para-hydrogen and ortho-deuterium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kyle K. G., E-mail: kylesmith@utexas.edu; Poulsen, Jens Aage, E-mail: jens72@chem.gu.se; Nyman, Gunnar, E-mail: nyman@chem.gu.se

    We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  6. Application of a new ensemble conserving quantum dynamics simulation algorithm to liquid para-hydrogen and ortho-deuterium.

    PubMed

    Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Cunsolo, Alessandro; Rossky, Peter J

    2015-06-28

    We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm(-3)) and (T = 23.0 K, n = 24.61 nm(-3)), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  7. Towards a true aerosol-and-cloud retrieval scheme

    NASA Astrophysics Data System (ADS)

    Thomas, Gareth; Poulsen, Caroline; Povey, Adam; McGarragh, Greg; Jerg, Matthias; Siddans, Richard; Grainger, Don

    2014-05-01

    The Optimal Retrieval of Aerosol and Cloud (ORAC) - formerly the Oxford-RAL Aerosol and Cloud retrieval - offers a framework that can provide consistent and well-characterised properties of both aerosols and clouds from a range of imaging satellite instruments. Several practical issues stand in the way of achieving the potential of this combined scheme, however: in particular, the sometimes conflicting priorities and requirements of the aerosol and cloud retrieval problems, and the question of the unambiguous identification of aerosol and cloud pixels. This presentation will describe recent developments made to the ORAC scheme for both aerosol and cloud, and detail how these are being integrated into a single retrieval framework. The implementation of a probabilistic method for pixel identification will also be presented, for both cloud detection and aerosol/cloud type selection. The method is based on Bayesian methods applied to the optimal estimation retrieval output of ORAC and is particularly aimed at providing additional information in the so-called "twilight zone", where pixels cannot be unambiguously identified as either aerosol or cloud and traditional cloud or aerosol products do not provide results.

  8. Design of robust systems by means of the numerical optimization with harmonic changing of the model parameters

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Reva, I. L.; Dimitrov, L. V.

    2017-01-01

    The design of robust feedback systems by means of the numerical optimization method is usually accomplished by modeling several systems simultaneously. Each such system has the same regulator but a different object model, so that all edge values of the possible object model parameters are included. Even so, not all possible sets of model parameters are taken into account; hence, the regulator may not be robust, i.e., it may fail to provide system stability in some cases that were not tested during the optimization procedure. The paper proposes an alternative method, which consists in varying all parameters simultaneously according to a harmonic law, with mutually incommensurable frequencies for the individual parameters. This provides full coverage of the parameter space, as sketched below.
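    A minimal sketch of that sweep strategy follows: the plant parameters oscillate across their ranges at frequencies in irrational ratios, so the trajectory densely fills the parameter box instead of visiting only its corners. The bounds, frequencies, and toy cost function are hypothetical placeholders.

    ```python
    import numpy as np

    # Harmonic sweep of uncertain plant parameters: each parameter oscillates
    # across its full range with mutually incommensurable frequencies, so the
    # sweep densely covers the parameter box. Bounds and the toy cost stand in
    # for a real closed-loop performance measure.
    bounds = np.array([[0.80, 1.20],    # e.g. plant gain
                       [0.05, 0.15],    # e.g. time constant [s]
                       [0.00, 0.02]])   # e.g. transport delay [s]
    freqs = np.array([1.0, np.sqrt(2.0), np.sqrt(5.0)])  # irrational ratios

    def worst_cost(controller, cost_fn, t_end=50.0, n_steps=2000):
        """Worst value of cost_fn(controller, p) over the harmonic sweep."""
        lo, hi = bounds[:, 0], bounds[:, 1]
        worst = -np.inf
        for t in np.linspace(0.0, t_end, n_steps):
            p = lo + (hi - lo) * 0.5 * (1.0 + np.sin(freqs * t))
            worst = max(worst, cost_fn(controller, p))
        return worst

    def toy_cost(controller, p):
        # stand-in for a simulated closed-loop performance index
        return float(np.sum((controller - p) ** 2))

    # a regulator is then tuned by minimizing worst_cost over its parameters
    print(worst_cost(np.array([1.0, 0.10, 0.01]), toy_cost))
    ```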

  9. Nonlocal Symmetries, Conservation Laws and Interaction Solutions of the Generalised Dispersive Modified Benjamin-Bona-Mahony Equation

    NASA Astrophysics Data System (ADS)

    Yan, Xue-Wei; Tian, Shou-Fu; Dong, Min-Jie; Wang, Xiu-Bin; Zhang, Tian-Tian

    2018-05-01

    We consider the generalised dispersive modified Benjamin-Bona-Mahony equation, which describes an approximation for long surface waves in nonlinear dispersive media. By employing the truncated Painlevé expansion method, we derive its non-local symmetry and Bäcklund transformation. The non-local symmetry is localised by a new variable, which provides the corresponding non-local symmetry group and similarity reductions. Moreover, a direct method is provided to construct a kind of finite symmetry transformation via the classic Lie point symmetry of the normal prolonged system. Finally, we find that the equation is a consistent Riccati expansion solvable system. With the help of the Jacobi elliptic function, we get its interaction solutions between solitary waves and cnoidal periodic waves.

  10. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
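    For context, the sketch below shows a bare-bones constrained alternating least squares factorization of the kind the patent builds on, with nonnegativity imposed by projection at each half-step. The patented bias-offset step itself is not reproduced, and the data are synthetic.

    ```python
    import numpy as np

    # Minimal nonnegativity-constrained alternating least squares (ALS)
    # for factoring spectral data D ~ C @ S.T. Projection (clipping) at
    # each half-step is the constraint that introduces the bias the
    # patent discusses; its bias-offset correction is not shown here.
    rng = np.random.default_rng(1)
    C_true = rng.random((100, 3))        # concentrations (100 samples)
    S_true = rng.random((50, 3))         # spectra (50 channels)
    D = C_true @ S_true.T + 0.01 * rng.normal(size=(100, 50))

    k = 3
    S = rng.random((50, k))              # random initial spectra
    for _ in range(200):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)

    print("residual norm:", np.linalg.norm(D - C @ S.T))
    ```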

  11. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

    All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and it yields a consistent prediction for the Fe-Porphyrin complex.

  12. Development and Standardization of the Diagnostic Adaptive Behavior Scale: Application of Item Response Theory to the Assessment of Adaptive Behavior

    ERIC Educational Resources Information Center

    Tassé, Marc J.; Schalock, Robert L.; Thissen, David; Balboni, Giulia; Bersani, Henry, Jr.; Borthwick-Duffy, Sharon A.; Spreat, Scott; Widaman, Keith F.; Zhang, Dalun; Navas, Patricia

    2016-01-01

    The Diagnostic Adaptive Behavior Scale (DABS) was developed using item response theory (IRT) methods and was constructed to provide the most precise and valid adaptive behavior information at or near the cutoff point of making a decision regarding a diagnosis of intellectual disability. The DABS initial item pool consisted of 260 items. Using IRT…

  13. Providing Agility in C2 Environments Through Networked Information Processing: A Model of Expertise

    DTIC Science & Technology

    2014-06-01

    personal benefits promoting self-esteem, pride, self-efficacy, personal identification with colleagues and organizations, obtaining a better...studies based on staged-event methods with “target-present” and “target-absent” lineups. The results show that when choosers make positive... identification, the correlation between confidence and accuracy was consistently high. Besides, correct choosers have a higher mean confidence level than

  14. 8 CFR 100.3 - Places where, and methods whereby, information may be secured or submittals or requests made.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... information relative to a matter handled by CBP, ICE or USCIS or any person desiring to make a submittal or..., ICE or USCIS as appropriate. When the submittal or request consists of a formal application for one of the documents, privileges, or other benefits provided for in the laws administered by CBP, ICE or...

  15. 8 CFR 100.3 - Places where, and methods whereby, information may be secured or submittals or requests made.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... information relative to a matter handled by CBP, ICE or USCIS or any person desiring to make a submittal or..., ICE or USCIS as appropriate. When the submittal or request consists of a formal application for one of the documents, privileges, or other benefits provided for in the laws administered by CBP, ICE or...

  16. 8 CFR 100.3 - Places where, and methods whereby, information may be secured or submittals or requests made.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... information relative to a matter handled by CBP, ICE or USCIS or any person desiring to make a submittal or..., ICE or USCIS as appropriate. When the submittal or request consists of a formal application for one of the documents, privileges, or other benefits provided for in the laws administered by CBP, ICE or...

  17. 8 CFR 100.3 - Places where, and methods whereby, information may be secured or submittals or requests made.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... information relative to a matter handled by CBP, ICE or USCIS or any person desiring to make a submittal or..., ICE or USCIS as appropriate. When the submittal or request consists of a formal application for one of the documents, privileges, or other benefits provided for in the laws administered by CBP, ICE or...

  18. 8 CFR 100.3 - Places where, and methods whereby, information may be secured or submittals or requests made.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information relative to a matter handled by CBP, ICE or USCIS or any person desiring to make a submittal or..., ICE or USCIS as appropriate. When the submittal or request consists of a formal application for one of the documents, privileges, or other benefits provided for in the laws administered by CBP, ICE or...

  19. Evaluation of various soil water samplers for virological sampling.

    PubMed Central

    Wang, D S; Lance, J C; Gerba, C P

    1980-01-01

    Two commercially available soil water samplers and a ceramic sampler constructed in our laboratories were evaluated for their ability to recover viruses from both tap water and secondary sewage effluent. The ceramic sampler consistently gave the best recoveries of viruses from water samples. Soil columns containing ceramic samplers at various depths provide a simple method for studying virus transport through sewage-contaminated soils. PMID:6247976

  20. Shallow Water Reverberation Measurement and Prediction

    DTIC Science & Technology

    1994-06-01

    The three-dimensional Hamiltonian Acoustic Ray-tracing Program for the Ocean (HARPO) was used as the primary propagation modeling tool. The temporal signal processing consisted of a short-time Fourier transform spectral estimation method applied to data from a single hydrophone... The report summarizes the work completed and discusses lessons learned. Advice regarding future work to refine the present study will be provided.

  1. Separation of pigment formulations by high-performance thin-layer chromatography with automated multiple development.

    PubMed

    Stiefel, Constanze; Dietzel, Sylvia; Endress, Marc; Morlock, Gertrud E

    2016-09-02

    Food packaging is designed to provide sufficient protection for the respective filling, legally binding information for the consumers like nutritional facts or filling information, and an attractive appearance to promote the sale. For quality and safety of the package, a regular quality control of the printing materials used is necessary to obtain consistently good print results, to avoid migration of undesired ink components into the food, and to identify potentially faulty ink batches. Analytical approaches, however, have hardly been considered for quality assurance so far due to the lack of robust methods suitable for the analysis of poorly soluble pigment formulations. Thus, a simple and generic high-performance thin-layer chromatography (HPTLC) method for the separation of different colored pigment formulations was developed on HPTLC plates silica gel 60 by automated multiple development. The gradient system provided a sharp resolution for differently soluble pigment constituents like additives and coating materials. The results of multi-detection allowed a first assignment of the differently detectable bands to particular chemical substance classes (e.g., lipophilic components), enabled the comparison of different commercially available pigment batches, and revealed substantial variations in the composition of the batches. Hyphenation of HPTLC with high-resolution mass spectrometry and infrared spectroscopy allowed the characterization of single unknown pigment constituents, which may partly be responsible for known quality problems during printing. The newly developed, precise and selective HPTLC method can be used as part of routine quality control for both incoming pigment batches and the monitoring of internal pigment production processes, to secure a consistent pigment composition resulting in consistent ink quality, a faultless print image, and safe products. Hyphenation of HPTLC with the A. fischeri bioassay gave first information on the bioactivity, or rather the toxicological potential, of different compounds of the pigment formulations. The results of the bioassay might be helpful for choosing pigment compositions that provide high printing quality while at the same time guaranteeing high consumer safety, especially with regard to smaller pigment components, which tend to migrate through the packaging. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Improving health worker performance of abortion services: an assessment of post-training support to providers in India, Nepal and Nigeria.

    PubMed

    Benson, Janie; Healy, Joan; Dijkerman, Sally; Andersen, Kathryn

    2017-11-21

    Health worker performance has been the focus of numerous interventions and evaluation studies in low- and middle-income countries. Few have examined changes in individual provider performance with an intervention encompassing post-training support contacts to improve their clinical practice and resolve programmatic problems. This paper reports the results of an intervention with 3471 abortion providers in India, Nepal and Nigeria. Following abortion care training, providers received in-person visits and virtual contacts by a clinical and programmatic support team for a 12-month period, designed to address their individual practice issues. The intervention also included technical assistance to and upgrades in facilities where the providers worked. Quantitative measures to assess provider performance were established, including: 1) Increase in service provision; 2) Consistent service provision; 3) Provision of high quality of care through use of World Health Organization-recommended uterine evacuation technologies, management of pain and provision of post-abortion contraception; and 4) Post-abortion contraception method mix. Descriptive univariate analysis was conducted, followed by examination of the bivariate relationships between all independent variables and the four dependent performance outcome variables by calculating unadjusted odds ratios, by country and overall. Finally, multivariate logistic regression was performed for each outcome. Providers received an average of 5.7 contacts. Sixty-two percent and 46% of providers met measures for consistent service provision and quality of care, respectively. Fewer providers achieved an increased number of services (24%). Forty-six percent provided an appropriate postabortion contraceptive mix to clients. Most providers met the quality components for use of WHO-recommended abortion methods and provision of pain management. Factors significantly associated with achievement of all measures were providers working in sites offering community outreach and those trained in intervention year two. The number of in-person contacts was significantly associated with achievement of three of four measures. Post-training support holds promise for strengthening health worker performance. Further research is needed to compare this intervention with other approaches and assess how post-training contacts could be incorporated into current health system supervision.

  3. Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm.

    PubMed

    Tan, Xia; Ji, Zhong; Zhang, Yadan

    2018-04-25

    Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
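    As an illustration of the MIV screening step described above, the sketch below trains a small neural network, perturbs each input by ±10%, and ranks features by the mean shift in the prediction. The data are synthetic stand-ins for pulse-wave features, and the genetic-algorithm weight optimization is omitted.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Mean impact value (MIV) screening: perturb each input feature by
    # +/-10% and measure the mean change in the network output. Synthetic
    # data stand in for pulse-wave transit times and waveform parameters.
    rng = np.random.default_rng(0)
    X = rng.uniform(0.5, 1.5, size=(300, 6))      # 6 candidate features
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.05 * rng.normal(size=300)

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, y)

    miv = []
    for j in range(X.shape[1]):
        X_up, X_dn = X.copy(), X.copy()
        X_up[:, j] *= 1.10
        X_dn[:, j] *= 0.90
        miv.append(np.mean(net.predict(X_up) - net.predict(X_dn)))

    print("MIV per feature:", np.round(miv, 3))   # large |MIV| -> keep
    ```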

  4. Estimating meningitis hospitalization rates for sentinel hospitals conducting invasive bacterial vaccine-preventable diseases surveillance.

    PubMed

    2013-10-04

    The World Health Organization (WHO)-coordinated Global Invasive Bacterial Vaccine-Preventable Diseases (IB-VPD) sentinel hospital surveillance network provides data for decision making regarding use of pneumococcal conjugate vaccine and Haemophilus influenzae type b (Hib) vaccine, both recommended for inclusion in routine childhood immunization programs worldwide. WHO recommends that countries conduct sentinel hospital surveillance for meningitis among children aged <5 years, including collection of cerebrospinal fluid (CSF) for laboratory detection of bacterial etiologies. Surveillance for pneumonia and sepsis are recommended at selected hospitals with well-functioning laboratories where meningitis surveillance consistently meets process indicators (e.g., surveillance performance indicators). To use sentinel hospital surveillance for meningitis to estimate meningitis hospitalization rates, WHO developed a rapid method to estimate the number of children at-risk for meningitis in a sentinel hospital catchment area. Monitoring changes in denominators over time using consistent methods is essential for interpreting changes in sentinel surveillance incidence data and for assessing the effect of vaccine introduction on disease epidemiology. This report describes the method and its use in The Gambia and Senegal.

  5. Integrated carbon and chlorine isotope modeling: applications to chlorinated aliphatic hydrocarbons dechlorination.

    PubMed

    Jin, Biao; Haderlein, Stefan B; Rolle, Massimo

    2013-02-05

    We propose a self-consistent method to predict the evolution of carbon and chlorine isotope ratios during degradation of chlorinated hydrocarbons. The method treats explicitly the cleavage of isotopically different C-Cl bonds and thus considers, simultaneously, combined carbon-chlorine isotopologues. To illustrate the proposed modeling approach we focus on the reductive dehalogenation of chlorinated ethenes. We compare our method with the currently available approach, in which carbon and chlorine isotopologues are treated separately. The new approach provides an accurate description of dual-isotope effects regardless of the extent of the isotope fractionation and physical characteristics of the experimental system. We successfully applied the new approach to published experimental results on dehalogenation of chlorinated ethenes both in well-mixed systems and in situations where mass-transfer limitations control the overall rate of biodegradation. The advantages of our self-consistent dual isotope modeling approach proved to be most evident when isotope fractionation factors of carbon and chlorine differed significantly and for systems with mass-transfer limitations, where both physical and (bio)chemical transformation processes affect the observed isotopic values.
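    For contrast with the coupled-isotopologue treatment proposed above, the conventional approach applies Rayleigh fractionation to carbon and chlorine separately; the short sketch below computes the resulting dual-isotope trend. The enrichment factors are illustrative values for reductive dechlorination, not taken from the paper.

    ```python
    import numpy as np

    # Separate Rayleigh fractionation for C and Cl during dechlorination:
    # delta = (delta0 + 1000) * f**(eps/1000) - 1000, with eps in permil.
    eps_C, eps_Cl = -15.0, -4.0            # enrichment factors [permil]
    f = np.linspace(1.0, 0.05, 50)         # remaining substrate fraction

    delta13C_0, delta37Cl_0 = -25.0, 0.0   # initial delta values [permil]
    d13C = (delta13C_0 + 1000.0) * f**(eps_C / 1000.0) - 1000.0
    d37Cl = (delta37Cl_0 + 1000.0) * f**(eps_Cl / 1000.0) - 1000.0

    # Dual-isotope slope, the usual diagnostic cross-plot:
    slope = np.polyfit(d37Cl, d13C, 1)[0]
    print("d13C vs d37Cl slope ~", round(slope, 2), "(about eps_C/eps_Cl)")
    ```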

  6. On controlling nonlinear dissipation in high order filter methods for ideal and non-ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjogreen, B.

    2004-01-01

    The newly developed adaptive numerical dissipation control in spatially high order filter schemes for the compressible Euler and Navier-Stokes equations has been recently extended to the ideal and non-ideal magnetohydrodynamics (MHD) equations. These filter schemes are applicable to complex unsteady MHD high-speed shock/shear/turbulence problems. They also provide a natural and efficient way for the minimization of Div(B) numerical error. The adaptive numerical dissipation mechanism consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. The numerical dissipation considered consists of high order linear dissipation for the suppression of high frequency oscillation and the nonlinear dissipative portion of high-resolution shock-capturing methods for discontinuity capturing. The applicable nonlinear dissipative portion of high-resolution shock-capturing methods is very general. The objective of this paper is to investigate the performance of three commonly used types of nonlinear numerical dissipation for both the ideal and non-ideal MHD.

  7. Is provider type associated with cancer screening and prevention: advanced practice registered nurses, physician assistants, and physicians

    PubMed Central

    2014-01-01

    Background: Physician recommendations for cancer screening and prevention are associated with patient compliance. However, time constraints may limit physicians’ ability to provide all recommended preventive services, especially with increasing demand from the Affordable Care Act in the United States. Team-based practice that includes advanced practice registered nurses and physician assistants (APRN/PA) may help meet this demand. This study investigates the relationship between an APRN/PA visit and receipt of guideline-consistent cancer screening and prevention recommendations. Methods: Data from the 2010 National Health Interview Survey were analyzed with multivariate logistic regression to assess provider type seen and receipt of guideline-consistent cancer screening and prevention recommendations (n = 26,716). Results: In adjusted analyses, women who saw a primary care physician (PCP) and an APRN/PA or a PCP without an APRN/PA in the past 12 months were more likely to be compliant with cervical and breast cancer screening guidelines than women who did not see a PCP or APRN/PA (all p < 0.0001 for provider type). Women and men who saw a PCP and an APRN/PA or a PCP without an APRN/PA were also more likely to receive guideline-consistent colorectal cancer screening and advice to quit smoking and participate in physical activity than women and men who did not see a PCP or APRN/PA (all p < 0.01 for provider type). Conclusions: Seeing a PCP alone, or in conjunction with an APRN/PA, is associated with patient receipt of guideline-consistent cancer prevention and screening recommendations. Integrating APRN/PA into primary care may assist with the delivery of cancer prevention and screening services. More intervention research efforts are needed to explore how APRN/PA will be best able to increase cancer screening, HPV vaccination, and receipt of behavioral counseling, especially during this era of healthcare reform. PMID:24685149

  8. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  9. Equivalence of Laptop and Tablet Administrations of the Minnesota Multiphasic Personality Inventory-2 Restructured Form.

    PubMed

    Menton, William H; Crighton, Adam H; Tarescavage, Anthony M; Marek, Ryan J; Hicks, Adam D; Ben-Porath, Yossef S

    2017-06-01

    The present study investigated the comparability of laptop computer- and tablet-based administration modes for the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Employing a counterbalanced within-subjects design, the MMPI-2-RF was administered via both modes to a sample of college undergraduates ( N = 133). Administration modes were compared in terms of mean scale scores, internal consistency, test-retest consistency, external validity, and administration time. Mean scores were generally similar, and scores produced via both methods appeared approximately equal in terms of internal consistency and test-retest consistency. Scores from the two modalities also evidenced highly similar patterns of associations with external criteria. Notably, tablet administration of the MMPI-2-RF was substantially longer than laptop administration in the present study (mean difference 7.2 minutes, Cohen's d = .95). Overall, results suggest that varying administration mode between laptop and tablet has a negligible influence on MMPI-2-RF scores, providing evidence that these modes of administration can be considered psychometrically equivalent.

  10. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies

    NASA Astrophysics Data System (ADS)

    Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.

    2018-06-01

    We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and the framework provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium with which to develop the model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing the vegetation structure.

  11. Partitioning of functional gene expression data using principal points.

    PubMed

    Kim, Jaehee; Kim, Haseong

    2017-10-12

    DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data since they are considered as independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning the functional data can find homogeneous subgroups of entities among the massive numbers of genes within the inherent biological networks, so it can be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. For simulated data, our proposed method yields high connectedness after clustering and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real-data applications, we partition genes from budding yeast and Escherichia coli expression data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of orthonormal basis system, and it provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to each set of cell-cycle-regulated time-course yeast genes and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
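    A compact sketch of that two-stage pipeline follows: fit a low-degree Legendre expansion to each profile, then partition genes in coefficient space. Ordinary k-means stands in here for the principal-points step, and the profiles are synthetic.

    ```python
    import numpy as np
    from numpy.polynomial import legendre
    from sklearn.cluster import KMeans

    # Represent each time-course expression profile by a few Legendre
    # coefficients, then cluster genes in coefficient space. Synthetic
    # profiles stand in for microarray time series.
    rng = np.random.default_rng(42)
    t = np.linspace(-1, 1, 18)                   # 18 time points on [-1, 1]
    n_genes = 200
    profiles = np.array([np.sin(np.pi * t * rng.choice([0.5, 1.0, 2.0]))
                         + 0.1 * rng.normal(size=t.size)
                         for _ in range(n_genes)])

    # Project each profile onto Legendre polynomials of degree <= 4.
    coeffs = np.array([legendre.legfit(t, p, deg=4) for p in profiles])

    labels = KMeans(n_clusters=3, n_init=10,
                    random_state=0).fit_predict(coeffs)
    print("cluster sizes:", np.bincount(labels))
    ```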

  12. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
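    The benchmark this study endorses, regression on calendar variables, is simple to reproduce. The sketch below fits day-of-week indicators to simulated daily arrival counts; a real model would add month, holiday, and site-specific special-day terms and could model residual autocorrelation, and the volume pattern used here is hypothetical.

    ```python
    import numpy as np

    # Ordinary least squares on day-of-week indicator variables, the core
    # of the calendar-regression benchmark. Counts are simulated.
    rng = np.random.default_rng(7)
    n_days = 730
    dow = np.arange(n_days) % 7
    base = np.array([118, 105, 102, 101, 104, 96, 90])  # hypothetical pattern
    y = rng.poisson(base[dow])                          # daily ED arrivals

    # Design matrix: intercept plus indicators for days 1..6 (day 0 is the
    # reference level).
    X = np.column_stack([np.ones(n_days)] +
                        [(dow == d).astype(float) for d in range(1, 7)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    def forecast(day_of_week):
        x = np.r_[1.0, (np.arange(1, 7) == day_of_week).astype(float)]
        return x @ beta

    print("expected volume on day 0:", round(forecast(0), 1))
    ```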

  13. Further experience with the local lymph node assay using standard radioactive and nonradioactive cell count measurements.

    PubMed

    Kolle, Susanne N; Basketter, David; Schrage, Arnhild; Gamer, Armin O; van Ravenzwaay, Bennard; Landsiedel, Robert

    2012-08-01

    In a previous study, the predictive capacity of a modified local lymph node assay (LLNA) based on cell counts, the LNCC, was demonstrated to be closely similar to that of the original assay. In addition, a range of substances, including some technical/commercial materials and a range of agrochemical formulations (n = 180) have also been assessed in both methods in parallel. The results in the LNCC and LLNA were generally consistent, with 86% yielding an identical classification outcome. Discordant results were associated with borderline data and were evenly distributed between the two methods. Potency information derived from each method also demonstrated good consistency (n = 101), with 93% of predictions being close. Skin irritation was observed only infrequently and was most commonly associated with positive results; it was not associated with the discordant results. Where different vehicles were used with the same test material, the effect on sensitizing activity was modest, consistent with historical data. Analysis of positive control data indicated that the LNCC and LLNA displayed similar levels of biological variation. When taken in combination with the previously published results on LLNA Performance Standard chemicals, it is concluded that the LNCC provides a viable non-radioactive alternative to the LLNA for the assessment of substances, including potency predictions, as well as for the evaluation of preparations. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
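    The sketch below shows the first-order version of the idea for planar two-body motion: the tangent-map (variational) equations are integrated alongside the orbit itself, here with a generic ODE integrator rather than REBOUND's IAS15; the paper's contribution extends exactly this construction to second order. Initial conditions are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # First-order variational (tangent-map) equations for planar two-body
    # motion, integrated alongside the orbit itself.
    def rhs(t, z, mu=1.0):
        r, v, dr, dv = z[:2], z[2:4], z[4:6], z[6:8]
        rn = np.linalg.norm(r)
        a = -mu * r / rn**3
        # Jacobian of the acceleration: da_i/dr_j
        J = -mu * (np.eye(2) / rn**3 - 3.0 * np.outer(r, r) / rn**5)
        return np.concatenate([v, a, dv, J @ dr])

    z0 = np.array([1.0, 0.0,    # position
                   0.0, 1.1,    # velocity (slightly non-circular orbit)
                   1e-8, 0.0,   # initial position variation
                   0.0, 0.0])   # initial velocity variation
    sol = solve_ivp(rhs, (0.0, 100.0), z0, rtol=1e-10, atol=1e-12)

    growth = np.linalg.norm(sol.y[4:6, -1]) / 1e-8
    print("variation growth factor over 100 time units:", growth)
    ```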

  15. A Healthy Eating Education Program for Midwives to Investigate and Explore Their Knowledge, Understanding, and Confidence to Support Pregnant Women to Eat Healthily: Protocol for a Mixed-Methods Study

    PubMed Central

    Steen, Mary P; Jayasekara, Rasika; Fleet, Julie-Anne

    2018-01-01

    Background: Nutrition and healthy eating behaviors during pregnancy are vitally important for the health of a mother and her developing baby. However, some midwives have reported a lack of evidence-based nutrition knowledge for providing information about healthy eating to women during pregnancy. Objective: In this study, the aim is to design and evaluate a healthy eating education program to enhance midwives’ knowledge, understanding, and confidence to support pregnant women in South Australia to make healthy eating choices. Methods: This mixed-methods study consists of two phases. The first phase, Phase 1, consists of an education program for midwives, “Healthy Eating in Pregnancy,” to be delivered through a workshop or webinar. Each midwife will attend one workshop or webinar, which will be approximately two hours in length. This program will be evaluated through pre-, immediate-, and post-educational questionnaires utilizing a website specifically designed for this study. The participants will be midwives who are members of the Australian College of Midwives and the Australian Nursing and Midwives Federation, and users of social media (eg, Facebook and Twitter) residing and employed in South Australia. Phase 2 will consist of semistructured interviews with a purposive sample of midwives. These interviews will be undertaken to gain an in-depth understanding of midwives’ views and how confident they feel educating pregnant women after receiving the healthy eating education. Interviews will be face-to-face or conducted by telephone with midwives who have participated in the healthy eating educational program. Results: A systematic review has previously been undertaken to inform this study protocol. This paper describes and discusses the protocol for this mixed-methods study, which will be completed in April 2019. Conclusions: The results from the systematic review suggest that there is clear justification to undertake this mixed-methods study to investigate and explore midwives’ knowledge, understanding and confidence to support healthy eating in pregnant women. The results and conclusions from the systematic review provided some guidance for the design and development of this study protocol. This mixed-methods study will address a gap in the literature. The results from quantitative and qualitative data sources in this proposed study will help to draw conclusions to address the research topic. Registered Report Identifier: RR1-10.2196/9861 PMID:29802092

  16. Development of a Self-Rated Mixed Methods Skills Assessment: The National Institutes of Health Mixed Methods Research Training Program for the Health Sciences.

    PubMed

    Guetterman, Timothy C; Creswell, John W; Wittink, Marsha; Barg, Fran K; Castro, Felipe G; Dahlberg, Britt; Watkins, Daphne C; Deutsch, Charles; Gallo, Joseph J

    2017-01-01

    Demand for training in mixed methods is high, with little research on faculty development or assessment in mixed methods. We describe the development of a self-rated mixed methods skills assessment and provide validity evidence. The instrument taps six research domains: "Research question," "Design/approach," "Sampling," "Data collection," "Analysis," and "Dissemination." Respondents are asked to rate their ability to define or explain concepts of mixed methods under each domain, their ability to apply the concepts to problems, and the extent to which they need to improve. We administered the questionnaire to 145 faculty and students using an internet survey. We analyzed descriptive statistics and performance characteristics of the questionnaire using the Cronbach alpha to assess reliability and an analysis of variance that compared a mixed methods experience index with assessment scores to assess criterion relatedness. Internal consistency reliability was high for the total set of items (0.95) and adequate (≥0.71) for all but one subscale. Consistent with establishing criterion validity, respondents who had more professional experiences with mixed methods (eg, published a mixed methods article) rated themselves as more skilled, which was statistically significant across the research domains. This self-rated mixed methods assessment instrument may be a useful tool to assess skills in mixed methods for training programs. It can be applied widely at the graduate and faculty level. For the learner, assessment may lead to enhanced motivation to learn and training focused on self-identified needs. For faculty, the assessment may improve curriculum and course content planning.
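    Since scale reliability above is summarized by Cronbach's alpha, the short function below computes it from a respondents-by-items score matrix; the example scores are hypothetical.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical 5-item subscale rated by 8 respondents:
    scores = np.array([[4, 4, 5, 3, 4],
                       [2, 3, 2, 2, 3],
                       [5, 5, 4, 5, 5],
                       [3, 3, 3, 4, 3],
                       [4, 5, 4, 4, 4],
                       [1, 2, 2, 1, 2],
                       [3, 4, 3, 3, 3],
                       [5, 4, 5, 5, 4]])
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```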

  17. Moving towards the goals of FP2020 - classifying contraceptives.

    PubMed

    Festin, Mario Philip R; Kiarie, James; Solo, Julie; Spieler, Jeffrey; Malarcher, Shawn; Van Look, Paul F A; Temmerman, Marleen

    2016-10-01

    With the renewed focus on family planning, a clear and transparent understanding is needed for the consistent classification of contraceptives, especially in the commonly used modern/traditional system. The World Health Organization Department of Reproductive Health and Research and the United States Agency for International Development (USAID) therefore convened a technical consultation in January 2015 to address issues related to classifying contraceptives. The consultation defined modern contraceptive methods as having a sound basis in reproductive biology, a precise protocol for correct use and evidence of efficacy under various conditions based on appropriately designed studies. Methods in country programs like Fertility Awareness Based Methods [such as Standard Days Method (SDM) and TwoDay Method], Lactational Amenorrhea Method (LAM) and emergency contraception should be reported as modern. Herbs, charms and vaginal douching are not counted as contraceptive methods, as they have no scientific basis in preventing pregnancy and are not included in country programs. More research is needed on defining and measuring use of emergency contraceptive methods, to reflect their contribution to reducing unmet need. The ideal contraceptive classification system should be simple, easy to use, clear and consistent, with greater parsimony. Measurement challenges remain but should not be the driving force to determine which methods are counted or reported as modern. Family planning programs should consider multiple attributes of contraceptive methods (e.g., level of effectiveness, need for program support, duration of labeled use, hormonal or nonhormonal) to ensure they provide a variety of methods to meet the needs of women and men. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    PubMed

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground-truth segmentations: a feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray-scale voxel values of the training image data and the associated label voxel values of the ground-truth segmentation of the training data. The proposed framework is evaluated quantitatively using several metrics. The segmentation results on the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, computed with the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves accurate brain tumor classification and segmentation and outperforms state-of-the-art methods.
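    K-SVD itself is not in common Python libraries, but the same sparse-coding-then-classify pattern can be sketched with scikit-learn's DictionaryLearning, which plays the same role here. The features and labels below are synthetic stand-ins for the paper's topology/texture features and tumor classes.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.linear_model import LogisticRegression

    # Sparse-coding classification sketch: learn a dictionary from feature
    # vectors, encode each sample sparsely (OMP), then classify the codes.
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0.0, 1.0, (60, 20)),    # class 0 features
                   rng.normal(1.5, 1.0, (60, 20))])   # class 1 features
    y = np.repeat([0, 1], 60)

    dico = DictionaryLearning(n_components=8, transform_algorithm="omp",
                              transform_n_nonzero_coefs=3, random_state=0)
    codes = dico.fit_transform(X)                     # sparse codes per sample

    clf = LogisticRegression(max_iter=1000).fit(codes, y)
    print("training accuracy:", clf.score(codes, y))
    ```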

  19. From SOPs to Reports to Evaluations: Learning and Memory ...

    EPA Pesticide Factsheets

    In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for learning and memory tests required by EPA and OECD DNT guidelines (chemicals and pesticides) and recommended for ICH prenatal/postnatal guidelines (pharmaceuticals). A well-reasoned uniform approach is particularly important for variable endpoints and if non-standard tests are used. An understanding of the purpose behind the tests and expected outcomes is critical, and attention to elements of experimental design, conduct, and reporting can improve study design by the investigator as well as accuracy and consistency of interpretation by evaluators. This understanding also directs which information must be clearly described in study reports. While missing information may be available in standard operating procedures (SOPs), if not clearly reflected in report submissions there may be questions and misunderstandings by evaluators which could impact risk assessments. A practical example will be presented to provide insights into important variables and reporting approaches. Cognitive functions most often tested in guideline studies include associative, positional, sequential, and spatial learning and memory in weanling and adult animals. These complex behaviors tap different bra

  20. Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.

    PubMed

    Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P

    2015-10-01

    Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic properties. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and the 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data, which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
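    A toy version of the fusion step can make the Tikhonov trade-off concrete: combining a physics-based displacement estimate with a DIR-measured one through a quadratic penalty has a closed-form per-voxel solution. The field values and weight below are illustrative, not from the paper.

    ```python
    import numpy as np

    # Tikhonov-style fusion of two displacement fields:
    #   u* = argmin ||u - u_cfd||^2 + lam * ||u - u_dir||^2
    #      = (u_cfd + lam * u_dir) / (1 + lam)
    rng = np.random.default_rng(5)
    u_cfd = rng.normal(0.0, 1.0, (4, 4, 4, 3))         # CFD displacements [mm]
    u_dir = u_cfd + rng.normal(0.0, 0.2, u_cfd.shape)  # DIR with noise

    lam = 2.0                                # weight on the DIR (data) term
    u_opt = (u_cfd + lam * u_dir) / (1.0 + lam)
    print("mean deviation from DIR:", np.abs(u_opt - u_dir).mean())
    ```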
