Sample records for "method typically involves"

  1. The Multilevel Mixed Intact Group Analysis: A Mixed Method to Seek, Detect, Describe, and Explain Differences Among Intact Groups

    ERIC Educational Resources Information Center

    Schoonenboom, Judith

    2016-01-01

    Educational innovations often involve intact subgroups, such as school classes or university departments. In small-scale educational evaluation research, typically involving 1 to 20 subgroups, differences among these subgroups are often neglected. This article presents a mixed method from a qualitative perspective, in which differences among…

  2. The Question-Driven Laboratory Exercise: A New Pedagogy Applied to a Green Modification of Grignard Reagent Formation and Reaction

    ERIC Educational Resources Information Center

    Teixeira, Jennifer M.; Byers, Jessie Nedrow; Perez, Marilu G.; Holman, R. W.

    2010-01-01

    Experimental exercises within second-year-level organic laboratory manuals typically involve a statement of a principle that is then validated by student generation of data in a single experiment. These experiments are structured in the exact opposite order of the scientific method, in which data interpretation, typically from multiple related…

  3. Electrolytic systems and methods for making metal halides and refining metals

    DOEpatents

    Holland, Justin M.; Cecala, David M.

    2015-05-26

    Disclosed are electrochemical cells and methods for producing a halide of a non-alkali metal and for electrorefining the halide. The systems typically involve an electrochemical cell having a cathode structure configured for dissolving a hydrogen halide that forms the halide into a molten salt of the halogen and an alkali metal. Typically a direct current voltage is applied across the cathode and an anode that is fabricated with the non-alkali metal such that the halide of the non-alkali metal is formed adjacent the anode. Electrorefining cells and methods involve applying a direct current voltage across the anode where the halide of the non-alkali metal is formed and the cathode where the non-alkali metal is electro-deposited. In a representative embodiment the halogen is chlorine, the alkali metal is lithium and the non-alkali metal is uranium.

  4. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    PubMed

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
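
    As a rough illustration of the linear-algebra idea this abstract describes (and not Specter's actual implementation), the Python sketch below deconvolutes a simulated mixture spectrum against a small spectral library by non-negative least squares; the library and mixture arrays are invented for the example.

      # Minimal sketch (not Specter's code): deconvolute a mixture spectrum as a
      # non-negative linear combination of library spectra via NNLS.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)

      # Hypothetical spectral library: 4 peptides x 50 fragment-intensity bins.
      library = rng.random((50, 4))

      # Simulated DIA mixture spectrum: known contributions plus noise.
      true_coeffs = np.array([3.0, 0.0, 1.5, 0.2])
      mixture = library @ true_coeffs + 0.01 * rng.random(50)

      # Solve min ||library @ x - mixture|| subject to x >= 0.
      coeffs, residual = nnls(library, mixture)
      print("estimated contributions:", np.round(coeffs, 2))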

  5. Method for removal of beryllium contamination from an article

    DOEpatents

    Simandl, Ronald F.; Hollenbeck, Scott M.

    2012-12-25

A method of removal of beryllium contamination from an article is disclosed. The method typically involves dissolving polyisobutylene in a solvent such as hexane to form a tackifier solution, soaking the substrate in the tackifier solution to produce a preform, and then drying the preform to produce the cleaning medium. The cleaning media are typically used dry, without any liquid cleaning agent, to rub the surface of the article and remove the beryllium contamination to below a non-detect level. In some embodiments no detectible residue is transferred from the cleaning wipe to the article as a result of the cleaning process.

  6. Chapter 16: Retrocommissioning Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Tiessen, Alex

    Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.

  7. Solving ay'' + by' + cy = 0 with a Simple Product Rule Approach

    ERIC Educational Resources Information Center

    Tolle, John

    2011-01-01

    When elementary ordinary differential equations (ODEs) of first and second order are included in the calculus curriculum, second-order linear constant coefficient ODEs are typically solved by a method more appropriate to differential equations courses. This method involves the characteristic equation and its roots, complex-valued solutions, and…
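
    For readers unfamiliar with the characteristic-equation method the abstract refers to, a minimal Python sketch (with illustrative coefficients) is:

      # Sketch of the characteristic-equation method for a*y'' + b*y' + c*y = 0.
      # Coefficients below are illustrative.
      import numpy as np

      a, b, c = 1.0, 3.0, 2.0            # characteristic equation: a*r**2 + b*r + c = 0
      r1, r2 = np.roots([a, b, c])       # roots determine the general solution
      print("roots:", r1, r2)

      if np.isclose(r1, r2):
          print(f"y(t) = (C1 + C2*t) * exp({r1.real:.2f} t)")
      elif np.iscomplex(r1):
          alpha, beta = r1.real, abs(r1.imag)
          print(f"y(t) = exp({alpha:.2f} t) * (C1*cos({beta:.2f} t) + C2*sin({beta:.2f} t))")
      else:
          print(f"y(t) = C1*exp({r1.real:.2f} t) + C2*exp({r2.real:.2f} t)")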

  8. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  9. Scaled Inverse Probability Weighting: A Method to Assess Potential Bias Due to Event Nonreporting in Ecological Momentary Assessment Studies

    ERIC Educational Resources Information Center

    Kovalchik, Stephanie A.; Martino, Steven C.; Collins, Rebecca L.; Shadel, William G.; D'Amico, Elizabeth J.; Becker, Kirsten

    2018-01-01

    Ecological momentary assessment (EMA) is a popular assessment method in psychology that aims to capture events, emotions, and cognitions in real time, usually repeatedly throughout the day. Because EMA typically involves more intensive monitoring than traditional assessment methods, missing data are commonly an issue and this missingness may bias…
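
    As a generic illustration of inverse probability weighting (not the scaled variant proposed in the article), a small simulated Python sketch:

      # Generic inverse-probability-weighting sketch (not the scaled method from
      # the article): observations with a low estimated reporting probability are
      # up-weighted when estimating a mean from EMA-style data.
      import numpy as np

      rng = np.random.default_rng(1)
      values = rng.normal(loc=5.0, scale=1.0, size=200)           # true outcomes
      report_prob = np.clip(0.9 - 0.1 * (values - 5.0), 0.05, 1)  # reporting depends on outcome
      reported = rng.random(200) < report_prob                    # nonreporting mechanism

      naive_mean = values[reported].mean()
      weights = 1.0 / report_prob[reported]                       # inverse probability weights
      ipw_mean = np.average(values[reported], weights=weights)

      print(f"naive mean = {naive_mean:.3f}, IPW mean = {ipw_mean:.3f}, truth is 5.0")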

  10. Using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    USDA-ARS?s Scientific Manuscript database

    Methods to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminant. While informative, many of these methods suffer from poor recovery rates and only provide a snapshot of the microbial load at the time of collectio...

  11. Infrared Spectroscopy of Deuterated Compounds.

    ERIC Educational Resources Information Center

    MacCarthy, Patrick

    1985-01-01

    Background information, procedures used, and typical results obtained are provided for an experiment (based on the potassium bromide pressed-pellet method) involving the infrared spectroscopy of deuterated compounds. Deuteration refers to deuterium-hydrogen exchange at active hydrogen sites in the molecule. (JN)

  12. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.

  13. Peer and Self-Reports of Victimization and Bullying: Their Differential Association with Internalizing Problems and Social Adjustment

    ERIC Educational Resources Information Center

    Bouman, Thijs; van der Meulen, Matty; Goossens, Frits A.; Olthof, Tjeert; Vermande, Marjolijn M.; Aleva, Elisabeth A.

    2012-01-01

    Researchers typically employ either peer or self-reports to assess involvement in bullying. In this study, we examined the merits of each method for the identification of child characteristics related to victimization and bullying others. Accordingly, we investigated the difference between these two methods with regard to their relationship with…

  14. The development and implementation of a method using blue mussels (Mytilus spp.) as biosentinels of Cryptosporidium spp. and Toxoplasma gondii contamination in marine aquatic environments

    EPA Science Inventory

    It is estimated that protozoan parasites still account for greater than one third of waterborne disease outbreaks reported. Methods used to monitor microbial contamination typically involve collecting discrete samples at specific time-points and analyzing for a single contaminan...

  15. Method and apparatus for characterizing and enhancing the functional performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David

    2013-04-30

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.

  16. Graphing Polar Curves

    ERIC Educational Resources Information Center

    Lawes, Jonathan F.

    2013-01-01

    Graphing polar curves typically involves a combination of three traditional techniques, all of which can be time-consuming and tedious. However, an alternative method--graphing the polar function on a rectangular plane--simplifies graphing, increases student understanding of the polar coordinate system, and reinforces graphing techniques learned…
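
    A minimal matplotlib sketch of the alternative the abstract describes, plotting r(θ) on rectangular axes next to the usual polar view (the rose curve r = cos(3θ) is only an illustrative choice):

      # Sketch: graph r(theta) on rectangular axes next to the usual polar plot.
      import numpy as np
      import matplotlib.pyplot as plt

      theta = np.linspace(0, 2 * np.pi, 1000)
      r = np.cos(3 * theta)

      fig = plt.figure(figsize=(8, 4))
      ax1 = fig.add_subplot(1, 2, 1)                      # rectangular r-vs-theta view
      ax1.plot(theta, r)
      ax1.set_xlabel("theta (rad)")
      ax1.set_ylabel("r")

      ax2 = fig.add_subplot(1, 2, 2, projection="polar")  # conventional polar view
      ax2.plot(theta, r)

      plt.tight_layout()
      plt.show()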

  17. Use of an Anatomical Scalar to Control for Sex-Based Size Differences in Measures of Hyoid Excursion during Swallowing

    ERIC Educational Resources Information Center

    Molfenter, Sonja M.; Steele, Catriona M.

    2014-01-01

Purpose: Traditional methods for measuring hyoid excursion from dynamic videofluoroscopy recordings involve calculating changes in position in absolute units (mm). This method shows a high degree of variability across studies but agreement that greater hyoid excursion occurs in men than in women. Given that men are typically taller than women, the…

  18. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.

  19. True orbit simulation of piecewise linear and linear fractional maps of arbitrary dimension using algebraic numbers

    NASA Astrophysics Data System (ADS)

    Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji

    2015-06-01

    We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
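
    The article's method selects algebraic initial points and uses only integer arithmetic; the toy Python sketch below does not reproduce that, and only illustrates the underlying motivation by contrasting exact arithmetic with floating point for a simple piecewise linear map (the tent map).

      # Toy sketch only: exact arithmetic vs floating point for a piecewise linear
      # map (the tent map).  The article's method instead selects algebraic initial
      # points and handles general piecewise linear fractional maps.
      from fractions import Fraction

      def tent(x):
          return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

      def tent_float(x):
          return 2 * x if x < 0.5 else 2 * (1 - x)

      x_exact, x_float = Fraction(1, 7), 1.0 / 7.0
      for _ in range(80):
          x_exact, x_float = tent(x_exact), tent_float(x_float)

      print("exact orbit after 80 steps:", x_exact)   # stays on the exact periodic orbit
      print("float orbit after 80 steps:", x_float)   # collapses to 0 (dyadic round-off)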

  20. Optical Sensor/Actuator Locations for Active Structural Acoustic Control

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Palumbo, Daniel L.; Kincaid, Rex K.

    1998-01-01

    Researchers at NASA Langley Research Center have extensive experience using active structural acoustic control (ASAC) for aircraft interior noise reduction. One aspect of ASAC involves the selection of optimum locations for microphone sensors and force actuators. This paper explains the importance of sensor/actuator selection, reviews optimization techniques, and summarizes experimental and numerical results. Three combinatorial optimization problems are described. Two involve the determination of the number and position of piezoelectric actuators, and the other involves the determination of the number and location of the sensors. For each case, a solution method is suggested, and typical results are examined. The first case, a simplified problem with simulated data, is used to illustrate the method. The second and third cases are more representative of the potential of the method and use measured data. The three case studies and laboratory test results establish the usefulness of the numerical methods.

  1. Selective flotation of phosphate minerals with hydroxamate collectors

    DOEpatents

    Miller, Jan D.; Wang, Xuming; Li, Minhua

    2002-01-01

A method is disclosed for separating phosphate minerals from a mineral mixture, particularly from high-dolomite containing phosphate ores. The method involves conditioning the mineral mixture by contacting it in an aqueous environment with a collector in an amount sufficient for promoting flotation of phosphate minerals. The collector is a hydroxamate compound of the formula ##STR1## wherein R is generally hydrophobic and chosen such that the collector has solubility or dispersion properties so that it can be distributed in the mineral mixture, typically an alkyl, aryl, or alkylaryl group having 6 to 18 carbon atoms. M is a cation, typically hydrogen, an alkali metal or an alkaline earth metal. Preferably, the collector also comprises an alcohol of the formula R'--OH, wherein R' is generally hydrophobic and chosen such that the collector has solubility or dispersion properties so that it can be distributed in the mineral mixture, typically an alkyl, aryl, or alkylaryl group having 6 to 18 carbon atoms.

  2. RELIGION AND DISASTER VICTIM IDENTIFICATION.

    PubMed

    Levinson, Jay; Domb, Abraham J

    2014-12-01

Disaster Victim Identification (DVI) is a triangle, the components of which are secular law, religious law and custom, and professional methods. In cases of single non-criminal deaths, identification often rests with a hospital or a medical authority. When dealing with criminal or mass death incidents, the law, in many jurisdictions, assigns identification to the coroner/medical examiner, who typically uses professional methods and only answers the religious requirements of the deceased's next-of-kin according to his personal judgment. This article discusses religious considerations regarding scientific methods and their limitations, as well as the ethical issues involved in the government coroner/medical examiner's becoming involved in clarifying and answering the next-of-kin's religious requirements.

  3. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
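
    A minimal sympy sketch of plain (unmodified) Picard iteration, for the illustrative problem y' = y, y(0) = 1:

      # Sketch of plain Picard iteration for y'(t) = f(t, y), y(0) = y0, using sympy.
      # The "modified" procedures in the article are not reproduced here; this only
      # illustrates the basic scheme y_{n+1}(t) = y0 + integral_0^t f(s, y_n(s)) ds.
      import sympy as sp

      t, s = sp.symbols("t s")
      f = lambda s_, y_: y_          # illustrative right-hand side: y' = y, y(0) = 1
      y = sp.Integer(1)              # y_0(t) = y(0)

      for _ in range(5):
          y = 1 + sp.integrate(f(s, y.subs(t, s)), (s, 0, t))

      print(sp.expand(y))            # partial sums of exp(t): 1 + t + t**2/2 + ...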

  4. The Calculus of a Vase

    ERIC Educational Resources Information Center

    Scherger, Nicole

    2012-01-01

Among the most universal applications of integral calculus are those involving finding volumes of solids of revolution. These profound problems are typically taught with traditional approaches of the disk and shell methods, after which most calculus curriculums will additionally cover arc length and surfaces of revolution. Even in these visibly…
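
    A short worked example of the disk and shell methods mentioned above, computed with sympy for an illustrative solid (y = sqrt(x), 0 ≤ x ≤ 4, rotated about the x-axis):

      # Worked disk-method example (illustrative): rotate y = sqrt(x), 0 <= x <= 4,
      # about the x-axis.  V = integral of pi * [f(x)]**2 dx.
      import sympy as sp

      x = sp.symbols("x")
      f = sp.sqrt(x)
      V_disk = sp.integrate(sp.pi * f**2, (x, 0, 4))
      print(V_disk)      # 8*pi

      # Shell-method check of the same solid, integrating over y:
      # V = integral of 2*pi*y*(width at height y) dy, width = 4 - y**2 for 0 <= y <= 2.
      y = sp.symbols("y")
      V_shell = sp.integrate(2 * sp.pi * y * (4 - y**2), (y, 0, 2))
      print(V_shell)     # 8*pi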

  5. ANALYSIS OF THE ENANTIOMERS OF CHIRAL PESTICIDES AND OTHER POLLUTANTS IN ENVIRONMENTAL SAMPLES BY CAPILLARY ELECTROPHORESIS

    EPA Science Inventory

    The generic method described here involves typical capillary electrophoresis (CE) techniques, with the addition of cyclodextrin chiral selectors to the electrolyte for enantiomer separation and also, in the case of neutral analytes, the further addition of a micelle forming comp...

  6. Enhancing the Student Experience of Laboratory Practicals through Digital Video Guides

    ERIC Educational Resources Information Center

    Croker, Karen; Andersson, Holger; Lush, David; Prince, Rob; Gomez, Stephen

    2010-01-01

Laboratory-based learning allows students to experience bioscience principles first hand. In our experience, practical content and equipment may have changed over time, but teaching methods largely remain the same, typically involving: whole-class introduction with a demonstration, students emulating the demonstration in small groups, gathering…

  7. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
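
    A hedged sketch of the kind of run-to-run variability described, using plain Lasso from scikit-learn (not SCAD or the Adaptive Lasso) on simulated sparse, weak-signal data:

      # Sketch of the variability the article describes, using plain Lasso (not SCAD
      # or the Adaptive Lasso).  Data are simulated: sparse truth, weak signals.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      n, p = 100, 200
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:5] = 0.3                            # weak, sparse signal
      y = X @ beta + rng.normal(size=n)

      counts = []
      for seed in range(20):                    # repeat 10-fold CV with different splits
          cv = KFold(n_splits=10, shuffle=True, random_state=seed)
          model = LassoCV(cv=cv, max_iter=5000).fit(X, y)
          counts.append(int(np.sum(model.coef_ != 0)))

      print("selected-variable counts over 20 CV runs:", counts)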

  8. An efficient strongly coupled immersed boundary method for deforming bodies

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
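
    A generic Python sketch of the block Gauss-Seidel iteration pattern the abstract contrasts with a Newton-Raphson solve, applied to an arbitrary partitioned 2x2 block linear system (not the authors' immersed-boundary formulation):

      # Generic block Gauss-Seidel sketch for a partitioned block linear system
      #   [A  B] [u]   [f]
      #   [C  D] [v] = [g]
      # standing in for the fluid/structure coupling; this is only the iteration
      # pattern, not the authors' formulation.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 5
      A = np.eye(n) * 4 + 0.1 * rng.random((n, n))
      D = np.eye(n) * 4 + 0.1 * rng.random((n, n))
      B = 0.5 * rng.random((n, n))
      C = 0.5 * rng.random((n, n))
      f, g = rng.random(n), rng.random(n)

      u, v = np.zeros(n), np.zeros(n)
      for it in range(100):
          u_new = np.linalg.solve(A, f - B @ v)      # fluid-like block with frozen v
          v_new = np.linalg.solve(D, g - C @ u_new)  # structure-like block with updated u
          converged = max(np.abs(u_new - u).max(), np.abs(v_new - v).max()) < 1e-10
          u, v = u_new, v_new
          if converged:
              break

      print("stopped after", it + 1, "iterations")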

  9. Method and apparatus for characterizing and enhancing the dynamic performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F

    2013-12-17

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.

  10. Methods for Integrating Moderation and Mediation: A General Analytical Framework Using Moderated Path Analysis

    ERIC Educational Resources Information Center

    Edwards, Jeffrey R.; Lambert, Lisa Schurer

    2007-01-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated…

  11. Auditory Training with Multiple Talkers and Passage-Based Semantic Cohesion

    ERIC Educational Resources Information Center

    Casserly, Elizabeth D.; Barney, Erin C.

    2017-01-01

    Purpose: Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level…

  12. Worlds Apart? English in German Youth Cultures and in Educational Settings

    ERIC Educational Resources Information Center

    Grau, Maike

    2009-01-01

    This paper focuses on German teenagers and their contact with English in two different contexts: in free-time activities typically involving the mass media, and in institutionalised language learning settings at school. It draws on an empirical study carried out in German secondary schools. Its mixed methods approach combines a questionnaire study…

  13. Factors that Enhance English-Speaking Speech-Language Pathologists' Transcription of Cantonese-Speaking Children's Consonants

    ERIC Educational Resources Information Center

    Lockart, Rebekah; McLeod, Sharynne

    2013-01-01

    Purpose: To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Method: Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Results: Task 1 (baseline) involved transcribing…

  14. Children's Acquisition of English Onset and Coda /l/: Articulatory Evidence

    ERIC Educational Resources Information Center

    Lin, Susan; Demuth, Katherine

    2015-01-01

    Purpose: The goal of this study was to better understand how and when onset /l/ ("leap") and coda /l/ ("peel") are acquired by children by examining both the articulations involved and adults' perceptions of the produced segments. Method: Twenty-five typically developing Australian English-speaking children aged 3;0…

  15. Reducing Covert Self-Injurious Behavior Maintained by Automatic Reinforcement through a Variable Momentary DRO Procedure

    ERIC Educational Resources Information Center

    Toussaint, Karen A.; Tiger, Jeffrey H.

    2012-01-01

    Covert self-injurious behavior (i.e., behavior that occurs in the absence of other people) can be difficult to treat. Traditional treatments typically have involved sophisticated methods of observation and often have employed positive punishment procedures. The current study evaluated the effectiveness of a variable momentary differential…

  16. Building an Inclusive Research Team: The Importance of Team Building and Skills Training

    ERIC Educational Resources Information Center

    Strnadová, Iva; Cumming, Therese M.; Knox, Marie; Parmenter, Trevor

    2014-01-01

    Background: Inclusive research teams typically describe their experiences and analyse the type of involvement of researchers with disability, but the process of building research teams and the need for research training still remain underexplored in the literature. Materials and Method: Four researchers with intellectual disabilities and four…

  17. 76 FR 52034 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... a period of time greater than one day because mathematical compounding prevents the Funds from... not actively managed by traditional methods, which typically involve effecting changes in the... mathematical approach to determine the type, quantity, and mix of investment positions that it believes should...

  18. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    ERIC Educational Resources Information Center

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…

  19. Daily Behavior Report Cards with and without Home-Based Consequences: Improving Classroom Behavior in Low Income, African American Children with ADHD

    ERIC Educational Resources Information Center

    Jurbergs, Nichole; Palcic, Jennette L.; Kelley, Mary L.

    2010-01-01

    Daily Behavior Report Cards (DBRC), which typically require teachers to evaluate students' daily behavior and parents to provide contingent consequences, are an effective and acceptable method for improving children's classroom behavior. The current study evaluated whether parent involvement is an essential treatment component or whether teacher…

  20. Current research issues related to post-wildfire runoff and erosion processes

    Treesearch

    John A. Moody; Richard A. Shakesby; Peter R. Robichaud; Susan H. Cannon; Deborah A. Martin

    2013-01-01

Research into post-wildfire effects began in the United States more than 70 years ago and only later extended to other parts of the world. Post-wildfire responses are typically transient, episodic, variable in space and time, dependent on thresholds, and involve multiple processes measured by different methods. These characteristics tend to hinder research progress, but...

  1. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
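
    A small numeric Bayes'-rule example (the truth-table presentation itself is not reproduced; the prevalence and test characteristics below are illustrative):

      # Small numeric Bayes' rule example with illustrative numbers.
      p_disease = 0.01             # P(D)
      p_pos_given_disease = 0.95   # sensitivity, P(+|D)
      p_pos_given_healthy = 0.05   # false-positive rate, P(+|not D)

      p_pos = (p_pos_given_disease * p_disease
               + p_pos_given_healthy * (1 - p_disease))              # total probability
      p_disease_given_pos = p_pos_given_disease * p_disease / p_pos  # Bayes' rule

      print(f"P(+)   = {p_pos:.4f}")
      print(f"P(D|+) = {p_disease_given_pos:.4f}")   # about 0.161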

  2. What Gene-Environment Interactions Can Tell Us about Social Competence in Typical and Atypical Populations

    ERIC Educational Resources Information Center

    Iarocci, Grace; Yager, Jodi; Elfers, Theo

    2007-01-01

    Social competence is a complex human behaviour that is likely to involve a system of genes that interacts with a myriad of environmental risk and protective factors. The search for its genetic and environmental origins and influences is equally complex and will require a multidimensional conceptualization and multiple methods and levels of…

  3. An Analytical Chemistry Experiment in Simultaneous Spectrophotometric Determination of Fe(III) and Cu(II) with Hexacyanoruthenate(II) Reagent.

    ERIC Educational Resources Information Center

    Mehra, M. C.; Rioux, J.

    1982-01-01

    Experimental procedures, typical observations, and results for the simultaneous analysis of Fe(III) and Cu(II) in a solution are discussed. The method is based on selective interaction between the two ions and potassium hexacyanoruthenate(II) in acid solution involving no preliminary sample preparations. (Author/JN)
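
    The generic arithmetic behind a two-component simultaneous determination, shown as a Python sketch with invented molar absorptivities (not values from this experiment):

      # Generic arithmetic behind a two-component simultaneous determination:
      # absorbances at two wavelengths give two Beer-Lambert equations,
      #   A(wavelength) = eps_Fe*[Fe] + eps_Cu*[Cu]   (path length 1 cm),
      # solved as a 2x2 linear system.  Absorptivities below are invented.
      import numpy as np

      eps = np.array([[8000.0, 1500.0],    # rows: wavelengths; columns: Fe(III), Cu(II)
                      [2000.0, 6000.0]])
      absorbance = np.array([0.45, 0.38])  # measured A at the two wavelengths

      conc = np.linalg.solve(eps, absorbance)
      print(f"[Fe(III)] = {conc[0]:.2e} M, [Cu(II)] = {conc[1]:.2e} M")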

  4. Visual Puzzles, Figure Weights, and Cancellation: Some Preliminary Hypotheses on the Functional and Neural Substrates of These Three New WAIS-IV Subtests

    PubMed Central

    McCrea, Simon M.; Robinson, Thomas P.

    2011-01-01

    In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale—Fourth Editions along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests involve predominately right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction involved in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807

  5. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720

  6. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  7. Methods of Helium Injection and Removal for Heat Transfer Augmentation

    NASA Technical Reports Server (NTRS)

    Haight, Harlan; Kegley, Jeff; Bourdreaux, Meghan

    2008-01-01

While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives usually involve simulation of an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Cryogenic Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.

  8. Applications of the Ultrasonic Serial Number Restoration Technique to Guns and Typical Stolen Articles

    NASA Technical Reports Server (NTRS)

    Young, S. G.

    1976-01-01

    An ultrasonic cavitation method for restoring obliterated serial numbers has been further explored by application to articles involved in police cases. The method was applied successfully to gun parts. In one case portions of numbers were restored after prior failure by other laboratories using chemical etching techniques. The ultrasonic method was not successful on a heavily obliterated and restamped automobile engine block, but it was partially successful on a motorcycle gear-case housing. Additional studies were made on the effect of a larger diameter ultrasonic probe, and on the method's ability to restore numbers obliterated by peening.

  9. Optically stimulated differential impedance spectroscopy

    DOEpatents

    Maxey, Lonnie C; Parks, II, James E; Lewis, Sr., Samuel A; Partridge, Jr., William P

    2014-02-18

    Methods and apparatuses for evaluating a material are described. Embodiments typically involve use of an impedance measurement sensor to measure the impedance of a sample of the material under at least two different states of illumination. The states of illumination may include (a) substantially no optical stimulation, (b) substantial optical stimulation, (c) optical stimulation at a first wavelength of light, (d) optical stimulation at a second wavelength of light, (e) a first level of light intensity, and (f) a second level of light intensity. Typically a difference in impedance between the impedance of the sample at the two states of illumination is measured to determine a characteristic of the material.

  10. Multiple testing and power calculations in genetic association studies.

    PubMed

    So, Hon-Cheong; Sham, Pak C

    2011-01-01

    Modern genetic association studies typically involve multiple single-nucleotide polymorphisms (SNPs) and/or multiple genes. With the development of high-throughput genotyping technologies and the reduction in genotyping cost, investigators can now assay up to a million SNPs for direct or indirect association with disease phenotypes. In addition, some studies involve multiple disease or related phenotypes and use multiple methods of statistical analysis. The combination of multiple genetic loci, multiple phenotypes, and multiple methods of evaluating associations between genotype and phenotype means that modern genetic studies often involve the testing of an enormous number of hypotheses. When multiple hypothesis tests are performed in a study, there is a risk of inflation of the type I error rate (i.e., the chance of falsely claiming an association when there is none). Several methods for multiple-testing correction are in popular use, and they all have strengths and weaknesses. Because no single method is universally adopted or always appropriate, it is important to understand the principles, strengths, and weaknesses of the methods so that they can be applied appropriately in practice. In this article, we review the three principle methods for multiple-testing correction and provide guidance for calculating statistical power.
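
    A short Python sketch of two standard corrections from this literature, Bonferroni (family-wise error control) and Benjamini-Hochberg (false discovery rate control), applied to simulated p-values:

      # Sketch of two standard multiple-testing corrections applied to simulated
      # SNP p-values: Bonferroni and Benjamini-Hochberg.
      import numpy as np

      rng = np.random.default_rng(0)
      m = 10_000
      pvals = rng.uniform(size=m)
      pvals[:20] = rng.uniform(0, 1e-6, size=20)    # a few true associations

      alpha = 0.05
      bonferroni_hits = np.sum(pvals < alpha / m)   # reject if p < alpha/m

      # Benjamini-Hochberg: largest k with p_(k) <= (k/m) * alpha.
      sorted_p = np.sort(pvals)
      thresholds = (np.arange(1, m + 1) / m) * alpha
      below = np.nonzero(sorted_p <= thresholds)[0]
      bh_hits = below[-1] + 1 if below.size else 0

      print(f"Bonferroni rejections: {bonferroni_hits}, BH rejections: {bh_hits}")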

  11. Nano-material and method of fabrication

    DOEpatents

    Menchhofer, Paul A; Seals, Roland D; Howe, Jane Y; Wang, Wei

    2015-02-03

A fluffy nano-material and method of manufacture are described. At 2000× magnification the fluffy nanomaterial has the appearance of raw, uncarded wool, with individual fiber lengths ranging from approximately four microns to twenty microns. Powder-based nanocatalysts are dispersed in the fluffy nanomaterial. The production of fluffy nanomaterial typically involves flowing about 125 cc/min of organic vapor at a pressure of about 400 torr over powder-based nano-catalysts for a period of time that may range from approximately thirty minutes to twenty-four hours.

  12. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of the density function. Afterwards, more accurate metamodels can be constructed by the procedure above. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206

  13. Effect of Maxillary Osteotomy on Speech in Cleft Lip and Palate: Perceptual Outcomes of Velopharyngeal Function

    ERIC Educational Resources Information Center

    Pereira, Valerie J.; Sell, Debbie; Tuomainen, Jyrki

    2013-01-01

    Background: Abnormal facial growth is a well-known sequelae of cleft lip and palate (CLP) resulting in maxillary retrusion and a class III malocclusion. In 10-50% of cases, surgical correction involving advancement of the maxilla typically by osteotomy methods is required and normally undertaken in adolescence when facial growth is complete.…

  14. Short hold times in dynamic vapor sorption measurements mischaracterize the equilibrium moisture content of wood

    Treesearch

    Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka

    2017-01-01

    Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...

  15. Implementing the Project Approach: A Case Study of Hybrid Pedagogy in a Hong Kong Kindergarten

    ERIC Educational Resources Information Center

    Chen, Jennifer J.; Li, Hui; Wang, Jing-ying

    2017-01-01

    The Project Approach has been promoted in Hong Kong kindergartens since the 1990s. However, the dynamic processes and underlying mechanisms involved in the teachers' implementation of this pedagogical method there have not yet been fully investigated. This case study of one typical kindergarten in Hong Kong documented how and why eight teachers…

  16. Biologically Inspired Purification and Dispersion of SWCNTs

    NASA Technical Reports Server (NTRS)

    Feeback, Daniel L.; Clarke, Mark S.; Nikolaev, Pavel

    2009-01-01

    A biologically inspired method has been developed for (1) separating single-wall carbon nanotubes (SWCNTs) from other materials (principally, amorphous carbon and metal catalysts) in raw production batches and (2) dispersing the SWCNTs as individual particles (in contradistinction to ropes and bundles) in suspension, as required for a number of applications. Prior methods of purification and dispersal of SWCNTs involve, variously, harsh physical processes (e.g., sonication) or harsh chemical processes (e.g., acid reflux). These processes do not completely remove the undesired materials and do not disperse bundles and ropes into individual suspended SWCNTs. Moreover, these processes cut long SWCNTs into shorter pieces, yielding typical nanotube lengths between 150 and 250 nm. In contrast, the present method does not involve harsh physical or chemical processes. The method involves the use of biologically derived dispersal agents (BDDAs) in an aqueous solution that is mechanically homogenized (but not sonicated) and centrifuged. The dense solid material remaining after centrifugation is resuspended by vortexing in distilled water, yielding an aqueous suspension of individual, separated SWCNTs having lengths from about 10 to about 15 microns.

  17. Treatment of category generation and retrieval in aphasia: Effect of typicality of category items.

    PubMed Central

    Kiran, Swathi; Sandberg, Chaleece; Sebastian, Rajani

    2011-01-01

Purpose: Kiran and colleagues (Kiran, 2007, 2008; Kiran & Johnson, 2008; Kiran & Thompson, 2003) have previously suggested that training atypical examples within a semantic category is a more efficient treatment approach to facilitating generalization within the category than training typical examples. The present study extended our previous work examining the notion of semantic complexity within goal-derived (ad-hoc) categories in individuals with aphasia. Methods: Six individuals with fluent aphasia (range = 39-84 years) and varying degrees of naming deficits and semantic impairments were involved. Thirty typical and atypical items each from two categories were selected after an extensive stimulus norming task. Generative naming for the two categories was tested during baseline and treatment. Results: As predicted, training atypical examples in the category resulted in generalization to untrained typical examples in five out of the five patient-treatment conditions. In contrast, training typical examples (which was examined in three conditions) produced mixed results. One patient showed generalization to untrained atypical examples, whereas two patients did not show generalization to untrained atypical examples. Conclusions: Results of the present study supplement our existing data on the effect of a semantically based treatment for lexical retrieval by manipulating the typicality of category exemplars. PMID:21173393

  18. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
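
    A minimal Python sketch of the Pareto-dominance filtering that underlies PMOGO-style approaches (the global search itself is not reproduced; the objective scores are simulated):

      # Minimal Pareto-front filter: keep candidate models that are not dominated on
      # two objectives (e.g., data misfit and a structure/regularization term).
      import numpy as np

      rng = np.random.default_rng(0)
      scores = rng.random((50, 2))      # 50 candidate models x 2 objectives (lower is better)

      def pareto_front(points):
          keep = []
          for i, p in enumerate(points):
              dominated = any(
                  np.all(q <= p) and np.any(q < p) for j, q in enumerate(points) if j != i
              )
              if not dominated:
                  keep.append(i)
          return keep

      front = pareto_front(scores)
      print("non-dominated candidates:", front)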

  19. Intelligent monitoring and control of semiconductor manufacturing equipment

    NASA Technical Reports Server (NTRS)

    Murdock, Janet L.; Hayes-Roth, Barbara

    1991-01-01

    The use of AI methods to monitor and control semiconductor fabrication in a state-of-the-art manufacturing environment called the Rapid Thermal Multiprocessor is described. Semiconductor fabrication involves many complex processing steps with limited opportunities to measure process and product properties. By applying additional process and product knowledge to that limited data, AI methods augment classical control methods by detecting abnormalities and trends, predicting failures, diagnosing, planning corrective action sequences, explaining diagnoses or predictions, and reacting to anomalous conditions that classical control systems typically would not correct. Research methodology and issues are discussed, and two diagnosis scenarios are examined.

  20. Utilizing typical color appearance models to represent perceptual brightness and colorfulness for digital images

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao

    2016-12-01

    This study aims to expand the applications of color appearance models to representing the perceptual attributes for digital images, which supplies more accurate methods for predicting image brightness and image colorfulness. Two typical models, i.e., the CIELAB model and the CIECAM02, were involved in developing algorithms to predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception on these two attributes and to test the prediction accuracy of each algorithm. Afterward, detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, the suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of enlarging color appearance models to describe image perception.

  1. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple step process requiring attention to the process parameters for each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d) usually applied using either a pressure wand or dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness and recording of the UVA image, and in some cases, formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  2. High strength air-dried aerogels

    DOEpatents

    Coronado, Paul R.; Satcher, Jr., Joe H.

    2012-11-06

A method for the preparation of high strength air-dried organic aerogels. The method involves the sol-gel polymerization of organic gel precursors, such as resorcinol with formaldehyde (RF) in aqueous solvents with R/C ratios greater than about 1000 and R/F ratios less than about 1:2.1. Using a procedure analogous to the preparation of resorcinol-formaldehyde (RF) aerogels, this approach generates wet gels that can be air dried at ambient temperatures and pressures. The method significantly reduces the time and/or energy required to produce a dried aerogel compared to conventional methods using supercritical solvent extraction. The air-dried gel typically exhibits less than 5% shrinkage.

  3. Efficient estimation of the maximum metabolic productivity of batch systems

    DOE PAGES

    St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.

    2017-01-31

Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.

  4. Structural Code Considerations for Solar Rooftop Installations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred

    2014-12-01

    Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.

  5. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  6. Enzymes involved in the biodegradation of hexachlorocyclohexane: a mini review.

    PubMed

    Camacho-Pérez, Beni; Ríos-Leal, Elvira; Rinderknecht-Seijas, Noemí; Poggi-Varaldo, Héctor M

    2012-03-01

    The scope of this paper encompasses the following subjects: (i) aerobic and anaerobic degradation pathways of γ-hexachlorocyclohexane (HCH); (ii) important genes and enzymes involved in the metabolic pathways of γ-HCH degradation; (iii) the instrumental methods for identifying and quantifying intermediate metabolites, such as gas chromatography coupled to mass spectrometry (GC-MS) and other techniques. It can be concluded that typical anaerobic and aerobic pathways of γ-HCH are well known for a few selected microbial strains, although less is known for anaerobic consortia, where the possibility of synergism, antagonism, and mutualism can lead to more particular routes and more effective degradation of γ-HCH. Conversions and removals in the range 39%-100% and 47%-100% have been reported for aerobic and anaerobic cultures, respectively. The most common metabolites reported for aerobic degradation of lindane are γ-pentachlorocyclohexene (γ-PCCH), 2,5-dichlorobenzoquinone (DCBQ), chlorohydroquinone (CHQ), chlorophenol, and phenol, whereas PCCH, isomers of trichlorobenzene (TCB), chlorobenzene, and benzene are the most typical metabolites found in anaerobic pathways. Enzymatic and genetic characterization of the molecular mechanisms involved is in its infancy; more work is needed to elucidate them in the future. Advances have been made in the identification of enzymes of Sphingomonas paucimobilis, where the gene linB codes for the enzyme haloalkane dehalogenase, which acts on 1,3,4,6-tetrachloro-1,4-cyclohexadiene, thus debottlenecking the pathway. Other more common enzymes, such as phenol hydroxylase, catechol 1,2-dioxygenase, and catechol 2,3-dioxygenase, are also involved, since they attack intermediate metabolites of lindane such as catechol and less substituted chlorophenols. Chromatography coupled to a mass spectrometric detector, especially GC-MS, is the most widely used technique for resolving γ-HCH metabolites, although there is increasing use of HPLC-MS methods. Scintillation methods are very useful to assess final degradation of γ-HCH. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Assay Development Process | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    Typical steps involved in the development of a  mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e

  8. Photoactivated methods for enabling cartilage-to-cartilage tissue fixation

    NASA Astrophysics Data System (ADS)

    Sitterle, Valerie B.; Roberts, David W.

    2003-06-01

    The present study investigates whether photoactivated attachment of cartilage can provide a viable method for more effective repair of damaged articular surfaces by providing an alternative to sutures, barbs, or fibrin glues for initial fixation. Unlike artificial materials, biological constructs do not possess the initial strength for press-fitting and are instead sutured or pinned in place, typically inducing even more tissue trauma. A possible alternative involves the application of a photosensitive material, which is then photoactivated with a laser source to attach the implant and host tissues together in either a photothermal or photochemical process. The photothermal version of this method shows potential, but has been almost entirely applied to vascularized tissues. Cartilage, however, exhibits several characteristics that produce appreciable differences between applying and refining these techniques when compared to previous efforts involving vascularized tissues. Preliminary investigations involving photochemical photosensitizers based on singlet oxygen and electron transfer mechanisms are discussed, and characterization of the photodynamic effects on bulk collagen gels as a simplified model system using FTIR is performed. Previous efforts using photothermal welding applied to cartilaginous tissues are reviewed.

  9. 12 CFR 1070.22 - Fees for processing requests for CFPB records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of grades typically involved may be established. This charge shall include transportation of...), an average rate for the range of grades typically involved may be established. Fees shall be charged... research. (iii) Non-commercial scientific institution refers to an institution that is not operated on a...

  10. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale crystal plasticity model.

  11. Assessment of Robotic Patient Simulators for Training in Manual Physical Therapy Examination Techniques

    PubMed Central

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719

  12. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
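    For the two-terminal case that this abstract contrasts with the multi-terminal problem, a minimal sketch (a classical textbook example, not the authors' multi-terminal method) is the electrostatic potential between concentric circular electrodes, solved by the logarithm map that takes the annulus to a strip where the potential is linear. The radii and voltages below are arbitrary illustration values.

    ```python
    # Two-terminal conformal-mapping sketch: w = log(z) maps the annulus r1 < |z| < r2
    # to a strip, where the harmonic potential is linear in Re(w) = ln|z|.
    import numpy as np

    r1, r2 = 1.0, 3.0          # electrode radii (assumed geometry)
    V1, V2 = 0.0, 10.0         # electrode potentials

    def potential(x, y):
        r = np.hypot(x, y)
        u = np.full_like(r, np.nan)
        inside = (r >= r1) & (r <= r2)
        # Equals V1 on |z| = r1 and V2 on |z| = r2, harmonic in between.
        u[inside] = V1 + (V2 - V1) * (np.log(r[inside]) - np.log(r1)) / (np.log(r2) - np.log(r1))
        return u

    xs = np.linspace(-3.5, 3.5, 201)
    X, Y = np.meshgrid(xs, xs)
    U = potential(X, Y)
    print("potential at r = 2:", potential(np.array([2.0]), np.array([0.0]))[0])  # ~6.31 V
    ```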

  13. Choosing a Surgeon: An Exploratory Study of Factors Influencing Selection of a Gender Affirmation Surgeon.

    PubMed

    Ettner, Randi; Ettner, Frederic; White, Tonya

    2016-01-01

    Purpose: Selecting a healthcare provider is often a complicated process. Many factors appear to govern the decision as to how to select the provider in the patient-provider relationship. While the possibility of changing primary care physicians or specialists exists, decisions regarding surgeons are immutable once surgery has been performed. This study is an attempt to assess the importance attached to various factors involved in selecting a surgeon to perform gender affirmation surgery (GAS). It was hypothesized that owing to the intimate nature of the surgery, the expense typically involved, the emotional meaning attached to the surgery, and other variables, decisions regarding choice of surgeon for this procedure would involve factors other than those that inform more typical healthcare provider selection or surgeon selection for other plastic/reconstructive procedures. Methods: Questionnaires were distributed to individuals who had undergone GAS and individuals who had undergone elective plastic surgery to assess decision-making. Results: The results generally confirm previous findings regarding how patients select providers. Conclusion: Choosing a surgeon to perform gender-affirming surgery is a challenging process, but patients are quite rational in their decision-making. Unlike prior studies, we did not find a preference for gender-concordant surgeons, even though the surgery involves the genital area. Providing strategies and resources for surgical selection can improve patient satisfaction.

  14. Propellant Mass Fraction Calculation Methodology for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
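    A minimal sketch of the differing conventions described above; the mass values are illustrative placeholders, not data for any actual vehicle.

    ```python
    # Three pmf conventions, following the distinctions drawn in the abstract.

    def pmf_fundamental(m_propellant, m_inert):
        """Loaded propellant over total (loaded propellant + inert mass)."""
        return m_propellant / (m_propellant + m_inert)

    def pmf_usable(m_propellant, m_inert, m_residuals):
        """More involved convention: exclude residuals/unusable propellant from the numerator."""
        return (m_propellant - m_residuals) / (m_propellant + m_inert)

    def pmf_excluding_engines(m_propellant, m_inert, m_engines):
        """Convention that excludes large items such as installed engine mass from the inert total."""
        return m_propellant / (m_propellant + m_inert - m_engines)

    m_prop, m_inert, m_resid, m_eng = 100_000.0, 12_000.0, 800.0, 3_000.0  # kg, hypothetical
    print(pmf_fundamental(m_prop, m_inert))               # ~0.893
    print(pmf_usable(m_prop, m_inert, m_resid))           # ~0.886
    print(pmf_excluding_engines(m_prop, m_inert, m_eng))  # ~0.917
    ```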

  15. MEASUREMENT OF THE VISCOELASTIC PROPERTIES OF WATER-SATURATED CLAY SEDIMENTS.

    DTIC Science & Technology

    The complex shear modulus of both kaolin-water and bentonite-water mixtures has been determined in the laboratory. The method involved measuring the...range two to forty-three kHz. Dispersed sediments behaved like Newtonian liquids. Undispersed sediments, however, were viscoelastic in character, and...their shear moduli exhibited no dependence on frequency. For undispersed kaolin mixtures, a typical result is (21.6 + i 1.2) x 1,000 dynes per square

  16. Improving Plating by Use of Intense Acoustic Beams

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.; Denofrio, Charles

    2003-01-01

    An improved method of selective plating of metals and possibly other materials involves the use of directed high-intensity acoustic beams. The beams, typically in the ultrasonic frequency range, can be generated by fixed-focus transducers (see figure) or by phased arrays of transducers excited, variously, by continuous waves, tone bursts, or single pulses. The nonlinear effects produced by these beams are used to alter plating processes in ways that are advantageous.

  17. Reducing covert self-injurious behavior maintained by automatic reinforcement through a variable momentary DRO procedure.

    PubMed

    Toussaint, Karen A; Tiger, Jeffrey H

    2012-01-01

    Covert self-injurious behavior (i.e., behavior that occurs in the absence of other people) can be difficult to treat. Traditional treatments typically have involved sophisticated methods of observation and often have employed positive punishment procedures. The current study evaluated the effectiveness of a variable momentary differential reinforcement contingency in the treatment of covert self-injury. Neither positive punishment nor extinction was required to produce decreased skin picking.

  18. Window of Opportunity: Mitigating Threats from Disruptive Technologies Before Widespread Adoption

    DTIC Science & Technology

    2014-09-01

    involved. A contemporary example of this is the emergence of online peer-to-peer payment systems like Bitcoin. Bitcoin was left to proliferate...collapse of Mt. Gox, a bitcoin exchange.195 The problem with this method of protection is that government-based actions are typically a tediously slow...195 Mt. Gox was one of the foremost bitcoin exchange networks, and it collapsed when it was hacked and lost the

  19. Applications of hybrid genetic algorithms in seismic tomography

    NASA Astrophysics Data System (ADS)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos

    2011-11-01

    Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieve the aforementioned approximation through model representation and manipulation, and have attracted the attention of the earth sciences community during the last decade, with several applications already presented for various geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as entrapment in local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelengths and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
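    The hybrid global/local idea can be sketched in a few lines of Python. The toy misfit function, population size, and mutation scheme below are hypothetical stand-ins for the travel-time misfit and GA operators used in the paper; only the structure (GA stage followed by derivative-based local refinement) is what the sketch is meant to convey.

    ```python
    # Hybrid global (GA) + local (gradient-based) optimization on a toy misfit function.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    true_model = np.array([1.5, -0.7, 2.0])

    def misfit(m):
        # Toy non-convex penalty standing in for a travel-time misfit.
        return np.sum((m - true_model) ** 2) + 0.3 * np.sum(np.sin(5.0 * m) ** 2)

    def genetic_step(population, n_keep=10, mutation=0.2):
        scores = np.array([misfit(p) for p in population])
        parents = population[np.argsort(scores)[:n_keep]]        # elitist selection
        children = []
        while len(children) < len(population) - n_keep:
            a, b = parents[rng.integers(n_keep, size=2)]
            cut = rng.integers(1, len(a))                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]]) + mutation * rng.normal(size=len(a))
            children.append(child)
        return np.vstack([parents, children])

    population = rng.uniform(-5, 5, size=(40, 3))
    for _ in range(30):                                           # global (GA) stage
        population = genetic_step(population)

    best = min(population, key=misfit)
    refined = minimize(misfit, best, method="L-BFGS-B")           # local refinement stage
    print("GA best misfit:", misfit(best), "after local refinement:", refined.fun)
    ```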

  20. "What Brings Him Here Today?": Medical Problem Presentation Involving Children with Autism Spectrum Disorders and Typically Developing Children

    ERIC Educational Resources Information Center

    Solomon, Olga; Heritage, John; Yin, Larry; Maynard, Douglas W.; Bauman, Margaret L.

    2016-01-01

    Conversation and discourse analyses were used to examine medical problem presentation in pediatric care. Healthcare visits involving children with ASD and typically developing children were analyzed. We examined how children's communicative and epistemic capabilities, and their opportunities to be socialized into a competent patient role are…

  1. Multi-criteria clinical decision support: A primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare.

    PubMed

    Dolan, James G

    2010-01-01

    Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine "hard data" with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP).
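    A minimal sketch of two of the methods listed above, direct weighting and an AHP-style priority vector; the criteria, weights, and pairwise judgments are hypothetical, purely for illustration.

    ```python
    # Direct-weighting score plus an AHP-style priority vector from a pairwise comparison matrix.
    import numpy as np

    # Direct weighting: each treatment option scored (0-1) on each criterion.
    criteria = ["efficacy", "side effects", "cost", "convenience"]
    weights = np.array([0.4, 0.3, 0.2, 0.1])                # weights elicited from the patient
    options = {"Option A": np.array([0.9, 0.5, 0.4, 0.7]),
               "Option B": np.array([0.7, 0.8, 0.6, 0.6])}
    for name, scores in options.items():
        print(name, "weighted score:", float(weights @ scores))

    # AHP: derive criterion weights from pairwise comparisons (Saaty 1-9 scale).
    pairwise = np.array([[1.0, 2.0, 3.0, 5.0],
                         [1/2, 1.0, 2.0, 3.0],
                         [1/3, 1/2, 1.0, 2.0],
                         [1/5, 1/3, 1/2, 1.0]])
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])  # Perron eigenvector
    ahp_weights = principal / principal.sum()                     # normalize to sum to 1
    print(dict(zip(criteria, np.round(ahp_weights, 3))))
    ```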

  2. Multi-criteria clinical decision support: A primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare

    PubMed Central

    Dolan, James G.

    2010-01-01

    Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine “hard data” with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP). PMID:21394218

  3. Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.

    PubMed

    Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato

    2018-01-01

    CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.

  4. Applying Knowledge of Species-Typical Scavenging Behavior to the Search and Recovery of Mammalian Skeletal Remains.

    PubMed

    Young, Alexandria; Stillman, Richard; Smith, Martin J; Korstjens, Amanda H

    2016-03-01

    Forensic investigations involving animal scavenging of human remains require a physical search of the scene and surrounding areas. However, there is currently no standard procedure in the U.K. for physical searches for scavenged human remains. The Winthrop and grid search methods used by police specialist searchers for scavenged remains were examined through the use of mock red fox (Vulpes vulpes) scatter scenes. Forty-two police specialist searchers from two different regions within the U.K. were divided between those briefed and those not briefed with fox-typical scavenging information. Briefing searchers with scavenging information significantly affected the recovery of scattered bones (χ(2) = 11.45, df = 1, p = 0.001). Searchers briefed with scavenging information were 2.05 times more likely to recover bones. Adaptations to the search methods used by searchers were evident at a regional level, such that searchers more accustomed to a peri-urban to rural region recovered a higher percentage of scattered bones (58.33%, n = 84). © 2015 American Academy of Forensic Sciences.

  5. Systematic synthesis of barriers and facilitators to service user-led care planning

    PubMed Central

    Bee, Penny; Price, Owen; Baker, John; Lovell, Karina

    2015-01-01

    Background Service user (patient) involvement in care planning is a principle enshrined by mental health policy yet often attracts criticism from patients and carers in practice. Aims To examine how user-involved care planning is operationalised within mental health services and to establish where, how and why challenges to service user involvement occur. Method Systematic evidence synthesis. Results Synthesis of data from 117 studies suggests that service user involvement fails because the patients' frame of reference diverges from that of providers. Service users and carers attributed highest value to the relational aspects of care planning. Health professionals inconsistently acknowledged the quality of the care planning process, tending instead to define service user involvement in terms of quantifiable service-led outcomes. Conclusions Service user-involved care planning is typically operationalised as a series of practice-based activities compliant with auditor standards. Meaningful involvement demands new patient-centred definitions of care planning quality. New organisational initiatives should validate time spent with service users and display more tangible and flexible commitments to meeting their needs. PMID:26243762

  6. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? A review.

    PubMed

    Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael

    2015-05-01

    There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation for this is that retrospective measures underestimate dream recall. Another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and if logbooks enhance it they are both unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. High bandwidth magnetically isolated signal transmission circuit

    NASA Technical Reports Server (NTRS)

    Repp, John Donald (Inventor)

    2005-01-01

    Many current electronic systems incorporate expensive or sensitive electrical components. Because electrical energy is often generated or transmitted at high voltages, the power supplies to these electronic systems must be carefully designed. Power supply design must ensure that the electrical system being supplied with power is not exposed to excessive voltages or currents. In order to isolate power supplies from electrical equipment, many methods have been employed. These methods typically involve control systems or signal transfer methods. However, these methods are not always suitable because of their drawbacks. The present invention relates to transmitting information across an interface. More specifically, the present invention provides an apparatus for transmitting both AC and DC information across a high bandwidth magnetic interface with low distortion.

  8. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Complement Coercion: The Joint Effects of Type and Typicality.

    PubMed

    Zarcone, Alessandra; McRae, Ken; Lenci, Alessandro; Padó, Sebastian

    2017-01-01

    Complement coercion (begin a book → reading) involves a type clash between an event-selecting verb and an entity-denoting object, triggering a covert event (reading). Two main factors involved in complement coercion have been investigated: the semantic type of the object (event vs. entity), and the typicality of the covert event (the author began a book → writing). In previous research, reading times have been measured at the object. However, the influence of the typicality of the subject-object combination on processing an aspectual verb such as begin has not been studied. Using a self-paced reading study, we manipulated semantic type and subject-object typicality, exploiting German word order to measure reading times at the aspectual verb. These variables interacted at the target verb. We conclude that both type and typicality probabilistically guide expectations about upcoming input. These results are compatible with an expectation-based view of complement coercion and language comprehension more generally in which there is rapid interaction between what is typically viewed as linguistic knowledge, and what is typically viewed as domain general knowledge about how the world works.

  10. Complement Coercion: The Joint Effects of Type and Typicality

    PubMed Central

    Zarcone, Alessandra; McRae, Ken; Lenci, Alessandro; Padó, Sebastian

    2017-01-01

    Complement coercion (begin a book →reading) involves a type clash between an event-selecting verb and an entity-denoting object, triggering a covert event (reading). Two main factors involved in complement coercion have been investigated: the semantic type of the object (event vs. entity), and the typicality of the covert event (the author began a book →writing). In previous research, reading times have been measured at the object. However, the influence of the typicality of the subject–object combination on processing an aspectual verb such as begin has not been studied. Using a self-paced reading study, we manipulated semantic type and subject–object typicality, exploiting German word order to measure reading times at the aspectual verb. These variables interacted at the target verb. We conclude that both type and typicality probabilistically guide expectations about upcoming input. These results are compatible with an expectation-based view of complement coercion and language comprehension more generally in which there is rapid interaction between what is typically viewed as linguistic knowledge, and what is typically viewed as domain general knowledge about how the world works. PMID:29225585

  11. Design of a Pictorial Program Reference Language.

    DTIC Science & Technology

    1984-08-01

    ...be fixed during the typing process. If the manuscript is given to an English teacher moonlighting as a typist, the... The effort and expense involved in... objects with stereotyped purposes. These are called typical programming patterns... these programming cliches automatically. In the second method, the user uses... fixed. If you gave it to a teacher moonlighting as a typist, you... options found in many tools today. Modeling large bodies of facts and

  12. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
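    The incremental ('delta' or 'correction') form can be illustrated with a simple defect-correction iteration, in which each sweep solves approximately for a correction driven by the current residual. The Jacobi preconditioner and the random test matrix below are illustrative stand-ins for the CFD operators, not the paper's scheme.

    ```python
    # Incremental (delta) form: each sweep solves M * dx = b - A x_k approximately,
    # then updates x_{k+1} = x_k + dx, so the right-hand side is always the residual.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    A = np.eye(n) * 4.0 + rng.normal(scale=0.05, size=(n, n))   # well-conditioned test matrix
    b = rng.normal(size=n)

    x = np.zeros(n)
    M_inv = 1.0 / np.diag(A)                 # Jacobi approximation of A^{-1}
    for k in range(200):
        residual = b - A @ x                 # the incremental form works with the residual
        dx = M_inv * residual                # approximate solve of M dx = residual
        x += dx
        if np.linalg.norm(residual) < 1e-10:
            break

    print("iterations:", k + 1, "final residual:", np.linalg.norm(b - A @ x))
    ```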

  13. Basal Cell Carcinoma

    PubMed Central

    Lanoue, Julien

    2016-01-01

    Basal cell carcinoma is the most commonly occurring cancer in the world, and overall incidence is still on the rise. While typically a slow-growing tumor for which metastasis is rare, basal cell carcinoma can be locally destructive and disfiguring. Given the vast prevalence of this disease, there is a significant overall burden on patient well-being and quality of life. The current mainstay of basal cell carcinoma treatment involves surgical modalities, such as electrodesiccation and curettage, excision, cryosurgery, and Mohs micrographic surgery. Such methods are typically reserved for localized basal cell carcinoma and offer high five-year cure rates, but come with the risk of functional impairment, disfigurement, and scarring. Here, the authors review the evidence and indications for nonsurgical treatment modalities in cases where surgery is impractical, contraindicated, or simply not desired by the patient. PMID:27386043

  14. Trace elements determination in seawater by ICP-MS with on-line pre-concentration on a Chelex-100 column using a ‘standard’ instrument setup.

    PubMed Central

    Søndergaard, Jens; Asmund, Gert; Larsen, Martin M.

    2015-01-01

    Trace element determination in seawater is analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. A common way to address the challenge is to pre-concentrate the trace elements on a chelating resin, then rinse the matrix elements from the resin and subsequently elute and detect the trace elements using inductively coupled plasma mass spectrometry (ICP-MS). This technique typically involves time-consuming pre-treatment of the samples for 'off-line' analyses or complicated sample introduction systems involving several pumps and valves for 'on-line' analyses. As an alternative, the method described here offers a simple approach to 'on-line' analyses of seawater by ICP-MS. As opposed to previous methods, excess seawater was pumped through the nebulizer of the ICP-MS during the pre-concentration step, but the gas flow was adjusted so that the seawater was pumped out as waste without being sprayed into the instrument. Advantages of the method include: • Simple and convenient analyses of seawater requiring no changes to the 'standard' sample introduction system except for a resin-filled micro-column connected to the sample tube. The 'standard' sample introduction system refers to that used for routine digest-solution analyses of biota and sediment by ICP-MS using only one peristaltic pump; and • Accurate determination of the elements V, Mn, Co, Ni, Cu, Zn, Cd and Pb in a range of different seawater matrices, verified by participation in 6 successive rounds of the international laboratory intercalibration program QUASIMEME. PMID:26258050

  15. Improved retention of phosphorus donors in germanium using a non-amorphizing fluorine co-implantation technique

    NASA Astrophysics Data System (ADS)

    Monmeyran, Corentin; Crowe, Iain F.; Gwilliam, Russell M.; Heidelberger, Christopher; Napolitani, Enrico; Pastor, David; Gandhi, Hemi H.; Mazur, Eric; Michel, Jürgen; Agarwal, Anuradha M.; Kimerling, Lionel C.

    2018-04-01

    Co-doping with fluorine is a potentially promising method for defect passivation to increase the donor electrical activation in highly doped n-type germanium. However, regular high dose donor-fluorine co-implants, followed by conventional thermal treatment of the germanium, typically result in a dramatic loss of the fluorine, as a result of the extremely large diffusivity at elevated temperatures, partly mediated by the solid phase epitaxial regrowth. To circumvent this problem, we propose and experimentally demonstrate two non-amorphizing co-implantation methods; one involving consecutive, low dose fluorine implants, intertwined with rapid thermal annealing and the second, involving heating of the target wafer during implantation. Our study confirms that the fluorine solubility in germanium is defect-mediated and we reveal the extent to which both of these strategies can be effective in retaining large fractions of both the implanted fluorine and, critically, phosphorus donors.

  16. Colors of the Sublunar

    PubMed Central

    van Doorn, Andrea

    2017-01-01

    Generic red, green, and blue images can be regarded as data sources of coarse (three-bin) local spectra; typical data volumes are 10⁴ to 10⁷ spectra. Image databases often yield hundreds or thousands of images, yielding data sources of 10⁹ to 10¹⁰ spectra. There is usually no calibration, and there are often various nonlinear image transformations involved. However, we argue that sheer numbers make up for such ambiguity. We propose a model of spectral data mining that applies to the sublunar realm, spectra due to the scattering of daylight by objects from the generic terrestrial environment. The model involves colorimetry and ecological physics. Whereas the colorimetry is readily dealt with, one needs to handle the ecological physics with heuristic methods. The results suggest evolutionary causes of the human visual system. We also suggest effective methods to generate red, green, and blue color gamuts for various terrains. PMID:28989697

  17. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  18. Practical Aspects of Designing and Conducting Validation Studies Involving Multi-study Trials.

    PubMed

    Coecke, Sandra; Bernasconi, Camilla; Bowe, Gerard; Bostroem, Ann-Charlotte; Burton, Julien; Cole, Thomas; Fortaner, Salvador; Gouliarmou, Varvara; Gray, Andrew; Griesinger, Claudius; Louhimies, Susanna; Gyves, Emilio Mendoza-de; Joossens, Elisabeth; Prinz, Maurits-Jan; Milcamps, Anne; Parissis, Nicholaos; Wilk-Zasadna, Iwona; Barroso, João; Desprez, Bertrand; Langezaal, Ingrid; Liska, Roman; Morath, Siegfried; Reina, Vittorio; Zorzoli, Chiara; Zuang, Valérie

    This chapter focuses on practical aspects of conducting prospective in vitro validation studies, in particular those carried out by laboratories that are members of the European Union Network of Laboratories for the Validation of Alternative Methods (EU-NETVAL), which is coordinated by the EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM). Prospective validation studies involving EU-NETVAL, comprising a multi-study trial involving several laboratories or "test facilities", typically consist of two main steps: (1) the design of the validation study by EURL ECVAM and (2) the execution of the multi-study trial by a number of qualified laboratories within EU-NETVAL, coordinated and supported by EURL ECVAM. The approach adopted in the conduct of these validation studies adheres to the principles described in the OECD Guidance Document on the Validation and International Acceptance of New or Updated Test Methods for Hazard Assessment No. 34 (OECD 2005). The context and scope of conducting prospective in vitro validation studies is dealt with in Chap. 4. Here we focus mainly on the processes followed to carry out a prospective validation of in vitro methods involving different laboratories, with the ultimate aim of generating a dataset that can support a decision on the possible development of an international test guideline (e.g., by the OECD) or the establishment of performance standards.

  19. Genetic Algorithm Optimization of a Film Cooling Array on a Modern Turbine Inlet Vane

    DTIC Science & Technology

    2012-09-01

    ...heater is typically higher than the test section temperature since there is a lag due to heat transfer to the piping between the heater and test... The flexible substrate used is 50 microns thick and the gauges themselves are a platinum metal layer 500 Å thick. When subjected to a change in heat... more advanced gas turbine cooling design methods that factor in the 3-D flowfield and heat transfer characteristics, this study involves the

  20. Scrutinizing UML Activity Diagrams

    NASA Astrophysics Data System (ADS)

    Al-Fedaghi, Sabah

    Building an information system involves two processes: conceptual modeling of the “real world domain” and designing the software system. Object-oriented methods and languages (e.g., UML) are typically used for describing the software system. For the system analysis process that produces the conceptual description, object-oriented techniques or semantics extensions are utilized. Specifically, UML activity diagrams are the “flow charts” of object-oriented conceptualization tools. This chapter proposes an alternative to UML activity diagrams through the development of a conceptual modeling methodology based on the notion of flow.

  1. Age-Related Brain Activation Changes during Rule Repetition in Word-Matching.

    PubMed

    Methqal, Ikram; Pinsard, Basile; Amiri, Mahnoush; Wilson, Maximiliano A; Monchi, Oury; Provost, Jean-Sebastien; Joanette, Yves

    2017-01-01

    Objective: The purpose of this study was to explore the age-related brain activation changes during a word-matching semantic-category-based task, which required either repeating or changing a semantic rule to be applied. In order to do so, a word-semantic rule-based task was adapted from the Wisconsin Card Sorting Test, involving the repeated feedback-driven selection of given pairs of words based on semantic category-based criteria. Method: Forty healthy adults (20 younger and 20 older) performed a word-matching task while undergoing an fMRI scan in which they were required to pair a target word with another word from a group of three words. The required pairing is based on three word-pair semantic rules which correspond to different levels of semantic control demands: functional relatedness, moderately typical-relatedness (which were considered as low control demands), and atypical-relatedness (high control demands). The sorting period consisted of a continuous execution of the same sorting rule, and inferred trial-by-trial feedback was given. Results: Behavioral performance revealed increases in response times and decreases in correct responses according to the level of semantic control demands (functional vs. typical vs. atypical) for both age groups (younger and older), reflecting graded differences in the repetition of the application of a given semantic rule. Neuroimaging findings of significant brain activation showed two main results: (1) greater task-related activation changes for the repetition of the application of atypical rules relative to typical and functional rules, and (2) changes (older > younger) in the inferior prefrontal regions for functional rules and more extensive and bilateral activations for typical and atypical rules. Regarding the inter-semantic-rules comparison, task-related activation differences were observed only for functional > typical (e.g., inferior parietal and temporal regions bilaterally) and atypical > typical (e.g., prefrontal, inferior parietal, posterior temporal, and subcortical regions). Conclusion: These results suggest that healthy cognitive aging relies on the adaptive changes of inferior prefrontal resources involved in the repetitive execution of semantic rules, thus reflecting graded differences in support of task demands.

  2. Case for diagnosis. Systemic light chain amyloidosis with cutaneous involvement*

    PubMed Central

    Gontijo, João Renato Vianna; Pinto, Jackson Machado; de Paula, Maysa Carla

    2017-01-01

    Systemic light chain amyloidosis is a rare disease. Because of its typical cutaneous lesions, dermatologists play an essential role in its diagnosis. Clinical manifestations vary according to the affected organ and are often unspecific. Definitive diagnosis is achieved through biopsy. We report a patient with palpebral amyloidosis, typical bilateral ecchymoses, and cardiac involvement, without plasma cell dyscrasia or lymphoma. The patient died shortly after the diagnosis. PMID:29166521

  3. Finite difference time domain calculation of transients in antennas with nonlinear loads

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent

    1991-01-01

    Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough that the open-circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
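    A minimal 1-D sketch of this idea (a pulsed transmission line terminated by a diode-like load, not the paper's 3-D antenna code) shows the per-time-step Newton solve at the nonlinear load cell; all circuit values are hypothetical.

    ```python
    # 1-D FDTD (telegrapher's equations) with a nonlinear diode-like termination.
    # At the load cell a scalar nonlinear equation is solved by Newton's method every step.
    import numpy as np

    # Line parameters (per metre) chosen so Z0 = sqrt(L/C) = 50 ohm, v = 1/sqrt(LC) = 2e8 m/s.
    Lp, Cp = 0.25e-6, 100e-12
    dz, nz = 0.01, 400
    dt = 0.9 * dz * np.sqrt(Lp * Cp)             # Courant-limited time step
    Is, Vt = 1e-14, 0.025                        # diode saturation current and thermal voltage

    V = np.zeros(nz + 1)                         # node voltages
    I = np.zeros(nz)                             # branch currents (staggered half-cell)
    record = []

    for n in range(1200):
        t = n * dt
        I -= dt / (Lp * dz) * (V[1:] - V[:-1])                    # current update
        V[1:-1] -= dt / (Cp * dz) * (I[1:] - I[:-1])              # interior voltage update
        V[0] = np.exp(-((t - 3e-9) / 1e-9) ** 2)                  # 1 V Gaussian hard source

        # Load cell: solve  C*dz/dt * (V_new - V_old) = I_in - Is*(exp(V_new/Vt) - 1)
        a, V_old, I_in = Cp * dz / dt, V[-1], I[-1]
        v = V_old
        for _ in range(100):                                      # Newton iterations
            f = a * (v - V_old) - I_in + Is * (np.exp(v / Vt) - 1.0)
            fp = a + Is / Vt * np.exp(v / Vt)
            step = np.clip(-f / fp, -0.2, 0.2)                    # damped step avoids overflow
            v += step
            if abs(step) < 1e-12:
                break
        V[-1] = v
        record.append(v)

    print("peak load voltage (V):", max(record))                  # the diode clamps near ~0.7 V
    ```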

  4. Dual-mode nested search method for categorical uncertain multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  5. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
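    A toy sketch of the data-layout point made above (synthetic data; shard and worker counts are arbitrary): sharding along space, so that each shard carries the complete time series for a block of pixels, makes a long-time-series operation embarrassingly parallel.

    ```python
    # Shard a (time, lat, lon) array along space and compute a per-pixel temporal trend
    # in parallel -- the kind of long-time-series operation discussed above.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    nt, ny, nx = 1000, 90, 180                    # time steps, latitude, longitude (synthetic)
    data = np.random.default_rng(0).normal(size=(nt, ny, nx)).astype(np.float32)

    def shard_by_space(arr, n_shards):
        """Split along longitude so each shard holds the full time series for its pixels."""
        return np.array_split(arr, n_shards, axis=2)

    def temporal_trend(shard):
        """Per-pixel least-squares trend over time for one spatial shard."""
        t = np.arange(shard.shape[0], dtype=np.float32)
        t -= t.mean()
        return (t[:, None, None] * (shard - shard.mean(axis=0))).sum(axis=0) / (t ** 2).sum()

    shards = shard_by_space(data, n_shards=8)
    with ThreadPoolExecutor(max_workers=8) as pool:
        trends = list(pool.map(temporal_trend, shards))
    trend_map = np.concatenate(trends, axis=1)    # reassemble the (lat, lon) trend map
    print(trend_map.shape)                        # (90, 180)
    ```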

  6. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  7. Modeling Longitudinal Data Containing Non-Normal Within Subject Errors

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan; Glenn, Nancy L.

    2013-01-01

    The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and come from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, and presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the science, technology, engineering, and mathematics fields.
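
    As an illustrative sketch of two of the four methods named above (quantile regression and multilevel mixed-effects regression), the following example fits both to synthetic longitudinal data with heavy-tailed within-subject errors using statsmodels; the study itself cites the R lqmm package of Geraci and Bottai, so this is a stand-in, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 15 subjects, 6 follow-up times each,
# with heavy-tailed (non-normal) within-subject errors.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(15), 6)
time = np.tile(np.arange(6), 15)
subject_effect = rng.normal(0, 2, 15)[subjects]
y = 10 + 1.5 * time + subject_effect + rng.standard_t(df=3, size=subjects.size)
df = pd.DataFrame({"subject": subjects, "time": time, "y": y})

# Median (quantile) regression: robust to non-normal within-subject errors.
median_fit = smf.quantreg("y ~ time", df).fit(q=0.5)

# Multilevel mixed-effects linear regression: random intercept per subject.
mixed_fit = smf.mixedlm("y ~ time", df, groups=df["subject"]).fit()

print(median_fit.params)
print(mixed_fit.params)
```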

  8. Resilient Actions in the Diagnostic Process and System Performance

    PubMed Central

    Smith, Michael W.; Giardina, Traber Davis; Murphy, Daniel R.; Laxmisan, Archana; Singh, Hardeep

    2013-01-01

    Objectives Systemic issues can adversely affect the diagnostic process. Many system-related barriers can be masked by ‘resilient’ actions of frontline providers (ie, actions supporting the safe delivery of care in the presence of pressures that the system cannot readily adapt to). We explored system barriers and resilient actions of primary care providers (PCPs) in the diagnostic evaluation of cancer. Methods We conducted a secondary data analysis of interviews of PCPs involved in diagnostic evaluation of 29 lung and colorectal cancer cases. Cases covered a range of diagnostic timeliness and were analyzed to identify barriers for rapid diagnostic evaluation, and PCPs’ actions involving elements of resilience addressing those barriers. We rated these actions according to whether they were usual or extraordinary for typical PCP work. Results Resilient actions and associated barriers were found in 59% of the cases, in all ranges of timeliness, with 40% involving actions rated as beyond typical. Most of the barriers were related to access to specialty services and coordination with patients. Many of the resilient actions involved using additional communication channels to solicit cooperation from other participants in the diagnostic process. Discussion Diagnostic evaluation of cancer involves several resilient actions by PCPs targeted at system deficiencies. PCPs’ actions can sometimes mitigate system barriers to diagnosis, and thereby impact the sensitivity of ‘downstream’ measures (eg, delays) in detecting barriers. While resilient actions might enable providers to mitigate system deficiencies in the short run, they can be resource intensive and potentially unsustainable. They complement, rather than substitute for, structural remedies to improve system performance. Measures to detect and fix system performance issues targeted by these resilient actions could facilitate diagnostic safety. PMID:23813210

  9. Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods

    NASA Technical Reports Server (NTRS)

    Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon

    2010-01-01

    A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
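
    The paper's disaggregation algorithm is not reproduced in the abstract; the toy sketch below (our own simplification, with made-up flight identifiers) shows the basic idea of converting an aggregate flow solution, expressed as the number of departures permitted per time bin, into flight-specific assignments and ground delays using a first-come-first-served queue.

```python
from collections import deque

def disaggregate(scheduled, allowed_per_bin, bin_minutes=15):
    """Assign individual flights to time bins given an aggregate flow solution.

    scheduled       : list of (flight_id, requested_bin) pairs
    allowed_per_bin : aggregate optimizer output, flights permitted per bin
    Returns per-flight assigned bin and ground delay in minutes (FIFO policy).
    """
    queue = deque()
    assignments = {}
    flights = deque(sorted(scheduled, key=lambda f: f[1]))
    for b, capacity in enumerate(allowed_per_bin):
        while flights and flights[0][1] <= b:     # flights now ready to depart
            queue.append(flights.popleft())
        for _ in range(capacity):
            if not queue:
                break
            fid, requested = queue.popleft()
            assignments[fid] = (b, (b - requested) * bin_minutes)
    return assignments

# Hypothetical demand and aggregate capacities, for illustration only.
demand = [("AAL1", 0), ("UAL2", 0), ("DAL3", 1), ("SWA4", 1), ("JBU5", 2)]
print(disaggregate(demand, allowed_per_bin=[1, 2, 1, 2]))
```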

  10. A Method for Direct Fabrication of a Lingual Splint for Management of Pediatric Mandibular Fractures

    PubMed Central

    Davies, Sarah; Costello, Bernard J.

    2013-01-01

    Summary: Pediatric mandibular fractures have successfully been managed in various ways. The use of a lingual splint is one such option. The typical indirect method for acrylic lingual splint fabrication involves obtaining dental impressions. Dental models are produced from those impressions so that model surgery may be performed. The splint is then made on those models using resin powder and liquid monomer in a wet laboratory and transferred to the patient. Obvious limitations to this technique exist for both patient and operator. We present a technique for direct, intraoperative fabrication of a splint using commercially available light-cured material that avoids some of the shortcomings of the indirect method. Recommendations are made based on available material safety information. PMID:25289246

  11. [Local involvement of the optic nerve by acute lymphoblastic leukemia].

    PubMed

    Bernardczyk-Meller, Jadwiga; Stefańska, Katarzyna

    2005-01-01

    The leukemias quite commonly involve the eyes and adnexa, and in some cases cause visual complaints. Both the anterior chamber of the eye and the posterior portion of the globe may be sites of acute or chronic leukemia and of leukemic relapse. We report a unique case of a 14-year-old leukemic patient who suffered visual loss and papilloedema due to unilateral local involvement of the optic nerve during a second relapse of acute lymphocytic leukemia. Despite typical treatment of the main disease, the boy died. The authors also present the typical ophthalmic features of leukemia.

  12. Brief Report: Methods for Acquiring Structural MRI Data in Very Young Children with Autism Without the Use of Sedation

    PubMed Central

    Simon, Tony J.; Zierhut, Cynthia; Solomon, Marjorie; Rogers, Sally J.; Amaral, David G.

    2016-01-01

    We describe a protocol with which we achieved a 93% success rate in acquiring high quality MRI scans without the use of sedation in 2.5–4.5 year old children with autism, developmental delays, and typical development. Our main strategy was to conduct MRIs during natural nocturnal sleep in the evenings after the child's normal bedtime. Alternatively, with some older and higher functioning children, the MRI was conducted while the child was awake and watching a video. Both strategies relied heavily on the creation of a child and family friendly MRI environment and the involvement of parents as collaborators in the project. Scanning very young children with autism, typical development, and developmental delays without the use of sedation or anesthesia was possible in the majority of cases. PMID:18157624

  13. Determination of thermoelastic material properties by differential heterodyne detection of impulsive stimulated thermal scattering

    PubMed Central

    Verstraeten, B.; Sermeus, J.; Salenbien, R.; Fivez, J.; Shkerdin, G.; Glorieux, C.

    2015-01-01

    The underlying working principle of detecting impulsive stimulated scattering signals in a differential configuration of heterodyne diffraction detection is unraveled by involving optical scattering theory. The feasibility of the method for the thermoelastic characterization of coating-substrate systems is demonstrated on the basis of simulated data containing typical levels of noise. Besides the classical analysis of the photoacoustic part of the signals, which involves fitting surface acoustic wave dispersion curves, the photothermal part of the signals is analyzed by introducing thermal wave dispersion curves to represent and interpret their grating wavelength dependence. The intrinsic possibilities and limitations of both inverse problems are quantified by making use of least and most squares analysis. PMID:26236643

  14. Herpes zoster - typical and atypical presentations.

    PubMed

    Dayan, Roy Rafael; Peleg, Roni

    2017-08-01

    Varicella-zoster virus infection is an intriguing medical entity that involves many medical specialties including infectious diseases, immunology, dermatology, and neurology. It can affect patients from early childhood to old age. Its treatment requires expertise in pain management and psychological support. While varicella is caused by acute viremia, herpes zoster occurs after the dormant viral infection, involving the cranial nerve or sensory root ganglia, is re-activated and spreads orthodromically from the ganglion, via the sensory nerve root, to the innervated target tissue (skin, cornea, auditory canal, etc.). Typically, a single dermatome is involved, although two or three adjacent dermatomes may be affected. The lesions usually do not cross the midline. Herpes zoster can also present with unique or atypical clinical manifestations, such as glioma, zoster sine herpete and bilateral herpes zoster, which can be a challenging diagnosis even for experienced physicians. We discuss the epidemiology, pathophysiology, diagnosis and management of Herpes Zoster, typical and atypical presentations.

  15. A hybrid finite element-transfer matrix model for vibroacoustic systems with flat and homogeneous acoustic treatments.

    PubMed

    Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck

    2015-02-01

    Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such methodology is computationally expensive when real life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
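
    To make the transfer matrix side of the hybrid method concrete, the sketch below computes the rigid-backed surface impedance and normal-incidence absorption of a flat, homogeneous, laterally infinite layer using the classical 2x2 fluid-layer transfer matrix. The treatment is represented here by the empirical Delany-Bazley equivalent-fluid model with an assumed flow resistivity; the paper's poroelastic treatments would use a richer material model.

```python
import numpy as np

rho0, c0 = 1.213, 342.2               # air density [kg/m^3] and sound speed [m/s]
Z0 = rho0 * c0                        # characteristic impedance of air
d, sigma = 0.05, 20000.0              # assumed thickness [m] and flow resistivity [N s/m^4]
freqs = np.array([250.0, 500.0, 1000.0, 2000.0])

# Delany-Bazley equivalent-fluid model (e^{jwt} convention) for the treatment.
X = rho0 * freqs / sigma
Zc = Z0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
k = (2 * np.pi * freqs / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)

for f, kf, Zcf in zip(freqs, k, Zc):
    kd = kf * d
    # Transfer matrix of a homogeneous layer (relates pressure/velocity on its two faces).
    T = np.array([[np.cos(kd), 1j * Zcf * np.sin(kd)],
                  [1j * np.sin(kd) / Zcf, np.cos(kd)]])
    Zs = T[0, 0] / T[1, 0]            # surface impedance with a rigid backing
    alpha = 1 - abs((Zs - Z0) / (Zs + Z0)) ** 2
    print(f"{f:6.0f} Hz   normal-incidence absorption = {alpha:.2f}")
```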

  16. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.

    PubMed

    Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable in young children that are not familiar with numerical values yet, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults.

  17. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    PubMed Central

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable in young children that are not familiar with numerical values yet, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children’s category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults. PMID:27322371

  18. Comparison of Low-Thrust Control Laws for Application in Planetocentric Space

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.

    2014-01-01

    Recent interest at NASA for the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. With such transfers involving transfer times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near-optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.

  19. Social and Non-Social Cueing of Visuospatial Attention in Autism and Typical Development

    ERIC Educational Resources Information Center

    Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.

    2011-01-01

    Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n = 26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous,…

  20. Oncologists' and oncology nurses' attitudes and practices towards family involvement in cancer consultations.

    PubMed

    Laidsaar-Powell, R; Butow, P; Bu, S; Fisher, A; Juraskova, I

    2017-01-01

    Family members (FMs) regularly attend cancer consultations with patients, may assume an array of roles (e.g. emotional, informational) and their involvement may result in benefits and/or challenges. Little is currently known about how oncology health professionals (HPs) view FMs who accompany a patient in consultations. This study aimed to explore the attitudes and practices of Australian oncologists and oncology nurses regarding family involvement in consultations. Eleven oncologists and 10 nurses from a range of subspecialties and tumour streams participated in semi-structured interviews. Interviews were transcribed and qualitatively analysed using framework analysis methods. Five relevant themes were identified: (1) the varied and dynamic nature of family roles during consultations; (2) positivity towards FMs; (3) the benefits of family involvement to the FM themselves; (4) current HP practices to facilitate positive family involvement; and (5) the challenges of family involvement in consultations and HP practices to manage them. Overall, participants held mostly positive attitudes towards family involvement. Although they identified a number of challenges which can arise when family are involved, many noted these situations are the exception, that there are strategies which can help to overcome the challenges, and that the benefits of family involvement typically outweigh the costs. © 2016 John Wiley & Sons Ltd.

  1. Strength tests for elite rowers: low- or high-repetition?

    PubMed

    Lawton, Trent W; Cronin, John B; McGuigan, Michael R

    2014-01-01

    The purpose of this project was to evaluate the utility of low- and high-repetition maximum (RM) strength tests used to assess rowers. Twenty elite heavyweight males (age 23.7 ± 4.0 years) performed four tests (5 RM, 30 RM, 60 RM and 120 RM) using leg press and seated arm pulling exercise on a dynamometer. Each test was repeated on two further occasions; 3 and 7 days from the initial trial. Per cent typical error (within-participant variation) and intraclass correlation coefficients (ICCs) were calculated using log-transformed repeated-measures data. High-repetition tests (30 RM, 60 RM and 120 RM) involving seated arm pulling exercise are not recommended for inclusion in an assessment battery, as they had unsatisfactory measurement precision (per cent typical error > 5% or ICC < 0.9). Conversely, low-repetition tests (5 RM) involving leg press and seated arm pulling exercises could be used to assess elite rowers (per cent typical error ≤ 5% and ICC ≥ 0.9); however, only 5 RM leg pressing met criteria (per cent typical error = 2.7%, ICC = 0.98) for research involving small samples (n = 20). In summary, low-repetition 5 RM strength testing offers greater utility for assessing rowers, as it can be used to measure upper- and lower-body strength; however, only the leg press exercise is recommended for research involving small squads of elite rowers.
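
    For readers unfamiliar with these reliability statistics, the sketch below computes per cent typical error (as a coefficient of variation from log-transformed scores) and a consistency-type ICC from a subjects-by-trials array; the data are hypothetical and the variance-component bookkeeping is one common convention, not necessarily the authors' exact analysis.

```python
import numpy as np

def reliability(trials):
    """Per cent typical error and ICC(3,1) from an (n_subjects x n_trials) array.

    Uses the usual log-transform approach for typical error expressed as a
    coefficient of variation (a sketch of the kind of analysis described in
    the abstract, not the authors' exact code).
    """
    logs = np.log(trials)
    n, k = logs.shape
    grand = logs.mean()
    row_means, col_means = logs.mean(axis=1), logs.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)          # between subjects
    ss_cols = n * np.sum((col_means - grand) ** 2)          # between trials
    ss_err = np.sum((logs - grand) ** 2) - ss_rows - ss_cols
    msr, mse = ss_rows / (n - 1), ss_err / ((n - 1) * (k - 1))
    typical_error_pct = 100 * (np.exp(np.sqrt(mse)) - 1)    # within-subject CV%
    icc = (msr - mse) / (msr + (k - 1) * mse)               # ICC(3,1), consistency
    return typical_error_pct, icc

# Hypothetical 5 RM leg-press loads (kg) for 8 rowers over 3 test occasions.
loads = np.array([[250, 255, 252], [230, 228, 233], [270, 268, 272], [210, 214, 212],
                  [260, 258, 261], [240, 243, 239], [225, 227, 226], [255, 252, 257]])
te, icc = reliability(loads)
print(f"per cent typical error = {te:.1f}%, ICC = {icc:.2f}")
```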

  2. An evaluation of the effects of high visual taskload on the separate behaviors involved in complex monitoring performance.

    DOT National Transportation Integrated Search

    1988-01-01

    Operational monitoring situations, in contrast to typical laboratory vigilance tasks, generally involve more than just stimulus detection and recognition. They frequently involve complex multidimensional discriminations, interpretations of significan...

  3. Measurement of Density, Sound Velocity, Surface Tension, and Viscosity of Freely Suspended Supercooled Liquids

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.

    1995-01-01

    Non-contact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow the deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl, succinonitrile, as well as higher temperature melts such as molten indium, aluminum and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The resulting magnitude of maximum thermal supercooling achieved has ranged between 10% and 15% of the absolute temperature of the melting point for the materials mentioned above. The physical properties measurement methods have been mostly novel approaches, and the typical accuracy achieved has not yet matched that of their standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread, and as we gain a better understanding of the physics of levitated liquid samples.

  4. Measurement of density, sound velocity, surface tension, and viscosity of freely suspended supercooled liquids

    NASA Astrophysics Data System (ADS)

    Trinh, E. H.; Ohsaka, K.

    1995-03-01

    Noncontact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow the deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl, succinonitrile, as well as higher temperature melts such as molten indium, aluminum, and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The resulting magnitude of maximum thermal supercooling achieved has ranged between 10% and 15% of the absolute temperature of the melting point for the materials mentioned above. The methods for measuring the physical properties have been mostly novel approaches, and the typical accuracy achieved has not yet matched the standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread and as we gain a better understanding of the physics of levitated liquid samples.

  5. Advanced Methods for Determining Prediction Uncertainty in Model-Based Prognostics with Application to Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Sankararaman, Shankar

    2013-01-01

    Prognostics is centered on predicting the time of and time until adverse events in components, subsystems, and systems. It typically involves both a state estimation phase, in which the current health state of a system is identified, and a prediction phase, in which the state is projected forward in time. Since prognostics is mainly a prediction problem, prognostic approaches cannot avoid uncertainty, which arises due to several sources. Prognostics algorithms must both characterize this uncertainty and incorporate it into the predictions so that informed decisions can be made about the system. In this paper, we describe three methods to solve these problems, including Monte Carlo-, unscented transform-, and first-order reliability-based methods. Using a planetary rover as a case study, we demonstrate and compare the different methods in simulation for battery end-of-discharge prediction.
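
    As a minimal illustration of the Monte Carlo approach to prediction uncertainty (the simplest of the three methods listed), the sketch below propagates a deliberately simplified battery discharge model under sampled uncertainty in capacity, initial state of charge, and future load, and summarizes the resulting end-of-discharge distribution; all parameter values are invented and the rover's actual battery model is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, dt = 5000, 1.0                      # Monte Carlo trajectories, time step [s]

# Simplified battery model (illustrative only, not the model in the paper):
# state of charge decreases under a load current that is itself uncertain.
capacity = rng.normal(2.2 * 3600, 0.05 * 3600, n_samples)    # capacity [A s]
soc = rng.normal(0.95, 0.02, n_samples)                      # initial state estimate
current = rng.normal(1.8, 0.15, n_samples)                   # future load current [A]

eod = np.full(n_samples, np.nan)               # end-of-discharge time per sample
for step in range(1, 8000):
    soc -= current * dt / capacity             # propagate each sampled trajectory
    hit = np.isnan(eod) & (soc <= 0.2)         # EOD threshold on state of charge
    eod[hit] = step * dt

eod = eod[~np.isnan(eod)]
print(f"median EOD = {np.median(eod):.0f} s, "
      f"90% interval = [{np.percentile(eod, 5):.0f}, {np.percentile(eod, 95):.0f}] s")
```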

  6. Low-Cost Blast Wave Generator for Studies of Hearing Loss and Brain Injury: Blast Wave Effects in Closed Spaces

    PubMed Central

    Newman, Andrew J.; Hayes, Sarah H.; Rao, Abhiram S.; Allman, Brian L.; Manohar, Senthilvelan; Ding, Dalian; Stolzberg, Daniel; Lobarinas, Edward; Mollendorf, Joseph C.; Salvi, Richard

    2015-01-01

    Background Military personnel and civilians living in areas of armed conflict have increased risk of exposure to blast overpressures that can cause significant hearing loss and/or brain injury. The equipment used to simulate comparable blast overpressures in animal models within laboratory settings is typically very large and prohibitively expensive. New Method To overcome the fiscal and space limitations introduced by previously reported blast wave generators, we developed a compact, low-cost blast wave generator to investigate the effects of blast exposures on the auditory system and brain. Results The blast wave generator was constructed largely from off the shelf components, and reliably produced blasts with peak sound pressures of up to 198 dB SPL (159.3 kPa) that were qualitatively similar to those produced from muzzle blasts or explosions. Exposure of adult rats to 3 blasts of 188 dB peak SPL (50.4 kPa) resulted in significant loss of cochlear hair cells, reduced outer hair cell function and a decrease in neurogenesis in the hippocampus. Comparison to existing methods Existing blast wave generators are typically large, expensive, and are not commercially available. The blast wave generator reported here provides a low-cost method of generating blast waves in a typical laboratory setting. Conclusions This compact blast wave generator provides scientists with a low cost device for investigating the biological mechanisms involved in blast wave injury to the rodent cochlea and brain that may model many of the damaging effects sustained by military personnel and civilians exposed to intense blasts. PMID:25597910

  7. A general method for copper-catalyzed arene cross-dimerization.

    PubMed

    Do, Hien-Quang; Daugulis, Olafs

    2011-08-31

    A general method for a highly regioselective copper-catalyzed cross-coupling of two aromatic compounds using iodine as an oxidant has been developed. The reactions involve an initial iodination of one arene followed by arylation of the most acidic C-H bond of the other coupling component. Cross-coupling of electron-rich arenes, electron-poor arenes, and five- and six-membered heterocycles is possible in many combinations. Typically, a 1/1.5 to 1/3 ratio of coupling components is used, in contrast to existing methodology that often employs a large excess of one of the arenes. Common functionalities such as ester, ketone, aldehyde, ether, nitrile, nitro, and amine are well-tolerated.

  8. A General Method for Copper-Catalyzed Arene Cross-Dimerization

    PubMed Central

    Do, Hien-Quang; Daugulis, Olafs

    2011-01-01

    A general method for a highly regioselective, copper-catalyzed cross-coupling of two aromatic compounds by using iodine oxidant has been developed. The reactions involve an initial iodination of one arene followed by arylation of the most acidic C-H bond of the other coupling component. Cross-coupling of electron-rich arenes, electron-poor arenes, five- and six-membered heterocycles is possible in many combinations. Typically, 1/1.5 to 1/3 ratio of coupling components is used in contrast to existing methodology that often employs a large excess of one of the arenes. Common functionalities such as ester, ketone, aldehyde, ether, nitrile, nitro, and amine are well-tolerated. PMID:21823581

  9. Mental chronometry with simple linear regression.

    PubMed

    Chen, J Y

    1997-10-01

    Typically, mental chronometry is performed by means of introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involved a selective effect and with a task (word superiority effect) that involved a global effect. The results indicate (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results ratify the regression approach as a useful method for doing mental chronometry.
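
    A small simulation makes the diagnostic logic concrete: under an additive (selective) effect the regression of treatment RTs on baseline RTs shifts in intercept with slope near 1, whereas under a proportional (global) effect the slope changes with intercept near 0. The numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
baseline_rt = rng.normal(550, 60, 40)                 # baseline response times [ms]

# Selective (additive) effect: one extra stage adds a roughly constant 120 ms.
selective_rt = baseline_rt + 120 + rng.normal(0, 15, 40)
# Global (proportional) effect: every stage is slowed by about 20%.
global_rt = 1.20 * baseline_rt + rng.normal(0, 15, 40)

for label, rt in [("selective", selective_rt), ("global", global_rt)]:
    slope, intercept = np.polyfit(baseline_rt, rt, 1)  # simple linear regression
    print(f"{label:9s}: slope = {slope:.2f}, intercept = {intercept:.0f} ms")
# An additive effect shows up in the intercept (slope near 1); a proportional
# effect shows up in the slope (intercept near 0), which is the diagnostic
# contrast the regression approach exploits.
```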

  10. A Chebyshev Collocation Method for Moving Boundaries, Heat Transfer, and Convection During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.

    1994-01-01

    Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.
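
    As a minimal, self-contained illustration of Chebyshev pseudospectral collocation (only the spatial discretization, without the moving-boundary, domain-decomposition, or Picard machinery of the paper), the sketch below builds the standard Chebyshev differentiation matrix and solves a two-point boundary value problem, showing the spectral accuracy that motivates the approach.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and collocation points on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative row sums on the diagonal
    return D, x

# Two-point boundary value problem u'' = exp(4x), u(-1) = u(1) = 0.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # impose homogeneous Dirichlet conditions
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))
exact = (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16
print(f"max error with N = {N}: {np.abs(u - exact).max():.2e}")   # spectral accuracy
```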

  11. Propellant Mass Fraction Calculation Methodology for Launch Vehicles and Application to Ares Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
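
    A minimal sketch of the bookkeeping differences discussed above: the fundamental definition uses only total propellant and dry mass, while refined variants move residuals and reserves out of the usable propellant. The numbers and the exact treatment of unusable propellant below are assumptions for illustration, not the paper's specific methodology.

```python
def pmf_fundamental(m_prop, m_dry):
    """Fundamental definition: total propellant over total loaded stage mass."""
    return m_prop / (m_prop + m_dry)

def pmf_usable(m_prop, m_dry, m_unusable):
    """One plausible refinement (an assumption, not the paper's exact bookkeeping):
    residuals/reserves are treated as inert mass rather than usable propellant."""
    usable = m_prop - m_unusable
    return usable / (m_prop + m_dry)

# Hypothetical stage masses in kg, for illustration only.
print(f"fundamental pmf = {pmf_fundamental(250_000, 28_000):.3f}")
print(f"usable-propellant pmf = {pmf_usable(250_000, 28_000, 3_000):.3f}")
```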

  12. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code; 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity; 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications; and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.

  13. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.

  14. Processing of energy materials in electromagnetic field

    NASA Astrophysics Data System (ADS)

    Rodzevich, A. P.; Kuzmina, L. V.; Gazenaur, E. G.; Krasheninin, V. I.

    2015-09-01

    This paper presents research results on the combined impact of mechanical stress and an electromagnetic field on the defect structure of energy materials. Silver azide, quite a typical energy material and a model system in solid-state chemistry, was chosen as the object of research. According to the experiments, the combined effect of a magnetic field and mechanical stress in silver azide crystals promotes dislocation multiplication, breakaway from stoppers, movement of dislocations, and the generation of superlattice dislocations (micro-cracks). A method of mechanical and electrical strengthening has been developed that involves changing the density of dislocations in whiskers.

  15. Inferring population history with DIY ABC: a user-friendly approach to approximate Bayesian computation.

    PubMed

    Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud

    2008-12-01

    Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
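
    The core ABC rejection idea behind DIY ABC can be sketched in a few lines: draw parameters from the prior, simulate data, reduce them to summary statistics, and keep only the draws whose simulated summaries fall close to the observed ones. The toy example below uses a normal model with the sample mean as the summary statistic; real DIY ABC scenarios rely on coalescent simulation and many microsatellite summary statistics.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data reduced to a summary statistic. The toy model is a normal
# sample summarized by its mean, standing in for a population-genetic scenario.
observed = rng.normal(2.0, 1.0, 200)
s_obs = observed.mean()

def simulate(theta, n=200):
    """Toy simulator: generate a dataset under parameter theta and summarize it."""
    return rng.normal(theta, 1.0, n).mean()

# ABC rejection: sample parameters from the prior, keep those whose simulated
# summary statistic lands within the tolerance of the observed one.
n_draws, tolerance = 20_000, 0.05
prior_draws = rng.uniform(-5, 5, n_draws)
summaries = np.array([simulate(t) for t in prior_draws])
accepted = prior_draws[np.abs(summaries - s_obs) < tolerance]

print(f"accepted {accepted.size} draws; posterior mean = {accepted.mean():.2f}, "
      f"95% interval = ({np.percentile(accepted, 2.5):.2f}, "
      f"{np.percentile(accepted, 97.5):.2f})")
```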

  16. Spatially Interpolated Disease Prevalence Estimation Using Collateral Indicators of Morbidity and Ecological Risk

    PubMed Central

    Congdon, Peter

    2013-01-01

    This paper considers estimation of disease prevalence for small areas (neighbourhoods) when the available observations on prevalence are for an alternative partition of a region, such as service areas. Interpolation to neighbourhoods uses a kernel method extended to take account of two types of collateral information. The first is morbidity and service use data, such as hospital admissions, observed for neighbourhoods. Variations in morbidity and service use are expected to reflect prevalence. The second type of collateral information is ecological risk factors (e.g., pollution indices) that are expected to explain variability in prevalence in service areas, but are typically observed only for neighbourhoods. An application involves estimating neighbourhood asthma prevalence in a London health region involving 562 neighbourhoods and 189 service (primary care) areas. PMID:24129116
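
    A bare-bones version of the kernel interpolation step (without the collateral morbidity and ecological-risk extensions that are the paper's contribution) can be written as a Gaussian-kernel weighted average from service-area centroids to neighbourhood points; the coordinates, prevalences, and bandwidth below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical geography: 30 service areas with known prevalence, 100
# neighbourhoods at which prevalence is to be estimated.
service_xy = rng.uniform(0, 10, (30, 2))
service_prev = rng.uniform(0.04, 0.10, 30)
neigh_xy = rng.uniform(0, 10, (100, 2))

def kernel_interpolate(target_xy, source_xy, source_val, bandwidth=2.0):
    """Gaussian-kernel interpolation from source areas to target points.

    Only the basic kernel step is shown; the paper extends it with collateral
    morbidity/service-use data and ecological risk factors, omitted here.
    """
    d2 = ((target_xy[:, None, :] - source_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    return (w * source_val).sum(axis=1) / w.sum(axis=1)

neigh_prev = kernel_interpolate(neigh_xy, service_xy, service_prev)
print(f"estimated neighbourhood prevalence: min {neigh_prev.min():.3f}, "
      f"max {neigh_prev.max():.3f}")
```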

  17. Spatially interpolated disease prevalence estimation using collateral indicators of morbidity and ecological risk.

    PubMed

    Congdon, Peter

    2013-10-14

    This paper considers estimation of disease prevalence for small areas (neighbourhoods) when the available observations on prevalence are for an alternative partition of a region, such as service areas. Interpolation to neighbourhoods uses a kernel method extended to take account of two types of collateral information. The first is morbidity and service use data, such as hospital admissions, observed for neighbourhoods. Variations in morbidity and service use are expected to reflect prevalence. The second type of collateral information is ecological risk factors (e.g., pollution indices) that are expected to explain variability in prevalence in service areas, but are typically observed only for neighbourhoods. An application involves estimating neighbourhood asthma prevalence in a London health region involving 562 neighbourhoods and 189 service (primary care) areas.

  18. Fast Enzymatic Processing of Proteins for MS Detection with a Flow-through Microreactor

    PubMed Central

    Lazar, Iulia M.; Deng, Jingren; Smith, Nicole

    2016-01-01

    The vast majority of mass spectrometry (MS)-based protein analysis methods involve an enzymatic digestion step prior to detection, typically with trypsin. This step is necessary for the generation of small molecular weight peptides, generally with MW < 3,000-4,000 Da, that fall within the effective scan range of mass spectrometry instrumentation. Conventional protocols involve O/N enzymatic digestion at 37 ºC. Recent advances have led to the development of a variety of strategies, typically involving the use of a microreactor with immobilized enzymes or of a range of complementary physical processes that reduce the time necessary for proteolytic digestion to a few minutes (e.g., microwave or high-pressure). In this work, we describe a simple and cost-effective approach that can be implemented in any laboratory for achieving fast enzymatic digestion of a protein. The protein (or protein mixture) is adsorbed on C18-bonded reversed-phase high performance liquid chromatography (HPLC) silica particles preloaded in a capillary column, and trypsin in aqueous buffer is infused over the particles for a short period of time. To enable on-line MS detection, the tryptic peptides are eluted with a solvent system with increased organic content directly in the MS ion source. This approach avoids the use of high-priced immobilized enzyme particles and does not necessitate any aid for completing the process. Protein digestion and complete sample analysis can be accomplished in less than ~3 min and ~30 min, respectively. PMID:27078683

  19. Fast Enzymatic Processing of Proteins for MS Detection with a Flow-through Microreactor.

    PubMed

    Lazar, Iulia M; Deng, Jingren; Smith, Nicole

    2016-04-06

    The vast majority of mass spectrometry (MS)-based protein analysis methods involve an enzymatic digestion step prior to detection, typically with trypsin. This step is necessary for the generation of small molecular weight peptides, generally with MW < 3,000-4,000 Da, that fall within the effective scan range of mass spectrometry instrumentation. Conventional protocols involve O/N enzymatic digestion at 37 ºC. Recent advances have led to the development of a variety of strategies, typically involving the use of a microreactor with immobilized enzymes or of a range of complementary physical processes that reduce the time necessary for proteolytic digestion to a few minutes (e.g., microwave or high-pressure). In this work, we describe a simple and cost-effective approach that can be implemented in any laboratory for achieving fast enzymatic digestion of a protein. The protein (or protein mixture) is adsorbed on C18-bonded reversed-phase high performance liquid chromatography (HPLC) silica particles preloaded in a capillary column, and trypsin in aqueous buffer is infused over the particles for a short period of time. To enable on-line MS detection, the tryptic peptides are eluted with a solvent system with increased organic content directly in the MS ion source. This approach avoids the use of high-priced immobilized enzyme particles and does not necessitate any aid for completing the process. Protein digestion and complete sample analysis can be accomplished in less than ~3 min and ~30 min, respectively.

  20. Seeking Beta: Experimental Considerations and Theoretical Implications Regarding the Detection of Curvature in Dose-Response Relationships for Chromosome Aberrations.

    PubMed

    Shuryak, Igor; Loucas, Bradford D; Cornforth, Michael N

    2017-01-01

    The concept of curvature in dose-response relationships figures prominently in radiation biology, encompassing a wide range of interests including radiation protection, radiotherapy and fundamental models of radiation action. In this context, the ability to detect even small amounts of curvature becomes important. Standard (ST) statistical approaches used for this purpose typically involve least-squares regression, followed by a test on sums of squares. Because we have found that these methods are not particularly robust, we investigated an alternative information theoretic (IT) approach, which involves Poisson regression followed by information-theoretic model selection. Our first objective was to compare the performances of the ST and IT methods by using them to analyze mFISH data on gamma-ray-induced simple interchanges in human lymphocytes, and on Monte Carlo simulated data. Real and simulated data sets that contained small-to-moderate curvature were deliberately selected for this exercise. The IT method tended to detect curvature with higher confidence than the ST method. The finding of curvature in the dose response for true simple interchanges is discussed in the context of fundamental models of radiation action. Our second objective was to optimize the design of experiments aimed specifically at detecting curvature. We used Monte Carlo simulation to investigate the following parameters. Constrained by available resources (i.e., the total number of cells to be scored) these include: the optimal number of dose points to use; the best way to apportion the total number of cells among these dose points; and the spacing of dose intervals. Counterintuitively, our simulation results suggest that 4-5 radiation doses were typically optimal, whereas adding more dose points may actually prove detrimental. Superior results were also obtained by implementing unequal dose spacing and unequal distributions in the number of cells scored at each dose.
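
    A hedged sketch of the information-theoretic (IT) style of analysis described above: fit Poisson regressions with and without a quadratic dose term and compare AIC values, with the number of cells scored entering as an offset. The dose points, cell counts, and yield coefficients are invented, and the default log link is used for simplicity even though dose-response modeling often uses an identity link; this illustrates the model-selection step, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated aberration counts with a small quadratic (curvature) component:
# mean aberrations per cell grow with dose, scored over many cells per dose.
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # Gy
cells = np.array([5000, 4000, 3000, 2000, 1000])        # cells scored per dose
mean_per_cell = 0.001 + 0.02 * doses + 0.004 * doses**2
counts = rng.poisson(mean_per_cell * cells)

exposure = np.log(cells)                                # offset: counts scale with cells scored
X_lin = sm.add_constant(doses)                          # dose-only model
X_quad = sm.add_constant(np.column_stack([doses, doses**2]))  # adds a curvature term

fit_lin = sm.GLM(counts, X_lin, family=sm.families.Poisson(), offset=exposure).fit()
fit_quad = sm.GLM(counts, X_quad, family=sm.families.Poisson(), offset=exposure).fit()

# Information-theoretic model selection: a clearly lower AIC for the model
# with the quadratic term signals detectable curvature.
print(f"AIC linear = {fit_lin.aic:.1f}, AIC with quadratic term = {fit_quad.aic:.1f}")
```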

  1. Making Complex Electrically Conductive Patterns on Cloth

    NASA Technical Reports Server (NTRS)

    Chu, Andrew; Fink, Patrick W.; Dobbins, Justin A.; Lin, Greg Y.; Scully, Robert C.; Trevino, Robert

    2008-01-01

    A method for automated fabrication of flexible, electrically conductive patterns on cloth substrates has been demonstrated. Products developed using this method, or related prior methods, are instances of a technology known as 'e-textiles,' in which electrically conductive patterns are formed in, and on, textiles. For many applications, including high-speed digital circuits, antennas, and radio frequency (RF) circuits, an e-textile method should be capable of providing high surface conductivity, tight tolerances for control of characteristic impedances, and geometrically complex conductive patterns. Unlike prior methods, the present method satisfies all three of these criteria. Typical patterns can include such circuit structures as RF transmission lines, antennas, filters, and other conductive patterns equivalent to those of conventional printed circuits. The present method overcomes the limitations of the prior methods for forming the equivalent of printed circuits on cloth. A typical fabrication process according to the present method involves selecting the appropriate conductive and non-conductive fabric layers to build the e-textile circuit. The present method uses commercially available woven conductive cloth with established surface conductivity specifications. Dielectric constant, loss tangent, and thickness are some of the parameters to be considered for the non-conductive fabric layers. The circuit design of the conductive woven fabric is secured onto a non-conductive fabric layer using sewing, embroidery, and/or adhesive means. The portion of the conductive fabric that is not part of the circuit is next cut from the desired circuit using an automated machine such as a printed-circuit-board milling machine or a laser cutting machine. Fiducials can be used to align the circuit and the cutting machine. Multilayer circuits can be built starting with the inner layer and using conductive thread to make electrical connections between layers.

  2. Spectra of conditionalization and typicality in the multiverse

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2016-02-01

    An approach to testing theories describing a multiverse, that has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, will assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories, is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.

  3. A DFFD simulation method combined with the spectral element method for solid-fluid-interaction problems

    NASA Astrophysics Data System (ADS)

    Chen, Li-Chieh; Huang, Mei-Jiau

    2017-02-01

    A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the DFFD (direct forcing fictitious domain) method, with the spectral element method; the former is employed for efficiently capturing the two-way FSI (fluid-structure interaction) and the geometric flexibility of the latter is utilized for any possibly co-existing stationary and complicated solid or flow boundary. A pseudo body force is imposed within the solid domain to enforce the rigid body motion and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so-called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through the observed agreement of the simulation results for some typical flows with analytical solutions or the existing literature.

  4. Development of an SPE/CE method for analyzing HAAs

    USGS Publications Warehouse

    Zhang, L.; Capel, P.D.; Hozalski, R.M.

    2007-01-01

    The haloacetic acid (HAA) analysis methods approved by the US Environmental Protection Agency involve extraction and derivatization of HAAs (typically to their methyl ester form) and analysis by gas chromatography (GC) with electron capture detection (ECD). Concerns associated with these methods include the time and effort of the derivatization process, use of potentially hazardous chemicals or conditions during methylation, poor recoveries because of low extraction efficiencies for some HAAs or matrix effects from sulfate, and loss of tribromoacetic acid because of decarboxylation. The HAA analysis method introduced here uses solid-phase extraction (SPE) followed by capillary electrophoresis (CE) analysis. The method is accurate, reproducible, sensitive, relatively safe, and easy to perform, and avoids the use of large amounts of solvent for liquid-liquid extraction and the potential hazards and hassles of derivatization. The cost of analyzing HAAs using this method should be lower than the currently approved methods, and utilities with a GC/ECD can perform the analysis in-house.

  5. Typical Toddlers' Participation in “Just-in-Time” Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study

    PubMed Central

    Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-01-01

    Purpose Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary “just in time” on an AAC application with minimized demands. Method A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10–22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. Results All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Conclusions Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age. PMID:28586825

  6. Aerodynamic Performance Predictions of Single and Twin Jet Afterbodies

    NASA Technical Reports Server (NTRS)

    Carlson, John R.; Pao, S. Paul; Abdol-Hamid, Khaled S.; Jones, William T.

    1995-01-01

    The multiblock three-dimensional Navier-Stokes method PAB3D was utilized by the Component Integration Branch (formerly Propulsion Aerodynamics Branch) at the NASA-Langley Research Center in an international study sponsored by AGARD Working Group #17 for the assessment of the state-of-the-art of propulsion-airframe integration testing techniques and CFD prediction technologies. Three test geometries from ONERA involving fundamental flow physics and four geometries from NASA-LaRC involving realistic flow interactions of wing, body, tail, and jet plumes were chosen by the Working Group. An overview of results on four (1 ONERA and 3 LaRC) of the seven test cases is presented. External static pressures, integrated pressure drag and total drag were calculated for the Langley test cases and jet plume velocity profiles and turbulent viscous stresses were calculated for the ONERA test case. Only selected data from these calculations are presented in this paper. The complete data sets calculated by the participants will be presented in an AGARD summary report. Predicted surface static pressures compared favorably with experimental data for the Langley geometries. Predicted afterbody drag compared well with experiment. Predicted nozzle drag was typically low due to over-compression of the flow near the trailing edge. Total drag was typically high. Predicted jet plume quantities on the ONERA case compared generally well with data.

  7. Multidisciplinary, interdisciplinary, or dysfunctional? Team working in mixed-methods research.

    PubMed

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2008-11-01

    Combining qualitative and quantitative methods in a single study-otherwise known as mixed-methods research-is common. In health research these projects can be delivered by research teams. A typical scenario, for example, involves medical sociologists delivering qualitative components and researchers from medicine or health economics delivering quantitative components. We undertook semistructured interviews with 20 researchers who had worked on mixed-methods studies in health services research to explore the facilitators of and barriers to exploiting the potential of this approach. Team working emerged as a key issue, with three models of team working apparent: multidisciplinary, interdisciplinary, and dysfunctional. Interdisciplinary research was associated with integration of data or findings from the qualitative and quantitative components in both the final reports and the peer-reviewed publications. Methodological respect between team members and a principal investigator who valued integration emerged as essential to achieving integrated research outcomes.

  8. A variable pressure method for characterizing nanoparticle surface charge using pore sensors.

    PubMed

    Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff

    2012-04-03

    A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.

  9. Pretreatment of Cellulose By Electron Beam Irradiation Method

    NASA Astrophysics Data System (ADS)

    Jusri, N. A. A.; Azizan, A.; Ibrahim, N.; Salleh, R. Mohd; Rahman, M. F. Abd

    2018-05-01

    Pretreatment of lignocellulosic biomass (LCB) to produce biofuel has been conducted using various methods, including physical, chemical, physicochemical, and biological approaches. The conversion to bioethanol typically involves several steps: pretreatment, hydrolysis, fermentation, and separation. In this project, microcrystalline cellulose (MCC) was used in place of LCB, since cellulose is the most abundant component of LCB, to investigate the effectiveness of a new pretreatment method based on radiation technology. Irradiation at different doses (100 kGy to 1000 kGy) was conducted using electron beam accelerator equipment at Agensi Nuklear Malaysia. Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed to further understand the effect of the proposed pretreatment step on the composition of MCC. Through this method, namely IRR-LCB, an optimal pretreatment condition prior to biofuel production from LCB may be identified.

  10. Pitt-Hopkins Syndrome: A Review of Current Literature, Clinical Approach, and 23-Patient Case Series.

    PubMed

    Goodspeed, Kimberly; Newsom, Cassandra; Morris, Mary Ann; Powell, Craig; Evans, Patricia; Golla, Sailaja

    2018-03-01

    Pitt-Hopkins syndrome (PTHS) is a rare, genetic disorder caused by a molecular variant of TCF4 which is involved in embryologic neuronal differentiation. PTHS is characterized by syndromic facies, psychomotor delay, and intellectual disability. Other associated features include early-onset myopia, seizures, constipation, and hyperventilation-apneic spells. Many also meet criteria for autism spectrum disorder. Here the authors present a series of 23 PTHS patients with molecularly confirmed TCF4 variants and describe 3 unique individuals. The first carries a small deletion but does not exhibit the typical facial features nor the typical pattern of developmental delay. The second exhibits typical facial features, but has attained more advanced motor and verbal skills than other reported cases to date. The third displays typical features of PTHS, however inherited a large chromosomal duplication involving TCF4 from his unaffected father with somatic mosaicism. To the authors' knowledge, this is the first chromosomal duplication case reported to date.

  11. Risk determinants of small and medium-sized manufacturing enterprises (SMEs) - an exploratory study in New Zealand

    NASA Astrophysics Data System (ADS)

    Islam, Ariful; Tedford, Des

    2012-08-01

    The smooth running of small and medium-sized manufacturing enterprises (SMEs) presents a significant challenge irrespective of the technological and human resources they may have at their disposal. SMEs continuously encounter daily internal and external undesirable events and unwanted setbacks to their operations that detract from their business performance. These are referred to as `disturbances' in our research study. Among the disturbances, some are likely to create risks to the enterprises in terms of loss of production, manufacturing capability, human resources, market share, and, of course, economic losses. These are referred to as `risk determinants' on the basis of their correlation with risk indicators linked to operational, occupational, and economic risks. To deal with these risk determinants effectively, SMEs need a systematic approach, along with an appropriate set of tools, to identify and treat their potential effects. Initially, however, a strategic approach is required to identify typical risk determinants and their linkage with potential business risks. We therefore conducted this study to answer the research question: what are the typical risk determinants encountered by SMEs? We carried out an empirical investigation with a multi-method research approach (a combination of a questionnaire-based mail survey involving 212 SMEs and five in-depth case studies) in New Zealand. This paper presents a set of typical internal and external risk determinants that need special attention in order to minimize the operational risks of an SME.

  12. The Role of Frequency in Learning Morphophonological Alternations: Implications for Children With Specific Language Impairment

    PubMed Central

    Demuth, Katherine; Petocz, Peter

    2017-01-01

    Purpose The aim of this article was to explore how the type of allomorph (e.g., past tense buzz[d] vs. nod[əd]) influences the ability to perceive and produce grammatical morphemes in children with typical development and with specific language impairment (SLI). Method The participants were monolingual Australian English–speaking children. The SLI group included 13 participants (mean age = 5;7 [years;months]); the control group included 19 children with typical development (mean age = 5;4). Both groups performed a grammaticality judgment and elicited production task with the same set of nonce verbs in third-person singular and past tense forms. Results Five-year-old children are still learning to generalize morphophonological patterns to novel verbs, and syllabic /əz/ and /əd/ allomorphs are significantly more challenging to produce, particularly for the SLI group. The greater phonetic content of these syllabic forms did not enhance perception. Conclusions Acquisition of morphophonological patterns involving low-frequency allomorphs is still underway in 5-year-old children with typical development, and it is even more protracted in SLI populations, despite these patterns being highly predictable. Children with SLI will therefore benefit from targeted intervention with low-frequency allomorphs. PMID:28510615

  13. Biological Remediation of Petroleum Contaminants

    NASA Astrophysics Data System (ADS)

    Kuhad, Ramesh Chander; Gupta, Rishi

    Large volumes of hazardous wastes are generated in the form of oily sludges and contaminated soils during crude oil transportation and processing. Although many physical, chemical and biological treatment technologies are available for petroleum contaminants in soil, biological methods have been considered the most cost-effective. Practical biological remediation methods typically involve direct use of the microbes naturally occurring in the contaminated environment and/or cultured indigenous or modified microorganisms. Environmental and nutritional factors, including the properties of the soil, the chemical structure of the hydrocarbon(s), oxygen, water, nutrient availability, pH, temperature, and contaminant bioavailability, can significantly affect the rate and the extent of hydrocarbon biodegradation by microorganisms in contaminated soils. This chapter concisely discusses the major aspects of bioremediation of petroleum contaminants.

  14. Numerical investigation of finite-volume effects for the HVP

    NASA Astrophysics Data System (ADS)

    Boyle, Peter; Gülpers, Vera; Harrison, James; Jüttner, Andreas; Portelli, Antonin; Sachrajda, Christopher

    2018-03-01

    It is important to correct for finite-volume (FV) effects in the presence of QED, since these effects are typically large due to the long range of the electromagnetic interaction. We recently made the first lattice calculation of electromagnetic corrections to the hadronic vacuum polarisation (HVP). For the HVP, an analytical derivation of FV corrections involves a two-loop calculation which has not yet been carried out. We instead calculate the universal FV corrections numerically, using lattice scalar QED as an effective theory. We show that this method gives agreement with known analytical results for scalar mass FV effects, before applying it to calculate FV corrections for the HVP. This method for numerical calculation of FV effects is also widely applicable to quantities beyond the HVP.

  15. Different hunting strategies select for different weights in red deer.

    PubMed

    Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; San Miguel, Alfonso

    2005-09-22

    Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data.

  16. Involving Teachers in Charter School Governance: A Guide for State Policymakers

    ERIC Educational Resources Information Center

    Sam, Cecilia

    2008-01-01

    This guide for state policymakers examines teacher involvement in charter school governance. Teacher involvement is defined to include the gamut of decision-making roles not typically afforded teachers in traditional public schools, including founding schools, serving on governing boards, and engaging in site-based collective bargaining. Different…

  17. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
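
    A minimal sketch (Python) of how posterior predictive yield draws can be turned into a premium rate; the Gamma draws below merely stand in for output of a hierarchical spatio-temporal model, and the coverage level and units are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior predictive draws of next-season county yield (kg/ha),
# standing in for draws from a fitted hierarchical spatio-temporal model.
yield_draws = rng.gamma(shape=20.0, scale=120.0, size=50_000)

coverage = 0.70                               # assumed coverage level
guarantee = coverage * yield_draws.mean()     # guaranteed yield
shortfall = np.maximum(guarantee - yield_draws, 0.0)

# Actuarially fair premium rate: expected indemnity per unit of liability.
premium_rate = shortfall.mean() / guarantee
print(f"premium rate ≈ {premium_rate:.3%}")
```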

  18. Agent-Based Modeling in Molecular Systems Biology.

    PubMed

    Soheilypour, Mohammad; Mofrad, Mohammad R K

    2018-07-01

    Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.

  19. What methods are used to apply positive deviance within healthcare organisations? A systematic review

    PubMed Central

    Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca

    2016-01-01

    Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency of positive deviance must be assessed and compared with other quality improvement approaches. PROSPERO registration number CRD42014009365. PMID:26590198

  20. Initiating heavy-atom-based phasing by multi-dimensional molecular replacement.

    PubMed

    Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul

    2016-03-01

    To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.

  1. Initiating heavy-atom-based phasing by multi-dimensional molecular replacement

    PubMed Central

    Pedersen, Bjørn Panyella; Gourdon, Pontus; Liu, Xiangyu; Karlsen, Jesper Lykkegaard; Nissen, Poul

    2016-01-01

    To obtain an electron-density map from a macromolecular crystal the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and concomitant heavy-atom substructure determination. This is typically performed by dual-space methods, direct methods or Patterson-based approaches, which however may fail when only poorly diffracting derivative crystals are available. This is often the case for, for example, membrane proteins. Here, an approach for heavy-atom site identification based on a molecular-replacement parameter matrix (MRPM) is presented. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as different data sets and search models with different conformations. Results are scored by the ability to identify heavy-atom positions from anomalous difference Fourier maps. The strategy was successfully applied in the determination of a membrane-protein structure, the copper-transporting P-type ATPase CopA, when other methods had failed to determine the heavy-atom substructure. MRPM is well suited to proteins undergoing large conformational changes where multiple search models should be considered, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts. PMID:26960131

  2. Biogeographical characterization of Saccharomyces cerevisiae wine yeast by molecular methods

    PubMed Central

    Tofalo, Rosanna; Perpetuini, Giorgia; Schirone, Maria; Fasoli, Giuseppe; Aguzzi, Irene; Corsetti, Aldo; Suzzi, Giovanna

    2013-01-01

    Biogeography is the descriptive and explanatory study of spatial patterns and processes involved in the distribution of biodiversity. Without biogeography, it would be difficult to study the diversity of microorganisms because there would be no way to visualize patterns in variation. Saccharomyces cerevisiae, “the wine yeast,” is the most important species involved in alcoholic fermentation, and in vineyard ecosystems, it follows the principle of “everything is everywhere.” Agricultural practices such as farming (organic versus conventional) and floor management systems have selected different populations within this species that are phylogenetically distinct. In fact, recent ecological and geographic studies highlighted that unique strains are associated with particular grape varieties in specific geographical locations. These studies also highlighted that significant diversity and regional character, or ‘terroir,’ have been introduced into the winemaking process via this association. This diversity of wild strains preserves typicity, the high quality, and the unique flavor of wines. Recently, different molecular methods were developed to study population dynamics of S. cerevisiae strains in both vineyards and wineries. In this review, we will provide an update on the current molecular methods used to reveal the geographical distribution of S. cerevisiae wine yeast. PMID:23805132

  3. Non-radioactive TRF assay modifications to improve telomeric DNA detection efficiency in plants

    PubMed Central

    Nigmatullina, Liliia R.; Sharipova, Margarita R.; Shakirov, Eugene V.

    2016-01-01

    The length of telomeric DNA is often considered a cellular biomarker of aging and general health status. Several telomere length measuring assays have been developed, of which the most common is the Telomere Restriction Fragment (TRF) analysis, which typically involves the use of radioactively labeled oligonucleotide probes. While highly effective, this method potentially poses substantial health concerns and generates radioactive waste. Digoxigenin (DIG) alternatives to radioactive probes have been developed and used successfully in a number of assays. Here we optimize the DIG protocol to measure telomere length in the model plant Arabidopsis thaliana and present evidence that this approach can be used successfully to efficiently and accurately measure telomere length in plants. Specifically, hybridization temperature of 42 °C instead of the typical 55 °C appears to generate stronger signals. In addition, DIG incorporation at 5′-end instead of 3′-end of the labeled oligonucleotide greatly enhances signal. We conclude that non-radioactive TRF assays can be as efficient as radioactive methods in detecting and measuring telomere length in plants, making this assay suitable for medical and research laboratories unable to utilize radioactivity due to hazardous waste disposal and safety concerns. PMID:28133587

  4. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1985-01-01

    Background information, procedures, and typical results obtained are provided for two demonstrations. The first involves the colorful complexes of copper(II). The second involves reverse-phase separation of Food, Drug, and Cosmetic (FD & C) dyes using a solvent gradient. (JN)

  5. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
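
    A minimal Monte Carlo sketch (Python) of the kind of uncertainty propagation argued for above, loosely modeled on a foodborne-illness incidence calculation; the input distributions and values are assumptions for illustration, not the authors' estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Hypothetical inputs with judgment-based (non-sampling) uncertainty: cases
# reported to surveillance, an under-reporting multiplier, and the fraction
# of illness attributable to foodborne transmission.
reported_cases = rng.normal(loc=50_000, scale=5_000, size=n_draws)
underreporting = rng.triangular(left=10, mode=20, right=40, size=n_draws)
foodborne_frac = rng.uniform(low=0.3, high=0.6, size=n_draws)

# Propagate all uncertainty sources through the calculation at once.
incidence = reported_cases * underreporting * foodborne_frac

low, mid, high = np.percentile(incidence, [2.5, 50, 97.5])
print(f"median = {mid:,.0f}, 95% interval = [{low:,.0f}, {high:,.0f}]")
```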

  6. Risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions.

    PubMed

    Yang, Yu; Lian, Xin-Ying; Jiang, Yong-Hai; Xi, Bei-Dou; He, Xiao-Song

    2017-11-01

    Agricultural regions are a significant source of groundwater pesticide pollution. To ensure that agricultural regions with a significantly high risk of groundwater pesticide contamination are properly managed, a risk-based ranking method related to groundwater pesticide contamination is needed. In the present paper, a risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions was established. The method encompasses 3 phases, including indicator selection, characterization, and classification. In the risk ranking index system employed here, 17 indicators involving the physicochemical properties, environmental behavior characteristics, pesticide application methods, and inherent vulnerability of groundwater in the agricultural region were selected. The boundary of each indicator was determined using K-means cluster analysis based on a survey of a typical agricultural region and the physical and chemical properties of 300 typical pesticides. The total risk characterization was calculated by multiplying the risk value of each indicator, which could effectively avoid the subjectivity of index weight calculation and identify the main factors associated with the risk. The results indicated that the risk for groundwater pesticide contamination from agriculture in a region could be ranked into 4 classes from low to high risk. This method was applied to an agricultural region in Jiangsu Province, China, and it showed that this region had a relatively high risk for groundwater contamination from pesticides, and that the pesticide application method was the primary factor contributing to the relatively high risk. The risk ranking method was determined to be feasible, valid, and able to provide reference data related to the risk management of groundwater pesticide pollution from agricultural regions. Integr Environ Assess Manag 2017;13:1052-1059. © 2017 SETAC.
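
    A small sketch (Python) of two steps described above, under assumed data: K-means clustering sets ordered class boundaries for one indicator, and the total risk is the product of indicator risk values; the indicator values and the other two risk scores are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical indicator: soil half-life (days) for 300 pesticides; in the
# study, K-means on survey data of this kind sets each indicator's boundaries.
half_life = rng.lognormal(mean=3.0, sigma=0.8, size=(300, 1))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(half_life)
rank = {old: new + 1 for new, old in
        enumerate(np.argsort(km.cluster_centers_.ravel()))}   # risk values 1..4
half_life_risk = np.array([rank[label] for label in km.labels_])

# Total risk characterization for one hypothetical pesticide/region: the
# product of its indicator risk values (the other two scores are assumed).
application_method_risk = 4    # e.g. applied directly to soil
vulnerability_risk = 2         # e.g. moderately vulnerable aquifer
total_risk = int(half_life_risk[0]) * application_method_risk * vulnerability_risk
print(f"half-life class {half_life_risk[0]}, total risk score {total_risk}")
```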

  7. Affinity+: Semi-Structured Brainstorming on Large Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burtner, Edwin R.; May, Richard A.; Scarberry, Randall E.

    2013-04-27

    Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ Concept was designed to improve the collaborative brainstorming process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to sort and group them on a large touch screen manually. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities.

  8. An expert systems approach to automated fault management in a regenerative life support subsystem

    NASA Technical Reports Server (NTRS)

    Malin, J. T.; Lance, N., Jr.

    1986-01-01

    This paper describes FIXER, a prototype expert system for automated fault management in a regenerative life support subsystem typical of Space Station applications. The development project provided an evaluation of the use of expert systems technology to enhance controller functions in space subsystems. The software development approach permitted evaluation of the effectiveness of direct involvement of the expert in design and development. The approach also permitted intensive observation of the knowledge and methods of the expert. This paper describes the development of the prototype expert system and presents results of the evaluation.

  9. An enrichment, amplification, and sequence-based typing (EAST) approach for foodborne pathogen detection and surveillance

    USDA-ARS?s Scientific Manuscript database

    Introduction: Detection of foodborne pathogens typically involves microbiological enrichment with subsequent isolation and identification of a pure culture. This is typically followed by strain typing, which provides information critical to outbreak and source investigations. In the early 1990’s pul...

  10. Gaining Access.

    ERIC Educational Resources Information Center

    Wand, Sean; Thermos, Adam C.

    1998-01-01

    Explains the issues to consider before a college decides to purchase a card-access system. The benefits of automation, questions involving implementation, the criteria for technology selection, what typical card technology involves, privacy concerns, and the placement of card readers are discussed. (GR)

  11. An fMRI Study of Parietal Cortex Involvement in the Visual Guidance of Locomotion

    ERIC Educational Resources Information Center

    Billington, Jac; Field, David T.; Wilkie, Richard M.; Wann, John P.

    2010-01-01

    Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the "far road" and "near road" mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants…

  12. Inferring population history with DIY ABC: a user-friendly approach to approximate Bayesian computation

    PubMed Central

    Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A.; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud

    2008-01-01

    Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. Availability: The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc. Contact: j.cornuet@imperial.ac.uk Supplementary information: Supplementary data are also available at http://www.montpellier.inra.fr/CBGP/diyabc PMID:18842597
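
    DIY ABC itself is a dedicated program, but the rejection-sampling idea at the core of ABC can be sketched in a few lines of Python; the toy simulator, summary statistic, prior, and tolerance below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "observed" summary statistic (e.g. a mean heterozygosity-like quantity).
observed_summary = 0.42

def simulate_summary(theta: float) -> float:
    """Hypothetical simulator: draws data under parameter theta and
    returns a single summary statistic."""
    p = rng.beta(a=theta, b=theta, size=200)
    return float(np.mean(2.0 * p * (1.0 - p)))

# ABC rejection: draw parameters from the prior and keep those whose simulated
# summary falls within a tolerance of the observed summary.
prior_draws = rng.uniform(0.5, 10.0, size=20_000)
tolerance = 0.01
accepted = [t for t in prior_draws
            if abs(simulate_summary(t) - observed_summary) < tolerance]

print(f"accepted {len(accepted)} draws; posterior mean ≈ {np.mean(accepted):.2f}")
```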

  13. Integration of Gas Chromatography Mass Spectrometry Methods for Differentiating Ricin Preparation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wunschel, David S.; Melville, Angela M.; Ehrhardt, Christopher J.

    2012-05-17

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of the castor plant Ricinus communis. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatographic-mass spectrometric (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method and independent of the seed source. In particular the abundance of mannose, arabinose, fucose, ricinoleic acid and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation.

  14. Processes involved in the development of latent fingerprints using the cyanoacrylate fuming method.

    PubMed

    Lewis, L A; Smithwick, R W; Devault, G L; Bolinger, B; Lewis, S A

    2001-03-01

    Chemical processes involved in the development of latent fingerprints using the cyanoacrylate fuming method have been studied. Two major types of latent prints have been investigated: clean prints and oily prints. Scanning electron microscopy (SEM) has been used as a tool for determining the morphology of the polymer developed separately on clean and oily prints after cyanoacrylate fuming. A correlation between the chemical composition of an aged latent fingerprint, prior to development, and the quality of a developed fingerprint has been observed in the morphology. The moisture in the print prior to fuming has been found to be more important than the moisture in the air during fuming for the development of a useful latent print. In addition, the amount of time required to develop a high quality latent print has been found to be within 2 min. The cyanoacrylate polymerization process is extremely rapid. When heat is used to accelerate the fuming process, typically a period of 2 min is required to develop the print. The optimum development time depends upon the concentration of cyanoacrylate vapors within the enclosure.

  15. Security of patient data when decommissioning ultrasound systems.

    PubMed

    Moggridge, James

    2017-02-01

    Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.

  16. "It's the Most Important Thing--I Mean, the Schooling": Father Involvement in the Education of Children with Autism

    ERIC Educational Resources Information Center

    Potter, Carol

    2016-01-01

    Father involvement in education has been shown to result in a range of positive outcomes for typically developing children. However, the nature of paternal involvement in the education of children with disabilities and especially autism has been under-researched and is little understood. This study aimed to explore the nature of the involvement of…

  17. Implicit level set algorithms for modelling hydraulic fracture propagation.

    PubMed

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).

  18. Implicit level set algorithms for modelling hydraulic fracture propagation

    PubMed Central

    2016-01-01

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture ‘tip screen-out’; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research.  This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597787

  19. Scaling up of renewable chemicals.

    PubMed

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for the production of renewable chemicals from laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industry. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions must be managed successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. How Ag Nanospheres Are Transformed into AgAu Nanocages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreau, Liane M.; Schurman, Charles A.; Kewalramani, Sumit

    Bimetallic hollow, porous noble metal nanoparticles are of broad interest for biomedical, optical and catalytic applications. The most straightforward method for preparing such structures involves the reaction between HAuCl4 and well-formed Ag particles, typically spheres, cubes, or triangular prisms, yet the mechanism underlying their formation is poorly understood at the atomic scale. By combining in situ nanoscopic and atomic-scale characterization techniques (XAFS, SAXS, XRF, and electron microscopy) to follow the process, we elucidate a plausible reaction pathway for the conversion of citrate-capped Ag nanospheres to AgAu nanocages; importantly, the hollowing event cannot be explained by the nanoscale Kirkendall effect, nor by Galvanic exchange alone, two processes that have been previously proposed. We propose a modification of the bulk Galvanic exchange process that takes into account considerations that can only occur with nanoscale particles. This nanoscale Galvanic exchange process explains the novel morphological and chemical changes associated with the typically observed hollowing process.

  1. Autism spectrum disorder traits in typically developing emerging adults and associated parenting: A person-centered approach.

    PubMed

    McKinney, Cliff; Gadke, Daniel L; Malkin, Mallory L

    2018-02-15

    Research on parenting children with autism spectrum disorder (ASD) indicates these children receive parenting tailored to their condition. However, little is known about ASD in adulthood, especially in emerging adults at college, and how they are parented. The current study examined how emerging adults in a non-clinical typically-developing sample differed in their current perceptions of parenting as a function of ASD traits. Participants completed questionnaires about their current perceptions of parenting and self-reported ASD traits. Parenting characteristics assessed included parenting style, discipline, parent-child relationship quality, and parental distress. Results indicated that higher levels of self-reported ASD traits were associated with increasingly ineffective parenting characteristics including lower authoritative style, harsher discipline, poorer parent-child relationship quality (e.g., lower involvement), and higher parental distress. Researchers are encouraged to extend ASD research into adulthood by validating diagnostic methods with adults and investigating processes in adulthood that have been well-established in the childhood ASD literature.

  2. Security of patient data when decommissioning ultrasound systems

    PubMed Central

    2017-01-01

    Background Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. Methods In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system’s own delete function; full reinstallation of system software; the manufacturer’s own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. Results All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system’s own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer’s wiping service or the open source wiping software. Conclusion The typical method of reinstalling an ultrasound system’s software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system’s hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state. PMID:28228821

  3. Characteristics, Similarities, and Differences among Four-Year Cooperative Engineering Programs in the United States

    ERIC Educational Resources Information Center

    Egbert, Robert I.; Stone, Lorene H.; Adams, David L.

    2011-01-01

    Four-year cooperative engineering programs are becoming more common in the United States. Cooperative engineering programs typically involve a "parent" institution with an established engineering program and one or more "satellite" institutions which typically have few or no engineering programs and are located in an area where…

  4. Distributed Group Design Process: Lessons Learned.

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Ganesan, Radha

    A typical Web-based training development team consists of a project manager, an instructional designer, a subject-matter expert, a graphic artist, and a Web programmer. The typical scenario involves team members working together in the same setting during the entire design and development process. What happens when the team is distributed, that is…

  5. Injection Locking Techniques for Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Gathma, Timothy D.; Buckwalter, James F.

    2011-04-01

    Wideband spectrum analysis supports future communication systems that reconfigure and adapt to the capacity of the spectral environment. While test equipment manufacturers offer wideband spectrum analyzers with excellent sensitivity and resolution, these spectrum analyzers typically cannot offer acceptable size, weight, and power (SWAP). CMOS integrated circuits offer the potential to fully integrate spectrum analysis capability with analog front-end circuitry and digital signal processing on a single chip. Unfortunately, CMOS lacks high-Q passives and wideband resonator tunability that is necessary for heterodyne implementations of spectrum analyzers. As an alternative to the heterodyne receiver architectures, two nonlinear methods for performing wideband, low-power spectrum analysis are presented. The first method involves injecting the spectrum of interest into an array of injection-locked oscillators. The second method employs the closed loop dynamics of both injection locking and phase locking to independently estimate the injected frequency and power.

  6. Variable Structure Control of a Hand-Launched Glider

    NASA Technical Reports Server (NTRS)

    Anderson, Mark R.; Waszak, Martin R.

    2005-01-01

    Variable structure control system design methods are applied to the problem of aircraft spin recovery. A variable structure control law typically has two phases of operation. The reaching mode phase uses a nonlinear relay control strategy to drive the system trajectory to a pre-defined switching surface within the motion state space. The sliding mode phase involves motion along the surface as the system moves toward an equilibrium or critical point. Analysis results presented in this paper reveal that the conventional method for spin recovery can be interpreted as a variable structure controller with a switching surface defined at zero yaw rate. Application of Lyapunov stability methods show that deflecting the ailerons in the direction of the spin helps to insure that this switching surface is stable. Flight test results, obtained using an instrumented hand-launched glider, are used to verify stability of the reaching mode dynamics.
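
    A minimal sketch (Python) of the reaching-mode relay behaviour described above, assuming a toy first-order yaw-rate plant with made-up parameters and a switching surface at zero yaw rate; it is an illustration of the control structure, not the controller flown in the study.

```python
import numpy as np

# Assumed first-order yaw-rate dynamics: r_dot = a*r + b*u, with the
# switching surface s(r) = r = 0 (zero yaw rate).
a, b = 0.5, 2.0          # hypothetical plant parameters
u_max = 1.0              # relay (control surface) authority
dt, steps = 0.01, 1000

r = 2.0                  # initial yaw rate (spinning)
for _ in range(steps):
    s = r                          # switching function
    u = -u_max * np.sign(s)        # reaching-mode relay control law
    r += (a * r + b * u) * dt      # integrate the plant one step

print(f"final yaw rate ≈ {r:+.3f} (trajectory chatters about the surface s = 0)")
```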

  7. Flexible and stackable terahertz metamaterials via silver-nanoparticle inkjet printing

    NASA Astrophysics Data System (ADS)

    Kashiwagi, K.; Xie, L.; Li, X.; Kageyama, T.; Miura, M.; Miyashita, H.; Kono, J.; Lee, S.-S.

    2018-04-01

    There is presently much interest in tunable, flexible, or reconfigurable metamaterial structures that work in the terahertz frequency range. They can be useful for a range of applications, including spectroscopy, sensing, imaging, and communications. Various methods based on microelectromechanical systems have been used for fabricating terahertz metamaterials, but they typically require high-cost facilities and involve a number of time-consuming and intricate processes. Here, we demonstrate a simple, robust, and cost-effective method for fabricating flexible and stackable multiresonant terahertz metamaterials, using silver nanoparticle inkjet printing. Using this method, we designed and fabricated two arrays of split-ring resonators (SRRs) having different resonant frequencies on separate sheets of paper and then combined the two arrays by stacking. Through terahertz time-domain spectroscopy, we observed resonances at the frequencies expected for the individual SRR arrays as well as at a new frequency due to coupling between the two SRR arrays.

  8. ASPECTS: an automation-assisted SPE method development system.

    PubMed

    Li, Ming; Chou, Judy; King, Kristopher W; Yang, Liyu

    2013-07-01

    A conventional SPE method development (MD) process typically involves deciding the chemistry of the sorbent and eluent based on information about the analyte; experimentally preparing and trying out various combinations of adsorption chemistry and elution conditions; quantitatively evaluating the various conditions; and comparing quantitative results from all combinations of conditions to select the best condition for method qualification. The second and fourth steps have mostly been performed manually until now. We developed an automation-assisted system that expedites the conventional SPE MD process by automating 99% of the second step, and expedites the fourth step by automatically processing the results data and presenting it to the analyst in a user-friendly format. The automation-assisted SPE MD system greatly reduces the manual labor in SPE MD work, prevents analyst errors from causing misinterpretation of quantitative results, and shortens data analysis and interpretation time.
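
    A rough sketch (Python) of the comparison step under assumed data: recoveries for each sorbent/eluent combination are tabulated and the best condition selected. In the real system these values come from automated extractions and quantitation; the chemistries and numbers below are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

sorbents = ["C18", "HLB", "mixed-mode cation"]     # hypothetical sorbent chemistries
eluents = ["MeOH", "ACN", "MeOH + 2% NH4OH"]       # hypothetical elution conditions

# Stand-in recoveries (%); in practice these are measured for each combination.
results = {(s, e): float(rng.uniform(40, 110))
           for s, e in itertools.product(sorbents, eluents)}

for (s, e), rec in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{s:>18s} / {e:<16s} recovery = {rec:5.1f}%")

best_sorbent, best_eluent = max(results, key=results.get)
print(f"\nselected condition: {best_sorbent} with {best_eluent}")
```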

  9. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  10. Resting-state fMRI data reflects default network activity rather than null data: A defense of commonly employed methods to correct for multiple comparisons.

    PubMed

    Slotnick, Scott D

    2017-07-01

    Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
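
    For intuition about familywise error on genuinely null data, the sketch below (Python) simulates many independent voxel-wise tests and compares uncorrected thresholding with a Bonferroni correction; it illustrates the general multiple-comparisons idea rather than the cluster-based corrections at issue in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n_voxels = 10_000      # stand-in for the >1e5 tests of a typical fMRI analysis
n_experiments = 500    # repeat the whole "study" to estimate familywise error
alpha = 0.05

fwe_uncorrected = 0
fwe_bonferroni = 0
for _ in range(n_experiments):
    z = rng.standard_normal(n_voxels)        # truly null data: no real effects
    p = 2.0 * stats.norm.sf(np.abs(z))
    fwe_uncorrected += np.any(p < alpha)
    fwe_bonferroni += np.any(p < alpha / n_voxels)   # Bonferroni correction

print(f"familywise error, uncorrected: {fwe_uncorrected / n_experiments:.2f}")
print(f"familywise error, Bonferroni:  {fwe_bonferroni / n_experiments:.2f}")
```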

  11. Method of improving the performance of lenses for use in thermal infrared

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Klee, H. W.

    1980-10-01

    A method is described whereby the field performance of an all-germanium triplet, as used for imaging radiation in the 8 to 13 micron waveband, can be improved. The principle of the method, which could also be used to improve the performance of achromatic triplets or aspherized doublets, involves the use of a field flattener lens which replaces the germanium window of the detector. The curvature of this lens can be optimized to minimize field curvature, which together with chromatic aberration is one of the most serious residuals of thermal imaging systems with relative apertures of around f/0.7. It is also shown that for such relative apertures, and for typical fields of less than 15 degrees, at 100 mm focal length, the location of the aperture stop is not a significant design parameter. This arises as a result of the intrinsic optical properties of germanium.

  12. Constraining compensated isocurvature perturbations using the CMB

    NASA Astrophysics Data System (ADS)

    Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel

    2017-01-01

    Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.

  13. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information on the lens aberrations of lithographic tools is important, as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which requires measuring only two intensity distribution images. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
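
    A minimal sketch (Python) of the non-iterative, linear-system flavour of the approach described above: a randomly generated sensitivity matrix stands in for the matrices derived from the imaging model, and ordinary least squares recovers the Zernike coefficients from simulated measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear model: stacked intensity measurements from two images
# relate to the unknown Zernike coefficients c through a sensitivity matrix A,
# i.e. delta_I = A @ c. Here A is random purely for illustration; in the
# method it would be derived from the partially coherent imaging model.
n_measurements, n_zernike = 500, 9
A = rng.standard_normal((n_measurements, n_zernike))

c_true = np.array([0.02, -0.01, 0.03, 0.005, -0.02, 0.01, 0.0, 0.015, -0.005])
delta_I = A @ c_true + 1e-3 * rng.standard_normal(n_measurements)  # noisy data

# Efficient non-iterative solution: ordinary least squares.
c_est, *_ = np.linalg.lstsq(A, delta_I, rcond=None)
print("recovered Zernike coefficients:", np.round(c_est, 3))
```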

  14. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crispening artifacts over a wider region of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
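
    As a rough illustration of the kind of processing described (block-wise edge detection on the green pixels of a Bayer pattern), here is a small Python sketch. The GRBG layout, block size, and edge-strength statistic are assumptions for illustration, not the authors' exact algorithm, and the input frame is random.

```python
import numpy as np

# Rough sketch (assumed GRBG layout and statistics, not the paper's exact
# algorithm): extract the green samples of a Bayer mosaic and compute a
# block-wise edge strength of the kind used to estimate the out-of-focus blur.

def green_channel_grbg(bayer):
    """Return the green samples of a GRBG mosaic as a half-resolution grid."""
    g1 = bayer[0::2, 0::2]   # green pixels in the (G, R) rows
    g2 = bayer[1::2, 1::2]   # green pixels in the (B, G) rows
    return (g1 + g2) / 2.0   # simple average of the two green phases

def blockwise_edge_strength(green, block=16):
    gy, gx = np.gradient(green.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    h, w = h - h % block, w - w % block
    blocks = mag[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))   # one edge-strength value per block

rng = np.random.default_rng(0)
bayer = rng.integers(0, 256, size=(256, 256)).astype(float)  # placeholder frame
strength = blockwise_edge_strength(green_channel_grbg(bayer))
print("block grid:", strength.shape, "mean edge strength:", float(strength.mean()))
```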

  15. Atmospheric turbulence characterization with the Keck adaptive optics systems. I. Open-loop data.

    PubMed

    Schöck, Matthias; Le Mignant, David; Chanan, Gary A; Wizinowich, Peter L; van Dam, Marcos A

    2003-07-01

    We present a detailed investigation of different methods of the characterization of atmospheric turbulence with the adaptive optics systems of the W. M. Keck Observatory. The main problems of such a characterization are the separation of instrumental and atmospheric effects and the accurate calibration of the devices involved. Therefore we mostly describe the practical issues of the analysis. We show that two methods, the analysis of differential image motion structure functions and the Zernike decomposition of the wave-front phase, produce values of the atmospheric coherence length r0 that are in excellent agreement with results from long-exposure images. The main error source is the calibration of the wave-front sensor. Values determined for the outer scale L0 are consistent between the methods and with typical L0 values found at other sites, that is, of the order of tens of meters.

  16. Different hunting strategies select for different weights in red deer

    PubMed Central

    Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; Miguel, Alfonso San

    2005-01-01

    Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205

  17. Metallic superhydrophobic surfaces via thermal sensitization

    NASA Astrophysics Data System (ADS)

    Vahabi, Hamed; Wang, Wei; Popat, Ketul C.; Kwon, Gibum; Holland, Troy B.; Kota, Arun K.

    2017-06-01

    Superhydrophobic surfaces (i.e., surfaces extremely repellent to water) allow water droplets to bead up and easily roll off from the surface. While a few methods have been developed to fabricate metallic superhydrophobic surfaces, these methods typically involve expensive equipment, environmental hazards, or multi-step processes. In this work, we developed a universal, scalable, solvent-free, one-step methodology based on thermal sensitization to create appropriate surface texture and fabricate metallic superhydrophobic surfaces. To demonstrate the feasibility of our methodology and elucidate the underlying mechanism, we fabricated superhydrophobic surfaces using ferritic (430) and austenitic (316) stainless steels (representative alloys) with roll off angles as low as 4° and 7°, respectively. We envision that our approach will enable the fabrication of superhydrophobic metal alloys for a wide range of civilian and military applications.

  18. High-Accuracy Readout Electronics for Piezoresistive Tactile Sensors

    PubMed Central

    Vidal-Verdú, Fernando

    2017-01-01

    The typical layout in a piezoresistive tactile sensor arranges individual sensors to form an array with M rows and N columns. While this layout reduces the wiring involved, it does not allow the values of the sensor resistors to be measured individually due to the appearance of crosstalk caused by the nonidealities of the array reading circuits. In this paper, two reading methods that minimize errors resulting from this phenomenon are assessed by designing an electronic system for array reading, and the results are compared to those obtained using the traditional method, which disregards the nonidealities of the reading circuit. The different models were compared by testing the system with an array of discrete resistors. The system was later connected to a tactile sensor with 8 × 7 taxels. PMID:29104229

  19. Non-destructive component separation using infrared radiant energy

    DOEpatents

    Simandl, Ronald F [Knoxville, TN]; Russell, Steven W [Knoxville, TN]; Holt, Jerrid S [Knoxville, TN]; Brown, John D [Harriman, TN]

    2011-03-01

    A method for separating a first component and a second component from one another at an adhesive bond interface between the first component and second component. Typically the method involves irradiating the first component with infrared radiation from a source that radiates substantially only short wavelengths until the adhesive bond is destabilized, and then separating the first component and the second component from one another. In some embodiments an assembly of components to be debonded is placed inside an enclosure and the assembly is illuminated from an IR source that is external to the enclosure. In some embodiments an assembly of components to be debonded is simultaneously irradiated by a multi-planar array of IR sources. Often the IR radiation is unidirectional. In some embodiments the IR radiation is narrow-band short wavelength infrared radiation.

  20. Partition of unity finite element method for quantum mechanical materials calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pask, J. E.; Sukumar, N.

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.

  1. Partition of unity finite element method for quantum mechanical materials calculations

    DOE PAGES

    Pask, J. E.; Sukumar, N.

    2016-11-09

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.

  2. The impact of design-based modeling instruction on seventh graders' spatial abilities and model-based argumentation

    NASA Astrophysics Data System (ADS)

    McConnell, William J.

    Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. Students perceived 3D printed models as better assisting them in scientific argumentation than paper drawing models. In fact, when given a choice, students rarely used paper drawing to assist in argument. There was also a difference in model utility between the two different model types. Participants explicitly used 3D printed models to complete gestural modeling, while participants rarely looked at 2D models when involved in gestural modeling. This study's findings added to current theory dealing with the varied spatial challenges involved in different modes of expressed models. This study found that depth, symmetry, and the manipulation of perspective are spatial challenges that students typically attend to when using CAD but typically ignore when drawing with paper and pencil. This study also revealed a major difference in model-based argument in a design-based instruction context as opposed to model-based argument in a typical science classroom context. In the context of design-based instruction, data revealed that design process is an important part of model-based argument. Due to the importance of design process in model-based argumentation in this context, trusted methods of argument analysis, like the coding system of the IASCA, were found lacking in many respects. Limitations and recommendations for further research were also presented.

  3. Vocabulary Acquisition through Cloze Exercises, Sentence-Writing and Composition-Writing: Extending the Evaluation Component of the Involvement Load Hypothesis

    ERIC Educational Resources Information Center

    Zou, Di

    2017-01-01

    This research inspects the allocation of involvement load to the evaluation component of the involvement load hypothesis, examining how three typical approaches to evaluation (cloze-exercises, sentence-writing, and composition-writing) promote word learning. The results of this research were partially consistent with the predictions of the…

  4. Human exposures to monomers resulting from consumer contact with polymers.

    PubMed

    Leber, A P

    2001-06-01

    Many consumer products are composed completely, or in part, of polymeric materials. Direct or indirect human contact results in potential exposures to monomers as a result of migration of trace amounts from the polymeric matrix into foods, the skin, or other bodily surfaces. Typically, residual monomer levels in these polymers are <100 p.p.m., and represent exposures well below those observable in traditional toxicity testing. These product applications thus require alternative methods for evaluating health risks relating to monomer exposures. A typical approach includes: (a) assessment of potential human contacts for specific polymer uses; (b) utilization of data from toxicity testing of pure monomers, e.g. cancer bioassay results; and (c) mathematical risk assessment methods. Exposure potentials are measured using one of two analytical procedures: (1) migration of monomer from polymer into a simulant solvent (e.g. alcohol, acidic water, vegetable oil) appropriate for the intended use of the product (e.g. beer cans, food jars, packaging adhesive, dairy hose); or (2) total monomer content of the polymer, providing worst-case values for migratable monomer. Application of toxicity data typically involves NOEL or benchmark values for non-cancer endpoints, or tumorigenicity potencies for monomers demonstrated to be carcinogens. Risk assessments provide exposure 'safety margin' ratios between (1) levels projected to be safe according to toxicity information and (2) potential monomer exposures posed by the intended use of the consumer product. This paper includes an example of a health risk assessment for a chewing gum polymer for which exposures to trace levels of butadiene monomer occur.

  5. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  6. Predictive modeling of multicellular structure formation by using Cellular Particle Dynamics simulations

    NASA Astrophysics Data System (ADS)

    McCune, Matthew; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2014-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method for describing and predicting the time evolution of biomechanical relaxation processes of multicellular systems. A typical example is the fusion of spheroidal bioink particles during post-bioprinting structure formation. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short-range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through integration of their equations of motion. CPD was successfully applied to describe and predict the fusion of 3D tissue constructs involving identical spherical aggregates. Here, we demonstrate that CPD can also predict tissue formation involving uneven spherical aggregates whose volumes decrease during the fusion process. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
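
    The following toy Python sketch illustrates the CPD ingredients named above: cellular particles with a short-range repulsive core and attractive tail, advanced by integrating overdamped equations of motion. The force law, parameter values, and particle count are invented for illustration and are not the authors' model.

```python
import numpy as np

# Toy CPD-style step (invented parameters, not the authors' model): cellular
# particles interact through a short-range force with a repulsive core
# (excluded volume) and an attractive tail (adhesion); positions are advanced
# by integrating overdamped equations of motion with a small random kick.

def pair_forces(pos, r_eq=1.0, cutoff=2.5, k_rep=50.0, k_adh=5.0):
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            if r < 1e-9 or r > cutoff:
                continue
            # positive magnitude pushes the pair apart, negative pulls it together
            mag = k_rep * (r_eq - r) if r < r_eq else -k_adh * (r - r_eq)
            forces[i] -= mag * d / r
            forces[j] += mag * d / r
    return forces

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 4.0, size=(40, 3))   # 40 cellular particles in a box
dt, gamma, noise = 1e-3, 1.0, 0.01          # time step, friction, thermal noise
for step in range(200):                     # overdamped (Brownian-like) dynamics
    pos += dt / gamma * pair_forces(pos) + np.sqrt(dt) * noise * rng.normal(size=pos.shape)
print("centre of mass after relaxation:", pos.mean(axis=0))
```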

  7. Developmental changes in face visual scanning in autism spectrum disorder as assessed by data-based analysis

    PubMed Central

    Amestoy, Anouck; Guillaud, Etienne; Bouvard, Manuel P.; Cazalets, Jean-René

    2015-01-01

    Individuals with autism spectrum disorder (ASD) present reduced visual attention to faces. However, contradictory conclusions have been drawn about the strategies involved in visual face scanning due to the various methodologies implemented in the study of facial screening. Here, we used a data-driven approach to compare children and adults with ASD subjected to the same free viewing task and to address developmental aspects of face scanning, including its temporal patterning, in healthy children, and adults. Four groups (54 subjects) were included in the study: typical adults, typically developing children, and adults and children with ASD. Eye tracking was performed on subjects viewing unfamiliar faces. Fixations were analyzed using a data-driven approach that employed spatial statistics to provide an objective, unbiased definition of the areas of interest. Typical adults expressed a spatial and temporal strategy for visual scanning that differed from the three other groups, involving a sequential fixation of the right eye (RE), left eye (LE), and mouth. Typically developing children, adults and children with autism exhibited similar fixation patterns and they always started by looking at the RE. Children (typical or with ASD) subsequently looked at the LE or the mouth. Based on the present results, the patterns of fixation for static faces that mature from childhood to adulthood in typical subjects are not found in adults with ASD. The atypical patterns found after developmental progression and experience in ASD groups appear to remain blocked in an immature state that cannot be differentiated from typical developmental child patterns of fixation. PMID:26236264

  8. Beyond Euler's Method: Implicit Finite Differences in an Introductory ODE Course

    ERIC Educational Resources Information Center

    Kull, Trent C.

    2011-01-01

    A typical introductory course in ordinary differential equations (ODEs) exposes students to exact solution methods. However, many differential equations must be approximated with numerical methods. Textbooks commonly include explicit methods such as Euler's and Improved Euler's. Implicit methods are typically introduced in more advanced courses…
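
    A minimal sketch of the contrast the abstract points to, using the stiff test equation y' = λy with λ < 0: the explicit (forward) Euler update diverges for a large step size, while the implicit (backward) Euler update, solved algebraically here, remains stable. The step size and λ are arbitrary illustrative values.

```python
# Small sketch contrasting explicit Euler with the implicit (backward) Euler
# method on the stiff test problem y' = lam * y, y(0) = 1, with lam < 0.
# For backward Euler the update y_{n+1} = y_n + h*lam*y_{n+1} is solved
# algebraically: y_{n+1} = y_n / (1 - h*lam).

lam, h, steps = -50.0, 0.1, 20   # deliberately large step so explicit Euler blows up

y_exp, y_imp = 1.0, 1.0
for _ in range(steps):
    y_exp = y_exp + h * lam * y_exp        # explicit (forward) Euler
    y_imp = y_imp / (1.0 - h * lam)        # implicit (backward) Euler

print(f"explicit Euler after {steps} steps: {y_exp:.3e}")   # oscillates and diverges
print(f"implicit Euler after {steps} steps: {y_imp:.3e}")   # decays, like the true solution
```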

  9. Integration of gas chromatography mass spectrometry methods for differentiating ricin preparation methods.

    PubMed

    Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L

    2012-05-07

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.

  10. Macrophage polarization in virus-host interactions

    USDA-ARS?s Scientific Manuscript database

    Macrophage involvement in viral infections and antiviral states is common. However, this involvement has not been well-studied in the paradigm of macrophage polarization, which typically has been categorized by the dichotomy of classical (M1) and alternative (M2) statuses. Recent studies have reveal...

  11. Sensemaking Handoffs: Why? How? and When?

    ERIC Educational Resources Information Center

    Sharma, Nikhil

    2010-01-01

    Sensemaking tasks are challenging and typically involve collecting, organizing and understanding information. Sensemaking often involves a handoff where a subsequent recipient picks up work done by a provider. Sensemaking handoffs are very challenging because handoffs introduce discontinuity in sensemaking. This dissertation attempts to explore…

  12. Anisotropy of the Coulomb Interaction between Folded Proteins: Consequences for Mesoscopic Aggregation of Lysozyme

    PubMed Central

    Chan, Ho Yin; Lankevich, Vladimir; Vekilov, Peter G.; Lubchenko, Vassiliy

    2012-01-01

    Toward quantitative description of protein aggregation, we develop a computationally efficient method to evaluate the potential of mean force between two folded protein molecules that allows for complete sampling of their mutual orientation. Our model is valid at moderate ionic strengths and accounts for the actual charge distribution on the surface of the molecules, the dielectric discontinuity at the protein-solvent interface, and the possibility of protonation or deprotonation of surface residues induced by the electric field due to the other protein molecule. We apply the model to the protein lysozyme, whose solutions exhibit both mesoscopic clusters of protein-rich liquid and liquid-liquid separation; the former requires that protein form complexes with typical lifetimes of approximately milliseconds. We find the electrostatic repulsion is typically lower than the prediction of the Derjaguin-Landau-Verwey-Overbeek theory. The Coulomb interaction in the lowest-energy docking configuration is nonrepulsive, despite the high positive charge on the molecules. Typical docking configurations barely involve protonation or deprotonation of surface residues. The obtained potential of mean force between folded lysozyme molecules is consistent with the location of the liquid-liquid coexistence, but produces dimers that are too short-lived for clusters to exist, suggesting lysozyme undergoes conformational changes during cluster formation. PMID:22768950

  13. Universal and idiosyncratic characteristic lengths in bacterial genomes

    NASA Astrophysics Data System (ADS)

    Junier, Ivan; Frémont, Paul; Rivoire, Olivier

    2018-05-01

    In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA thus has a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10–20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist alongside these fundamental universal structures.
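
    A small Python sketch of the kind of GC-content correlation analysis described above: GC content is computed in fixed windows along a chromosome and the decay of its autocorrelation with genomic distance is read off. The sequence here is random, so no biologically meaningful length emerges; a real analysis would load an actual bacterial chromosome, and the 1 kb window and 1/e criterion are arbitrary choices.

```python
import numpy as np

# Sketch of a GC-content correlation analysis: compute GC content in fixed
# windows along a chromosome and find the distance at which its autocorrelation
# decays.  The "chromosome" here is random, so the result is only illustrative.

rng = np.random.default_rng(2)
seq = rng.choice(list("ACGT"), size=2_000_000)           # placeholder chromosome
window = 1000                                            # 1 kb windows
is_gc = np.isin(seq, list("GC")).astype(float)
gc = is_gc[: len(is_gc) // window * window].reshape(-1, window).mean(axis=1)

gc -= gc.mean()
acf = np.correlate(gc, gc, mode="full")[len(gc) - 1:]
acf /= acf[0]
# distance (in windows) at which the autocorrelation first drops below 1/e
decay = int(np.argmax(acf < np.exp(-1.0)))
print(f"correlation decay length ~ {decay * window / 1000:.0f} kb")
```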

  14. Training Class Inclusion Responding in Typically Developing Children and Individuals with Autism

    ERIC Educational Resources Information Center

    Ming, Siri; Mulhern, Teresa; Stewart, Ian; Moran, Laura; Bynum, Kellie

    2018-01-01

    In a "class inclusion" task, a child must respond to stimuli as being involved in two different though hierarchically related categories. This study used a Relational Frame Theory (RFT) paradigm to assess and train this ability in three typically developing preschoolers and three individuals with autism spectrum disorder, all of whom had…

  15. Applying ecological concepts to the management of widespread grass invasions [Chapter 7

    Treesearch

    Carla M. D'Antonio; Jeanne C. Chambers; Rhonda Loh; J. Tim Tunison

    2009-01-01

    The management of plant invasions has typically focused on the removal of invading populations or control of existing widespread species to unspecified but lower levels. Invasive plant management typically has not involved active restoration of background vegetation to reduce the likelihood of invader reestablishment. Here, we argue that land managers could benefit...

  16. Reading under the Skin: Physiological Activation during Reading in Children with Dyslexia and Typical Readers

    ERIC Educational Resources Information Center

    Tobia, Valentina; Bonifacci, Paola; Ottaviani, Cristina; Borsato, Thomas; Marzocchi, Gian Marco

    2016-01-01

    The aim of this study was to investigate physiological activation during reading and control tasks in children with dyslexia and typical readers. Skin conductance response (SCR) recorded during four tasks involving reading aloud, reading silently, and describing illustrated stories aloud and silently was compared for children with dyslexia (n =…

  17. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
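
    The following toy Python sketch is not the Shift/Denovo implementation; it only illustrates why the Sourcerer idea helps: power iteration on a small stand-in fission operator reaches the dominant eigenvector in fewer iterations when started from a guess that is already close to it, which is the role played by the deterministic calculation.

```python
import numpy as np

# Toy illustration (not the actual Shift/Denovo implementation): iterating a
# source distribution toward the dominant eigenvector of a stand-in "fission"
# matrix takes fewer inactive-style iterations when the initial guess is good.

rng = np.random.default_rng(3)
F = rng.random((50, 50)) + np.diag(np.full(50, 5.0))   # stand-in fission operator

def iterations_to_converge(source, tol=1e-8, max_it=10_000):
    source = source / source.sum()
    for it in range(1, max_it + 1):
        new = F @ source
        new /= new.sum()
        if np.linalg.norm(new - source, 1) < tol:
            return it
        source = new
    return max_it

flat_guess = np.ones(50)                               # uninformed starting source
# "deterministic" guess: the dominant eigenvector plus a little noise
w, v = np.linalg.eig(F)
good_guess = np.abs(v[:, np.argmax(w.real)].real) + 0.01 * rng.random(50)

print("iterations from flat guess:", iterations_to_converge(flat_guess))
print("iterations from good guess:", iterations_to_converge(good_guess))
```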

  18. The ReaxFF reactive force-field: Development, applications, and future directions

    DOE PAGES

    Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...

    2016-03-04

    The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.

  19. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  20. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.

  1. Applications of computer algebra to distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1993-01-01

    In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
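
    The following sympy sketch illustrates the series idea on a simpler system than the cantilever treated in the paper: for a vibrating string, the frequency equation sin(x)/x = 0 is an even power series whose Taylor coefficients give the sums of inverse powers of the roots in closed form, and truncating that rapidly converging series estimates the fundamental root. The choice of system and the truncation order are illustrative assumptions, not the paper's worked example.

```python
import sympy as sp

# Sketch of the inverse-power-series idea on a simple system: the frequency
# equation sin(x)/x = 0 of a vibrating string is an even power series in x with
# roots lam_n = x_n**2.  Its Taylor coefficients give the sums
# S_k = sum_n 1/lam_n**k in closed form, and since the series of inverse powers
# converges rapidly, truncating at S_2 already pins down the fundamental root.

x = sp.symbols('x')
f = sp.series(sp.sin(x) / x, x, 0, 10).removeO()
a = [f.coeff(x, 2 * k) for k in range(3)]      # coefficients of lam**k, lam = x**2

S1 = -a[1] / a[0]                              # sum over 1/lam_n          -> 1/6
S2 = (a[1] / a[0])**2 - 2 * a[2] / a[0]        # sum over 1/lam_n**2       -> 1/90

lam1_estimate = S2 ** sp.Rational(-1, 2)       # dominant-root approximation
print("S1 =", S1, "  S2 =", S2)
print("estimated lam_1 =", float(lam1_estimate), " exact pi**2 =", float(sp.pi**2))
```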

  2. Regenerative Medicine Strategies for Esophageal Repair

    PubMed Central

    Londono, Ricardo

    2015-01-01

    Pathologies that involve the structure and/or function of the esophagus can be life-threatening. The esophagus is a complex organ comprising nonredundant tissue that does not have the ability to regenerate. Currently available interventions for esophageal pathology have limited success and are typically associated with significant morbidity. Hence, there is currently an unmet clinical need for effective methods of esophageal repair. The present article presents a review of esophageal disease along with the anatomic and functional consequences of each pathologic process, the shortcomings associated with currently available therapies, and the latest advancements in the field of regenerative medicine with respect to strategies for esophageal repair from benchtop to bedside. PMID:25813694

  3. Successes and problems in family planning administration: experiences in two districts of Kerala, India.

    PubMed

    Valsan, E H

    1977-06-01

    The administrative organizations of the Ernakulam and Malappuram Districts' family planning programs during 1970-74 and the ways they dealt with typical problems of program organization are examined. Lack of personnel, poor staff morale, inadequate supplies, and political and religious opposition to various contraceptive methods, especially sterilization, existed to varying degrees in both districts' programs. The Ernakulam experience, involving mass sterilization camps that were part of an overall district development program, documents the effectiveness of a strong central leader. The Malappuram program illustrates, in contrast, the handicaps of poor areas where development programs were just beginning and administrative resources were overtaxed.

  4. Toxoplasma lymphadenitis diagnosed by fine-needle aspiration cytology: a rare finding.

    PubMed

    Hosokawa, S; Kusama, Y; Ono, T; Mineta, H

    2014-06-01

    There are only very few reports of cervical toxoplasma lymphadenitis being diagnosed exclusively via fine-needle aspiration cytology (with serology). We describe a case of toxoplasma lymphadenitis that was successfully diagnosed by fine-needle aspiration cytology. The case involved a male patient who was immunocompromised as a result of recurrent acute myelogenous leukaemia with cervical lymphadenopathy. The biopsy showed typical features of a well-defined pseudocyst containing Toxoplasma gondii tachyzoites. Toxoplasma lymphadenitis is a common cause of lymph node enlargement. Fine-needle aspiration cytology is a useful method for diagnosing and differentiating toxoplasma lymphadenitis from more serious causes of lymphadenopathy, such as metastatic lymphadenopathy or lymphoma.

  5. Optical fiber Fabry-Perot interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Anbo

    2014-06-01

    Fiber Fabry-Perot (FP) interferometry is one of the most important tools for harsh environment sensing because of its great flexibility of sensor material selection, superior long-term stability, and nature of remote passive operation. Virginia Tech's Center for Photonics Technology has been involved in research in this field for many years. After a quick review of the typical methods for the construction of F-P sensors, emphasis will be placed on white-light interferometry, which is perhaps the most robust interferometric sensor demodulation technique today. The recent discovery of an additional phase will be presented and its significance to sensor demodulation will be discussed.

  6. IR-IR Conformation Specific Spectroscopy of Na+(Glucose) Adducts

    NASA Astrophysics Data System (ADS)

    Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.; Garand, Etienne

    2018-01-01

    We report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.

  7. Coulomb explosion: a novel approach to separate single-walled carbon nanotubes from their bundle.

    PubMed

    Liu, Guangtong; Zhao, Yuanchun; Zheng, Kaihong; Liu, Zheng; Ma, Wenjun; Ren, Yan; Xie, Sishen; Sun, Lianfeng

    2009-01-01

    A novel approach based on Coulomb explosion has been developed to separate single-walled carbon nanotubes (SWNTs) from their bundle. With this technique, we can readily separate a bundle of SWNTs into smaller bundles with uniform diameter as well as some individual SWNTs. The separated SWNTs have a typical length of several microns and form a nanotree at one end of the original bundle. More importantly, this separating procedure involves no surfactant and includes only one-step physical process. The separation method offers great conveniences for the subsequent individual SWNT or multiterminal SWNTs device fabrication and their physical properties studies.

  8. Adequacy of the Regular Early Education Classroom Environment for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Brown, Cherylee M.; Packer, Tanya L.; Passmore, Anne

    2013-01-01

    This study describes the classroom environment that students with visual impairment typically experience in regular Australian early education. Adequacy of the classroom environment (teacher training and experience, teacher support, parent involvement, adult involvement, inclusive attitude, individualization of the curriculum, physical…

  9. Treating Families of Bone Marrow Recipients and Donors

    ERIC Educational Resources Information Center

    Cohen, Marie; And Others

    1977-01-01

    Leukemia and aplastic anemia are beginning to be treated by bone marrow transplants, involving donors and recipients from the same family. Such intimate involvement in the patient's life and death struggles typically produces a family crisis and frequent maladaptive responses by various family members. (Author)

  10. NUTRIENT TRANSPORT DURING BIOREMEDIATION OF CONTAMINATED BEACHES: EVALUATION WITH LITHIUM AS A CONSERVATIVE TRACER

    EPA Science Inventory

    Bioremediation of oil-contaminated beaches typically involves fertilization with nutrients that are thought to limit the growth rate of hydrocarbon-degrading bacteria. Much of the available technology involves application of fertilizers that release nutrients in a water-soluble ...

  11. Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben

    2003-01-01

    A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
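
    As a toy illustration of the "equilibrium point" idea, the Python sketch below solves, for one instant, for the point at which the sum of tension-only line forces vanishes, instead of integrating a small lumped mass. The attachment geometry, line stiffnesses, and rest lengths are invented placeholders, and scipy's fsolve stands in for whatever root finder a flight simulation would use.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy sketch of the "equilibrium point" idea (invented geometry and line model):
# instead of giving the line confluence point a small mass, solve at each time
# step for the point where the sum of the tension-only line forces vanishes.

attach = np.array([[0.0, 0.0, 10.0],    # parachute attachment
                   [1.0, 0.0,  0.0],    # riser attachments on the entry body
                   [-1.0, 0.0, 0.0],
                   [0.0, 1.0,  0.0],
                   [0.0, -1.0, 0.0]])
rest_len  = np.array([4.0, 3.0, 3.0, 3.0, 3.0])             # unstretched lengths, m
stiffness = np.array([2.0e4, 1.0e4, 1.0e4, 1.0e4, 1.0e4])   # line stiffness, N/m

def net_line_force(p):
    total = np.zeros(3)
    for a, L0, k in zip(attach, rest_len, stiffness):
        d = a - p
        r = np.linalg.norm(d)
        tension = k * max(r - L0, 0.0)   # slack lines carry no load
        if r > 1e-12:
            total += tension * d / r
    return total

confluence = fsolve(net_line_force, x0=np.array([0.1, -0.1, 4.0]))
print("equilibrium confluence point:", confluence)
print("residual net force:", net_line_force(confluence))
```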

  12. A Summary of Publications on the Development of Mode-of ...

    EPA Pesticide Factsheets

    Chemical contaminants are formed as a consequence of chemical disinfection of public drinking waters. Chemical disinfectants, which are used to kill harmful microorganisms, react with natural organic matter (NOM), bromide, iodide, and other compounds, forming complex mixtures of potentially toxic disinfection byproducts (DBPs). The types and concentrations of DBPs formed during disinfection and the relative proportions of the components vary depending on factors such as source water conditions (e.g., types of NOM present), disinfectant type (e.g., chlorine, ozone, chloramine), and treatment conditions (e.g., pH and temperature). To date, over 500 DBPs have been detected in treated waters. However, typically more than 50% of the organic halide mass produced by chlorination disinfection consists of unidentified chemicals, which are not measured by routine analyses of DBPs. The protocols and methods typically used to evaluate chemical mixtures are best applied to simple defined mixtures consisting of relatively few chemicals. These approaches rely on assumptions (e.g., common mode of action, independent toxic action) regarding the type of joint toxic action (e.g., dose-additivity, synergism) that might be observed. Such methods, used for site assessments or toxicological studies, are often not sufficient to estimate health risk for complex drinking water DBP mixtures. Actual drinking water exposures involve multiple chemicals, many of w

  13. Sensitivity to Morphosyntactic Information in 3-Year-Old Children With Typical Language Development: A Feasibility Study.

    PubMed

    Deevy, Patricia; Leonard, Laurence B; Marchman, Virginia A

    2017-03-01

    This study tested the feasibility of a method designed to assess children's sensitivity to tense/agreement information in fronted auxiliaries during online comprehension of questions (e.g., Are the nice little dogs running?). We expected that a group of children who were proficient in auxiliary use would show this sensitivity, indicating an awareness of the relation between the subject-verb sequence (e.g., dogs running) and preceding information (e.g., are). Failure to grasp this relation is proposed to play a role in the protracted inconsistency in auxiliary use in children with specific language impairment (SLI). Fifteen 3-year-old typically developing children who demonstrated proficiency in auxiliary use viewed pairs of pictures showing a single agent and multiple agents while hearing questions with or without an agreeing fronted auxiliary. Proportion looking to the target was measured. Children showed anticipatory looking on the basis of the number information contained in the auxiliary (is or are). The children tested in this study represent a group that frequently serves as a comparison for older children with SLI. Because the method successfully demonstrated their sensitivity to tense/agreement information in questions, future research that involves direct comparisons of these 2 groups is warranted.

  14. System analysis of vehicle active safety problem

    NASA Astrophysics Data System (ADS)

    Buznikov, S. E.

    2018-02-01

    The problem of road transport safety affects the vital interests of most of the population and is of global significance. The system analysis of the problem of creating competitive active vehicle safety systems is presented as an interrelated complex of tasks involving multi-criterion optimization and dynamic stabilization of the state variables of a controlled object. Solving these tasks requires generating all possible variants of technical solutions within the software and hardware domains and synthesizing a control that is close to optimal. The Zwicky “morphological box” method is used to carry out the system analysis. Creation of comprehensive active safety systems involves solving the problem of preventing typical collisions. To solve it, a structured set of collisions is introduced, with its elements also generated using the Zwicky “morphological box” method. The obstacle speed, the longitudinal acceleration of the controlled object, the unpredictable changes in its movement direction due to certain faults, the road surface condition and the control errors are taken as the structure variables that characterize the conditions of collisions. The conditions for preventing typical collisions are presented as inequalities for physical variables that define the state vector of the object and its dynamic limits.
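
    A tiny Python sketch of the Zwicky "morphological box" as used above: the box is just the Cartesian product of the options chosen for each structure variable. The variables and option lists below are invented placeholders, not the paper's actual collision parameters.

```python
from itertools import product

# Tiny sketch of the Zwicky "morphological box": enumerate every combination of
# options across the chosen structure variables.  The variables and options
# below are invented placeholders, not the paper's actual collision parameters.

morphological_box = {
    "obstacle speed":      ["stationary", "slower than object", "faster than object"],
    "object acceleration": ["braking", "coasting", "accelerating"],
    "road surface":        ["dry", "wet", "icy"],
    "control error":       ["none", "steering fault", "braking fault"],
}

variants = list(product(*morphological_box.values()))
print(f"{len(variants)} candidate collision scenarios, e.g.:")
for combo in variants[:3]:
    print(dict(zip(morphological_box.keys(), combo)))
```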

  15. Assimilating concentration observations for transport and dispersion modeling in a meandering wind field

    NASA Astrophysics Data System (ADS)

    Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.

    Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
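
    The following compact Python sketch illustrates the second (GA-variational) approach in spirit: a bare-bones genetic algorithm searches over wind direction so that a simple Gaussian-plume prediction matches concentration observations at fixed receptors. The plume model, dispersion coefficients, receptor layout, and GA settings are all invented for illustration and are much cruder than the study's setup.

```python
import numpy as np

# Bare-bones sketch of the GA-variational idea: a genetic algorithm searches
# over wind direction theta so that a crude Gaussian-plume prediction matches
# concentration observations at fixed receptors.  All parameters are invented.

rng = np.random.default_rng(4)

def plume(theta, receptors, q=1.0, u=5.0, h=2.0):
    """Ground-level concentration from a source at the origin, wind toward theta."""
    c, s = np.cos(theta), np.sin(theta)
    x = receptors[:, 0] * c + receptors[:, 1] * s      # downwind distance
    y = -receptors[:, 0] * s + receptors[:, 1] * c     # crosswind distance
    xp = np.maximum(x, 1.0)                            # guard against x <= 0
    sy, sz = 0.3 * xp, 0.2 * xp                        # crude dispersion growth
    conc = q / (np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2) - h**2 / (2 * sz**2))
    return np.where(x > 1.0, conc, 0.0)

receptors = rng.uniform(-150.0, 150.0, size=(40, 2))
theta_true = np.deg2rad(40.0)
obs = plume(theta_true, receptors) * (1.0 + 0.05 * rng.normal(size=len(receptors)))

def fitness(theta):
    return -np.sum((plume(theta, receptors) - obs) ** 2)

pop = rng.uniform(0.0, 2.0 * np.pi, size=40)            # candidate wind directions
for generation in range(60):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)][-20:]             # keep the fitter half
    children = rng.choice(parents, size=20) + rng.normal(0.0, 0.05, size=20)  # mutate
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"true direction {np.degrees(theta_true):.1f} deg, "
      f"GA estimate {np.degrees(best) % 360:.1f} deg")
```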

  16. Cascade flutter analysis with transient response aerodynamics

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Mahajan, Aparajit J.; Keith, Theo G., Jr.; Stefko, George L.

    1991-01-01

    Two methods for calculating linear frequency domain aerodynamic coefficients from a time marching Full Potential cascade solver are developed and verified. In the first method, the Influence Coefficient, solutions to elemental problems are superposed to obtain the solutions for a cascade in which all blades are vibrating with a constant interblade phase angle. The elemental problem consists of a single blade in the cascade oscillating while the other blades remain stationary. In the second method, the Pulse Response, the response to the transient motion of a blade is used to calculate influence coefficients. This is done by calculating the Fourier Transforms of the blade motion and the response. Both methods are validated by comparison with the Harmonic Oscillation method and give accurate results. The aerodynamic coefficients obtained from these methods are used for frequency domain flutter calculations involving a typical section blade structural model. An eigenvalue problem is solved for each interblade phase angle mode and the eigenvalues are used to determine aeroelastic stability. Flutter calculations are performed for two examples over a range of subsonic Mach numbers.
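
    A toy Python sketch of the Pulse Response idea: drive a linear system (standing in for the time-marching aerodynamic solver) with a transient blade motion and form frequency-domain coefficients as the ratio of the Fourier transforms of response and motion. The first-order lag model, pulse shape, and sampling are invented stand-ins, not the Full Potential solver.

```python
import numpy as np

# Toy illustration of the pulse-response idea: drive a linear stand-in system
# with a transient blade motion, then form frequency-domain influence
# coefficients as FFT(response)/FFT(motion).  The first-order "aerodynamic"
# model below is invented for illustration only.

dt, n = 1e-3, 4096
t = np.arange(n) * dt

motion = np.exp(-((t - 0.05) / 0.01) ** 2)   # transient pulse in blade plunge

tau, gain = 0.02, 3.0                        # stand-in first-order lag response
response = np.zeros(n)
for i in range(1, n):
    response[i] = response[i - 1] + dt / tau * (gain * motion[i - 1] - response[i - 1])

freq = np.fft.rfftfreq(n, dt)
H = np.fft.rfft(response) / np.fft.rfft(motion)   # frequency-domain coefficients

H_exact = gain / (1.0 + 2j * np.pi * freq * tau)  # analytic transfer of the stand-in
k = np.argmin(np.abs(freq - 10.0))                # compare at 10 Hz
print(f"identified H(10 Hz) = {H[k]:.3f}, exact = {H_exact[k]:.3f}")
```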

  17. Titanium Hydroxide - a Volatile Species at High Temperature

    NASA Technical Reports Server (NTRS)

    Nguyen, QuynhGiao N.

    2010-01-01

    An alternative method of low-temperature plasma functionalization of carbon nanotubes provides for the simultaneous attachment of molecular groups of multiple (typically two or three) different species or different mixtures of species to carbon nanotubes at different locations within the same apparatus. This method is based on similar principles, and involves the use of mostly the same basic apparatus, as those of the methods described in "Low-Temperature Plasma Functionalization of Carbon Nanotubes" (ARC-14661-1), NASA Tech Briefs, Vol. 28, No. 5 (May 2004), page 45. The figure schematically depicts the basic apparatus used in the aforementioned method, with emphasis on features that distinguish the present alternative method from the other. In this method, one exploits the fact that the composition of the deposition plasma changes as the plasma flows from its source in the precursor chamber toward the nanotubes in the target chamber. As a result, carbon nanotubes mounted in the target chamber at different flow distances (d1, d2, d3 . . .) from the precursor chamber become functionalized with different species or different mixtures of species.

  18. Method of fabricating a uranium-bearing foil

    DOEpatents

    Gooch, Jackie G [Seymour, TN; DeMint, Amy L [Kingston, TN

    2012-04-24

    Methods of fabricating a uranium-bearing foil are described. The foil may be substantially pure uranium, or may be a uranium alloy such as a uranium-molybdenum alloy. The method typically includes a series of hot rolling operations on a cast plate material to form a thin sheet. These hot rolling operations are typically performed using a process where each pass reduces the thickness of the plate by a substantially constant percentage. The sheet is typically then annealed and then cooled. The process typically concludes with a series of cold rolling passes where each pass reduces the thickness of the plate by a substantially constant thickness amount to form the foil.
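
    A tiny Python sketch of the two pass schedules described: hot passes that each remove a constant percentage of the thickness, followed by cold passes that each remove a constant absolute amount. Every numerical value below (starting thickness, reductions, target gauge) is an invented placeholder, not a value from the patent.

```python
# Tiny sketch of the two pass schedules the abstract describes: hot rolling
# passes that each take a constant *percentage* of the thickness, and cold
# rolling passes that each take a constant *absolute* amount.  All thickness
# values below are invented for illustration.

thickness_mm = 5.0        # hypothetical as-cast plate thickness
hot_reduction = 0.20      # 20% reduction per hot pass (assumed)
cold_step_mm = 0.05       # 0.05 mm removed per cold pass (assumed)

hot_passes = 8
for p in range(1, hot_passes + 1):
    thickness_mm *= (1.0 - hot_reduction)
    print(f"hot pass {p}: {thickness_mm:.3f} mm")

# anneal and cool here, then cold roll down toward foil gauge
target_mm = 0.25
cold_pass = 0
while thickness_mm - cold_step_mm >= target_mm:
    thickness_mm -= cold_step_mm
    cold_pass += 1
print(f"after {cold_pass} cold passes: {thickness_mm:.3f} mm")
```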

  19. Airborne EM survey in volcanoes : Application to a volcanic hazards assessment

    NASA Astrophysics Data System (ADS)

    Mogi, T.

    2010-12-01

    Airborne electromagnetics (AEM) is a useful tool for investigating subsurface structures of volcanoes because it can survey large areas, including inaccessible ones. Disadvantages include lower accuracy and limited depth of investigation. AEM has been widely used in mineral exploration in frontier areas, and has been applied to engineering and environmental fields, particularly in studies involving active volcanoes. AEM systems typically comprise a transmitter and a receiver on an aircraft or in a towed bird, and although effective for surveying large areas, their penetration depth is limited because the distance between the transmitter and receiver is small and higher-frequency signals are used. To explore deeper structures using AEM, a semi-airborne system called GRounded Electrical source Airborne Transient ElectroMagnetics (GREATEM) has been developed. The system uses a grounded electrical dipole as the transmitter and generates horizontal electric fields. The GREATEM technology, first proposed by Mogi et al. (1998), has recently been improved and used in practical surveys (Mogi et al., 2009). The GREATEM survey system was developed to increase the depth of investigation possible using AEM. The method was tested on several volcanoes in 2004-2005. Here I will discuss results of typical AEM surveys and GREATEM surveys on volcanoes in Japan aimed at mitigating hazards associated with volcanic eruptions. Geologic hazards caused by volcanic eruptions can be mitigated by a combination of prediction, preparedness and land-use control. Risk management depends on the identification of hazard zones and forecasting of eruptions. Hazard zoning involves the mapping of deposits which have formed during particular phases of volcanic activity and their extrapolation to identify the area which would be likely to suffer a similar hazard at some future time. The mapping is usually performed by surface geological surveys of volcanic deposits. Resistivity mapping by AEM is a useful tool for identifying each volcanic deposit at the surface and at shallow depth as well. This suggests that a more informative hazard map incorporating subsurface information can be produced by AEM resistivity mapping.

  20. SOCIAL AND NON-SOCIAL CUEING OF VISUOSPATIAL ATTENTION IN AUTISM AND TYPICAL DEVELOPMENT

    PubMed Central

    Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.

    2013-01-01

    Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n=26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous, or unique; experiment 2 (total n=80: male and female children and adults) studied age and sex effects on gaze cueing. Gaze cueing appears endogenous and may strengthen in typical development. Experiment 3 tested exogenous, endogenous, and/or gaze-based orienting in 25 typical and 27 Autistic Spectrum Disorder (ASD) children. ASD children made more saccades, slowing their reaction times; however, exogenous and endogenous orienting, including gaze cueing, appear intact in ASD. PMID:20809377

  1. At the forefront of thought: the effect of media exposure on airplane typicality.

    PubMed

    Novick, Laura R

    2003-12-01

    The terrorist attacks of September 11, 2001 provided a unique opportunity to investigate the causal status of frequency on typicality for one exemplar of a common conceptual category--namely, the typicality of airplane as a member of the category of vehicles. The extensive media coverage following the attacks included numerous references to the hijacked airplanes and to the consequences of suspending air travel to and from the United States for several days. The present study, involving 152 undergraduates, assessed airplane typicality at three time points ranging from 5 h to 1 month after the attacks and then again at 4.5 months after the attacks. Airplane was judged to be a more typical vehicle for 1 month following the attacks, relative to a baseline calculated from data collected yearly for 5 years preceding the attacks. By 4.5 months, however, typicality was back to baseline.

  2. Moderation and Mediation in Structural Equation Modeling: Applications for Early Intervention Research

    ERIC Educational Resources Information Center

    Hopwood, Christopher J.

    2007-01-01

    Second-generation early intervention research typically involves the specification of multivariate relations between interventions, outcomes, and other variables. Moderation and mediation involve variables or sets of variables that influence relations between interventions and outcomes. Following the framework of Baron and Kenny's (1986) seminal…
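    A minimal sketch of the product-of-coefficients view of mediation that the Baron and Kenny framework builds on: estimate path a (intervention to mediator), path b (mediator to outcome, controlling for the intervention), and take a*b as the indirect effect. The simulated data and variable names are illustrative, not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)      # intervention indicator (0/1), illustrative
m = 0.5 * x + rng.normal(0, 1, n)            # mediator partially driven by x
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)  # outcome driven by m (and weakly by x)

def ols(X, y):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(x, m)[1]                              # path a: intervention -> mediator
b = ols(np.column_stack([m, x]), y)[1]        # path b: mediator -> outcome, controlling for x
c_total = ols(x, y)[1]                        # total effect of intervention on outcome
print(f"indirect (a*b) = {a*b:.3f}, direct = {c_total - a*b:.3f}, total = {c_total:.3f}")
```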

  3. Relative Age Effects in a Cognitive Task: A Case Study of Youth Chess

    ERIC Educational Resources Information Center

    Helsen, Werner F.; Baker, Joseph; Schorer, Joerg; Steingröver, Christina; Wattie, Nick; Starkes, Janet L.

    2016-01-01

    The relative age effect (RAE) has been demonstrated in many youth and professional sports. In this study, we hypothesized that there would also be a RAE among youth chess players who are typically involved in a complex cognitive task without significant physical requirements. While typical RAEs have been observed in adult chess players, in this…

  4. A Model of Self-Explanation Strategies of Instructional Text and Examples in the Acquisition of Programming Skills.

    ERIC Educational Resources Information Center

    Recker, Margaret M.; Pirolli, Peter

    Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…

  5. Testing and Evaluation of Passive Radiation Detection Equipment for Homeland Security

    DOE PAGES

    West, David L.; Wood, Nathan L.; Forrester, Christina D.

    2017-12-01

    This article is concerned with test and evaluation methods for passive radiation detection equipment used in homeland security applications. The different types of equipment used in these applications are briefly reviewed and then test and evaluation methods discussed. The primary emphasis is on the test and evaluation standards developed by the American National Standards Institute’s N42 committees. Commonalities among the standards are then reviewed as well as examples of unique aspects for specific equipment types. Throughout, sample test configurations and results from testing and evaluation at Oak Ridge National Laboratory are given. The article concludes with a brief discussion of typical tests and evaluations not covered by the N42 standards and some examples of test and evaluation that involve the end users of the equipment.

  6. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.

  7. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffery's prior combined with a Metropolis Markov Chain Monte Carlo method.
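    A minimal sketch, under assumed data and architecture, of Metropolis random-walk sampling over the weights of a tiny one-hidden-layer network; a plain Gaussian prior stands in for the modified Jeffery's prior used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1-D regression data (not from the paper).
x = np.linspace(-1, 1, 40)[:, None]
y = np.sin(3 * x[:, 0]) + rng.normal(0, 0.1, 40)

H = 8                                    # hidden units
n_w = 1 * H + H + H + 1                  # W1, b1, W2, b2 flattened

def predict(w, x):
    W1 = w[:H].reshape(1, H); b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H];      b2 = w[3 * H]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def log_post(w, sigma=0.1, tau=1.0):
    """Gaussian likelihood plus Gaussian prior on the weights (illustrative choices)."""
    resid = y - predict(w, x)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(w**2) / tau**2

# Metropolis random-walk sampling of the posterior over the weights (toy demonstration).
w = rng.normal(0, 0.5, n_w)
lp = log_post(w)
samples = []
for step in range(20000):
    prop = w + rng.normal(0, 0.02, n_w)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        w, lp = prop, lp_prop
    if step > 5000 and step % 50 == 0:           # thin after burn-in
        samples.append(predict(w, x))

pred = np.array(samples)
print("posterior mean prediction near x = 0:", pred[:, 20].mean(), "+/-", pred[:, 20].std())
```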

  8. One-step patterning of hollow microstructures in paper by laser cutting to create microfluidic analytical devices.

    PubMed

    Nie, Jinfang; Liang, Yuanzhi; Zhang, Yun; Le, Shangwang; Li, Dunnan; Zhang, Songbai

    2013-01-21

    In this paper, we report a simple, low-cost method for the rapid, highly reproducible fabrication of paper-based microfluidics using a commercially available, compact CO2 laser cutting/engraving machine. This method involves only a single operation: cutting a piece of paper by laser according to a predesigned pattern. The hollow microstructures formed in the paper are used as 'hydrophobic barriers' to define the hydrophilic flow paths. A typical paper device on a 4 cm × 4 cm piece of paper can be fabricated within ∼7-20 s and is ready for use once the cutting process is finished. The main fabrication parameters, such as the applied current and cutting rate of the laser, were optimized. The fabrication resolution and multiplexed analytical capability of the hollow-microstructure-patterned paper were also characterized.

  9. Residual zonal flows in tokamaks and stellarators at arbitrary wavelengths

    NASA Astrophysics Data System (ADS)

    Monreal, Pedro; Calvo, Iván; Sánchez, Edilberto; Parra, Félix I.; Bustos, Andrés; Könies, Axel; Kleiber, Ralf; Görler, Tobias

    2016-04-01

    In the linear collisionless limit, a zonal potential perturbation in a toroidal plasma relaxes, in general, to a non-zero residual value. Expressions for the residual value in tokamak and stellarator geometries, and for arbitrary wavelengths, are derived. These expressions involve averages over the lowest-order particle trajectories, which typically cannot be evaluated analytically. In this work, an efficient numerical method for the evaluation of such expressions is reported. It is shown that this method is faster than direct gyrokinetic simulations performed with the Gene and EUTERPE codes. Calculations of the residual value in stellarators are provided for much shorter wavelengths than previously available in the literature. Electrons must be treated kinetically in stellarators because, unlike in tokamaks, kinetic electrons modify the residual value even at long wavelengths. This effect, which had already been predicted theoretically, is confirmed by gyrokinetic simulations.

  10. Analysis of internal ablation for the thermal control of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Camberos, Jose A.; Roberts, Leonard

    1989-01-01

    A new method of thermal protection for transatmospheric vehicles is introduced. The method involves the combination of radiation, ablation and transpiration cooling. By placing an ablating material behind a fixed-shape, porous outer shield, the effectiveness of transpiration cooling is made possible while retaining the simplicity of a passive mechanism. A simplified one-dimensional approach is used to derive the governing equations. Reduction of these equations to non-dimensional form yields two parameters which characterize the thermal protection effectiveness of the shield and ablator combination for a given trajectory. The non-dimensional equations are solved numerically for a sample trajectory corresponding to glide re-entry. Four typical ablators are tested and compared with results obtained by using the thermal properties of water. For the present level of analysis, the numerical computations adequately support the analytical model.

  11. Testing and Evaluation of Passive Radiation Detection Equipment for Homeland Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, David L.; Wood, Nathan L.; Forrester, Christina D.

    This article is concerned with test and evaluation methods for passive radiation detection equipment used in homeland security applications. The different types of equipment used in these applications are briefly reviewed and then test and evaluation methods discussed. The primary emphasis is on the test and evaluation standards developed by the American National Standards Institute’s N42 committees. Commonalities among the standards are then reviewed as well as examples of unique aspects for specific equipment types. Throughout, sample test configurations and results from testing and evaluation at Oak Ridge National Laboratory are given. The article concludes with a brief discussion of typical tests and evaluations not covered by the N42 standards and some examples of test and evaluation that involve the end users of the equipment.

  12. Systematic Review of Cysteine-Sparing NOTCH3 Missense Mutations in Patients with Clinical Suspicion of CADASIL.

    PubMed

    Muiño, Elena; Gallego-Fabrega, Cristina; Cullell, Natalia; Carrera, Caty; Torres, Nuria; Krupinski, Jurek; Roquer, Jaume; Montaner, Joan; Fernández-Cadenas, Israel

    2017-09-13

    CADASIL (cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) is caused by mutations in the NOTCH3 gene, affecting the number of cysteines in the extracellular domain of the receptor, causing protein misfolding and receptor aggregation. The pathogenic role of cysteine-sparing NOTCH3 missense mutations in patients with typical clinical CADASIL syndrome is unknown. The aim of this article is to describe these mutations to clarify if any could be potentially pathogenic. Articles on cysteine-sparing NOTCH3 missense mutations in patients with clinical suspicion of CADASIL were reviewed. Mutations were considered potentially pathogenic if patients had: (a) typical clinical CADASIL syndrome; (b) diffuse white matter hyperintensities; (c) the 33 NOTCH3 exons analyzed; (d) mutations that were not polymorphisms; and (e) Granular osmiophilic material (GOM) deposits in the skin biopsy. Twenty-five different mutations were listed. Four fulfill the above criteria: p.R61W; p.R75P; p.D80G; and p.R213K. Patients carrying these mutations had typical clinical CADASIL syndrome and diffuse white matter hyperintensities, mostly without anterior temporal pole involvement. Cysteine-sparing NOTCH3 missense mutations are associated with typical clinical CADASIL syndrome and typical magnetic resonance imaging (MRI) findings, although with less involvement of the anterior temporal lobe. Hence, these mutations should be further studied to confirm their pathological role in CADASIL.

  13. Method of making gas diffusion layers for electrochemical cells

    DOEpatents

    Frisk, Joseph William; Boand, Wayne Meredith; Larson, James Michael

    2002-01-01

    A method is provided for making a gas diffusion layer for an electrochemical cell comprising the steps of: a) combining carbon particles and one or more surfactants in a typically aqueous vehicle to make a preliminary composition, typically by high shear mixing; b) adding one or more highly fluorinated polymers to said preliminary composition by low shear mixing to make a coating composition; and c) applying the coating composition to an electrically conductive porous substrate, typically by a low shear coating method.

  14. Engaging Undergraduates in Social Science Research: The Taking the Pulse of Saskatchewan Project

    ERIC Educational Resources Information Center

    Berdahl, Loleen

    2014-01-01

    Although student involvement in research and inquiry can advance undergraduate learning, there are limited opportunities for undergraduate students to be directly involved in social science research. Social science faculty members typically work outside of laboratory settings, with the limited research assistance work being completed by graduate…

  15. Regional Demand Models for Water-Based Recreation: Combining Aggregate and Individual-Level Choice Data

    EPA Science Inventory

    Estimating the effect of changes in water quality on non-market values for recreation involves estimating a change in aggregate consumer surplus. This aggregate value typically involves estimating both a per-person, per-trip change in willingness to pay, as well as defining the m...

  16. 43 CFR 10005.12 - Policy regarding the scope of measures to be included in the plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the site of the impact typically involves restoration or replacement. Off-site mitigation might involve protection, restoration, or enhancement of a similar resource value at a different location... responsibilities, the Commission sees an obligation to give priority to protection and restoration activities that...

  17. Small School Ritual and Parent Involvement.

    ERIC Educational Resources Information Center

    Bushnell, Mary

    This paper examines the ritual socialization of parents into a school community. Rituals may be mundane or sacred and typically involve actions that have transformative potential. In the context of groups, rituals may serve the purposes of identifying and constructing group identity, maintaining cohesion, and constructing and communicating values.…

  18. Network methods to support user involvement in qualitative data analyses: an introduction to Participatory Theme Elicitation.

    PubMed

    Best, Paul; Badham, Jennifer; Corepal, Rekesh; O'Neill, Roisin F; Tully, Mark A; Kee, Frank; Hunter, Ruth F

    2017-11-23

    While Patient and Public Involvement (PPI) is encouraged throughout the research process, engagement is typically limited to intervention design and post-analysis stages. There are few approaches to participatory data analyses within complex health interventions. Using qualitative data from a feasibility randomised controlled trial (RCT), this proof-of-concept study tests the value of a new approach to participatory data analysis called Participatory Theme Elicitation (PTE). Forty excerpts were given to eight members of a youth advisory PPI panel to sort into piles based on their perception of related thematic content. Using algorithms to detect communities in networks, excerpts were then assigned to a thematic cluster that combined the panel members' perspectives. Network analysis techniques were also used to identify key excerpts in each grouping that were then further explored qualitatively. While PTE analysis was, for the most part, consistent with the researcher-led analysis, young people also identified new emerging thematic content. PTE appears promising for encouraging user led identification of themes arising from qualitative data collected during complex interventions. Further work is required to validate and extend this method. ClinicalTrials.gov, ID: NCT02455986 . Retrospectively Registered on 21 May 2015.
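    A minimal sketch of the kind of network step described above: build a graph whose edge weights count how often two excerpts were sorted into the same pile, then detect communities. The toy pile sorts and the choice of greedy modularity maximization are illustrative assumptions, not the authors' exact pipeline.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

# Illustrative pile sorts: each panel member groups excerpt IDs into piles.
# (Toy data; the trial used 40 excerpts and 8 panel members.)
sorts = [
    [{1, 2, 3}, {4, 5}, {6, 7, 8}],
    [{1, 2}, {3, 4, 5}, {6, 7, 8}],
    [{1, 2, 3, 4}, {5, 6}, {7, 8}],
]

G = nx.Graph()
G.add_nodes_from(range(1, 9))
for piles in sorts:
    for pile in piles:
        for a, b in itertools.combinations(sorted(pile), 2):
            w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)   # edge weight = number of co-assignments

# Assign excerpts to thematic clusters that combine the panel members' sorts.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(clusters, 1):
    print(f"theme {i}: excerpts {sorted(c)}")

# Weighted degree can flag 'key' excerpts within the network for qualitative follow-up.
strength = dict(G.degree(weight="weight"))
print("most central excerpt:", max(strength, key=strength.get))
```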

  19. Reason, emotion and decision-making: risk and reward computation with feeling.

    PubMed

    Quartz, Steven R

    2009-05-01

    Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.

  20. Why Engineers Should Consider Formal Methods

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael

    1997-01-01

    This paper presents a logical analysis of a typical argument favoring the use of formal methods for software development, and suggests an alternative argument that is simpler and stronger than the typical one.

  1. Wavelength calibration of an imaging spectrometer based on Savart interferometer

    NASA Astrophysics Data System (ADS)

    Li, Qiwei; Zhang, Chunmin; Yan, Tingyu; Quan, Naicheng; Wei, Yutong; Tong, Cuncun

    2017-09-01

    The basic principle of a Fourier-transform imaging spectrometer (FTIS) based on a Savart interferometer is outlined. The non-uniform distribution of the optical path difference, which leads to wavelength drift in each row of the interferogram, is analyzed. Two typical methods for wavelength calibration of the presented system are described. The first method unifies the different spectral intervals and maximum spectral frequencies of each row using a reference monochromatic light of known wavelength, and also accounts for the dispersion compensation of the Savart interferometer. The second approach is based on least-squares fitting, which builds the functional relation between recovered wavelength, row number and calibrated wavelength through concise equations. The effectiveness of the two methods is experimentally demonstrated with monochromatic lights and a mixed light source across the detection band of the system. The results indicate that the first method has higher precision, reducing the mean root-mean-square error of the recovered wavelengths from 19.896 nm to 1.353 nm, while the second method is more convenient to implement and still achieves a good precision of 2.709 nm.
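    A minimal sketch of the second, least-squares calibration approach: fit a simple functional relation between recovered wavelength, detector row, and calibrated wavelength. The polynomial form and the synthetic calibration points are assumptions; the paper's concise equations are not reproduced here.

```python
import numpy as np

# Synthetic calibration points (illustrative): known monochromatic wavelengths recorded
# at several detector rows, with the wavelengths recovered by the uncorrected reconstruction.
rows       = np.array([100, 300, 500, 100, 300, 500, 100, 300, 500], float)
recovered  = np.array([532.8, 534.1, 535.9, 632.1, 633.9, 636.2, 404.2, 405.4, 406.9])
calibrated = np.array([532.0, 532.0, 532.0, 632.8, 632.8, 632.8, 404.7, 404.7, 404.7])

# Assumed model: lambda_cal = c0 + c1*lambda_rec + c2*row + c3*lambda_rec*row
A = np.column_stack([np.ones_like(rows), recovered, rows, recovered * rows])
coef, *_ = np.linalg.lstsq(A, calibrated, rcond=None)

def calibrate(lambda_rec, row):
    return coef @ np.array([1.0, lambda_rec, row, lambda_rec * row])

rms = np.sqrt(np.mean((A @ coef - calibrated) ** 2))
print("fit coefficients:", np.round(coef, 6))
print(f"RMS residual over calibration points: {rms:.3f} nm")
print("example: row 400, recovered 535.0 nm ->", round(calibrate(535.0, 400), 2), "nm")
```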

  2. Polarisation in spin-echo experiments: Multi-point and lock-in measurements

    NASA Astrophysics Data System (ADS)

    Tamtögl, Anton; Davey, Benjamin; Ward, David J.; Jardine, Andrew P.; Ellis, John; Allison, William

    2018-02-01

    Spin-echo instruments are typically used to measure diffusive processes and the dynamics and motion of samples on ps and ns time scales. A key aspect of the spin-echo technique is determining the polarisation of a particle beam. We present two methods for measuring the spin polarisation in spin-echo experiments. The current method in use is based on taking a number of discrete readings. The new method involves continuously rotating the spin and measuring its polarisation after scattering from the sample. A control system running on a microcontroller performs the spin rotation and calculates the polarisation of the scattered beam using a lock-in amplifier. First experimental tests of the method on a helium spin-echo spectrometer show that it works well and has advantages over the discrete approach; in particular, it can track changes in the beam properties throughout the experiment. Moreover, we show that real-time numerical simulations can accurately describe this complex experiment and can readily be used to develop improved experimental methods prior to a first hardware implementation.
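    A minimal sketch of the lock-in idea behind the new method: the detector signal is demodulated against quadrature references at the spin-rotation frequency to recover the polarisation amplitude and phase. The signal model and numbers are illustrative, not the spectrometer's actual control code.

```python
import numpy as np

fs, f_rot = 10_000.0, 50.0            # sample rate and spin-rotation frequency (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)

# Simulated detector signal: mean count rate modulated by the rotating spin phase,
# with polarisation P and phase phi to be recovered, plus noise.
P_true, phi_true = 0.35, 0.8
signal = 1.0 + P_true * np.cos(2 * np.pi * f_rot * t + phi_true)
signal += np.random.default_rng(2).normal(0, 0.05, t.size)

# Lock-in demodulation: multiply by quadrature references and average (low-pass filter).
ref_c = np.cos(2 * np.pi * f_rot * t)
ref_s = np.sin(2 * np.pi * f_rot * t)
X = 2 * np.mean(signal * ref_c)       # in-phase component  ~ P*cos(phi)
Y = 2 * np.mean(signal * ref_s)       # quadrature component ~ -P*sin(phi)

P_est   = np.hypot(X, Y)
phi_est = np.arctan2(-Y, X)
print(f"polarisation ~ {P_est:.3f} (true {P_true}), phase ~ {phi_est:.3f} rad (true {phi_true})")
```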

  3. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust to the degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
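    A minimal sketch of the classical RELIEF weight update on which LHR builds (the local hyperlinear refinement itself is not reproduced); the toy data, Manhattan nearest hit/miss search, and normalization are illustrative choices.

```python
import numpy as np

def relief_weights(X, y, n_iter=None, rng=None):
    """Classical RELIEF: reward features that separate the nearest miss,
    penalize features that differ from the nearest hit. Features scaled to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
    n, d = X.shape
    n_iter = n_iter or n
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)      # Manhattan distance to all samples
        dist[i] = np.inf                         # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(y != y[i], dist, np.inf))   # nearest other-class sample
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w

# Toy data: feature 0 is informative, features 1 and 2 are pure noise.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
X = np.column_stack([y + rng.normal(0, 0.3, 200),
                     rng.normal(0, 1, 200),
                     rng.normal(0, 1, 200)])
print(np.round(relief_weights(X, y, rng=rng), 3))   # feature 0 should get the largest weight
```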

  4. Rapid analysis of effluents generated by the dairy industry for fat determination by preconcentration in nylon membranes and attenuated total reflectance infrared spectroscopy measurement.

    PubMed

    Moliner Martínez, Y; Muñoz-Ortuño, M; Herráez-Hernández, R; Campíns-Falcó, P

    2014-02-01

    This paper describes a new approach for the determination of fat in the effluents generated by the dairy industry, based on the retention of fat in nylon membranes and measurement of the absorbances on the membrane surface by ATR-IR spectroscopy. Different options have been evaluated for retaining fat in the membranes using milk samples of different origin and fat content. Based on the results obtained, a method is proposed for the determination of fat in effluents that involves the filtration of 1 mL of sample through 0.45 µm nylon membranes of 13 mm diameter. The fat content is then determined by measuring the absorbance of the band at 1745 cm-1. The proposed method can be used for the direct estimation of fat at concentrations in the 2-12 mg/L interval with adequate reproducibility. The intraday precision, expressed as coefficients of variation (CVs), was ≤ 11%, whereas the interday CVs were ≤ 20%. The method shows a good tolerance towards conditions typically found in the effluents generated by the dairy industry. The most relevant features of the proposed method are its simplicity and speed, as the samples can be characterized in a few minutes. Sample preparation does not involve additional instrumentation (such as pumps or vacuum equipment), organic solvents, or other chemicals. Therefore, the proposed method can be considered a rapid, simple and cost-effective alternative to gravimetric methods for controlling fat content in these effluents during production or cleaning processes. © 2013 Published by Elsevier B.V.

  5. Child protective services utilization of child abuse pediatricians: A mixed methods study.

    PubMed

    Girardet, Rebecca; Bolton, Kelly; Hashmi, Syed; Sedlock, Emily; Khatri, Rachna; Lahoti, Nina; Lukefahr, James

    2018-02-01

    Several children's hospitals and medical schools across Texas have child abuse pediatricians (CAPs) who work closely with child protection workers to help ensure accurate assessments of the likelihood of maltreatment in cases of suspected abuse and neglect. Since the state does not mandate which cases should be referred to a CAP center, we were interested in studying factors that may influence workers' decisions to consult a CAP. We used a mixed methods study design consisting of a focus group followed by a survey. The focus group identified multiple factors that impact workers' decision-making, including several that involve medical providers. Responses from 436 completed surveys were compared to employees' number of years of employment and to the state region in which they worked. Focus group findings and survey responses revealed frustration among many workers when dealing with medical providers, and moderate levels of confidence in workers' abilities to make accurate determinations in cases involving medical information. Workers were more likely to refer cases involving serious physical injury than other types of cases. Among workers who reported prior interactions with a CAP, experiences and attitudes regarding CAPs were typically positive. The survey also revealed significant variability in referral practices by state region. Our results suggest that standard guidelines regarding CAP referrals may help workers who deal with cases involving medical information. Future research and quality improvement efforts to improve transfers of information and to better understand the qualities that CPS workers appreciate in CAP teams should improve CAP-CPS coordination. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    DTIC Science & Technology

    2006-07-01

    …and the use of lightweight portable robotic sensor platforms. Robotics has reached a point where some generalities of HRI transcend specific… displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor… tasks that do not involve mobility and usually involve camera control or data fusion from sensors. Active search: search tasks that involve mobility.

  7. The Relationships among Verbal Short-Term Memory, Phonological Awareness, and New Word Learning: Evidence from Typical Development and Down Syndrome

    ERIC Educational Resources Information Center

    Jarrold, Christopher; Thorn, Annabel S. C.; Stephens, Emma

    2009-01-01

    This study examined the correlates of new word learning in a sample of 64 typically developing children between 5 and 8 years of age and a group of 22 teenagers and young adults with Down syndrome. Verbal short-term memory and phonological awareness skills were assessed to determine whether learning new words involved accurately representing…

  8. Joint Attention in Parent-Child Dyads Involving Children with Selective Mutism: A Comparison between Anxious and Typically Developing Children

    ERIC Educational Resources Information Center

    Nowakowski, Matilda E.; Tasker, Susan L.; Cunningham, Charles E.; McHolm, Angela E.; Edison, Shannon; St. Pierre, Jeff; Boyle, Michael H.; Schmidt, Louis A.

    2011-01-01

    Although joint attention processes are known to play an important role in adaptive social behavior in typical development, we know little about these processes in clinical child populations. We compared early school age children with selective mutism (SM; n = 19) versus mixed anxiety (MA; n = 18) and community controls (CC; n = 26) on joint…

  9. Intraepidermal Merkel cell carcinoma: A case series of a rare entity with clinical follow up.

    PubMed

    Jour, George; Aung, Phyu P; Rozas-Muñoz, Eduardo; Curry, Johnathan L; Prieto, Victor; Ivan, Doina

    2017-08-01

    Merkel cell carcinoma (MCC) is a rare but aggressive cutaneous carcinoma. MCC typically involves the dermis and, although epidermotropism has been reported, MCC that is strictly intraepidermal or in situ (MCCIS) is exceedingly rare. Most of the cases of MCCIS described so far have other associated lesions, such as squamous or basal cell carcinoma, actinic keratosis and so on. Herein, we describe 3 patients with MCC strictly in situ, without a dermal component. Our patients were elderly. Two of the lesions involved the head and neck area and one was on a finger. All tumors were strictly intraepidermal in the diagnostic biopsies, and had histomorphologic features and an immunohistochemical profile supporting the diagnosis of MCC. Excisional biopsies were performed in 2 cases and failed to reveal dermal involvement by MCC or other associated malignancies. Our findings raise awareness that MCC strictly in situ does exist and should be included in the differential diagnosis of Paget's or extramammary Paget's disease, pagetoid squamous cell carcinoma, melanoma and other neoplasms that typically show histologically pagetoid extension of neoplastic cells. Considering the limited number of cases reported to date, the diagnosis of isolated MCCIS should not warrant a change in management from that of typical MCC. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Grip force coordination during bimanual tasks in unilateral cerebral palsy.

    PubMed

    Islam, Mominul; Gordon, Andrew M; Sköld, Annika; Forssberg, Hans; Eliasson, Ann-Christin

    2011-10-01

    The aim of the study was to investigate coordination of fingertip forces during an asymmetrical bimanual task in children with unilateral cerebral palsy (CP). Twelve participants (six males, six females; mean age 14y 4mo, SD 3.3y; range 9-20y) with unilateral CP (eight right-sided, four left-sided) and 15 age-matched typically developing participants (five males, 10 females; mean age 14y 3mo, SD 2.9y; range 9-18y) were included. Participants were instructed to hold custom-made grip devices in each hand and place one device on top of the other. The grip force and load force were recorded simultaneously in both hands. Temporal coordination between the two hands was impaired in the participants with CP (compared with that in typically developing participants); that is, they initiated the task by decreasing grip force in the releasing hand before increasing the force in the holding hand. The grip force increase in the holding hand was also smaller in participants with CP (involved hand/non-dominant hand releasing, p<0.001; non-involved hand/dominant hand releasing, p=0.007), indicating deficient scaling of force amplitude. The impairment was greater when participants with CP used their non-involved hand as the holding hand. Temporal coordination and scaling of fingertip forces were impaired in both hands in participants with CP. The non-involved hand was strongly affected by activity in the involved hand, which may explain why children with unilateral CP prefer to use only one hand during tasks that are typically performed with both hands. © The Authors. Developmental Medicine & Child Neurology © 2011 Mac Keith Press.

  11. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
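    A minimal sketch of the parameter-free Gaussian cue combination such a model implies: the perceived hue is the precision-weighted combination of the sensory estimate (colourimetrically grey) and the prior centred on the object's typical colour. The single hue axis and all numbers are illustrative assumptions, not the study's measured priors.

```python
# Precision-weighted (Gaussian) combination of a sensory estimate and a prior,
# along one illustrative colour axis (e.g., a blue-yellow opponent axis).
# All numbers are assumptions for illustration, not the study's measured values.

def combine(sensory_mean, sensory_sd, prior_mean, prior_sd):
    """Posterior mean/SD for two Gaussian cues (standard conjugate result)."""
    w_s = 1.0 / sensory_sd**2
    w_p = 1.0 / prior_sd**2
    mean = (w_s * sensory_mean + w_p * prior_mean) / (w_s + w_p)
    sd = (w_s + w_p) ** -0.5
    return mean, sd

# A colourimetrically grey banana image: the sensory evidence says 0 (grey),
# the prior says +10 units towards yellow, with broader uncertainty.
mean, sd = combine(sensory_mean=0.0, sensory_sd=2.0, prior_mean=10.0, prior_sd=6.0)
print(f"perceived hue shift: {mean:.2f} units towards the typical colour (sd {sd:.2f})")
# -> a small but non-zero shift towards yellow, i.e., a memory colour effect.
```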

  12. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874

  13. Determination of Microbial Extracellular Enzyme Activity in Waters, Soils, and Sediments using High Throughput Microplate Assays

    PubMed Central

    Jackson, Colin R.; Tyler, Heather L.; Millar, Justin J.

    2013-01-01

    Much of the nutrient cycling and carbon processing in natural environments occurs through the activity of extracellular enzymes released by microorganisms. Thus, measurement of the activity of these extracellular enzymes can give insights into the rates of ecosystem level processes, such as organic matter decomposition or nitrogen and phosphorus mineralization. Assays of extracellular enzyme activity in environmental samples typically involve exposing the samples to artificial colorimetric or fluorometric substrates and tracking the rate of substrate hydrolysis. Here we describe microplate based methods for these procedures that allow the analysis of large numbers of samples within a short time frame. Samples are allowed to react with artificial substrates within 96-well microplates or deep well microplate blocks, and enzyme activity is subsequently determined by absorption or fluorescence of the resulting end product using a typical microplate reader or fluorometer. Such high throughput procedures not only facilitate comparisons between spatially separate sites or ecosystems, but also substantially reduce the cost of such assays by reducing overall reagent volumes needed per sample. PMID:24121617

  14. Determination of microbial extracellular enzyme activity in waters, soils, and sediments using high throughput microplate assays.

    PubMed

    Jackson, Colin R; Tyler, Heather L; Millar, Justin J

    2013-10-01

    Much of the nutrient cycling and carbon processing in natural environments occurs through the activity of extracellular enzymes released by microorganisms. Thus, measurement of the activity of these extracellular enzymes can give insights into the rates of ecosystem level processes, such as organic matter decomposition or nitrogen and phosphorus mineralization. Assays of extracellular enzyme activity in environmental samples typically involve exposing the samples to artificial colorimetric or fluorometric substrates and tracking the rate of substrate hydrolysis. Here we describe microplate based methods for these procedures that allow the analysis of large numbers of samples within a short time frame. Samples are allowed to react with artificial substrates within 96-well microplates or deep well microplate blocks, and enzyme activity is subsequently determined by absorption or fluorescence of the resulting end product using a typical microplate reader or fluorometer. Such high throughput procedures not only facilitate comparisons between spatially separate sites or ecosystems, but also substantially reduce the cost of such assays by reducing overall reagent volumes needed per sample.
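    A minimal sketch of how activities are typically derived from such microplate readings: fit the fluorescence time course of a well, convert the slope to product formed using a standard curve, and normalize by the amount of sample in the well. The volumes, conversion factor, and readings are illustrative assumptions, not values from the protocol.

```python
import numpy as np

# Illustrative fluorescence readings (arbitrary units) for one well over time.
time_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
fluor  = np.array([120.0, 410.0, 690.0, 980.0, 1255.0])

# Standard curve: fluorescence per nmol of liberated fluorophore, obtained from a
# dilution series measured on the same plate (assumed value).
fluor_per_nmol = 550.0

# Assay bookkeeping (assumed): 0.2 mL sample slurry per well, 0.1 g soil per mL of slurry.
sample_volume_mL = 0.2
soil_g_per_mL    = 0.1

slope = np.polyfit(time_h, fluor, 1)[0]            # fluorescence units per hour
rate_nmol_per_h = slope / fluor_per_nmol           # product formed per hour in the well
activity = rate_nmol_per_h / (sample_volume_mL * soil_g_per_mL)   # nmol h^-1 g^-1 soil

print(f"slope = {slope:.1f} AU/h -> activity ~ {activity:.1f} nmol h^-1 g^-1 (soil assumed)")
```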

  15. Designing biopolymer microgels to encapsulate, protect and deliver bioactive components: Physicochemical aspects.

    PubMed

    McClements, David Julian

    2017-02-01

    Biopolymer microgels have considerable potential for their ability to encapsulate, protect, and release bioactive components. Biopolymer microgels are small particles (typically 100 nm to 1000 μm) whose interior consists of a three-dimensional network of cross-linked biopolymer molecules that traps a considerable amount of solvent. This type of particle is also sometimes referred to as a nanogel, hydrogel bead, biopolymer particle, or microsphere. Biopolymer microgels are typically prepared using a two-step process involving particle formation and particle gelation. This article reviews the major constituents and fabrication methods that can be used to prepare microgels, highlighting their advantages and disadvantages. It then provides an overview of the most important characteristics of microgel particles (such as size, shape, structure, composition, and electrical properties), and describes how these parameters can be manipulated to control the physicochemical properties and functional attributes of microgel suspensions (such as appearance, stability, rheology, and release profiles). Finally, recent examples of the utilization of biopolymer microgels to encapsulate, protect, or release bioactive agents, such as pharmaceuticals, nutraceuticals, enzymes, flavors, and probiotics, are given. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Fast algorithm for spectral processing with application to on-line welding quality assurance

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
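    A minimal sketch of a generic three-point (parabolic) sub-pixel peak estimator, shown only to illustrate the kind of refinement such algorithms provide; the LPO operator itself is not reproduced, and the spectrum is synthetic.

```python
import numpy as np

def subpixel_peak(y, i):
    """Refine the position of a local maximum at index i by fitting a parabola
    to (i-1, i, i+1); returns the fractional offset from i in pixels."""
    denom = y[i - 1] - 2.0 * y[i] + y[i + 1]
    return 0.0 if denom == 0 else 0.5 * (y[i - 1] - y[i + 1]) / denom

# Synthetic emission line sampled on a coarse wavelength grid (illustrative).
wavelength = np.linspace(500.0, 510.0, 51)              # 0.2 nm per pixel
true_centre = 504.63
spectrum = np.exp(-0.5 * ((wavelength - true_centre) / 0.4) ** 2)

i = int(np.argmax(spectrum))
step = wavelength[1] - wavelength[0]
refined = wavelength[i] + subpixel_peak(spectrum, i) * step

print(f"pixel estimate: {wavelength[i]:.2f} nm, sub-pixel estimate: {refined:.3f} nm "
      f"(true {true_centre} nm)")
```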

  17. Fatal crash involvement and laws against alcohol-impaired driving.

    PubMed

    Zador, P L; Lund, A K; Fields, M; Weinberg, K

    1989-01-01

    It is estimated that in 1985 about 1,560 fewer drivers were involved in fatal crashes because of three types of drinking-driving laws. The laws studied were per se laws that define driving under the influence using blood alcohol concentration (BAC) thresholds; laws that provide for administrative license suspension or revocation prior to conviction for driving under the influence (often referred to as "administrative per se" laws); and laws that mandate jail or community service for first convictions of driving under the influence. It is estimated that if all 48 of the contiguous states adopted laws similar to those studied here, and if these new laws had effects comparable to those reported here, another 2,600 fatal driver involvements could be prevented each year. During hours when typically at least half of all fatally injured drivers have a BAC over 0.10 percent, administrative suspension/revocation is estimated to reduce the involvement of drivers in fatal crashes by about 9 percent; during the same hours, first offense mandatory jail/community service laws are estimated to have reduced driver involvement by about 6 percent. The effect of per se laws was estimated to be a 6 percent reduction during hours when fatal crashes typically are less likely to involve alcohol. These results are based on analyses of drivers involved in fatal crashes in the 48 contiguous states of the United States during the years 1978 to 1985.

  18. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
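    A minimal sketch of the central idea in an orthonormal basis (so the overlap matrix is the identity): choose mixing coefficients for past Fock/density pairs that minimize the squared Frobenius norm of the commutator of the combined matrices, subject to the coefficients summing to one. A generic constrained optimizer stands in for the paper's constrained Newton solver, and random symmetric matrices stand in for an actual SCF history.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, m = 6, 4                          # basis size and number of stored iterates (illustrative)

# Stand-in history of symmetric Fock and density matrices from past SCF iterations.
sym = lambda a: 0.5 * (a + a.T)
F_hist = [sym(rng.normal(size=(n, n))) for _ in range(m)]
P_hist = [sym(rng.normal(size=(n, n))) for _ in range(m)]

def commutator_norm2(c):
    """Squared Frobenius norm of [F(c), P(c)], where F(c) = sum_i c_i F_i, P(c) = sum_i c_i P_i."""
    F = sum(ci * Fi for ci, Fi in zip(c, F_hist))
    P = sum(ci * Pi for ci, Pi in zip(c, P_hist))
    C = F @ P - P @ F
    return float(np.sum(C * C))

# The objective is quartic in c; minimize it under the normalization sum(c) = 1
# (SLSQP here, where the paper uses a constrained Newton iteration).
c0 = np.full(m, 1.0 / m)
res = minimize(commutator_norm2, c0,
               constraints=[{"type": "eq", "fun": lambda c: np.sum(c) - 1.0}])

print("mixing coefficients:", np.round(res.x, 4))
print("commutator norm^2 at optimum:", round(commutator_norm2(res.x), 6))
```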

  19. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
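    A minimal sketch of the histogram-based optimal estimator estimate discussed above: the irreducible error is the mean-squared deviation of the target from its conditional mean given the input parameters, approximated here by binning. The model and data are synthetic, and the bin counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200_000

# Synthetic target q depending on two input parameters plus irreducible noise.
x1, x2 = rng.uniform(0, 1, N), rng.uniform(0, 1, N)
noise_sd = 0.1
q = np.sin(2 * np.pi * x1) * x2 + rng.normal(0, noise_sd, N)

def irreducible_error_histogram(x1, x2, q, bins=32):
    """Approximate E[(q - E[q | x1, x2])^2] with a 2-D histogram of conditional means."""
    i = np.clip((x1 * bins).astype(int), 0, bins - 1)
    j = np.clip((x2 * bins).astype(int), 0, bins - 1)
    flat = i * bins + j
    sums = np.bincount(flat, weights=q, minlength=bins * bins)
    counts = np.bincount(flat, minlength=bins * bins)
    cond_mean = sums / np.maximum(counts, 1)
    return np.mean((q - cond_mean[flat]) ** 2)

for bins in (8, 32, 128):
    est = irreducible_error_histogram(x1, x2, q, bins)
    print(f"bins/dim = {bins:4d}: estimated irreducible error = {est:.5f} "
          f"(true noise variance = {noise_sd**2:.5f})")
# Too few bins inflate the estimate (unresolved structure); too many bins with sparse
# samples bias it the other way -- the spurious contribution discussed in the abstract.
```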

  20. Minimizing Noise in Pediatric Task-Based functional MRI; Adolescents with Developmental Disabilities and Typical Development

    PubMed Central

    Fassbender, Catherine; Muhkerjee, Prerona; Schweitzer, Julie B.

    2017-01-01

    Functional Magnetic Resonance Imaging (fMRI) represents a powerful tool with which to examine brain functioning and development in typically developing pediatric groups as well as children and adolescents with clinical disorders. However, fMRI data can be highly susceptible to misinterpretation due to the effects of excessive levels of noise, often related to head motion. Imaging children, especially with developmental disorders, requires extra considerations related to hyperactivity, anxiety and the ability to perform and maintain attention to the fMRI paradigm. We discuss a number of methods that can be employed to minimize noise, in particular movement-related noise. To this end we focus on strategies prior to, during and following the data acquisition phase employed primarily within our own laboratory. We discuss the impact of factors such as experimental design, screening of potential participants and pre-scan training on head motion in our adolescents with developmental disorders and typical development. We make some suggestions that may minimize noise during data acquisition itself and finally we briefly discuss some current processing techniques that may help to identify and remove noise in the data. Many advances have been made in the field of pediatric imaging, particularly with regard to research involving children with developmental disorders. Mindfulness of issues such as those discussed here will ensure continued progress and greater consistency across studies. PMID:28130195

  1. Transrectal ultrasound-guided extraction of impacted prostatic urethral calculi: a simple alternative to endoscopy

    PubMed Central

    Amend, Gregory; Gandhi, Jason; Smith, Noel L.; Weissbart, Steven J.; Schulsinger, David A.; Joshi, Gargi

    2017-01-01

    Urethral stones can become impacted in the posterior urethra, typically presenting with varying degrees of acute urinary retention and lower urinary tract symptoms. These are traditionally treated in the inpatient setting, with external urethrotomy or endoscopic push-back of the calculus into the urinary bladder followed by cystolitholapaxy or cystolithotripsy. However, these methods are invasive, involve general anesthesia, and require radiation. In this report, we describe a simple, minimally invasive, and safe alternative technique to visualize and remove impacted prostatic urethral stones under the real-time guidance of transrectal ultrasonography (TRUS). The urologist can accomplish this procedure in the office, avoiding radiation exposure to the patient and hospital admission. PMID:28725602

  2. IR-IR Conformation Specific Spectroscopy of Na +(Glucose) Adducts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.

    In this paper we report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.

  3. Efficacy-oriented compatibility for component-based Chinese medicine

    PubMed Central

    Zhang, Jun-hua; Zhu, Yan; Fan, Xiao-hui; Zhang, Bo-li

    2015-01-01

    Single-target drugs have not achieved satisfactory therapeutic effects for complex diseases involving multiple factors. Instead, innovations in recent drug research and development have revealed the emergence of compound drugs, such as cocktail therapies and “polypills”, as the frontier in new drug development. A traditional Chinese medicine (TCM) prescription that is usually composed of several medicinal herbs can serve a typical representative of compound medicines. Although the traditional compatibility theory of TCM cannot be well expressed using modern scientific language nowadays, the fundamental purpose of TCM compatibility can be understood as promoting efficacy and reducing toxicity. This paper introduces the theory and methods of efficacy-oriented compatibility for developing component-based Chinese medicines. PMID:25864650

  4. IR-IR Conformation Specific Spectroscopy of Na +(Glucose) Adducts

    DOE PAGES

    Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.; ...

    2017-09-27

    In this paper we report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.

  5. An approach to optimal semi-active control of vibration energy harvesting based on MEMS

    NASA Astrophysics Data System (ADS)

    Rojas, Rafael A.; Carcaterra, Antonio

    2018-07-01

    In this paper, the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem in which the objective is to absorb the maximum amount of energy from a vibrating environment in a given time interval. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is a relatively new optimal control technique, Krotov's method, which has not yet been applied to engineering problems except in quantum dynamics. This approach leads to the identification of new upper bounds on energy-harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.

  6. The Dynamics of "Market-Making" in Higher Education

    ERIC Educational Resources Information Center

    Komljenovic, Janja; Robertson, Susan L.

    2016-01-01

    This paper examines what to some is a well-worked furrow; the processes and outcomes involved in what is typically referred to as "marketization" in the higher education sector. We do this through a case study of Newton University, where we reveal a rapid proliferation of market exchanges involving the administrative division of the…

  7. Impact of parental weight status on weight loss efforts in Hispanic children

    USDA-ARS?s Scientific Manuscript database

    Parents have been shown to play an important role in weight loss for children. Parents are typically involved either as models for change or as supporters of children's weight loss efforts. It is likely that overweight/obese parents will need to be involved in changing the environment for themselv...

  8. Parental Involvement in the Musical Education of Violin Students: Suzuki and "Traditional" Approaches Compared

    ERIC Educational Resources Information Center

    Bugeja, Clare

    2009-01-01

    This article investigates parental involvement in the musical education of violin students and the changing role of the parents' across the learning process. Two contexts were compared, one emphasising the Suzuki methodology and the other a "traditional" approach. Students learning "traditionally" are typically taught note reading from the…

  9. "…Their Opinions Mean Something": Care Staff's Attitudes to Health Research Involving People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Hall, Natalie; Durand, Marie-Anne; Mengoni, Silvana E.

    2017-01-01

    Background: Despite experiencing health inequalities, people with intellectual disabilities are under-represented in health research. Previous research has identified barriers but has typically focused on under-recruitment to specific studies. This study aimed to explore care staff's attitudes to health research involving people with intellectual…

  10. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  11. Determination of elastic stresses in gas-turbine disks

    NASA Technical Reports Server (NTRS)

    Manson, S S

    1947-01-01

    A method is presented for the calculation of elastic stresses in symmetrical disks typical of those of a high-temperature gas turbine. The method is essentially a finite-difference solution of the equilibrium and compatibility equations for elastic stresses in a symmetrical disk. Account can be taken of point-to-point variations in disk thickness, in temperature, in elastic modulus, in coefficient of thermal expansion, in material density, and in Poisson's ratio. No numerical integration or trial-and-error procedures are involved and the computations can be performed in rapid and routine fashion by nontechnical computers with little engineering supervision. Checks on problems for which exact mathematical solutions are known indicate that the method yields results of high accuracy. Illustrative examples are presented to show the manner of treating solid disks, disks with central holes, and disks constructed either of a single material or two or more welded materials. The effect of shrink fitting is taken into account by a very simple device.
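
    For reference, the axisymmetric plane-stress relations that a finite-difference scheme of this kind discretizes take the standard textbook form (t is the local disk thickness, ω the angular speed, α the coefficient of thermal expansion, T the temperature; these are generic equations, not copied from the report):

```latex
\frac{\mathrm{d}}{\mathrm{d}r}\bigl(t\,r\,\sigma_r\bigr) - t\,\sigma_\theta + \rho\,\omega^{2} r^{2} t = 0,
\qquad
\varepsilon_r = \frac{\mathrm{d}}{\mathrm{d}r}\bigl(r\,\varepsilon_\theta\bigr),
\qquad
\varepsilon_r = \frac{\sigma_r - \nu\,\sigma_\theta}{E} + \alpha T,
\qquad
\varepsilon_\theta = \frac{\sigma_\theta - \nu\,\sigma_r}{E} + \alpha T .
```

    Writing these relations at a set of radial stations and replacing the derivatives with finite differences yields a system that can be evaluated station by station, consistent with the routine, non-iterative computation described above.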

  12. A new data processing technique for Rayleigh-Taylor instability growth experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong

    Typical face-on experiments for Rayleigh-Taylor instability studies involve time-resolved radiography of an accelerated foil, with the line of sight of the radiography along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images, reverses the actual image transmission procedure, so the results obtained can have a large error when the optical depth is large. In order to improve the accuracy of data processing, a new technique has been developed to process the face-on images. Based on the convolution theorem, it obtains refined solutions for the optical depth by solving a set of equations. Furthermore, we discuss both image-processing techniques, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. We also apply the two methods to experimental results from the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.

  13. Battery powered thought: enhancement of attention, learning, and memory in healthy adults using transcranial direct current stimulation.

    PubMed

    Coffman, Brian A; Clark, Vincent P; Parasuraman, Raja

    2014-01-15

    This article reviews studies demonstrating enhancement with transcranial direct current stimulation (tDCS) of attention, learning, and memory processes in healthy adults. Given that these are fundamental cognitive functions, they may also mediate stimulation effects on other higher-order processes such as decision-making and problem solving. Although tDCS research is still young, there have been a variety of methods used and cognitive processes tested. While these different methods have resulted in seemingly contradictory results among studies, many consistent and noteworthy effects of tDCS on attention, learning, and memory have been reported. The literature suggests that although tDCS as typically applied may not be as useful for localization of function in the brain as some other methods of brain stimulation, tDCS may be particularly well-suited for practical applications involving the enhancement of attention, learning, and memory, in both healthy subjects and in clinical populations. © 2013 Elsevier Inc. All rights reserved.

  14. Reverse engineering of gene regulatory networks.

    PubMed

    Cho, K H; Choo, S M; Jung, S H; Kim, J R; Choi, H S; Kim, J

    2007-05-01

    Systems biology is a multi-disciplinary approach to the study of the interactions of various cellular mechanisms and cellular components. Owing to the development of new technologies that simultaneously measure the expression of genetic information, systems biological studies involving gene interactions are increasingly prominent. In this regard, reconstructing gene regulatory networks (GRNs) forms the basis for the dynamical analysis of gene interactions and related effects on cellular control pathways. Various approaches of inferring GRNs from gene expression profiles and biological information, including machine learning approaches, have been reviewed, with a brief introduction of DNA microarray experiments as typical tools for measuring levels of messenger ribonucleic acid (mRNA) expression. In particular, the inference methods are classified according to the required input information, and the main idea of each method is elucidated by comparing its advantages and disadvantages with respect to the other methods. In addition, recent developments in this field are introduced and discussions on the challenges and opportunities for future research are provided.

  15. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  16. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
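
    The core idea, comparing models by average held-out predictive performance rather than by marginal likelihood, can be sketched outside phylogenetics with a toy example (two candidate distributions fitted by maximum likelihood to synthetic data; the clock and demographic models of the study are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=200)   # synthetic observations

def cv_score(data, fit, logpdf, k=5):
    """Mean held-out log-likelihood per observation under k-fold cross-validation."""
    folds = np.array_split(rng.permutation(data), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        params = fit(train)                         # fit on the training folds only
        scores.append(logpdf(test, *params).mean()) # score on the held-out fold
    return float(np.mean(scores))

models = {
    "lognormal":   (lambda x: stats.lognorm.fit(x, floc=0), stats.lognorm.logpdf),
    "exponential": (lambda x: stats.expon.fit(x, floc=0),   stats.expon.logpdf),
}
for name, (fit, logpdf) in models.items():
    print(name, round(cv_score(data, fit, logpdf), 3))
```

    The model with the higher mean held-out log-likelihood is preferred, mirroring the predictive criterion used in the study.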

  17. Exploring employment readiness through mock job interview and workplace role-play exercises: comparing youth with physical disabilities to their typically developing peers.

    PubMed

    Lindsay, Sally; McDougall, Carolyn; Sanford, Robyn; Menna-Dack, Dolly; Kingsnorth, Shauna; Adams, Tracey

    2015-01-01

    To assess performance differences in a mock job interview and workplace role-play exercise for youth with disabilities compared to their typically developing peers. We evaluated a purposive sample of 31 youth (15 with a physical disability and 16 typically developing) on their performance (content and delivery) in employment readiness role-play exercises. Our findings show significant differences between youth with disabilities and typically developing peers in several areas of the mock interview content (i.e. responses to the questions: "tell me about yourself", "how would you provide feedback to someone not doing their share" and a problem-solving scenario question) and delivery (i.e. voice clarity and mean latency). We found no significant differences in the workplace role-play performances of youth with and without disabilities. Youth with physical disabilities performed more poorly in some areas of a job interview than their typically developing peers. They could benefit from further targeted employment readiness training. Clinicians should: coach youth with physical disability on how to "sell" their abilities to potential employers and encourage youth to get involved in volunteer activities and employment readiness training programs; consider using mock job interviews and other employment role-play exercises as assessment and training tools for youth with physical disabilities; and involve speech pathologists in the development of employment readiness programs that address voice clarity as a potential delivery issue.

  18. Estimation of Fine and Oversize Particle Ratio in a Heterogeneous Compound with Acoustic Emissions.

    PubMed

    Nsugbe, Ejay; Ruiz-Carcel, Cristobal; Starr, Andrew; Jennions, Ian

    2018-03-13

    The final phase of powder production typically involves a mixing process where all of the particles are combined and agglomerated with a binder to form a single compound. The traditional means of inspecting the physical properties of the final product is an inspection of the particle sizes using an offline sieving and weighing process. The main downside of this technique, in addition to being an offline-only measurement procedure, is its inability to characterise large agglomerates of powders due to sieve blockage. This work assesses the feasibility of a real-time monitoring approach using a benchtop test rig and a prototype acoustic measurement method to provide information that can be correlated to product quality and provide the opportunity for future process optimisation. Acoustic emission (AE) was chosen as the sensing method due to its low cost, simple setup process, and ease of implementation. The performance of the proposed method was assessed in a series of experiments where the offline quality check results were compared to the AE-based real-time estimations using data acquired from a benchtop powder free-flow rig. A purpose-designed time-domain signal processing method was used to extract particle size information from the acquired AE signal, and the results show that this technique is capable of estimating the required ratio in the washing powder compound with an average absolute error of 6%.

  19. Toward a framework for computer-mediated collaborative design in medical informatics.

    PubMed

    Patel, V L; Kaufman, D R; Allen, V G; Shortliffe, E H; Cimino, J J; Greenes, R A

    1999-09-01

    The development and implementation of enabling tools and methods that provide ready access to knowledge and information are among the central goals of medical informatics. The need for multi-institutional collaboration in the development of such tools and methods is increasingly being recognized. Collaboration involves communication, which typically involves individuals who work together at the same location. With the evolution of electronic modalities for communication, we seek to understand the role that such technologies can play in supporting collaboration, especially when the participants are geographically separated. Using the InterMed Collaboratory as a subject of study, we have analyzed their activities as an exercise in computer- and network-mediated collaborative design. We report on the cognitive, sociocultural, and logistical issues encountered when scientists from diverse organizations and backgrounds use communications technologies while designing and implementing shared products. Results demonstrate that it is important to match carefully the content with the mode of communication, identifying, for example, suitable uses of E-mail, conference calls, and face-to-face meetings. The special role of leaders in guiding and facilitating the group activities can also be seen, regardless of the communication setting in which the interactions occur. Most important is the proper use of technology to support the evolution of a shared vision of group goals and methods, an element that is clearly necessary before successful collaborative designs can proceed.

  20. Low-Melt Poly(Amic Acids) and Polyimides and Their Uses

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F. (Inventor); Jolley, Scott T. (Inventor); Gibson, Tracy L. (Inventor); Snyder, Sarah J. (Inventor); Williams, Martha K. (Inventor)

    2016-01-01

    Provided are low-melt polyimides and poly(amic acids) (PAAs) for use as adhesives, and methods of using the materials for attaching two substrates. The methods typically form an adhesive bond that is hermetically sealed to both substrates. Additionally, the method typically forms a cross-linked bonding material that is flexible.

  1. Fast large-scale clustering of protein structures using Gauss integrals.

    PubMed

    Harder, Tim; Borg, Mikael; Boomsma, Wouter; Røgen, Peter; Hamelryck, Thomas

    2012-02-15

    Clustering protein structures is an important task in structural bioinformatics. De novo structure prediction, for example, often involves a clustering step for finding the best prediction. Other applications include assigning proteins to fold families and analyzing molecular dynamics trajectories. We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors--which were introduced by Røgen and co-workers--and subsequently performing K-means clustering. Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low energy structures generated in a typical folding study, which is in the order of 50,000 structures, can be clustered within seconds to minutes.
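
    A minimal sketch of the second stage of such a pipeline (K-means on precomputed, fixed-length descriptor vectors; random vectors stand in for the actual Gauss integral descriptors, which are not computed here):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Stand-in for Gauss integral vectors: one fixed-length descriptor per structure.
n_structures, n_features = 50_000, 31
descriptors = rng.normal(size=(n_structures, n_features))

# K-means in descriptor space approximates RMSD-based clustering at a fraction of
# the cost: tens of thousands of structures cluster in seconds on a laptop.
labels = KMeans(n_clusters=20, n_init=5, random_state=0).fit_predict(descriptors)
largest_cluster = np.bincount(labels).argmax()
print("size of largest cluster:", (labels == largest_cluster).sum())
```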

  2. Seeking maximum linearity of transfer functions

    NASA Astrophysics Data System (ADS)

    Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.

    2016-12-01

    Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and a simple situation involving experimental data of a low-power, one-stage class A transistor current amplifier. Such an approach, which has been addressed in terms of transfer functions derived from experimentally obtained characteristic surface, also yielded contributions such as the estimation of local constants of the device, as opposed to typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power law behavior.
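
    A sketch of the core search as described, under the assumption that it amounts to fitting a least-squares line to every candidate window of fixed width and keeping the window with the smallest residual:

```python
import numpy as np

def most_linear_region(x, y, width):
    """Return (start_index, slope) of the fixed-width window with the smallest
    least-squares residual, i.e. the most linear region of the transfer function."""
    best = (None, np.inf, None)
    for i in range(len(x) - width + 1):
        xs, ys = x[i:i + width], y[i:i + width]
        coeffs, residuals, *_ = np.polyfit(xs, ys, deg=1, full=True)
        rss = residuals[0] if residuals.size else 0.0
        if rss < best[1]:
            best = (i, rss, coeffs[0])
    return best[0], best[2]

x = np.linspace(-6, 6, 400)
y = 1 / (1 + np.exp(-x))   # sigmoid transfer function, as in the analytical example
width = 80
start, slope = most_linear_region(x, y, width)
print(f"most linear region: x in [{x[start]:.2f}, {x[start + width - 1]:.2f}], "
      f"local gain ~ {slope:.3f}")
```

    For the sigmoid the selected window sits around the inflection point, and the fitted slope plays the role of the local gain, analogous to the locally estimated device constants mentioned above.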

  3. Techniques for Investigating Molecular Toxicology of Nanomaterials.

    PubMed

    Wang, Yanli; Li, Chenchen; Yao, Chenjie; Ding, Lin; Lei, Zhendong; Wu, Minghong

    2016-06-01

    Nanotechnology has been a rapidly developing field in the past few decades, resulting in increasing human exposure to nanomaterials. The growing applications of nanomaterials for industrial, commercial and everyday purposes, such as fillers, catalysts, semiconductors, paints, cosmetic additives and drug carriers, have both evident and potential impacts on human health and the environment. Nanotoxicology, which studies the safety of nanomaterials, has emerged in response. Molecular toxicology is a new subdiscipline that studies the interactions and impacts of materials at the molecular level. To better understand the relationship between molecular toxicology and nanomaterials, this review summarizes the typical techniques and methods of molecular toxicology that are applied when investigating the toxicology of nanomaterials, grouped into six categories: genetic mutation detection, gene expression analysis, DNA damage detection, chromosomal aberration analysis, proteomics, and metabolomics. Each category involves several experimental techniques and methods.

  4. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG.

    PubMed

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M

    2016-09-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.

  5. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications for the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP, developed at Stanford University, and FUEGO, developed at Sandia National Laboratories. In addition to verifying the order of accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow solvers. Strategies to overcome these issues are briefly discussed.
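
    A minimal illustration of the MMS procedure on a 1D Poisson problem, far simpler than the combustion codes in question: choose a manufactured solution, derive the forcing term analytically, and check that the discrete solution converges at the expected order.

```python
import numpy as np

def solve_poisson(n):
    """Second-order finite differences for -u'' = f on (0, 1), u(0) = u(1) = 0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)   # forcing derived from u_manufactured = sin(pi x)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f)

errors = []
for n in (32, 64, 128):
    x, u = solve_poisson(n)
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))   # compare against manufactured u
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("observed orders of accuracy:", [round(p, 2) for p in orders])  # ~2.0 expected
```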

  6. The surgical management of fibrous dysplasia of bone.

    PubMed

    Stanton, Robert P; Ippolito, Ernesto; Springfield, Dempsey; Lindaman, Lynn; Wientroub, Shlomo; Leet, Arabella

    2012-05-24

    The surgical management of polyostotic fibrous dysplasia (FD) of bone is technically demanding. The most effective methods of managing the associated bone deformity remain unclear. The marked variation in the degree and pattern of bone involvement has made it difficult to acquire data to guide the surgeon's approach to these patients. In light of the paucity of data but the need for guidance, recognized experts in the management of these patients came together at the National Institutes of Health in Bethesda, Maryland, as part of an international meeting on fibrous dysplasia of bone to discuss and refine their recommendations regarding surgical indications and preferred methods for the management of these challenging patients. The specific challenges, recommended approaches, and "lessons learned" are presented in the hope that surgeons faced with typical deformities can be guided in the surgical reconstruction of both children and adults with FD.

  7. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
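
    The contrast can be sketched on a 1D Poisson matrix. A Gauss-Seidel sweep updates unknowns one after another, while a polynomial smoother needs only matrix-vector products and therefore parallelizes naturally; for brevity the polynomial used below is simply three damped-Jacobi sweeps (a fixed polynomial in D⁻¹A), not the Chebyshev or multilevel-specific polynomials of the paper.

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.zeros(n)
x0 = np.random.default_rng(2).normal(size=n)         # random error to be smoothed

def gauss_seidel(A, b, x, sweeps=3):
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(b)):                       # inherently sequential update
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def damped_jacobi_poly(A, b, x, sweeps=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):                           # only mat-vecs: easy to parallelize
        x = x + omega * (b - A @ x) / d
    return x

print("initial error norm:", round(float(np.linalg.norm(x0)), 3))
for name, smoother in [("Gauss-Seidel", gauss_seidel),
                       ("damped-Jacobi polynomial", damped_jacobi_poly)]:
    print(name, "error norm after 3 sweeps:",
          round(float(np.linalg.norm(smoother(A, b, x0))), 3))
```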

  8. The ability of flexible car bonnets to mitigate the consequences of frontal impact with pedestrians

    NASA Astrophysics Data System (ADS)

    Stanisławek, Sebastian; Niezgoda, Tadeusz

    2018-01-01

    The paper presents the results of numerical research on a vehicle representing a Toyota Yaris passenger sedan hitting a pedestrian. A flexible car body is suggested as an interesting way to increase safety. The authors present a simple low-cost bonnet buffer concept that may mitigate the effects of frontal impact. Computer simulation was the method chosen to solve the problem efficiently. The Finite Element Method (FEM) implemented in the LS-DYNA commercial code was used. The testing procedure was based on the Euro NCAP protocol. A flexible bonnet buffer shows its usefulness in preventing casualties in typical accidents. In the best scenario, the HIC15 parameter is only 380 when such a buffer is installed. In comparison, an accident involving a car without any protection produces an HIC15 of 970, which is very dangerous for pedestrians.

  9. Fracture surfaces of granular pastes.

    PubMed

    Mohamed Abdelhaye, Y O; Chaouche, M; Van Damme, H

    2013-11-01

    Granular pastes are dense dispersions of non-colloidal grains in a simple or a complex fluid. Typical examples are the coating, gluing or sealing mortars used in building applications. We study the cohesive rupture of thick mortar layers in a simple pulling test where the paste is initially confined between two flat surfaces. After hardening, the morphology of the fracture surfaces was investigated, using either the box counting method to analyze fracture profiles perpendicular to the mean fracture plane, or the slit-island method to analyze the islands obtained by cutting the fracture surfaces at different heights, parallel to the mean fracture plane. The fracture surfaces were shown to exhibit scaling properties over several decades. However, contrary to what has been observed in the brittle or ductile fracture of solid materials, the islands were shown to be mass fractals. This was related to the extensive plastic flow involved in the fracture process.
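
    A standard box-counting sketch for a binarized trace (a synthetic random-walk profile stands in for a digitized fracture profile; the slope of log N(ε) against log(1/ε) estimates the fractal dimension):

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((512, 512), dtype=bool)
# Synthetic stand-in for a digitized fracture trace: a random walk across the image.
cols = np.arange(512)
rows = np.clip(256 + np.cumsum(rng.choice([-1, 0, 1], size=512)), 0, 511)
img[rows, cols] = True

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one occupied pixel.
        occupied = img.reshape(img.shape[0] // s, s, img.shape[1] // s, s).any(axis=(1, 3))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

print(f"estimated box-counting dimension: {box_count_dimension(img):.2f}")
```

    The slit-island variant applies the same count-versus-scale fit to island perimeters or areas obtained from horizontal cuts through the surface.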

  10. Stimulant Paste Preparation and Bark Streak Tapping Technique for Pine Oleoresin Extraction.

    PubMed

    Füller, Thanise Nogueira; de Lima, Júlio César; de Costa, Fernanda; Rodrigues-Corrêa, Kelly C S; Fett-Neto, Arthur G

    2016-01-01

    The tapping technique comprises the extraction of pine oleoresin, a non-wood forest product consisting of a complex mixture of mono-, sesqui-, and diterpenes biosynthesized and exuded as a defense response to wounding. Oleoresin is used to produce gum rosin, turpentine, and their multiple derivatives. Oleoresin yield and quality are objects of interest in pine tree biotechnology, in terms of both environmental and genetic control. Monitoring these parameters in individual trees grown in the field provides a means to examine the control of terpene production in resin canals, as well as to identify genetically based differences in resinosis. A typical method of tapping involves the removal of bark and the application of a chemical stimulant to the wounded area. Here we describe methods for preparing the resin-stimulant paste with different adjuvants, as well as the bark streaking process in adult pine trees.

  11. Biological Matrix Effects in Quantitative Tandem Mass Spectrometry-Based Analytical Methods: Advancing Biomonitoring

    PubMed Central

    Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd

    2015-01-01

    The ability to quantify levels of target analytes in biological samples accurately and precisely in biomonitoring requires the use of highly sensitive and selective instrumentation, such as tandem mass spectrometers, and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as their chromatographic response, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585

  12. How to use concept mapping to identify barriers and facilitators of an electronic quality improvement intervention.

    PubMed

    van Engen-Verheul, Mariëtte; Peek, Niels; Vromen, Tom; Jaspers, Monique; de Keizer, Nicolette

    2015-01-01

    Systematic quality improvement (QI) interventions are increasingly used to change complex health care systems. Results of randomized clinical trials can provide quantitative evidence of whether QI interventions were effective, but they do not teach us why and how QI was (not) achieved. Qualitative research methods can answer these questions but typically involve only a small group of respondents at a high resource cost. Concept mapping methodology overcomes these drawbacks by integrating results from qualitative group sessions with multivariate statistical analysis to represent the ideas of diverse stakeholders visually on maps in an efficient way. This paper aims to describe how to use concept mapping to qualitatively gain insight into barriers and facilitators of an electronic QI intervention, and presents experiences with the method from an ongoing case study evaluating a QI system in the field of cardiac rehabilitation in the Netherlands.

  13. How I Attend--Not How Well Do I Attend: Rethinking Developmental Frameworks of Attention and Cognition in Autism Spectrum Disorder and Typical Development

    ERIC Educational Resources Information Center

    Burack, Jacob A.; Russo, Natalie; Kovshoff, Hannah; Palma Fernandes, Tania; Ringo, Jason; Landry, Oriane; Iarocci, Grace

    2016-01-01

    Evidence from the study of attention among persons with autism spectrum disorder (ASD) and typically developing (TD) children suggests a rethinking of the notion that performance inherently reflects disability, ability, or capacity in favor of a more nuanced story that involves an emphasis on styles and biases that reflect real-world attending. We…

  14. Neural correlates of retrieval-based memory enhancement: An fMRI study of the testing effect

    PubMed Central

    Wing, Erik A.; Marsh, Elizabeth J.; Cabeza, Roberto

    2013-01-01

    Restudying material is a common method for learning new information, but not necessarily an effective one. Research on the testing effect shows that practice involving retrieval from memory can facilitate later memory in contrast to passive restudy. Despite extensive behavioral work, the brain processes that make retrieval an effective learning strategy remain unclear. In the present experiment, we explored how initially retrieving items affected memory a day later as compared to a condition involving traditional restudy. In contrast to restudy, initial testing that contributed to future memory success was associated with engagement of several regions including the anterior hippocampus, lateral temporal cortices, and medial prefrontal cortex (PFC). Additionally, testing enhanced hippocampal connectivity with ventrolateral PFC and midline regions. These findings indicate that the testing effect may be contingent on processes that are typically thought to support memory success at encoding (e.g. relational binding, selection and elaboration of semantically-related information) in addition to those more often associated with retrieval (e.g. memory search). PMID:23607935

  15. Responses of biomass briquetting and pelleting to water-involved pretreatments and subsequent enzymatic hydrolysis.

    PubMed

    Li, Yang; Li, Xiaotong; Shen, Fei; Wang, Zhanghong; Yang, Gang; Lin, Lili; Zhang, Yanzong; Zeng, Yongmei; Deng, Shihuai

    2014-01-01

    Although lignocellulosic biomass has been widely regarded as the most important resource for bioethanol, its broad application is seriously restricted by the high cost of transporting biomass. Currently, biomass densification is regarded as an acceptable solution to this issue. Herein, briquettes, pellets and their corresponding undensified biomass were pretreated by dilute-NaOH and hydrothermal methods to investigate the responses of biomass densification to these typical water-involved pretreatments and to subsequent enzymatic hydrolysis. Auto-swelling of the densified biomass was investigated before pretreatment. The results indicated that pellets could be totally auto-swollen within an hour, while briquettes took about 24 h. When dilute-NaOH pretreatment was performed, biomass briquetting and pelleting improved the sugar conversion rate by 20.1% and 5.5%, respectively, compared with the corresponding undensified biomass. Pelleting improved the sugar conversion rate by 7.0% after hydrothermal pretreatment compared with the undensified biomass. However, briquetting disturbed the hydrothermal pretreatment, resulting in a decrease in sugar conversion rate of 15.0%. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Abnormal brain activation in excoriation (skin-picking) disorder: evidence from an executive planning fMRI study

    PubMed Central

    Odlaug, Brian L.; Hampshire, Adam; Chamberlain, Samuel R.; Grant, Jon E.

    2016-01-01

    Background Excoriation (skin-picking) disorder (SPD) is a relatively common psychiatric condition whose neurobiological basis is unknown. Aims To probe the function of fronto-striatal circuitry in SPD. Method Eighteen participants with SPD and 15 matched healthy controls undertook an executive planning task (Tower of London) during functional magnetic resonance imaging (fMRI). Activation during planning was compared between groups using region of interest and whole-brain permutation cluster approaches. Results The SPD group exhibited significant functional underactivation in a cluster encompassing bilateral dorsal striatum (maximal in right caudate), bilateral anterior cingulate and right medial frontal regions. These abnormalities were, for the most part, outside the dorsal planning network typically activated by executive planning tasks. Conclusions Abnormalities of neural regions involved in habit formation, action monitoring and inhibition appear involved in the pathophysiology of SPD. Implications exist for understanding the basis of excessive grooming and the relationship of SPD with putative obsessive–compulsive spectrum disorders. PMID:26159604

  17. Species identification of Cannabis sativa using real-time quantitative PCR (qPCR).

    PubMed

    Johnson, Christopher E; Premasuthan, Amritha; Satkoski Trask, Jessica; Kanthaswamy, Sree

    2013-03-01

    Most narcotics-related cases in the United States involve Cannabis sativa. Material is typically identified based on the cystolithic hairs on the leaves and with chemical tests to identify the presence of cannabinoids. Suspect seeds are germinated into a viable plant so that morphological and chemical tests can be conducted. Seed germination, however, causes undue analytical delays. DNA analyses that involve the chloroplast and nuclear genomes have been developed for identification of C. sativa materials, but they require several nanograms of template DNA. Using the trnL 3' exon-trnF intragenic spacer regions within the C. sativa chloroplast, we have developed a real-time quantitative PCR assay that is capable of identifying picogram amounts of chloroplast DNA for species determination of suspected C. sativa material. This assay provides forensic science laboratories with a quick and reliable method to identify an unknown sample as C. sativa. © 2013 American Academy of Forensic Sciences.

  18. Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database

    NASA Astrophysics Data System (ADS)

    Proctor, D. D.

    2006-07-01

    Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves construction of training sets, the typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier. The classifier is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique may also be applied to situations in which, although accurate classifications are available, the feature set is clearly inadequate, but it is nonetheless desirable to make the best of the available information.
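
    The quantity being compared, the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees, can be sketched with scikit-learn; the synthetic data and the two candidate feature sets below are purely illustrative stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def sorted_vote_distribution(X_train, y_train, X_eval, n_trees=51, seed=0):
    """Fraction of trees voting 'class 1' for each evaluation sample, sorted ascending:
    the sample-size-normalized curve compared across feature sets."""
    ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_trees,
                            random_state=seed).fit(X_train, y_train)
    votes = np.mean([tree.predict(X_eval) for tree in ens.estimators_], axis=0)
    return np.sort(votes)

# Synthetic stand-ins for two candidate feature sets describing the same objects.
X, y = make_classification(n_samples=800, n_informative=6, random_state=1)
X_poor = X[:, :2]                                  # deliberately impoverished alternative
for name, feats in [("feature set A", X), ("feature set B", X_poor)]:
    votes = sorted_vote_distribution(feats[:400], y[:400], feats[400:])
    ambiguous = float(np.mean((votes > 0.2) & (votes < 0.8)))
    print(name, "fraction of ambiguous votes (0.2-0.8):", round(ambiguous, 2))
```

    A weaker feature set produces a vote curve with a larger ambiguous middle region, which is what the significance tests on the sorted distributions are meant to detect.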

  19. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    PubMed

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
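
    A sketch of the order-selection step on a toy system (an ARX model fitted by least squares and scored with the classical BIC; the paper's modified criterion adds a further complexity penalty whose weight is tuned by simulation and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2000
u = rng.normal(size=N)                              # exogenous input signal
y = np.zeros(N)
for t in range(2, N):                               # true system: 2nd-order ARX
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.8 * u[t - 1] + 0.05 * rng.normal()

def fit_arx_bic(y, u, na, nb):
    """Least-squares ARX(na, nb) fit; returns BIC = N*ln(RSS/N) + k*ln(N)."""
    p = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(p, len(y))]
    Phi, target = np.asarray(rows), y[p:]
    theta, rss, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    n_eff, k = len(target), na + nb
    return n_eff * np.log(rss[0] / n_eff) + k * np.log(n_eff)

for na, nb in [(1, 1), (2, 1), (3, 2), (4, 3)]:
    print(f"ARX({na},{nb})  BIC = {fit_arx_bic(y, u, na, nb):.1f}")
```

    The candidate with the lowest score is selected; a stronger penalty on k, as in the modified criterion, shifts the selection toward simpler, feedback-only structures and suppresses false-positive feedforward terms.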

  20. Special nuclear material simulation device

    DOEpatents

    Leckey, John H.; DeMint, Amy; Gooch, Jack; Hawk, Todd; Pickett, Chris A.; Blessinger, Chris; York, Robbie L.

    2014-08-12

    An apparatus for simulating special nuclear material is provided. The apparatus typically contains a small quantity of special nuclear material (SNM) in a configuration that simulates a much larger quantity of SNM. Generally the apparatus includes a spherical shell that is formed from an alloy containing a small quantity of highly enriched uranium. Also typically provided is a core of depleted uranium. A spacer, typically aluminum, may be used to separate the depleted uranium from the shell of uranium alloy. A cladding, typically made of titanium, is provided to seal the source. Methods are provided to simulate SNM for testing radiation monitoring portals. Typically the methods use at least one primary SNM spectral line and exclude at least one secondary SNM spectral line.

  1. Multiplexing N-glycan analysis by DNA analyzer.

    PubMed

    Feng, Hua-Tao; Li, Pingjing; Rui, Guo; Stray, James; Khan, Shaheer; Chen, Shiaw-Min; Li, Sam F Y

    2017-07-01

    Analysis of N-glycan structures has been gaining attention over the years due to their critical importance in biopharmaceutical applications and their growing role in biological research. Glycan profiling is also critical to the development of biosimilar drugs. The detailed characterization of N-glycosylation is mandatory because it is a non-template-driven process that significantly influences critical properties such as bio-safety and bio-activity. Comprehensively characterizing highly complex mixtures of N-glycans has been analytically challenging because of both their structural complexity and the time-consuming sample pretreatment procedures required. CE-LIF is one of the typical techniques for N-glycan analysis due to its high separation efficiency. In this paper, a 16-capillary DNA analyzer was coupled with a magnetic bead glycan purification method to accelerate the sample preparation procedure and therefore increase N-glycan assay throughput. Routinely, the labeling dye used for CE-LIF is 8-aminopyrene-1,3,6-trisulfonic acid, while the typical identification method involves matching migration times with database entries. Two new fluorescent dyes were used either to cross-validate and increase glycan identification precision or to simplify sample preparation steps. Exoglycosidase studies were carried out using neuraminidase, galactosidase, and fucosidase to confirm the results of the three-dye cross-validation. The optimized method combines the parallel separation capacity of multiple-capillary separation with three labeling dyes, magnetic-bead-assisted preparation, and exoglycosidase treatment to allow rapid and accurate analysis of N-glycans. These new methods provided enough structural information to permit N-glycan structure elucidation with only one sample injection. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Rapid Fabrication of Cell-Laden Alginate Hydrogel 3D Structures by Micro Dip-Coating.

    PubMed

    Ghanizadeh Tabriz, Atabak; Mills, Christopher G; Mullins, John J; Davies, Jamie A; Shu, Wenmiao

    2017-01-01

    Development of a simple, straightforward 3D fabrication method to culture cells in 3D, without relying on any complex fabrication methods, remains a challenge. In this paper, we describe a new technique that allows easy fabrication of scalable 3D cell-laden hydrogel structures without complex machinery: the technique can be carried out using only apparatus already available in a typical cell biology laboratory. The fabrication method involves micro dip-coating a metal bar with cell-laden hydrogel and immersing it in the cross-linking reagents calcium chloride or barium chloride to form hollow tubular structures. This method can be used to form single layers with thickness ranging from 126 to 220 µm or multilayered tubular structures. The fabrication method uses alginate hydrogel as the primary biomaterial, and a secondary biomaterial can be added depending on the desired application. We demonstrate the feasibility of this method, with a survival rate of over 75% immediately after fabrication and normal responsiveness of cells within these tubular structures, using mouse dermal embryonic fibroblast cells and human embryonic kidney 293 cells containing a tetracycline-responsive red fluorescent protein (tHEK cells).

  3. Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation

    PubMed Central

    Dayan, Peter; Berridge, Kent C.

    2014-01-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
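
    The distinction can be caricatured in a few lines, using a hypothetical salt-revaluation scenario in the spirit of the experiments reviewed (all numbers are illustrative): a model-free value is a cached average of past experience, whereas a model-based value recomputes the outcome's utility under the body's current state.

```python
# Model-free: cached value learned from past experience (here, saline was mildly aversive).
cached_value = {"salt_cue": -0.5}

# Model-based: re-evaluate the predicted outcome under the current physiological state.
outcome_given_cue = {"salt_cue": "concentrated_saline"}

def outcome_utility(outcome, state):
    """Utility depends on bodily state; under salt depletion, saline becomes desirable."""
    if outcome == "concentrated_saline":
        return +2.0 if state == "salt_depleted" else -0.5
    return 0.0

def model_based_value(cue, state):
    return outcome_utility(outcome_given_cue[cue], state)

for state in ("normal", "salt_depleted"):
    print(state,
          "| model-free:", cached_value["salt_cue"],
          "| model-based:", model_based_value("salt_cue", state))
```

    The cached (model-free) value only changes after new experience with the cue, whereas the model-based value shifts immediately when the state changes, which is the signature that revaluation experiments exploit.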

  4. Feasibility of using LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1985-01-01

    Research activities conducted from February 1, 1985 to July 31, 1985 and preliminary conclusions regarding the research objectives are summarized. The objective is to determine the feasibility of using LANDSAT data to estimate effective hydraulic properties of soils. The general approach is to apply the climatic-climax hypothesis (Eagleson, 1982) to natural water-limited vegetation systems using canopy cover estimated from LANDSAT data. Natural water-limited systems typically consist of inhomogeneous vegetation canopies interspersed with bare soils. The ground resolution associated with one pixel of LANDSAT MSS (or TM) data is generally greater than the scale of the plant canopy or canopy clusters. Thus a method for resolving percent canopy cover at a subpixel level must be established before the Eagleson hypothesis can be tested. Two formulations are proposed which extend existing methods of analyzing mixed pixels to naturally vegetated landscapes. The first method involves use of the normalized vegetation index. The second approach is a physical model based on radiative transfer principles. Both methods are to be analyzed for their feasibility on selected sites.
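
    A common way to cast the first, vegetation-index approach (a linear two-end-member mixture of NDVI; this is an assumed, generic formulation rather than the report's exact one) is:

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def fractional_cover(nir, red, ndvi_soil=0.15, ndvi_canopy=0.80):
    """Sub-pixel canopy fraction from a linear two-end-member mixture of NDVI.
    End-member values are illustrative and would be calibrated per scene."""
    f = (ndvi(nir, red) - ndvi_soil) / (ndvi_canopy - ndvi_soil)
    return np.clip(f, 0.0, 1.0)

# Example pixel reflectances (red, NIR) for sparse, moderate and dense cover.
red = np.array([0.20, 0.12, 0.05])
nir = np.array([0.28, 0.32, 0.45])
print(np.round(fractional_cover(nir, red), 2))
```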

  5. Scale transition using dislocation dynamics and the nudged elastic band method

    DOE PAGES

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; ...

    2017-08-01

    Microstructural features such as precipitates or irradiation-induced defects impede dislocation motion and directly influence macroscopic mechanical properties such as yield point and ductility. In dislocation-defect interactions both atomic scale and long range elastic interactions are involved. Thermally assisted dislocation bypass of obstacles occurs when thermal fluctuations and driving stresses contribute sufficient energy to overcome the energy barrier. The Nudged Elastic Band (NEB) method is typically used in the context of atomistic simulations to quantify the activation barriers for a given reaction. In this work, the NEB method is generalized to coarse-grain continuum representations of evolving microstructure states beyond the discrete particle descriptions of first principles and atomistics. The method we employed enables the calculation of activation energies for a glide dislocation bypassing a [001] self-interstitial atom loop of size in the range of 4-10 nm, with a spacing larger than 150 nm, in α-iron for a range of applied stresses and interaction geometries. This study is complemented by a comparison between atomistic and continuum based prediction of barriers.
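
    For context, the force acting on each image i of the discretized path in the standard NEB formulation (independent of the dislocation-dynamics coarse-graining introduced here) is

```latex
\mathbf{F}_i \;=\; \mathbf{F}^{\mathrm{spring}}_i\big|_{\parallel} \;-\; \nabla V(\mathbf{x}_i)\big|_{\perp},
\qquad
\mathbf{F}^{\mathrm{spring}}_i\big|_{\parallel} \;=\; k\bigl(\lvert\mathbf{x}_{i+1}-\mathbf{x}_i\rvert - \lvert\mathbf{x}_i-\mathbf{x}_{i-1}\rvert\bigr)\,\hat{\boldsymbol{\tau}}_i ,
```

    where τ̂_i is the local tangent to the path. Relaxing all images until these forces vanish yields the minimum-energy path, and the energy difference between the saddle image and the initial state gives the activation barrier.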

  6. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    PubMed

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.
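
    The substitution idea can be illustrated with a toy 2D rotation in SymPy (the boundary-variable vectors of the actual optical formulation are not reproduced): once c = cos θ and s = sin θ are treated as independent variables, every derivative is polynomial and no trigonometric function calls are needed.

```python
import sympy as sp

c, s, x, y = sp.symbols('c s x y', real=True)   # c = cos(theta), s = sin(theta)

# A toy "ray quantity": squared first component of a rotated 2D vector.
xr = c * x - s * y
q = xr**2

variables = (c, s, x, y)
hessian = sp.hessian(q, variables)              # purely polynomial differentiation
print(hessian)

# The trigonometric identity enters only as the algebraic constraint c**2 + s**2 = 1,
# applied when simplifying results:
print(sp.simplify(hessian.subs(c**2, 1 - s**2)))
```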

  7. Crenulation cleavage development by partitioning of deformation into zones of progressive shearing (combined shearing, shortening and volume loss) and progressive shortening (no volume loss): quantification of solution shortening and intermicrolithon-movement

    NASA Astrophysics Data System (ADS)

    Stewart, L. K.

    1997-11-01

    An analytical method for determining amounts of cleavage-normal dissolution and cleavage-parallel shear movement that occurred between adjacent microlithons during crenulation cleavage seam formation within a deformed slate is developed for the progressive bulk inhomogeneous shortening (PBIS) mechanism of crenulation cleavage formation. The method utilises structural information obtained from samples where a diverging bed and vein are offset by a crenulation cleavage seam. Several samples analysed using this method produced ratios of relative, cleavage-parallel movement of microlithons to the material thickness removed by dissolution typically in the range of 1.1-3.4:1. The mean amount of solution shortening attributed to the formation of the cleavage seams examined is 24%. The results indicate that a relationship may exist between the width of microlithons and the amount of cleavage-parallel intermicrolithon-movement. The method presented here has the potential to help determine whether crenulation cleavage seams formed by the progressive bulk inhomogeneous shortening mechanism or by that involving cleavage-normal pressure solution alone.

  8. Adaptive optimal training of animal behavior

    NASA Astrophysics Data System (ADS)

    Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan

    Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.

  9. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    PubMed

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  10. Cranio-orbital primary intraosseous haemangioma

    PubMed Central

    Gupta, T; Rose, G E; Manisali, M; Minhas, P; Uddin, J M; Verity, D H

    2013-01-01

    Purpose Primary intraosseous haemangioma (IOH) is a rare benign neoplasm presenting in the fourth and fifth decades of life. The spine and skull are most commonly involved; orbital involvement is extremely rare. We describe six patients with cranio-orbital IOH, the largest case series to date. Patients and methods Retrospective review of six patients with histologically confirmed primary IOH involving the orbit. Clinical characteristics, imaging features, approach to management, and histopathological findings are described. Results Five patients were male, with a median age of 56 years. Pain and diplopia were the most common presenting features. A characteristic 'honeycomb' pattern on CT imaging was demonstrated in three of the cases. Complete surgical excision was performed in all cases, with presurgical embolisation carried out in one case. In all cases, histological studies identified cavernous vascular spaces within the bony tissue. These channels were lined by a single layer of cytologically normal endothelial cells. Discussion IOH of the cranio-orbital region is rare; in the absence of typical imaging features, the differential diagnosis includes chondroma, chondrosarcoma, bony metastasis, and lymphoma. Surgical excision may be necessary to exclude more sinister pathology. Intraoperative haemorrhage can be severe and may be reduced by preoperative embolisation. PMID:23989119

  11. [Local fixation of antibiotics by fibrin spray : In bone defects with soft tissue involvement].

    PubMed

    Janko, Maren; Nau, Christoph; Marzi, Ingo; Frank, Johannes

    2017-02-01

    In acute and chronic bone infections with concomitant soft tissue involvement, the current gold standard is radical surgical debridement, including explantation of infected prosthetic devices. This is followed by initiation of systemic antibiotic therapy appropriate for the antibiogram. Several revision operations are often necessary to achieve complete healing. Additional treatment with local antibiotics or antibiotic-containing substances is routinely used in bone surgery. In addition to the typical procedures with commercially available products, we conducted a study of 21 patients in whom local antibiotic treatment was applied in combination with the fibrin glue spray technique, and evaluated the results. Of nine wounds of the lower extremities with bone involvement, complete healing was achieved in eight cases. We were also successful in two out of three very complex pelvic wounds; however, as expected, the implant infections were complicated. Of the seven most desperate cases, we were able to achieve complete long-term healing in only two. We now routinely use the described method in such particularly severe infection situations; however, it is applied only in combination with established surgical procedures in septic surgery and anti-infection management.

  12. [Secondary bladder lymphoma in a patient with AIDS].

    PubMed

    Vendrell, J R; Alcaraz, A; Gutíerrez, R; Rodríguez, A; Barranco, M A; Carretero, P

    1996-10-01

    We report one case of non-Hodgkin lymphoma (NHL) with vesical involvement that presented clinically with urological symptoms. Vesical involvement is typical of NHL and is becoming more frequent with the increasing number of AIDS patients under immunosuppressive therapy. This currently unusual entity can be expected to become more common in the future.

  13. Discussion of David Thissen's Bad Questions: An Essay Involving Item Response Theory

    ERIC Educational Resources Information Center

    Wainer, Howard

    2016-01-01

    The usual role of a discussant is to clarify and correct the paper being discussed, but in this case, the author, Howard Wainer, generally agrees with everything David Thissen says in his essay, "Bad Questions: An Essay Involving Item Response Theory." This essay expands on David Thissen's statement that there are typically two principal…

  14. Involving Your Child or Teen with ASD in Integrated Community Activities

    ERIC Educational Resources Information Center

    McKee, Rebecca

    2011-01-01

    Participating in outside activities and community-based endeavors can be tricky for people with special needs, like Autism Spectrum Disorder (ASD). Families meet more than a few obstacles attempting to integrate their children or teens who have special needs like ASD. Most typical children are highly involved in sports, clubs and camps. If a…

  15. Foster Care Involvement among Medicaid-Enrolled Children with Autism

    ERIC Educational Resources Information Center

    Cidav, Zuleyha; Xie, Ming; Mandell, David S.

    2018-01-01

    The prevalence and risk of foster care involvement among children with autism spectrum disorder (ASD) relative to children with intellectual disability (ID), children with ASD and ID, and typically developing children were examined using 2001-2007 Medicaid data. Children were followed up to the first foster care placement or until the end of 2007;…

  16. A Comparison of Two Methods of Teaching an Elementary School Science Methods Course at Hunter College.

    ERIC Educational Resources Information Center

    Graeber, Mary

    The typical approach to the teaching of an elementary school science methods course for undergraduate students was compared with an experimental approach based upon activities appearing in the Conceptually Oriented Program in Elementary Science (COPES) teacher's guides. The typical approach was characterized by a coverage of many topics and a…

  17. Automated Transition State Search and Its Application to Diverse Types of Organic Reactions.

    PubMed

    Jacobson, Leif D; Bochevarov, Art D; Watson, Mark A; Hughes, Thomas F; Rinaldo, David; Ehrlich, Stephan; Steinbrecher, Thomas B; Vaitheeswaran, S; Philipp, Dean M; Halls, Mathew D; Friesner, Richard A

    2017-11-14

    Transition state search is at the center of multiple types of computational chemical predictions related to mechanistic investigations, reactivity and regioselectivity predictions, and catalyst design. The process of finding transition states in practice is, however, a laborious multistep operation that requires significant user involvement. Here, we report a highly automated workflow designed to locate transition states for a given elementary reaction with minimal setup overhead. The only essential inputs required from the user are the structures of the separated reactants and products. The seamless workflow combining computational technologies from the fields of cheminformatics, molecular mechanics, and quantum chemistry automatically finds the most probable correspondence between the atoms in the reactants and the products, generates a transition state guess, launches a transition state search through a combined approach involving the relaxing string method and the quadratic synchronous transit, and finally validates the transition state via the analysis of the reactive chemical bonds and imaginary vibrational frequencies as well as by the intrinsic reaction coordinate method. Our approach does not target any specific reaction type, nor does it depend on training data; instead, it is meant to be of general applicability for a wide variety of reaction types. The workflow is highly flexible, permitting modifications such as a choice of accuracy, level of theory, basis set, or solvation treatment. Successfully located transition states can be used for setting up transition state guesses in related reactions, saving computational time and increasing the probability of success. The utility and performance of the method are demonstrated in applications to transition state searches in reactions typical for organic chemistry, medicinal chemistry, and homogeneous catalysis research. In particular, applications of our code to Michael additions, hydrogen abstractions, Diels-Alder cycloadditions, carbene insertions, and an enzyme reaction model involving a molybdenum complex are shown and discussed.

  18. The kinetics and acoustics of fingering and note transitions on the flute.

    PubMed

    Almeida, André; Chow, Renee; Smith, John; Wolfe, Joe

    2009-09-01

    Motion of the keys was measured in a transverse flute while beginner, amateur, and professional flutists played a range of exercises. The time taken for a key to open or close was typically 10 ms when pushed by a finger or 16 ms when moved by a spring. Because the opening and closing of keys will never be exactly simultaneous, transitions between notes that involve the movement of multiple fingers can occur via several possible pathways with different intermediate fingerings. A transition is classified as "safe" if it is possible to be slurred from the initial to final note with little perceptible change in pitch or volume. Some transitions are "unsafe" and possibly involve a transient change in pitch or a decrease in volume. Players, on average, used safe transitions more frequently than unsafe transitions. Delays between the motion of the fingers were typically tens of milliseconds, with longer delays as more fingers become involved. Professionals exhibited smaller average delays between the motion of their fingers than did amateurs.

  19. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
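
    The threshold-selection step described above can be sketched as follows: given similarity values for pairs known to be similar and pairs known to be non-similar, sweep candidate thresholds and keep the one that minimizes the combined false-positive and false-negative proportions. This is a minimal illustration of the general idea, not the authors' implementation; the score arrays and the 0-1 grid are hypothetical.

        import numpy as np

        def optimal_threshold(similar_scores, nonsimilar_scores, grid=None):
            """Return the threshold minimizing FP proportion + FN proportion."""
            sim = np.asarray(similar_scores, dtype=float)
            non = np.asarray(nonsimilar_scores, dtype=float)
            if grid is None:
                grid = np.linspace(0.0, 1.0, 1001)
            best_t, best_err = grid[0], np.inf
            for t in grid:
                fn = np.mean(sim < t)    # similar pairs wrongly called non-similar
                fp = np.mean(non >= t)   # non-similar pairs wrongly called similar
                if fn + fp < best_err:
                    best_t, best_err = t, fn + fp
            return best_t, best_err

        # Hypothetical similarity values for the two kinds of gene pairs
        rng = np.random.default_rng(0)
        similar = np.clip(rng.normal(0.75, 0.12, 500), 0.0, 1.0)
        nonsimilar = np.clip(rng.normal(0.45, 0.15, 500), 0.0, 1.0)
        print(optimal_threshold(similar, nonsimilar))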

  20. In the Beginning-There Is the Introduction-and Your Study Hypothesis.

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-05-01

    Writing a manuscript for a medical journal is very akin to writing a newspaper article-albeit a scholarly one. Like any journalist, you have a story to tell. You need to tell your story in a way that is easy to follow and makes a compelling case to the reader. Although recommended since the beginning of the 20th century, the conventional Introduction-Methods-Results-And-Discussion (IMRAD) scientific reporting structure has only been the standard since the 1980s. The Introduction should be focused and succinct in communicating the significance, background, rationale, study aims or objectives, and the primary (and secondary, if appropriate) study hypotheses. Hypothesis testing involves posing both a null and an alternative hypothesis. The null hypothesis proposes that no difference or association exists on the outcome variable of interest between the interventions or groups being compared. The alternative hypothesis is the opposite of the null hypothesis and thus typically proposes that a difference in the population does exist between the groups being compared on the parameter of interest. Most investigators seek to reject the null hypothesis because of their expectation that the studied intervention does result in a difference between the study groups or that the association of interest does exist. Therefore, in most clinical and basic science studies and manuscripts, the alternative hypothesis is stated, not the null hypothesis. Also, in the Introduction, the alternative hypothesis is typically stated in the direction of interest, or the expected direction. However, when assessing the association of interest, researchers typically look in both directions (ie, favoring 1 group or the other) by conducting a 2-tailed statistical test because the true direction of the effect is typically not known, and either direction would be important to report.
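
    As a small illustration of the two-tailed testing described above, the sketch below runs a two-sample test in which the null hypothesis states no difference in means and the alternative is two-sided. The group data and effect size are hypothetical, and Welch's t-test is only one of many tests an investigator might choose.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical outcome measurements in two study groups
        control = rng.normal(loc=50.0, scale=10.0, size=40)
        treated = rng.normal(loc=55.0, scale=10.0, size=40)

        # H0: no difference in means between groups; H1 (two-sided): the means differ,
        # in either direction, since the true direction is typically not known a priori.
        t_stat, p_two_tailed = stats.ttest_ind(treated, control, equal_var=False)
        print(f"t = {t_stat:.2f}, two-tailed p = {p_two_tailed:.4f}")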

  1. Benefit and cost curves for typical pollination mutualisms.

    PubMed

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
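
    A hedged sketch of the model-fitting and model-selection step described above: two hypothetical net-benefit forms (benefit saturating with visits, with and without a visit-proportional cost) are fit by least squares, which for Gaussian errors is equivalent to maximum likelihood, and compared with AIC. The functional forms, data, and starting values are illustrative assumptions, not those of the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical data: seed set (net benefit proxy) vs. number of pollinator visits
        visits = np.array([0, 1, 2, 4, 8, 16, 32, 64], dtype=float)
        seeds = np.array([2, 10, 18, 30, 42, 48, 45, 38], dtype=float)

        def saturating(v, a, b):                 # benefit saturates, no visit-dependent cost
            return a * v / (b + v)

        def saturating_minus_cost(v, a, b, c):   # saturating benefit minus linear cost
            return a * v / (b + v) - c * v

        def aic(y, yhat, k):
            n = len(y)
            rss = np.sum((y - yhat) ** 2)
            # Gaussian log-likelihood up to a constant: AIC = n*ln(RSS/n) + 2k
            return n * np.log(rss / n) + 2 * k

        p1, _ = curve_fit(saturating, visits, seeds, p0=[50, 5])
        p2, _ = curve_fit(saturating_minus_cost, visits, seeds, p0=[60, 5, 0.1])
        print("AIC, saturating only      :", aic(seeds, saturating(visits, *p1), 2))
        print("AIC, saturating minus cost:", aic(seeds, saturating_minus_cost(visits, *p2), 3))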

  2. PROPOSALS FOR THE ESTABLISHMENT OF NATIONAL DIAGNOSTIC REFERENCE LEVELS FOR RADIOGRAPHY FOR ADULT PATIENTS BASED ON REGIONAL DOSE SURVEYS IN RUSSIAN FEDERATION.

    PubMed

    Vodovatov, A V; Balonov, M I; Golikov, V Yu; Shatsky, I G; Chipiga, L A; Bernhardsson, C

    2017-04-01

    In 2009-2014, dose surveys aimed at collecting adult patient data and the parameters of the most common radiographic examinations were performed in six Russian regions. Typical patient doses were estimated for the selected examinations in terms of both entrance surface dose and effective dose. The 75th percentiles of the typical patient effective dose distributions were proposed as preliminary regional diagnostic reference levels (DRLs) for radiography. Differences between the 75th percentiles of the regional typical patient dose distributions did not exceed 30-50% for the examinations with standardized clinical protocols (skull, chest and thoracic spine) and a factor of 1.5 for other examinations. Two different approaches for establishing national DRLs were evaluated: as the 75th percentile of a pooled regional sample of typical patient doses (pooled method) and as the median of the regional 75th percentiles of typical patient dose distributions (median method). Differences between the pooled and median methods for effective dose did not exceed 20%. It was proposed to establish Russian national DRLs in effective dose using the pooled method. In addition, local authorities were granted an opportunity to establish regional DRLs if the local radiological practice and typical patient dose distributions are significantly different. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
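
    The pooled and median approaches compared above can be written down in a few lines; the regional dose samples below are hypothetical, and a real survey would include many more examinations per region.

        import numpy as np

        # Hypothetical typical-dose samples (effective dose, mSv) from several regions
        regional_doses = {
            "region_A": [0.12, 0.15, 0.10, 0.18, 0.14],
            "region_B": [0.20, 0.17, 0.22, 0.19, 0.25],
            "region_C": [0.09, 0.13, 0.11, 0.16, 0.12],
        }

        # Pooled method: 75th percentile of all typical doses pooled together
        pooled = np.concatenate([np.asarray(v) for v in regional_doses.values()])
        drl_pooled = np.percentile(pooled, 75)

        # Median method: median of the regional 75th percentiles
        regional_p75 = [np.percentile(v, 75) for v in regional_doses.values()]
        drl_median = np.median(regional_p75)

        print(f"pooled DRL = {drl_pooled:.3f} mSv, median DRL = {drl_median:.3f} mSv")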

  3. Modal smoothing for analysis of room reflections measured with spherical microphone and loudspeaker arrays.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz

    2018-02-01

    Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.

  4. Addison's Disease

    MedlinePlus

    ... usually involves taking prescription hormones. This can include hydrocortisone, prednisone, or cortisone acetate. If your body is ... treatment typically consists of intravenous (IV) injections of hydrocortisone, saline (salt water), and dextrose (sugar). These injections ...

  5. Atypical Pityriasis rosea in a black child: a case report

    PubMed Central

    2009-01-01

    Introduction Pityriasis rosea is a self-limited inflammatory condition of the skin that mostly affects healthy children and adolescents. Atypical cases of Pityriasis rosea are fairly common and less readily recognized than typical eruptions, and may pose a diagnostic challenge. Case presentation We report the case of a 12-year-old black child who developed an intensely pruritic papular eruption with marked facial involvement that was diagnosed as Pityriasis rosea and resolved after five weeks, leaving slight hyperpigmentation. Conclusion Facial and scalp involvement, post-inflammatory disorders of pigmentation, and papular lesions are characteristics typically associated with black patients with Pityriasis rosea. Knowledge of the features found more frequently in dark-skinned populations may help physicians diagnose atypical Pityriasis rosea in these patients. PMID:20181179

  6. Designing human centered GeoVisualization application--the SanaViz--for telehealth users: a case study.

    PubMed

    Joshi, Ashish; de Araujo Novaes, Magdala; Machiavelli, Josiane; Iyengar, Sriram; Vogler, Robert; Johnson, Craig; Zhang, Jiajie; Hsu, Chiehwen E

    2012-01-01

    Public health data are typically organized by geospatial unit. GeoVisualization (GeoVis) allows users to see information visually on a map. This pilot study examined telehealth users' perceptions of existing public health GeoVis applications and obtained their feedback about features important for the design and development of the human centered GeoVis application "the SanaViz". We employed a cross-sectional study design using a mixed methods approach. Twenty users involved with the NUTES telehealth center at Federal University of Pernambuco (UFPE), Recife, Brazil were enrolled. Open- and closed-ended questionnaires were used to gather data, and the interviews were audio recorded. Information gathered included socio-demographics, prior spatial skills, and perceptions towards the use of GeoVis to evaluate telehealth services. Card sorting and sketching methods were employed. Univariate analysis was performed for the continuous and categorical variables, and qualitative analysis was performed for the open-ended questions. Existing public health GeoVis applications were found difficult to use. The interaction features zooming, linking, and brushing and the representation features Google maps, tables, and bar charts were the most preferred. Early involvement of users is essential to identify the features necessary for the human centered GeoVis application "the SanaViz".

  7. Microwave-Accelerated Method for Ultra-Rapid Extraction of Neisseria gonorrhoeae DNA for Downstream Detection

    PubMed Central

    Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.

    2016-01-01

    Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often with PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly-focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 minutes total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step toward the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503

  8. Usefulness of Sweat Management for Patients with Adult Atopic Dermatitis, regardless of Sweat Allergy: A Pilot Study.

    PubMed

    Kaneko, Sakae; Murota, Hiroyuki; Murata, Susumu; Katayama, Ichiro; Morita, Eishin

    2017-01-01

    Background. Sweat is an aggravating factor in atopic dermatitis (AD), regardless of age. Sweat allergy may be involved in AD aggravated by sweating. Objective. We investigated whether sweat exacerbates adult AD symptoms and examined the extent of sweat allergy's involvement. Method. We asked 34 AD patients (17 men, 17 women; mean age: 27.8 years) to record the extent to which sweat aggravated their symptoms on a 10-point numerical scale. Participant responses were compared with histamine release tests (HRT). Furthermore, 24 of the patients received instructions on methods of sweat management, and their outcomes were evaluated on a 10-point scale. Results. Sweat HRT results were class ≥ 2 in 13 patients, but HRT results were not correlated with the patients' self-assessments of symptom aggravation by sweat. One month after receiving sweat management instructions, a low mean score of 4.6 was obtained regarding whether active sweating was good, but a high mean score of 7.0 was obtained in response to whether the sweat management instructions had been helpful. Conclusion. Our investigation showed that patients' negative impressions of sweat might derive from crude personal experiences that are typically linked to sweating. Sweat management for patients with adult atopic dermatitis was extremely useful regardless of sweat allergy.

  9. Ammonium hydroxide treatment of Aβ produces an aggregate free solution suitable for biophysical and cell culture characterization

    PubMed Central

    Ryan, Timothy M.; Caine, Joanne; Mertens, Haydyn D.T.; Kirby, Nigel; Nigro, Julie; Breheney, Kerry; Waddington, Lynne J.; Streltsov, Victor A.; Curtain, Cyril; Masters, Colin L.

    2013-01-01

    Alzheimer’s disease is the leading cause of dementia in the elderly. Pathologically, it is characterized by the presence of amyloid plaques and neuronal loss within the brain tissue of affected individuals. It is now widely hypothesised that fibrillar structures represent an inert structure. Biophysical and toxicity assays attempting to characterize the formation of both the fibrillar and the intermediate oligomeric structures of Aβ typically involve preparing samples that are largely monomeric; the most common method by which this is achieved is to use the fluorinated organic solvent 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP). Recent evidence has suggested that this method is not 100% effective in producing an aggregate-free solution. We show, using dynamic light scattering, size exclusion chromatography, and small-angle X-ray scattering, that this is indeed the case, with HFIP-pretreated Aβ peptide solutions displaying an increased proportion of oligomeric and aggregated material and an increased propensity to aggregate. Furthermore, we show that an alternative technique, involving treatment with strong alkali, results in a much more homogeneous solution that is largely monomeric. These techniques for solubilising and controlling the oligomeric state of Aβ are valuable starting points for future biophysical and toxicity assays. PMID:23678397

  10. Cheiro-Oral Syndrome: A Clinical Analysis and Review of Literature

    PubMed Central

    2009-01-01

    Purpose After a century, cheiro-oral syndrome (COS) has in the recent two decades been emphasized for its localizing value and benign course. However, an expanding body of case series has challenged this view, since COS may arise from involvement of the ascending sensory pathways between the cortex and the pons and may occasionally end in a poor outcome. Materials and Methods We analyzed the location, underlying etiologies, and prognosis in 76 patients presenting with COS collected between 1989 and 2007. Results Four types of COS were categorized, namely unilateral (71.1%), typically bilateral (14.5%), atypically bilateral (7.9%), and crossed COS (6.5%). The most common site of COS occurrence was the pons (27.6%), followed by the thalamus (21.1%) and the cortex (15.8%). Stroke with small infarctions or hemorrhage was the leading cause. Paroxysmal paresthesia was predictive of cortical involvement, bilateral paresthesia of pontine involvement, and crossed paresthesia of medullary involvement. However, the majority of lesions could not be localized by clinical symptoms alone and were demonstrated only by neuroimaging. Deterioration ensued in 12% of patients, whose lesions were large cortical infarctions, medullary infarctions, or bilateral subdural hemorrhage. Conclusion COS arises from varied sites between the medulla and the cortex and is usually caused by a small stroke lesion. Neurological deterioration occurs in 12% of patients and is related to large-vessel occlusion, medullary involvement, or cortical stroke. Since the location and deterioration of COS cannot be predicted by clinical symptoms alone, COS should be considered an emergent condition warranting aggressive investigation until a serious cause has been substantially excluded. PMID:20046417

  11. New simple evaluation method of the monosyllable /sa/ using a psychoacoustic system in maxillectomy patients.

    PubMed

    Chowdhury, Nafees Uddin; Otomaru, Takafumi; Murase, Mai; Inohara, Ken; Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi

    2011-01-01

    An objective assessment of speech would benefit the prosthetic rehabilitation of maxillectomy patients. This study aimed to establish a simple, objective evaluation of monosyllable /sa/ utterances in maxillectomy patients by using a psychoacoustic system typically used in industry. This study comprised two experiments. Experiment 1 involved analysis of the psychoacoustic parameters (loudness, sharpness and roughness) in monosyllable /sa/ utterances by 18 healthy subjects (9 males, 9 females). The utterances were recorded in a sound-treated room. The coefficient of variation (CV) for each parameter was compared to identify the most suitable parameter for objective evaluation of speech. Experiment 2 involved analysis of /sa/ utterances by 18 maxillectomy patients (9 males, 9 females) with and without prosthesis, and comparisons of the psychoacoustic data between the healthy subjects and maxillectomy patients without prosthesis, between the maxillectomy patients with and without prosthesis, and between the healthy subjects and maxillectomy patients with prosthesis. The CV for sharpness was the lowest among the three psychoacoustic parameters in both the healthy males and females. There were significant differences in the sharpness of /sa/ between the healthy subjects and the maxillectomy patients without prosthesis (but not with prosthesis), and between the maxillectomy patients with and without prosthesis. We found that the psychoacoustic parameters typically adopted in industrial research could also be applied to evaluate the psychoacoustics of the monosyllable /sa/ utterance, and distinguished the monosyllable /sa/ in maxillectomy patients with an obturator from that without an obturator using the system. Copyright © 2010 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
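
    The coefficient-of-variation comparison used above to select the most stable psychoacoustic parameter can be sketched as follows; the loudness, sharpness, and roughness values are hypothetical placeholders, not measurements from the study.

        import numpy as np

        def coefficient_of_variation(x):
            x = np.asarray(x, dtype=float)
            return np.std(x, ddof=1) / np.mean(x)

        # Hypothetical psychoacoustic measurements of /sa/ for a group of speakers
        params = {
            "loudness":  [12.1, 14.3, 11.8, 15.2, 13.0],
            "sharpness": [1.62, 1.58, 1.65, 1.60, 1.63],
            "roughness": [0.21, 0.34, 0.18, 0.29, 0.25],
        }

        for name, values in params.items():
            print(f"{name:10s} CV = {coefficient_of_variation(values):.3f}")
        # The parameter with the lowest CV (here, sharpness) would be taken as the
        # most stable candidate for objective evaluation, mirroring the study's logic.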

  12. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
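
    The sketch below illustrates the underlying idea only for the simple case of an equal-weight Monte Carlo sample: the Poisson expectation is marginalized over a Gamma posterior reflecting the finite number of simulated events, which visibly widens the predicted distribution relative to the naive Poisson. It is a numerical toy under a flat-prior assumption, not the paper's closed-form generalized likelihood for arbitrary weights.

        import numpy as np
        from scipy import integrate, stats

        def marginalized_pmf(n, k_mc, w):
            """P(n | k_mc equal-weight MC events of weight w): the Poisson mean is
            marginalized over a Gamma(k_mc + 1, 1) posterior (flat prior) for the
            underlying MC expectation. A toy version of the finite-MC treatment."""
            integrand = lambda mu: stats.poisson.pmf(n, w * mu) * stats.gamma.pdf(mu, k_mc + 1)
            value, _ = integrate.quad(integrand, 0.0, np.inf)
            return value

        k_mc, w = 5, 2.0                 # small MC sample, each event carrying weight 2
        ns = np.arange(60)
        marg = np.array([marginalized_pmf(n, k_mc, w) for n in ns])

        mean = np.sum(ns * marg)
        std = np.sqrt(np.sum(ns**2 * marg) - mean**2)
        print("std, naive Poisson      :", np.sqrt(k_mc * w))   # ignores MC statistics
        print("std, finite-MC marginal :", std)                 # noticeably wider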

  13. Multiple Cranial Nerve Palsies in Giant Cell Arteritis.

    PubMed

    Ross, Michael; Bursztyn, Lulu; Superstein, Rosanne; Gans, Mark

    2017-12-01

    Giant cell arteritis (GCA) is a systemic vasculitis of medium and large arteries often with ophthalmic involvement, including ischemic optic neuropathy, retinal artery occlusion, and ocular motor cranial nerve palsies. This last complication occurs in 2%-15% of patients, but typically involves only 1 cranial nerve. We present 2 patients with biopsy-proven GCA associated with multiple cranial nerve palsies.

  14. Boosting safety behaviour: Descriptive norms encourage child booster seat usage amongst low involvement parents.

    PubMed

    Jeffrey, Jennifer; Whelan, Jodie; Pirouz, Dante M; Snowdon, Anne W

    2016-07-01

    Campaigns advocating behavioural changes often employ social norms as a motivating technique, favouring injunctive norms (what is typically approved or disapproved) over descriptive norms (what is typically done). Here, we investigate an upside to including descriptive norms in health and safety appeals. Because descriptive norms are easy to process and understand, they should provide a heuristic to guide behaviour in those individuals who lack the interest or motivation to reflect on the advocated behaviour more deeply. When those descriptive norms are positive - suggesting that what is done is consistent with what ought to be done - including them in campaigns should be particularly beneficial at influencing this low-involvement segment. We test this proposition via research examining booster seat use amongst parents with children of booster seat age, and find that incorporating positive descriptive norms into a related campaign is particularly impactful for parents who report low involvement in the topic of booster seat safety. Descriptive norms are easy to state and easy to understand, and our research suggests that these norms resonate with low involvement individuals. As a result, we recommend incorporating descriptive norms when possible into health and safety campaigns. Copyright © 2016. Published by Elsevier Ltd.

  15. ELISA test for anti-neutrophil cytoplasm antibodies detection evaluated by a computer screen photo-assisted technique.

    PubMed

    Filippini, D; Tejle, K; Lundström, I

    2005-08-15

    The computer screen photo-assisted technique (CSPT), a method for substance classification based on spectral fingerprinting that requires just a computer screen and a web camera as the measuring platform, is used here for the evaluation of a prospective enzyme-linked immunosorbent assay (ELISA). An anti-neutrophil cytoplasm antibody (ANCA) ELISA test, typically used for diagnosing patients suffering from chronic inflammatory disorders of the skin, joints, blood vessels, and other tissues, is comparatively tested with a standard microplate reader and CSPT, yielding equivalent results at a fraction of the instrumental cost. The CSPT approach is discussed as a distributed measuring platform allowing decentralized measurements in routine applications while keeping information management centralized, owing to its naturally network-embedded operation.

  16. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  17. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, Mark W.; George, William A.; Pai, Robert Y.

    1985-01-01

    A technique for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support.

  18. Automatic streak endpoint localization from the cornerness metric

    NASA Astrophysics Data System (ADS)

    Sease, Brad; Flewelling, Brien; Black, Jonathan

    2017-05-01

    Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients; these locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
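
    A minimal sketch of the cornerness idea: along the body of a streak the gradient is strong in only one direction, so a Harris-style response is low, while at the endpoints the gradient becomes bi-directional and the response peaks. The synthetic streak image, smoothing scale, and thresholds below are illustrative assumptions, and the snippet is a generic cornerness computation rather than the paper's three extraction approaches.

        import numpy as np
        from scipy import ndimage

        def cornerness(image, sigma=1.5, k=0.04):
            """Harris-style cornerness: high where gradients are strong in two directions."""
            ix = ndimage.sobel(image, axis=1, mode="reflect")
            iy = ndimage.sobel(image, axis=0, mode="reflect")
            ixx = ndimage.gaussian_filter(ix * ix, sigma)
            iyy = ndimage.gaussian_filter(iy * iy, sigma)
            ixy = ndimage.gaussian_filter(ix * iy, sigma)
            return (ixx * iyy - ixy**2) - k * (ixx + iyy)**2

        # Synthetic frame containing one bright streak (a blurred line segment)
        img = np.zeros((64, 64))
        rows = np.linspace(15, 48, 200).astype(int)
        cols = np.linspace(10, 55, 200).astype(int)
        img[rows, cols] = 1.0
        img = ndimage.gaussian_filter(img, 1.0)

        response = cornerness(img)
        # Candidate endpoints: strong local maxima of the cornerness map
        local_max = response == ndimage.maximum_filter(response, size=5)
        candidates = np.argwhere(local_max & (response > 0.2 * response.max()))
        print(candidates)        # expected near (15, 10) and (48, 55)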

  19. Lateral epicondylitis of the elbow.

    PubMed

    Tosti, Rick; Jennings, John; Sewards, J Milo

    2013-04-01

    Lateral epicondylitis, or "tennis elbow," is a common musculotendinous degenerative disorder of the extensor origin at the lateral humeral epicondyle. Repetitive occupational or athletic activities involving wrist extension and supination are thought to be causative. The typical symptoms include lateral elbow pain, pain with wrist extension, and weakened grip strength. The diagnosis is made clinically through history and physical examination; however, a thorough understanding of the differential diagnosis is imperative to prevent unnecessary testing and therapies. Most patients improve with nonoperative measures, such as activity modification, physical therapy, and injections. A small percentage of patients will require surgical release of the extensor carpi radialis brevis tendon. Common methods of release may be performed via percutaneous, arthroscopic, or open approaches. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Statistical process control in nursing research.

    PubMed

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
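
    A minimal sketch of the control-chart construction described above, using an individuals chart: the centre line and 3-sigma limits are estimated from baseline data (sigma from the average moving range), and post-intervention points outside the limits are flagged as special-cause variation. The weekly indicator values are hypothetical.

        import numpy as np

        # Hypothetical weekly values of a quality indicator, before and after an innovation
        baseline = np.array([12, 14, 13, 15, 12, 13, 14, 12, 13, 15], dtype=float)
        follow_up = np.array([13, 12, 16, 17, 18, 19, 18], dtype=float)

        # Individuals chart: centre line and 3-sigma control limits from the baseline,
        # with sigma estimated from the average moving range (a common SPC convention).
        centre = baseline.mean()
        moving_range = np.abs(np.diff(baseline)).mean()
        sigma = moving_range / 1.128          # d2 constant for subgroups of size 2
        ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

        for week, value in enumerate(follow_up, start=1):
            flag = "special cause" if (value > ucl or value < lcl) else "common cause"
            print(f"week {week}: {value:.1f} ({flag})  [LCL={lcl:.2f}, UCL={ucl:.2f}]")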

  1. Moving in Parallel Toward a Modern Modeling Epistemology: Bayes Factors and Frequentist Modeling Methods.

    PubMed

    Rodgers, Joseph Lee

    2016-01-01

    The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.
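
    As a small, self-contained illustration of a Bayes factor (not tied to the articles discussed), the sketch below compares a point-null binomial model against an alternative with a Beta prior on the proportion, using the analytic marginal likelihoods; the data and prior are hypothetical.

        import math

        def log_beta(a, b):
            return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

        def bayes_factor_binomial(successes, n, p0=0.5, a=1.0, b=1.0):
            """BF10 comparing H1: p ~ Beta(a, b) against H0: p = p0 for binomial data.
            The binomial coefficient cancels in the ratio, so it is omitted."""
            log_m1 = log_beta(a + successes, b + n - successes) - log_beta(a, b)
            log_m0 = successes * math.log(p0) + (n - successes) * math.log(1 - p0)
            return math.exp(log_m1 - log_m0)

        # Hypothetical data: 62 successes out of 100 trials
        print(f"BF10 = {bayes_factor_binomial(62, 100):.2f}")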

  2. Rheumatoid pseudocyst (geode) of the femoral neck without apparent joint involvement.

    PubMed

    Morrey, B F

    1987-05-01

    Typically, rheumatoid cysts are associated with obvious joint involvement and are located in the subchondral portion of the adjacent joint. Giant pseudocysts (geodes) are uncommon and are characteristically associated with extensive joint destruction. The patient described in this report had a giant pseudocyst of the femoral neck but no joint involvement. To the best of my knowledge, this is the first report of such a manifestation of a giant pseudocyst. As such, it posed a somewhat difficult diagnostic problem.

  3. Encephalitis due to antibodies to voltage gated potassium channel (VGKC) with cerebellar involvement in a teenager

    PubMed Central

    Langille, Megan M.; Desai, Jay

    2015-01-01

    Encephalitis due to antibodies to voltage gated potassium channel (VGKC) typically presents with limbic encephalitis and medial temporal lobe involvement on neuroimaging. We describe the case of a 13-year-old girl with encephalitis due to antibodies to VGKC, with bilateral signal changes in the cerebellar dentate nuclei and clinical features that suggested predominant cerebellar involvement. These findings have not been reported previously in the literature. Our case expands the phenotypic spectrum of this rare condition. PMID:26019428

  4. Encephalitis due to antibodies to voltage gated potassium channel (VGKC) with cerebellar involvement in a teenager.

    PubMed

    Langille, Megan M; Desai, Jay

    2015-01-01

    Encephalitis due to antibodies to voltage gated potassium channel (VGKC) typically presents with limbic encephalitis and medial temporal lobe involvement on neuroimaging. We describe the case of a 13-year-old girl with encephalitis due to antibodies to VGKC, with bilateral signal changes in the cerebellar dentate nuclei and clinical features that suggested predominant cerebellar involvement. These findings have not been reported previously in the literature. Our case expands the phenotypic spectrum of this rare condition.

  5. Fire Ant Allergy

    MedlinePlus

    ... a life-threatening reaction called anaphylaxis (an-a-fi-LAK-sis). Symptoms of anaphylaxis typically involve more ...

  6. Fecal microbiota transplantation and its potential therapeutic uses in gastrointestinal disorders.

    PubMed

    Heath, Ryan D; Cockerell, Courtney; Mankoo, Ravinder; Ibdah, Jamal A; Tahan, Veysel

    2018-01-01

    Typical human gut flora has been well characterized in previous studies and has been noted to have significant differences when compared with the typical microbiome of various disease states involving the gastrointestinal tract. Such diseases include Clostridium difficile colitis, inflammatory bowel disease, functional bowel syndromes, and various states of liver disease. A growing number of studies have investigated the use of a fecal microbiota transplant as a potential therapy for these disease states.

  7. Untangling the Reaction Mechanisms Involved in the Explosive Decomposition of Model Compounds of Energetic Materials

    DTIC Science & Technology

    2014-06-11

    ... down to a base pressure typically of a few 10^-11 torr using oil-free, magnetically suspended turbomolecular pumps backed with dry scroll pumps. A cold finger assembled from ... on line and in situ utilizing a Faraday cup mounted inside a differentially pumped chamber on an ultrahigh-vacuum-compatible translation stage ...

  8. Fecal microbiota transplantation and its potential therapeutic uses in gastrointestinal disorders

    PubMed Central

    Heath, Ryan D.; Cockerell, Courtney; Mankoo, Ravinder; Ibdah, Jamal A.; Tahan, Veysel

    2018-01-01

    Typical human gut flora has been well characterized in previous studies and has been noted to have significant differences when compared with the typical microbiome of various disease states involving the gastrointestinal tract. Such diseases include Clostridium difficile colitis, inflammatory bowel disease, functional bowel syndromes, and various states of liver disease. A growing number of studies have investigated the use of a fecal microbiota transplant as a potential therapy for these disease states. PMID:29607440

  9. Neuroradiological findings in maple syrup urine disease

    PubMed Central

    Indiran, Venkatraman; Gunaseelan, R. Emmanuel

    2013-01-01

    Maple syrup urine disease is a rare inborn error of amino acid metabolism involving the catabolic pathway of the branched-chain amino acids. If left untreated, the disease may cause damage to the brain and may even cause death. Affected patients typically present with a distinctive maple syrup odour of the sweat, skin, and urine. Here we describe a case with the relevant magnetic resonance imaging findings and confirmatory biochemical findings. PMID:23772241

  10. Neuroradiological findings in maple syrup urine disease.

    PubMed

    Indiran, Venkatraman; Gunaseelan, R Emmanuel

    2013-01-01

    Maple syrup urine disease is a rare inborn error of amino acid metabolism involving the catabolic pathway of the branched-chain amino acids. If left untreated, the disease may cause damage to the brain and may even cause death. Affected patients typically present with a distinctive maple syrup odour of the sweat, skin, and urine. Here we describe a case with the relevant magnetic resonance imaging findings and confirmatory biochemical findings.

  11. An approach to the design and implementation of spacecraft attitude control systems

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Mangus, David J.

    1998-01-01

    Over 39 years and a long list of missions, the guidance, navigation, and control (GN&C) groups at the Goddard Space Flight Center have gradually developed approaches to the design and implementation of successful spacecraft attitude control systems. With the recent creation of the Guidance, Navigation, and Control Center at Goddard, there is a desire to document some of these design practices to help to ensure their consistent application in the future. In this paper, we will discuss the beginnings of this effort, drawing primarily on the experience of one of the past attitude control system (ACS) groups at Goddard (what was formerly known as Code 712, the Guidance, Navigation, and Control Branch). We will discuss the analysis and design methods and criteria used, including guidelines for linear and nonlinear analysis, as well as the use of low- and high-fidelity simulation for system design and verification of performance. Descriptions of typical ACS sensor and actuator hardware will be shown, and typical sensor/actuator suites for a variety of mission types detailed. A description of the software and hardware test effort will be given, along with an attempt to make some qualitative estimates on how much effort is involved. The spacecraft and GN&C subsystem review cycles will be discussed, giving an outline of what design reviews are typically held and what information should be presented at each stage. Finally, we will point out some of the lessons learned at Goddard.

  12. An Approach to the Design and Implementation of Spacecraft Attitude Control Systems

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Mangus, David J.

    1998-01-01

    Over 39 years and a long list of missions, the guidance, navigation, and control (GN&C) groups at the Goddard Space Flight Center have gradually developed approaches to the design and implementation of successful spacecraft attitude control systems. With the recent creation of the Guidance, Navigation, and Control Center at Goddard, there is a desire to document some of these design practices to help to ensure their consistent application in the future. In this paper, we will discuss the beginnings of this effort, drawing primarily on the experience of one of the past attitude control system (ACS) groups at Goddard (what was formerly known as Code 712, the Guidance, Navigation, and Control Branch). We will discuss the analysis and design methods and criteria used, including guidelines for linear and nonlinear analysis, as well as the use of low- and high-fidelity simulation for system design and verification of performance. Descriptions of typical ACS sensor and actuator hardware will be shown, and typical sensor/actuator suites for a variety of mission types detailed. A description of the software and hardware test effort will be given, along with an attempt to make some qualitative estimates on how much effort is involved. The spacecraft and GN&C subsystem review cycles will be discussed, giving an outline of what design reviews are typically held and what information should be presented at each stage. Finally, we will point out some of the lessons learned at Goddard.

  13. A numerical analysis of phase-change problems including natural convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Faghri, A.

    1990-08-01

    Fixed grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods in this group have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this method is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time history problems and are more convenient with conjugate problems involving an adjacent wall, but have to deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and the phase-change temperature range will also be discussed.
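
    To make the fixed-grid enthalpy idea concrete, the sketch below solves a one-dimensional melting problem at a single phase-change temperature with an explicit scheme: enthalpy is updated from conductive fluxes on a fixed grid, and temperature is recovered from enthalpy, so the front never has to be tracked explicitly. Material properties, boundary temperatures, and grid resolution are illustrative, and natural convection (central to the paper) is not included.

        import numpy as np

        # Explicit 1D enthalpy method: melting of a pure substance at T_m = 0 on a fixed grid.
        nx, length = 50, 1.0
        dx = length / nx
        rho, c, k_cond, latent = 1.0, 1.0, 1.0, 10.0
        alpha = k_cond / (rho * c)
        dt = 0.4 * dx**2 / alpha                      # stable explicit time step

        T = np.full(nx, -1.0)                         # initially solid, below T_m
        H = rho * c * T                               # enthalpy on the solid branch
        T_hot, T_cold = 2.0, -1.0                     # Dirichlet boundary temperatures
        H[0] = rho * c * T_hot + rho * latent         # hot boundary node starts liquid

        def temperature(H):
            T = np.where(H < 0.0, H / (rho * c), 0.0)                              # solid / mushy
            return np.where(H > rho * latent, (H - rho * latent) / (rho * c), T)   # liquid

        for _ in range(2000):
            T = temperature(H)
            T[0], T[-1] = T_hot, T_cold
            # Interior enthalpy update from conductive fluxes only
            H[1:-1] += dt * k_cond * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2

        liquid_fraction = np.clip(H / (rho * latent), 0.0, 1.0)
        front = np.argmax(liquid_fraction < 0.5) * dx
        print(f"melt front near x = {front:.2f} after t = {2000 * dt:.2f}")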

  14. Robust Strategy for Rocket Engine Health Monitoring

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    2001-01-01

    Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.

  15. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    PubMed

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and the external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to these factors and to face good development opportunities. A cross-linked strategy analysis for typical exterior parts of the passenger car industry in China was then conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, the recycling industry model led by automobile manufacturers was promoted. Copyright © 2013 Elsevier Ltd. All rights reserved.
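
    The internal factor evaluation (IFE) matrix mentioned above reduces to a weighted scoring exercise; the sketch below shows the arithmetic with hypothetical factors, weights, and ratings rather than those used in the study.

        # Hypothetical internal-factor-evaluation (IFE) matrix for a recycling industry:
        # each factor gets a weight (summing to 1.0) and a rating from 1 (major weakness)
        # to 4 (major strength); the weighted total indicates the overall internal position.
        ife_factors = [
            ("established reverse-logistics network", 0.20, 4),
            ("access to end-of-life vehicle volumes",  0.25, 3),
            ("plastic sorting / identification cost",  0.30, 2),
            ("limited remanufacturing capability",     0.25, 1),
        ]

        total = sum(weight * rating for _, weight, rating in ife_factors)
        print(f"IFE weighted score = {total:.2f}  (above 2.5 suggests strengths dominate)")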

  16. Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.

    PubMed

    Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel

    2014-07-01

    Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. To investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing, theory of mind (ToM) and mirror neuron system (MNS). Cross-sectional study using resting-state functional magnetic resonance imaging involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Statistical parametric maps testing the degree of whole-brain functional connectivity and social functioning measures. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusive overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD. This excess ToM-MNS connectivity may reflect immature or aberrant developmental processes in 2 brain networks involved in understanding of others, a domain of impairment in ASD. Further, robust links with sociocommunicative symptoms of ASD implicate atypically increased ToM-MNS connectivity in social deficits observed in ASD.

  17. Method for Atypical Opinion Extraction from Ungrammatical Answers in Open-ended Questions

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Ayako; Tamura, Shingo; Oiso, Hiroaki; Komoda, Norihisa

    This paper presents a method for extracting atypical opinions from ungrammatical answers to open-ended questions supplied through cellular phones. The proposed system excludes typical opinions and extracts only atypical opinions. To cope with the incomplete syntax of texts entered on cellular phones, the system treats opinions as sets of keywords. Combinations of words are established beforehand in a typical-word database. Based on the ratio of typical word combinations in the sentences of an opinion, the system classifies the opinion as typical or atypical. When typical word combinations are sought in an opinion, the system considers the word order and the distance between the word positions in order to exclude unnecessary combinations. Furthermore, when an opinion includes several meanings, the system divides the opinion into phrases at each typical word combination. The extraction accuracy of the proposed system was confirmed by applying it to questionnaire data supplied by users of a mobile game content when they cancelled their accounts.
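
    A minimal sketch of the classification rule described above: each sentence of an opinion is reduced to ordered word pairs, pairs are matched against a typical-combination database, and the opinion is labelled typical when the ratio of matching sentences reaches a threshold. The word database, threshold, and tokenized example are hypothetical, and the distance-based filtering of word positions is omitted for brevity.

        from itertools import combinations

        # Hypothetical typical word-combination database (ordered pairs) and threshold
        TYPICAL_PAIRS = {("game", "boring"), ("points", "expensive"), ("quit", "busy")}
        THRESHOLD = 0.5

        def classify(opinion_sentences):
            """Label an opinion (list of tokenized sentences) typical or atypical from
            the ratio of sentences containing a known typical word combination."""
            if not opinion_sentences:
                return "atypical"
            hits = 0
            for tokens in opinion_sentences:
                # combinations() keeps the original token order, so word order is respected
                pairs = set(combinations(tokens, 2))
                if pairs & TYPICAL_PAIRS:
                    hits += 1
            return "typical" if hits / len(opinion_sentences) >= THRESHOLD else "atypical"

        opinion = [["game", "was", "boring"], ["menu", "froze", "every", "time"]]
        print(classify(opinion))   # -> "typical": one of the two sentences matches the database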

  18. The intersection syndrome: Ultrasound findings and their diagnostic value

    PubMed Central

    Montechiarello, S.; Miozzi, F.; D’Ambrosio, I.; Giovagnorio, F.

    2010-01-01

    Introduction The intersection syndrome is a well-known overuse syndrome of the distal forearm. It is characterized by noninfectious, inflammatory changes involving the area of intersection of the first (abductor pollicis longus and extensor pollicis brevis) and second (extensor carpi radialis longus and extensor carpi radialis brevis) extensor compartments in the dorsoradial aspect of the distal forearm. Imaging modalities used to diagnose this syndrome include ultrasonography (US) and magnetic resonance imaging. The purpose of this report is to describe typical US findings in the intersection syndrome and to demonstrate the diagnostic value of this approach. Materials and methods We reviewed US findings in 4 patients (mean age 40 years) referred to our staff for symptoms suggestive of the intersection syndrome (pain, swelling, erythema, and edema of the wrist). Results In all 4 cases, the US examination revealed peritendinous edema and synovial fluid within the tendon sheaths at the intersection between the first and the second dorsal extensor tendon compartments. Discussion Our experience shows that the intersection syndrome is associated with typical signs on US. This imaging modality can be considered a reliable tool for diagnosing this syndrome and may eliminate the need for other more expensive tests. PMID:23396515

  19. Evaluation of bacterial flora during the ripening of Kedong sufu, a typical Chinese traditional bacteria-fermented soybean product.

    PubMed

    Feng, Zhen; Gao, Wei; Ren, Dan; Chen, Xi; Li, Juan-juan

    2013-04-01

    Kedong sufu is a typical bacteria-fermented sufu in China. Isolation and identification of the autochthonous bacteria involved would allow the design of specific starters for this speciality. The purpose of the present study was to evaluate the bacterial flora during the ripening of Kedong sufu using polymerase chain reaction denaturing gradient gel electrophoresis (PCR-DGGE) and culturing. In terms of bacterial diversity, 22 strains were isolated and identified and 27 strains were detected by DGGE. Regarding bacterial dynamics, the results of culturing and PCR-DGGE exhibited a similar trend towards dominant strains. Throughout the fermentation of sufu, Enterococcus avium, Enterococcus faecalis and Staphylococcus carnosus were the dominant microflora, while the secondary microflora comprised Leuconostoc mesenteroides, Staphylococcus saprophyticus, Streptococcus lutetiensis, Kocuria rosea, Kocuria kristinae, Bacillus pumilus, Bacillus cereus and Bacillus subtilis. This study is the first to reveal the bacterial flora during the ripening of Kedong sufu using both culture-dependent and culture-independent methods. This information will help in the design of autochthonous starter cultures for the production of Kedong sufu with desirable characteristic sensory profiles and shorter ripening times. © 2012 Society of Chemical Industry.

  20. Anomalous Putamen Volume in Children with Complex Motor Stereotypies

    PubMed Central

    Mahone, E. Mark; Crocetti, Deana; Tochen, Laura; Kline, Tina; Mostofsky, Stewart H.; Singer, Harvey S.

    2016-01-01

    Introduction Complex motor stereotypies in children are repetitive, rhythmic movements that have a predictable pattern and location, seem purposeful, but serve no obvious function, tend to be prolonged, and stop with distraction, e.g., arm/hand flapping, waving. They occur in both “primary” (otherwise typically developing) and secondary conditions. These movements are best defined as habitual behaviors and therefore pathophysiologically hypothesized to reside in premotor to posterior putamen circuits. This study sought to clarify the underlying neurobiological abnormality in children with primary complex motor stereotypies using structural neuroimaging, emphasizing brain regions hypothesized to underlie these atypical behaviors. Methods High-resolution anatomical MRI images, acquired at 3.0T, were analyzed in children ages 8–12 years (20 with primary complex motor stereotypies, 20 typically developing). Frontal lobe sub-regions and striatal structures were delineated for analysis. Results Significant reductions (p=0.045) in the stereotypies group were identified in total putamen volume, but not caudate, nucleus accumbens or frontal sub-regions. There were no group differences in total cerebral volume. Conclusion Findings of a smaller putamen provide preliminary evidence suggesting the potential involvement of the habitual pathway as the underlying anatomical site in primary complex motor stereotypies. PMID:27751663

  1. Detection of triglycerides using immobilized enzymes in food and biological samples

    NASA Astrophysics Data System (ADS)

    Raichur, Ashish; Lesi, Abiodun; Pedersen, Henrik

    1996-04-01

    A scheme for the determination of total triglyceride (fat) content in biomedical and food samples is being developed. The primary emphasis is to minimize the reagents used, simplify sample preparation and develop a robust system that would facilitate on-line monitoring. The new detection scheme developed thus far involves extracting triglycerides into an organic solvent (cyclohexane) and performing partial least squares (PLS) analysis on the NIR (1100 - 2500 nm) absorbance spectra of the solution. A training set using 132 spectra of known triglyceride mixtures was compiled. Eight PLS calibrations were generated and were used to predict the total fat extracted from commercial samples such as mayonnaise, butter, corn oil and coconut oil. The results typically gave a correlation coefficient (r) of 0.99 or better. Predictions were typically accurate to within 90%, and improved at higher concentrations. Experiments were also performed using an immobilized lipase reactor to hydrolyze the fat extracted into the organic solvent. Performing PLS analysis on the difference spectra of the substrate and product could enhance specificity. This is being verified experimentally. Further work with biomedical samples is to be performed. This scheme may be developed into a feasible detection method for triglycerides in the biomedical and food industries.
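
    As a rough illustration of the calibration step described above (partial least squares regression of NIR absorbance spectra against known triglyceride concentrations), the following sketch uses scikit-learn on synthetic spectra; the sample counts, noise level, and number of latent variables are assumptions, not the authors' experimental values.

```python
# Illustrative PLS calibration in the spirit of the NIR approach described above;
# the spectra below are synthetic stand-ins, not the authors' data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 132, 200                          # e.g. absorbances over 1100-2500 nm
concentrations = rng.uniform(0.0, 10.0, n_samples)           # known triglyceride levels (arbitrary units)
basis = rng.normal(size=n_wavelengths)                       # pretend absorptivity profile
spectra = np.outer(concentrations, basis) + rng.normal(scale=0.5, size=(n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(spectra, concentrations, random_state=0)

pls = PLSRegression(n_components=8)                          # number of latent variables is an arbitrary choice here
pls.fit(X_train, y_train)

predicted = pls.predict(X_test).ravel()
r = np.corrcoef(predicted, y_test)[0, 1]                     # correlation between predicted and known values
print(f"correlation coefficient r = {r:.3f}")
```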

  2. The Application of Infrared Thermographic Inspection Techniques to the Space Shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Cramer, K. E.; Winfree, W. P.

    2005-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method as compared to ultrasonic techniques which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. A typical implementation of PCA is when the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from the RCC materials. Details of a one-dimensional analytic model and a two-dimensional finite-element model will be presented. An overview of the PCA process as well as a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens will be shown. Finally, a number of different applications of this technology to various RCC components will be presented.
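
    The alternative PCA variant described above, in which the thermal data are projected onto a fixed set of eigenvectors rather than eigenvectors recomputed from each inspected data set, can be sketched as a simple projection. The synthetic data, basis construction, and array shapes below are illustrative assumptions, not the actual NASA processing chain.

```python
# Sketch of PCA applied to thermographic data with a fixed eigenvector basis:
# pixel time histories are projected onto precomputed temporal eigenvectors instead
# of eigenvectors recomputed from the data set under inspection. All data here are
# random stand-ins; in the application the basis would come from an analytic model.
import numpy as np

def fixed_basis_pca(frames, eigenvectors, n_components=2):
    """frames: (n_time, height, width) thermal sequence;
    eigenvectors: (n_time, n_time) matrix whose columns form the fixed temporal basis."""
    n_time, h, w = frames.shape
    X = frames.reshape(n_time, -1)                  # each column is one pixel's time history
    X = X - X.mean(axis=0, keepdims=True)           # remove the mean temporal response
    scores = eigenvectors[:, :n_components].T @ X   # project onto the fixed basis
    return scores.reshape(n_components, h, w)       # component images highlight anomalies

rng = np.random.default_rng(1)
reference = rng.normal(size=(200, 50))              # 200 reference time histories of length 50 (stand-in for a model)
_, _, vt = np.linalg.svd(reference - reference.mean(axis=0), full_matrices=False)
basis = vt.T                                        # fixed 50 x 50 temporal eigenvector matrix

thermal_sequence = rng.normal(size=(50, 64, 64))    # stand-in for an inspection sequence
component_images = fixed_basis_pca(thermal_sequence, basis)
print(component_images.shape)                       # (2, 64, 64)
```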

  3. Psittacosis

    MedlinePlus

    ... shop employees, people who work in poultry processing plants, and veterinarians. Typical birds involved are parrots, parakeets, and budgerigars, although other birds have also caused the disease. Psittacosis is a rare disease. Very few cases are reported each year ...

  4. Click It or Ticket Evaluation, 2011

    DOT National Transportation Integrated Search

    2013-05-01

    The 2011 Click It or Ticket (CIOT) mobilization followed a typical selective traffic enforcement program (STEP) sequence, involving paid media, earned media, and enforcement. A nationally representative telephone survey indicated that the mobilizatio...

  5. Genetics Home Reference: Swyer syndrome

    MedlinePlus

    ... they help determine whether a person will develop male or female sex characteristics. Girls and women typically ... Y protein starts processes that are involved in male sexual development. These processes cause a fetus to ...

  6. Analysis of comfort and ergonomics for clinical work environments.

    PubMed

    Shafti, Ali; Lazpita, Beatriz Urbistondo; Elhage, Oussama; Wurdemann, Helge A; Althoefer, Kaspar

    2016-08-01

    Work related musculoskeletal disorders (WMSD) are a serious risk to workers' health in any work environment, and especially in clinical work places. These disorders are typically the result of prolonged exposure to non-ergonomic postures and the resulting discomfort in the workplace. Thus a continuous assessment of comfort and ergonomics is necessary. There are different techniques available to make such assessments, such as self-reports on perceived discomfort and observational scoring models based on the posture's relevant joint angles. These methods are popular in medical and industrial environments alike. However, there are uncertainties with regards to objectivity of these methods and whether they provide a full picture. This paper reports on a study about these methods and how they correlate with the activity of muscles involved in the task at hand. A wearable 4-channel electromyography (EMG) and joint angle estimation device with wireless transmission was made specifically for this study to allow continuous, long-term and real-time measurements and recording of activities. N=10 participants took part in an experiment involving a buzz-wire test at 3 different levels, with their muscle activity (EMG), joint angle scores (Rapid Upper Limb Assessment - RULA), self-reports of perceived discomfort (Borg scale) and performance score on the buzz-wire being recorded and compared. Results show that the Borg scale is not responsive to smaller changes in discomfort whereas RULA and EMG can be used to detect more detailed changes in discomfort, effort and ergonomics.

  7. Parental Involvement and Spousal Satisfaction with Division of Early Childcare in Turkish Families with Normal Children and Children with Special Needs

    ERIC Educational Resources Information Center

    Ozgun, Ozkan; Honig, Alice Sterling

    2005-01-01

    In this low-income Turkish sample, parents reported on father and mother division of childcare labor and satisfaction with division. Regardless of whether they were rearing typical or atypical children, mothers reported a higher level of involvement than fathers in every domain of childcare. In general, both mothers and fathers reported slight…

  8. An Analysis of a Typical Instructional Unit in Junior High School Science to Determine the Explicit and Implicit Concept Loading Involved. Final Report.

    ERIC Educational Resources Information Center

    Smith, Herbert A.

    This study involved examining an instructional unit with regard to its concept content and appropriateness for its target audience. The study attempted to determine (1) what concepts are treated explicitly or implicitly, (2) whether there is a hierarchical conceptual structure within the unit, (3) what level of sophistication is required to…

  9. Atypical autoerotic deaths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowitt, G.T.; Hanzlick, R.L.

    1992-06-01

    So-called 'typical' autoerotic fatalities are the result of asphyxia due to mechanical compression of the neck, chest, or abdomen, whereas 'atypical' autoeroticism involves sexual self-stimulation by other means. The authors present five atypical autoerotic fatalities that involved the use of dichlorodifluoromethane, nitrous oxide, isobutyl nitrite, cocaine, or compounds containing 1,1,1-trichloroethane. Mechanisms of death are discussed in each case and the pertinent literature is reviewed.

  10. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
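
    A toy sketch of the PatchMatch-style search adapted to 3D blocks is given below: each tracked voxel keeps its current best offset, inherits the offset of the neighbor processed just before it (propagation), and tests a few random offsets per iteration (random search) instead of exhaustively scanning the whole region. The kernel size, search radius, grid of tracked voxels, and similarity measure (sum of squared differences rather than correlation) are simplifying assumptions, not the authors' implementation.

```python
# Toy PatchMatch-style offset search for 3D speckle tracking: candidate offsets are
# propagated from the neighbor processed just before each voxel and refined by a few
# random samples, instead of exhaustively scanning the whole search region.
# Kernel size, search radius, and the synthetic volumes are illustrative assumptions.
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
K, R = 2, 2          # half kernel size and search radius (assumed values)

def ssd(vol_a, vol_b, p, q):
    """Sum of squared differences between the kernels centered at p (vol_a) and q (vol_b)."""
    az, ay, ax = p
    bz, by, bx = q
    a = vol_a[az - K:az + K + 1, ay - K:ay + K + 1, ax - K:ax + K + 1]
    b = vol_b[bz - K:bz + K + 1, by - K:by + K + 1, bx - K:bx + K + 1]
    return float(np.sum((a - b) ** 2))

def patchmatch(frame0, frame1, points, n_iter=4, n_random=8):
    """Return an offset per tracked voxel using propagation plus random search."""
    offsets = {p: np.zeros(3, dtype=int) for p in points}
    for it in range(n_iter):
        order = points if it % 2 == 0 else points[::-1]   # alternate scan direction
        prev = None
        for p in order:
            candidates = [offsets[p]]
            if prev is not None:
                candidates.append(offsets[prev])          # propagation from the previous voxel
            candidates += [rng.integers(-R, R + 1, size=3) for _ in range(n_random)]  # random search
            offsets[p] = min(candidates,
                             key=lambda o: ssd(frame0, frame1, p, tuple(np.asarray(p) + o)))
            prev = p
    return offsets

# Synthetic test: frame1 is frame0 shifted by (0, 1, 2) voxels.
frame0 = rng.normal(size=(32, 32, 32))
frame1 = np.roll(frame0, shift=(0, 1, 2), axis=(0, 1, 2))
grid = range(8, 25, 4)
points = [(z, y, x) for z in grid for y in grid for x in grid]   # 125 tracked voxels

result = patchmatch(frame0, frame1, points)
dominant = Counter(tuple(int(v) for v in off) for off in result.values()).most_common(1)
print("dominant recovered offset:", dominant)   # usually the imposed shift (0, 1, 2)
```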

  11. Recreational technical diving part 1: an introduction to technical diving methods and activities.

    PubMed

    Mitchell, Simon J; Doolette, David J

    2013-06-01

    Technical divers use gases other than air and advanced equipment configurations to conduct dives that are deeper and/or longer than typical recreational air dives. The use of oxygen-nitrogen (nitrox) mixes with oxygen fractions higher than air results in longer no-decompression limits for shallow diving, and faster decompression from deeper dives. For depths beyond the air-diving range, technical divers mix helium, a light non-narcotic gas, with nitrogen and oxygen to produce 'trimix'. These blends are tailored to the depth of intended use with a fraction of oxygen calculated to produce an inspired oxygen partial pressure unlikely to cause cerebral oxygen toxicity and a nitrogen fraction calculated to produce a tolerable degree of nitrogen narcosis. A typical deep technical dive will involve the use of trimix at the target depth with changes to gases containing more oxygen and less inert gas during the decompression. Open-circuit scuba may be used to carry and utilise such gases, but this is very wasteful of expensive helium. There is increasing use of closed-circuit 'rebreather' devices. These recycle expired gas and potentially limit gas consumption to a small amount of inert gas to maintain the volume of the breathing circuit during descent and the amount of oxygen metabolised by the diver. This paper reviews the basic approach to planning and execution of dives using these methods to better inform physicians of the physical demands and risks.

  12. Chromatic illumination discrimination ability reveals that human colour constancy is optimised for blue daylight illuminations.

    PubMed

    Pearce, Bradley; Crichton, Stuart; Mackiewicz, Michal; Finlayson, Graham D; Hurlbert, Anya

    2014-01-01

    The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.

  13. Mimicry of Appendicitis Symptomatology in Congenital Anomalies and Diseases of the Genitourinary System and Pregnancy

    PubMed Central

    Dalpiaz, Amanda; Gandhi, Jason; Smith, Noel L.; Dagur, Gautam; Schwamb, Richard; Weissbart, Steven J.; Khan, Sardar Ali

    2017-01-01

    Introduction Appendicitis is a prevailing cause of acute abdomen, but is often difficult to diagnose due to its wide range of symptoms, anatomical variations, and developmental abnormalities. Urological disorders of the genitourinary tract may be closely related to appendicitis due to the close proximity of the appendix to the genitourinary tract. This review provides a summary of the urological complications and simulations of appendicitis. Both typical and urological symptoms of appendicitis are discussed, as well as recommended diagnostic and treatment methods. Methods Medline searches were conducted via PubMed in order to incorporate data from the recent and early literature. Results Urological manifestations of appendicitis affect the adrenal glands, kidney, retroperitoneum, ureter, bladder, prostate, scrotum, and penis. Appendicitis in pregnancy is difficult to diagnose due to variations in appendiceal position and trimester-specific symptoms. Ultrasound, CT, and MRI are used in diagnosis of appendicitis and its complications. Treatment of appendicitis may be done via open appendectomy or laparoscopic appendectomy. In some cases, other surgeries are required to treat urological complications, though surgery may be avoided completely in other cases. Conclusion Clinical presentation and complications of appendicitis vary among patients, especially when the genitourinary tract is involved. Appendicitis may mimic urological disorders and vice versa. Awareness of differential diagnosis and proper diagnostic techniques is important in preventing delayed diagnosis and possible complications. MRI is recommended for diagnosis of pregnant patients. Ultrasound is preferred in patients exhibiting typical symptoms. PMID:28413377

  14. Lower Extremity Muscle Activity during Cycling in Adolescents with and without Cerebral Palsy

    PubMed Central

    Lauer, Richard T.; Johnston, Therese E.; Smith, Brian T.; Lee, Samuel C.K.

    2008-01-01

    Background In individuals with cerebral palsy (CP), adaptation and plasticity in the neuromuscular system can lead to detrimental changes affecting gait. Cycling may be an effective method to improve mobility. The biomechanics of cycling in adolescents with CP have been studied, but further analysis of the frequency and amplitude characteristics of the electromyographic (EMG) signals can assist with interpretation of the cycling kinematics. Methods Data were analyzed from ten adolescents with typical development (TD) (mean = 14.9, SD = 1.4 years) and ten adolescents with CP (mean = 15.6, SD = 1.8 years) as they cycled at two different cadences. Analyses of the lower extremity EMG signals involved frequency and amplitude analysis across the cycling revolution. Findings Examination of cycling cadence revealed that adolescents with CP had altered EMG characteristics in comparison to adolescents with typical development across the entire crank revolution for all muscles. Analyses of individual muscles indicated both inappropriate muscle activation and weakness. Interpretation A more comprehensive analysis of EMG activity has the potential to provide insight into how a task is accomplished. In this study, the control of several muscles, especially the rectus femoris, was significantly different in adolescents with cerebral palsy. This, combined with muscle weakness, may have contributed to the observed deviations in joint kinematics. Interventions that increase muscle strength with feedback to the nervous system about appropriate activation timing may be beneficial to allow individuals with CP to cycle more efficiently. PMID:18082920
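
    As a minimal illustration of the kind of amplitude and frequency measures used in such EMG analyses, the sketch below computes a root-mean-square amplitude and a mean power frequency for a synthetic signal; the sampling rate, signal model, and absence of windowing across the crank revolution are simplifications rather than the study's processing pipeline.

```python
# Minimal sketch of amplitude (RMS) and frequency (mean power frequency) measures
# for an EMG segment; the signal below is synthetic and the parameters are assumed.
import numpy as np

fs = 1000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(3)
emg = np.sin(2 * np.pi * 80 * t) * rng.normal(1.0, 0.3, t.size)   # crude EMG-like burst

def rms_amplitude(x):
    return float(np.sqrt(np.mean(x ** 2)))

def mean_power_frequency(x, fs):
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

print(f"RMS amplitude: {rms_amplitude(emg):.3f}")
print(f"Mean power frequency: {mean_power_frequency(emg, fs):.1f} Hz")
```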

  15. Development of a Task-Exposure Matrix (TEM) for Pesticide Use (TEMPEST).

    PubMed

    Dick, F D; Semple, S E; van Tongeren, M; Miller, B G; Ritchie, P; Sherriff, D; Cherrie, J W

    2010-06-01

    Pesticides have been associated with increased risks for a range of conditions including Parkinson's disease, but identifying the agents responsible has proven challenging. Improved pesticide exposure estimates would increase the power of epidemiological studies to detect such an association if one exists. Categories of pesticide use were identified from the tasks reported in a previous community-based case-control study in Scotland. Typical pesticides used in each task in each decade were identified from published scientific and grey literature and from expert interviews, with the number of potential agents collapsed into 10 groups of pesticides. A pesticide usage database was then created, using the task list and the typical pesticide groups employed in those tasks across seven decades spanning the period 1945-2005. Information about the method of application and concentration of pesticides used in these tasks was then incorporated into the database. A list of 81 tasks involving pesticide exposure in Scotland was generated, covering seven decades and producing a total of 846 task-pesticide-decade combinations. A Task-Exposure Matrix for PESTicides (TEMPEST) was produced by two occupational hygienists who quantified the likely probability and intensity of inhalation and dermal exposures for each pesticide group for a given use during each decade. TEMPEST provides a basis for assessing exposures to specific pesticide groups in Scotland covering the period 1945-2005. The methods used to develop TEMPEST could be used in a retrospective assessment of occupational exposure to pesticides for Scottish epidemiological studies or adapted for use in other countries.

  16. Are there parental socialization effects on the sex-typed behavior of individuals with congenital adrenal hyperplasia?

    PubMed

    Wong, Wang I; Pasterski, Vickie; Hindmarsh, Peter C; Geffner, Mitchell E; Hines, Melissa

    2013-04-01

    Influences of prenatal androgen exposure on human sex-typical behavior have been established largely through studies of individuals with congenital adrenal hyperplasia (CAH). However, evidence that addresses the potential confounding influence of parental socialization is limited. Parental socialization and its relationship to sex-typical toy play and spatial ability were investigated in two samples involving 137 individuals with CAH and 107 healthy controls. Females with CAH showed more boy-typical toy play and better targeting performance than control females, but did not differ in mental rotations performance. Males with CAH showed worse mental rotations performance than control males, but did not differ in sex-typical toy play or targeting. Reported parental encouragement of girl-typical toy play correlated with girl-typical toy play in all four groups. Moreover, parents reported encouraging less girl-typical, and more boy-typical, toy play in females with CAH than in control females and this reported encouragement partially mediated the relationship between CAH status and sex-typical toy play. Other evidence suggests that the reported parental encouragement of sex-atypical toy play in girls with CAH may be a response to the girls' preferences for boys' toys. Nevertheless, this encouragement could further increase boy-typical behavior in girls with CAH. In contrast to the results for toy play, we found no differential parental socialization for spatial activities and little evidence linking parental socialization to spatial ability. Overall, evidence suggests that prenatal androgen exposure and parental socialization both contribute to sex-typical toy play.

  17. Click It or Ticket Evaluation, 2010

    DOT National Transportation Integrated Search

    2013-05-01

    The 2010 Click It or Ticket (CIOT) mobilization followed a typical selective traffic enforcement program (STEP) sequence, involving paid media, earned media, and enforcement. A nationally representative telephone survey indicated that the mobilization wa...

  18. SMARTE'S SITE CHARACTERIZATION TOOL

    EPA Science Inventory

    Site Characterization involves collecting environmental data to evaluate the nature and extent of contamination. Environmental data could consist of chemical analyses of soil, sediment, water or air samples. Typically site characterization data are statistically evaluated for thr...

  19. RFID Reader Antenna with Multi-Linear Polarization Diversity

    NASA Technical Reports Server (NTRS)

    Fink, Patrick; Lin, Greg; Ngo, Phong; Kennedy, Timothy; Rodriguez, Danny; Chu, Andrew; Broyan, James; Schmalholz, Donald

    2018-01-01

    This paper describes an RFID reader antenna that offers reduced polarization loss compared to that typically associated with reader-tag communications involving arbitrary relative orientation of the reader antenna and the tag.

  20. The White Adolescent's Drug Odyssey.

    ERIC Educational Resources Information Center

    Lipton, Douglas S.; Marel, Rozanne

    1980-01-01

    Presents a "typical" case history of a White middle-class teenager who becomes involved with marihuana and subsequently begins to abuse other drugs. Sociological findings from other research are interspersed in the anecdotal account. (GC)

  1. Substance use disorder

    MedlinePlus

    ... through the stages than do adults. Stages are: Experimental use. Typically involves peers, done for recreational use; ... Hostility when confronted about drug dependence Lack of control ... Secretive behavior to hide drug use Using drugs even when alone

  2. Brownfields Environmental Insurance and Risk Management Tools Glossary of Terms

    EPA Pesticide Factsheets

    This document provides a list of terms that are typically used by the environmental insurance industry, transactional specialists, and other parties involved in using environmental insurance or risk management tools.

  3. The San Francisco Bay - Delta Wastewater and Residual Solids Management Study. Volume III. Technical Appendix. Wastewater Residual Solids Management Study

    DTIC Science & Technology

    1972-08-01

    of public health hazards and may alter reuse approaches to de-emphasize the fertilizer uses of these sludges because of the heavy metals involved... materials are removed with organic sludges, or lime sludges where that process is used. Toxic solids would typically include phenols and heavy metals, 80 percent and 40 percent respectively being removable with the organic sludges.

  4. ‘What brings him here today?’: Medical problem presentation involving children with Autism Spectrum Disorders and typically developing children

    PubMed Central

    Solomon, Olga; Heritage, John; Yin, Larry; Maynard, Douglas; Bauman, Margaret

    2015-01-01

    Conversation and discourse analyses were used to examine medical problem presentation in pediatric care. Healthcare visits involving children with ASD and typically developing children were analyzed. We examined how children’s communicative and epistemic capabilities and their opportunities to be socialized into a competent patient role are interactionally achieved. We found that medical problem presentation is designed to contain a ‘pre-visit’ account of the interactional and epistemic work that children and caregivers carry out at home to identify the child’s health problems; and that the intersubjective accessibility of children’s experiences that becomes disrupted by ASD presents a dilemma to all participants in the visit. The article examines interactional roots of unmet healthcare needs and foregone medical care of people with ASD. PMID:26463739

  5. Magnetic Reconnection in Different Environments: Similarities and Differences

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Aunai, Nicolas; Kuznetsova, Masha; Zenitani, Seiji; Birn, Joachim

    2014-01-01

    Depending on the specific situation, magnetic reconnection may involve symmetric or asymmetric inflow regions. Asymmetric reconnection applies, for example, to reconnection at the Earth's magnetopause, whereas reconnection in the nightside magnetotail tends to involve more symmetric geometries. A combination of review and new results pertaining to magnetic reconnection is being presented. The focus is on three aspects: A basic, MHD-based, analysis of the role magnetic reconnection plays in the transport of energy, followed by an analysis of a kinetic model of time dependent reconnection in a symmetric current sheet, similar to what is typically being encountered in the magnetotail of the Earth. The third element is a review of recent results pertaining to the orientation of the reconnection line in asymmetric geometries, which are typical for the magnetopause of the Earth, as well as likely to occur at other planets.

  6. Comparison of methods for determining the numbers and species distribution of coliform bacteria in well water samples.

    PubMed

    Niemi, R M; Heikkilä, M P; Lahti, K; Kalso, S; Niemelä, S I

    2001-06-01

    Enumeration of coliform bacteria and Escherichia coli is the most widely used method in the estimation of hygienic quality of drinking water. The yield of target bacteria and the species composition of different populations of coliform bacteria may depend on the method. Three methods were compared. Three membrane filtration methods were used for the enumeration of coliform bacteria in shallow well waters. The yield of confirmed coliform bacteria was highest on Differential Coliform agar, followed by LES Endo agar. Differential Coliform agar had the highest proportion of typical colonies, of which 74% were confirmed as belonging to the Enterobacteriaceae. Of the typical colonies on Lactose Tergitol 7 TTC agar, 75% were confirmed as Enterobacteriaceae, whereas 92% of typical colonies on LES Endo agar belonged to the Enterobacteriaceae. LES Endo agar yielded many Serratia strains, Lactose Tergitol 7 TTC agar yielded numerous strains of Rahnella aquatilis and Enterobacter, whereas Differential Coliform agar yielded the widest range of species. The yield of coliform bacteria varied between methods. Each method compared had a characteristic species distribution of target bacteria and a typical level of interference of non-target bacteria. Identification with routine physiological tests to distinct species was hampered by the slight differences between species. High yield and sufficient selectivity are difficult to achieve simultaneously, especially if the target group is diverse. The results showed that several aspects of method performance should be considered, and that the target group must be distinctly defined to enable method comparisons.

  7. Rotor Wake Vortex Definition: Initial Evaluation of 3-C PIV Results of the Hart-II Study

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; vanderWall, Berend; Richard, Hughes; Raffel, Markus; Beaumier, Philippe; Delrieux, Yves; Lim, Joon W.; Yu, Yung H.; Tung, Chee

    2002-01-01

    An initial evaluation is made of extensive three-component (3C) particle image velocimetry (PIV) measurements within the wake across a rotor disk plane. The model is a 40 percent scale BO-105 helicopter main rotor in forward flight simulation. This study is part of the HART II test program conducted in the German-Dutch Wind Tunnel (DNW). Included are wake vortex field measurements over the advancing and retreating sides of the rotor operating at a typical descent landing condition important for impulsive blade-vortex interaction (BVI) noise. Also included are advancing side results for rotor angle variations from climb to steep descent. Using detailed PIV vector maps of the vortex fields, methods of extracting key vortex parameters are examined and a new method was developed and evaluated. An objective processing method, involving a center-of-vorticity criterion and a vorticity 'disk' integration, was used to determine vortex core size, strength, core velocity distribution characteristics, and unsteadiness. These parameters are mapped over the rotor disk and offer unique physical insight for these parameters of importance for rotor noise and vibration prediction.
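
    The center-of-vorticity criterion and vorticity 'disk' integration mentioned above can be sketched on a synthetic planar velocity field: vorticity is obtained by finite differences, the vortex center is taken as the vorticity-weighted centroid, and the strength as the vorticity integrated over a disk around that center. The Gaussian (Lamb-Oseen-like) test vortex, grid spacing, and disk radius are illustrative choices, not the HART II processing parameters.

```python
# Sketch of extracting vortex center and strength from a planar velocity field:
# vorticity by finite differences, center as the vorticity-weighted centroid,
# and strength as vorticity integrated over a disk around the detected center.
import numpy as np

dx = 0.01
x = np.arange(-0.5, 0.5, dx)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
gamma_true, core = 1.0, 0.05
vtheta = gamma_true / (2 * np.pi * np.sqrt(R2) + 1e-12) * (1 - np.exp(-R2 / core**2))
u = -vtheta * Y / (np.sqrt(R2) + 1e-12)     # Cartesian components of a Lamb-Oseen-like vortex
v = vtheta * X / (np.sqrt(R2) + 1e-12)

# Vorticity omega_z = dv/dx - du/dy by central differences
dvdx = np.gradient(v, dx, axis=1)
dudy = np.gradient(u, dx, axis=0)
omega = dvdx - dudy

# Center of vorticity (vorticity-weighted centroid)
xc = np.sum(X * omega) / np.sum(omega)
yc = np.sum(Y * omega) / np.sum(omega)

# Circulation from a 'disk' integration of vorticity around the detected center
disk = (X - xc) ** 2 + (Y - yc) ** 2 <= (4 * core) ** 2
circulation = np.sum(omega[disk]) * dx * dx

print(f"center ~ ({xc:.3f}, {yc:.3f}), circulation ~ {circulation:.3f} (true {gamma_true})")
```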

  8. Standard Transistor Array (STAR). Volume 1: Placement technique

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Caroll, B. D.

    1979-01-01

    A large scale integration (LSI) technology, the standard transistor array uses a prefabricated understructure of transistors and a comprehensive library of digital logic cells to allow efficient fabrication of semicustom digital LSI circuits. The cell placement technique for this technology involves formation of a one dimensional cell layout and "folding" of the one dimensional placement onto the chip. It was found that, by use of various folding methods, high quality chip layouts can be achieved. Methods developed to measure the "goodness" of the generated placements include efficient means for estimating channel usage requirements and for via counting. The placement and rating techniques were incorporated into a placement program (CAPSTAR). By means of repetitive use of the folding methods and simple placement improvement strategies, this program provides near optimum placements in a reasonable amount of time. The program was tested on several typical LSI circuits to provide performance comparisons both with respect to input parameters and with respect to the performance of other placement techniques. The results of this testing indicate that near optimum placements can be achieved by use of the procedures without incurring severe time penalties.
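
    One simple way to picture the 'folding' of a one-dimensional cell placement onto a chip is a serpentine (boustrophedon) mapping onto rows, sketched below; this is only one plausible folding method and is not necessarily among the specific methods implemented in CAPSTAR.

```python
# Illustrative sketch of 'folding' a one-dimensional cell ordering onto a 2D array
# in a serpentine pattern; one plausible folding method, not necessarily any of the
# specific methods used in the CAPSTAR program.
def serpentine_fold(cells, n_columns):
    """Return {cell: (row, column)} positions for a snake-like folding."""
    placement = {}
    for index, cell in enumerate(cells):
        row, col = divmod(index, n_columns)
        if row % 2 == 1:                      # reverse direction on odd rows
            col = n_columns - 1 - col
        placement[cell] = (row, col)
    return placement

cells = [f"cell{i}" for i in range(10)]
for cell, pos in serpentine_fold(cells, n_columns=4).items():
    print(cell, pos)
```

    Keeping cells that are adjacent in the one-dimensional ordering physically close after folding is what allows channel usage and via counts to stay low.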

  9. The Measurement of Fuel-Air Ratio by Analysis for the Oxidized Exhaust Gas

    NASA Technical Reports Server (NTRS)

    Gerrish, Harold C.; Meem, J. Lawrence, Jr.

    1943-01-01

    An investigation was made to determine a method of measuring fuel-air ratio that could be used for test purposes in flight and for checking conventional equipment in the laboratory. Two single-cylinder test engines equipped with typical commercial engine cylinders were used. The fuel-air ratio of the mixture delivered to the engines was determined by direct measurement of the quantity of air and of fuel supplied and also by analysis of the oxidized exhaust gas and of the normal exhaust gas. Five fuels were used: gasoline that complied with Army-Navy fuel Specification No. AN-VV-F-781 and four mixtures of this gasoline with toluene, benzene, and xylene. The method of determining the fuel-air ratio described in this report involves the measurement of the carbon-dioxide content of the oxidized exhaust gas and the use of graphs or the presented equation. This method is considered useful in aircraft, in the field, or in the laboratory for a range of fuel-air ratios from 0.047 to 0.124.

  10. The Measurement of Fuel-air Ratio by Analysis of the Oxidized Exhaust Gas

    NASA Technical Reports Server (NTRS)

    Meem, J. Lawrence, Jr.

    1943-01-01

    An investigation was made to determine a method of measuring fuel-air ratio that could be used for test purposes in flight and for checking conventional equipment in the laboratory. Two single-cylinder test engines equipped with typical commercial engine cylinders were used. The fuel-air ratio of the mixture delivered to the engines was determined by direct measurement of the quantity of air and of fuel supplied and also by analysis of the oxidized exhaust gas and of the normal exhaust gas. Five fuels were used: gasoline that complied with Army-Navy Fuel Specification No. AN-VV-F-781 and four mixtures of this gasoline with toluene, benzene, and xylene. The method of determining the fuel-air ratio described in this report involves the measurement of the carbon-dioxide content of the oxidized exhaust gas and the use of graphs or the presented equation. This method is considered useful in aircraft, in the field, or in the laboratory for a range of fuel-air ratios from 0.047 to 0.124.

  11. Biocompatible gold nanorods: one-step surface functionalization, highly colloidal stability, and low cytotoxicity.

    PubMed

    Liu, Kang; Zheng, Yuanhui; Lu, Xun; Thai, Thibaut; Lee, Nanju Alice; Bach, Udo; Gooding, J Justin

    2015-05-05

    The conjugation of gold nanorods (AuNRs) with polyethylene glycol (PEG) is one of the most effective ways to reduce their cytotoxicity arising from the cetyltrimethylammonium bromide (CTAB) and silver ions used in their synthesis. However, typical PEGylation occurs only at the tips of the AuNRs, producing partially modified AuNRs. To address this issue, we have developed a novel, facile, one-step surface functionalization method that involves the use of Tween 20 to stabilize AuNRs, bis(p-sulfonatophenyl)phenylphosphine (BSPP) to activate the AuNR surface for the subsequent PEGylation, and NaCl to etch silver from the AuNRs. This method allows for the complete removal of the surface-bound CTAB and the most active surface silver from the AuNRs. The AuNRs produced by this method showed far lower toxicity than those PEGylated by other methods, with no apparent toxicity at concentrations below 5 μg/mL. Even at a high concentration of 80 μg/mL, cell viability is still four times higher than that observed with the tip-modified AuNRs.

  12. An innovative method to involve community health workers as partners in evaluation research.

    PubMed

    Peacock, Nadine; Issel, L Michele; Townsell, Stephanie J; Chapple-McGruder, Theresa; Handler, Arden

    2011-12-01

    We developed a process through which community outreach workers, whose role is not typically that of a trained researcher, could actively participate in collection of qualitative evaluation data. Outreach workers for a community-based intervention project received training in qualitative research methodology and certification in research ethics. They used a Voice over Internet Protocol phone-in system to provide narrative reports about challenges faced by women they encountered in their outreach activities as well as their own experiences as outreach workers. Qualitative data contributed by outreach workers provided insights not otherwise available to the evaluation team, including details about the complex lives of underserved women at risk for poor pregnancy outcomes and the challenges and rewards of the outreach worker role. Lay health workers can be a valuable asset as part of a research team. Training in research ethics and methods can be tailored to their educational level and preferences, and their insights provide important information and perspectives that may not be accessible via other data collection methods. Challenges encountered in the dual roles of researcher and lay health worker can be addressed in training.

  13. Gas chromatographic/mass spectrometric and microbiological analyses on irradiated chicken

    NASA Astrophysics Data System (ADS)

    Parlato, A.; Calderaro, E.; Bartolotta, A.; D'Oca, M. C.; Giuffrida, S. A.; Brai, M.; Tranchina, L.; Agozzino, P.; Avellone, G.; Ferrugia, M.; Di Noto, A. M.; Caracappa, S.

    2007-08-01

    Ionizing radiation is widely used as a treatment technique for food preservation. Among other effects, it reduces microbial contamination, provides disinfestation, inhibits sprouting and extends the shelf life of food. However, the commercialization of irradiated food requires the availability of reliable methods to identify irradiated foodstuffs. In this paper, we present results on the application of this method to irradiated chicken, based on the detection, in muscle and skin samples, of ion peaks at 98 Da and 112 Da in a ratio of approximately 4:1, typical of radiation-induced 2-dodecylcyclobutanones (2-DCB). The aim of the work was also to study the time stability of the measured parameters in samples irradiated at 3 and 5 kGy, and to verify the efficacy of the treatment from a microbiological point of view. Our results show that, one month after irradiation at 3 kGy, the method is suitable for skin but not for muscle samples, while the measured parameters are detectable in both sample types irradiated at 5 kGy. The microbial population was substantially reduced even at 3 kGy.
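
    The detection criterion described above (peaks at 98 Da and 112 Da in a ratio of roughly 4:1) can be expressed as a trivial ratio check, sketched below; the tolerance and example intensities are illustrative placeholders, not validated acceptance limits.

```python
# Toy check of the ion-intensity ratio criterion described above (peaks at 98 Da and
# 112 Da in roughly a 4:1 ratio for 2-DCB); the tolerance and example intensities
# are illustrative, not validated acceptance limits.
def looks_like_2dcb(intensity_98, intensity_112, expected_ratio=4.0, tolerance=0.35):
    if intensity_112 <= 0:
        return False
    ratio = intensity_98 / intensity_112
    return abs(ratio - expected_ratio) / expected_ratio <= tolerance

print(looks_like_2dcb(4100, 1000))   # ratio ~4.1 -> consistent with the 2-DCB signature
print(looks_like_2dcb(900, 1000))    # ratio ~0.9 -> not consistent
```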

  14. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
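
    The contrast between the two approximation models can be illustrated on a small multimodal test function: a quadratic polynomial fitted by least squares cannot follow multiple local extrema, whereas a Gaussian-process (kriging-type) interpolator can. The test function, sample size, and kernel choice below are illustrative assumptions, not the study's actual test problems.

```python
# Sketch contrasting a quadratic least-squares polynomial with a kriging-type
# (Gaussian process) interpolator on a multimodal 1-D test function.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def response(x):
    return np.sin(3 * x) + 0.3 * x**2        # response with multiple local extrema

rng = np.random.default_rng(4)
x_train = np.sort(rng.uniform(-3, 3, 15)).reshape(-1, 1)
y_train = response(x_train).ravel()
x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_true = response(x_test).ravel()

quadratic = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
quadratic.fit(x_train, y_train)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
kriging.fit(x_train, y_train)

for name, model in [("quadratic", quadratic), ("kriging", kriging)]:
    rmse = np.sqrt(np.mean((model.predict(x_test) - y_true) ** 2))
    print(f"{name} RMSE: {rmse:.3f}")
```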

  15. Computer-Aided Drug Design Methods.

    PubMed

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physicochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.
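
    As a loose illustration of the LBDD idea of relating physicochemical descriptors to activity, the sketch below fits a simple linear structure-activity relationship; the descriptors, activity values, and candidate molecule are fabricated placeholders used only to show the workflow, not real antibiotic data.

```python
# Minimal LBDD-style sketch: fit a simple structure-activity relationship (SAR)
# linking physicochemical descriptors to activity; all values are fabricated
# placeholders used purely to illustrate the workflow.
import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical descriptors: [molecular weight, logP, hydrogen-bond donors]
descriptors = np.array([
    [320.0, 1.2, 2],
    [455.5, 3.4, 1],
    [298.3, 0.8, 3],
    [512.7, 4.1, 0],
    [380.2, 2.2, 2],
])
activity = np.array([5.1, 7.3, 4.6, 7.9, 6.0])    # illustrative potency values

sar = LinearRegression().fit(descriptors, activity)
candidate = np.array([[410.0, 2.9, 1]])           # a hypothetical new analogue
print("predicted activity:", sar.predict(candidate)[0])
print("descriptor weights:", sar.coef_)
```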

  16. Assessment of the Derivative-Moment Transformation method for unsteady-load estimation

    NASA Astrophysics Data System (ADS)

    Mohebbian, Ali; Rival, David

    2011-11-01

    It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections as well as shadows. Therefore in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated, suggesting that smaller domains work best by minimizing the error associated with the pressure field. When increasing the control-volume size, the number of calculations necessary for the pressure-gradient integration increases, in turn substantially increasing the error propagation.

  17. Direct-Solve Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2009-01-01

    A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a time interval of a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that some measure of the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is a postprocessing step or part of the solution process itself.

  18. Analysis of Student Errors on Division of Fractions

    NASA Astrophysics Data System (ADS)

    Maelasari, E.; Jupri, A.

    2017-02-01

    This study aims to describe the types of errors that students typically make when completing division operations on fractions, and to describe the causes of these mistakes. This research used a descriptive qualitative method and involved 22 fifth-grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that students' erroneous answers were caused by students applying the same procedures to both multiplication and division operations, by confusion when converting mixed fractions to common fractions, and by carelessness in calculation. From the students' written work on the fraction problems, we found that the teaching method influences student responses, and that some student responses were beyond the researchers' predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students' answers to the problems that will be given in the learning process. This could serve as a reflection for teachers to improve and to achieve the expected learning goals.

  19. Quint: An R package for the identification of subgroups of clients who differ in which treatment alternative is best for them.

    PubMed

    Dusseldorp, Elise; Doove, Lisa; Mechelen, Iven van

    2016-06-01

    In the analysis of randomized controlled trials (RCTs), treatment effect heterogeneity often occurs, implying differences across (subgroups of) clients in treatment efficacy. This phenomenon is typically referred to as treatment-subgroup interactions. The identification of subgroups of clients, defined in terms of pretreatment characteristics that are involved in a treatment-subgroup interaction, is a methodologically challenging task, especially when many characteristics are available that may interact with treatment and when no comprehensive a priori hypotheses on relevant subgroups are available. A special type of treatment-subgroup interaction occurs if the ranking of treatment alternatives in terms of efficacy differs across subgroups of clients (e.g., for one subgroup treatment A is better than B and for another subgroup treatment B is better than A). These are called qualitative treatment-subgroup interactions and are most important for optimal treatment assignment. The method QUINT (Qualitative INteraction Trees) was recently proposed to induce subgroups involved in such interactions from RCT data. The result of an analysis with QUINT is a binary tree from which treatment assignment criteria can be derived. The implementation of this method, the R package quint, is the topic of this paper. The analysis process is described step-by-step using data from the Breast Cancer Recovery Project, showing the reader all functions included in the package. The output is explained and given a substantive interpretation. Furthermore, an overview is given of the tuning parameters involved in the analysis, along with possible motivational concerns associated with choice alternatives that are available to the user.

  20. Privacy-preserving record linkage on large real world datasets.

    PubMed

    Randall, Sean M; Ferrante, Anna M; Boyd, James H; Bauer, Jacqueline K; Semmens, James B

    2014-08-01

    Record linkage typically involves the use of dedicated linkage units who are supplied with personally identifying information to determine individuals from within and across datasets. The personally identifying information supplied to linkage units is separated from clinical information prior to release by data custodians. While this substantially reduces the risk of disclosure of sensitive information, some residual risks still exist and remain a concern for some custodians. In this paper we trial a method of record linkage which reduces privacy risk still further on large real world administrative data. The method uses encrypted personal identifying information (bloom filters) in a probability-based linkage framework. The privacy preserving linkage method was tested on ten years of New South Wales (NSW) and Western Australian (WA) hospital admissions data, comprising in total over 26 million records. No difference in linkage quality was found when the results were compared to traditional probabilistic methods using full unencrypted personal identifiers. This presents as a possible means of reducing privacy risks related to record linkage in population level research studies. It is hoped that through adaptations of this method or similar privacy preserving methods, risks related to information disclosure can be reduced so that the benefits of linked research taking place can be fully realised. Copyright © 2013 Elsevier Inc. All rights reserved.
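
    The encrypted-identifier comparison underlying this kind of linkage can be sketched with bigram Bloom filters compared by a Dice coefficient, as below; the filter length, number of hash functions, and bigram padding are illustrative simplifications rather than the parameters used in the study.

```python
# Toy sketch of Bloom-filter encoding of identifiers and Dice-coefficient comparison,
# the general idea behind privacy-preserving probabilistic linkage; the bigram
# encoding, filter length, and hash scheme are illustrative simplifications.
import hashlib

FILTER_BITS = 256
NUM_HASHES = 4

def bigrams(text):
    text = text.lower().strip()
    padded = f"_{text}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom_encode(text):
    """Return the set of bit positions set for this identifier."""
    bits = set()
    for gram in bigrams(text):
        for k in range(NUM_HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % FILTER_BITS)
    return bits

def dice(bits_a, bits_b):
    return 2 * len(bits_a & bits_b) / (len(bits_a) + len(bits_b))

print(dice(bloom_encode("Katherine Smith"), bloom_encode("Catherine Smyth")))  # similar names give a higher score
print(dice(bloom_encode("Katherine Smith"), bloom_encode("John Citizen")))     # unrelated names give a lower score
```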

  1. [Ocular sarcoidosis].

    PubMed

    Springer-Wanner, C; Brauns, T

    2017-06-01

    Ocular manifestation of sarcoidosis occurs in up to 60% of patients with confirmed systemic sarcoidosis and represents one of the most common forms of noninfectious uveitis. In known pulmonary sarcoidosis, ocular involvement can occur in up to 80% of cases. Sarcoidosis can also present only in the eye, without a systemic manifestation (ocular sarcoidosis). Typically, ocular sarcoidosis shows bilateral granulomatous uveitis and can involve all parts of the eye. Apart from an acute anterior uveitis, chronic intermediate or posterior uveitis can be found. In order to prevent a severe reduction of visual acuity leading to blindness, early diagnosis and treatment is essential. For diagnosis, specific clinical signs involving the eye (bilateral granulomatous changes in all parts of the eye) and typical laboratory investigations (angiotensin-converting enzyme, ACE; lysozyme; soluble interleukin 2 receptor, sIL2R; chest X‑ray; chest CT) have to be taken into account, since biopsy to prove noncaseating granulomas is not performed with changes restricted to the eye due to the high risk of vision loss. Ocular sarcoidosis mostly responds well to local or systemic steroid treatment. If the therapeutic effect is insufficient, immunosuppressive agents and biologics can be applied.

  2. Mullerian papilloma-like proliferation arising in cystic pelvic endosalpingiosis.

    PubMed

    McCluggage, W Glenn; O'Rourke, Declan; McElhenney, Clodagh; Crooks, Michael

    2002-09-01

    This report describes an unusual epithelial proliferation occurring in pelvic cystic endosalpingiosis. A cystic mass lined by a layer of ciliated epithelial cells involved the posterior surface of the cervix and vagina. The epithelial proliferation within the wall resembled a mullerian papilloma with fibrous and fibrovascular cores lined by bland cuboidal epithelial cells. Other areas had a microglandular growth pattern resembling cervical microglandular hyperplasia, and focally there was a solid growth pattern. Foci of typical endosalpingiosis involved the surface of both ovaries and pelvic soft tissues. The cystic lesion recurred after partial cystectomy and drainage and was followed up radiologically and with periodic fine-needle aspiration. Part of the wall of the cyst removed 11 years after the original surgery showed an identical epithelial proliferation. MIB1 staining showed a proliferation index of less than 5%, contrasting with the higher proliferation index of a typical serous borderline tumor. The differential diagnosis is discussed. As far as we are aware, this is the first report of such a benign epithelial proliferation involving cystic endosalpingiosis. Copyright 2002, Elsevier Science (USA). All rights reserved.

  3. The Role of Adolescents From a Low Socioeconomic Background in Household Food Preparation: A Qualitative Study.

    PubMed

    Leak, Tashara M; Aasand, Taylor A; Vickers, Zata; Reicks, Marla

    2018-05-01

    The purpose of this study was to understand the perceptions of adolescents from low-income households regarding their involvement in home food preparation, the reasons underlying the extent to which they were involved, and the positive and negative consequences associated with their involvement. Semistructured interviews were conducted with a convenience sample of 19 adolescents (13-18 years). Audio-recorded interviews were transcribed verbatim. Themes were identified using grounded theory and the constant comparative method. Eight adolescents described cooking as a primary responsibility due to adult work and family schedules, age, gender, and/or cultural expectations. They were typically preparing food for themselves and their family without assistance, and making decisions about what was prepared. They identified positive and negative consequences including enjoyment and satisfaction, as well as stress and less time for other activities. Eleven adolescents mostly assisted the primary food preparer, with little input in deciding what was prepared. They identified benefits such as enjoyment and family interaction. Foods prepared by many adolescents tended to be quick and easy to prepare. Future studies should investigate the relationship between adultified cooking responsibilities, diet quality, and health. Also, cooking education for adolescents needs to address how to prepare a healthy family meal on a budget.

  4. Psychobiological responses to critically evaluated multitasking.

    PubMed

    Wetherell, Mark A; Craw, Olivia; Smith, Kenny; Smith, Michael A

    2017-12-01

    In order to understand psychobiological responses to stress it is necessary to observe how people react to controlled stressors. A range of stressors exist for this purpose; however, laboratory stressors that are representative of real life situations provide more ecologically valid opportunities for assessing stress responding. The current study assessed psychobiological responses to an ecologically valid laboratory stressor involving multitasking and critical evaluation. The stressor elicited significant increases in psychological and cardiovascular stress reactivity; however, no cortisol reactivity was observed. Other socially evaluative laboratory stressors that lead to cortisol reactivity typically require a participant to perform tasks that involve verbal responses, whilst standing in front of evaluative others. The current protocol contained critical evaluation of cognitive performance; however, this was delivered from behind a seated participant. The salience of social evaluation may therefore be related to the response format of the task and the method of evaluation. That is, the current protocol did not involve the additional vulnerability associated with in person, face-to-face contact, and verbal delivery. Critical evaluation of multitasking provides an ecologically valid technique for inducing laboratory stress and provides an alternative tool for assessing psychological and cardiovascular reactivity. Future studies could additionally use this paradigm to investigate those components of social evaluation necessary for eliciting a cortisol response.

  5. Contextual Classification of Point Cloud Data by Exploiting Individual 3D Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
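
    The workflow sketched in this abstract (local 3D neighborhoods, eigenvalue-based geometric features, supervised classification) can be illustrated with a short Python sketch. This is a generic per-point pipeline under assumed scikit-learn tooling, not the authors' implementation; in particular, the individual neighborhood-size optimization and the contextual classifier are omitted.

```python
# Minimal sketch: eigenvalue-based geometric features from fixed-size 3D
# neighborhoods, followed by pointwise supervised classification.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

def geometric_features(points, k=20):
    """Linearity, planarity and sphericity per point from a k-neighborhood."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = []
    for neighborhood in points[idx]:                 # (k, 3) per point
        w = np.sort(np.linalg.eigvalsh(np.cov(neighborhood.T)))[::-1]
        l1, l2, l3 = w + 1e-12                       # l1 >= l2 >= l3
        feats.append([(l1 - l2) / l1,                # linearity
                      (l2 - l3) / l1,                # planarity
                      l3 / l1])                      # sphericity
    return np.asarray(feats)

# Hypothetical usage with labeled training points (train_xyz, train_labels):
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(geometric_features(train_xyz), train_labels)
# labels = clf.predict(geometric_features(test_xyz))
```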

  6. Analysis of Membrane Lipids of Airborne Micro-Organisms

    NASA Technical Reports Server (NTRS)

    MacNaughton, Sarah

    2006-01-01

    A method of characterization of airborne micro-organisms in a given location involves (1) large-volume filtration of air onto glass-fiber filters; (2) accelerated extraction of membrane lipids of the collected micro-organisms by use of pressurized hot liquid; and (3) identification and quantitation of the lipids by use of gas chromatography and mass spectrometry. This method is suitable for use in both outdoor and indoor environments; for example, it can be used to measure airborne microbial contamination in buildings ("sick-building syndrome"). The classical approach to analysis of airborne micro-organisms is based on the growth of culturable micro-organisms and does not provide an account of viable but nonculturable micro-organisms, which typically amount to more than 90 percent of the micro-organisms present. In contrast, the present method provides an account of all micro-organisms, including culturable, nonculturable, aerobic, and anaerobic ones. The analysis of lipids according to this method makes it possible to estimate the number of viable airborne micro-organisms present in the sampled air and to obtain a quantitative profile of the general types of micro-organisms present, along with some information about their physiological statuses.

  7. Serum Hydroxyl Radical Scavenging Capacity as Quantified with Iron-Free Hydroxyl Radical Source

    PubMed Central

    Endo, Nobuyuki; Oowada, Shigeru; Sueishi, Yoshimi; Shimmei, Masashi; Makino, Keisuke; Fujii, Hirotada; Kotake, Yashige

    2009-01-01

    We have developed a simple ESR spin-trapping-based method for hydroxyl (OH) radical scavenging-capacity determination, using an iron-free OH radical source. Instead of the widely used Fenton reaction, a short (typically 5 seconds) in situ UV photolysis of a dilute hydrogen peroxide aqueous solution was employed to generate reproducible amounts of OH radicals. ESR spin trapping was applied to quantify OH radicals; the decrease in the OH radical level due to the specimen’s scavenging activity was converted into the OH radical scavenging capacity (rate). The validity of the method was confirmed with pure antioxidants, and the agreement with previous data was satisfactory. In the second half of this work, the new method was applied to the sera of chronic renal failure (CRF) patients. We show for the first time that after hemodialysis, the OH radical scavenging capacity of CRF serum was restored to the level of healthy controls. This method is simple and rapid, and low-concentration hydrogen peroxide is the only chemical added to the system, which eliminates the complexity of iron-involved Fenton reactions and the need for a pulse-radiolysis system. PMID:19794928
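
    The conversion from the measured signal decrease to a scavenging capacity is typically a competition-kinetics calculation; the sketch below illustrates that generic relation only. The rate constant, trap concentration and function name are illustrative assumptions, not values or code from this study.

```python
# Competition-kinetics sketch (illustrative, not the authors' calibration):
# the spin trap and the specimen compete for OH radicals, so the drop in the
# spin-adduct ESR signal reflects the specimen's apparent scavenging rate.
def scavenging_capacity(signal_without_sample, signal_with_sample,
                        k_trap, trap_concentration):
    """Return an apparent scavenging rate (s^-1) from the ESR signal ratio."""
    ratio = signal_without_sample / signal_with_sample   # I0 / I >= 1
    return k_trap * trap_concentration * (ratio - 1.0)

# Example with hypothetical numbers: trap rate constant 3e9 M^-1 s^-1, 10 mM trap.
print(scavenging_capacity(100.0, 60.0, k_trap=3e9, trap_concentration=0.01))
```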

  8. Hydro-environmental management of groundwater resources: A fuzzy-based multi-objective compromise approach

    NASA Astrophysics Data System (ADS)

    Alizadeh, Mohammad Reza; Nikoo, Mohammad Reza; Rakhshandehroo, Gholam Reza

    2017-08-01

    Sustainable management of water resources necessitates close attention to social, economic and environmental aspects such as water quality and quantity concerns and potential conflicts. This study presents a new fuzzy-based multi-objective compromise methodology to determine the socio-optimal and sustainable policies for hydro-environmental management of groundwater resources, which simultaneously considers the conflicts and negotiation of involved stakeholders, uncertainties in decision makers' preferences, existing uncertainties in the groundwater parameters and groundwater quality and quantity issues. The fuzzy multi-objective simulation-optimization model is developed based on qualitative and quantitative groundwater simulation model (MODFLOW and MT3D), multi-objective optimization model (NSGA-II), Monte Carlo analysis and Fuzzy Transformation Method (FTM). Best compromise solutions (best management policies) on trade-off curves are determined using four different Fuzzy Social Choice (FSC) methods. Finally, a unanimity fallback bargaining method is utilized to suggest the most preferred FSC method. Kavar-Maharloo aquifer system in Fars, Iran, as a typical multi-stakeholder multi-objective real-world problem is considered to verify the proposed methodology. Results showed an effective performance of the framework for determining the most sustainable allocation policy in groundwater resource management.

  9. An Innovative Method to Involve Community Health Workers as Partners in Evaluation Research

    PubMed Central

    Issel, L. Michele; Townsell, Stephanie J.; Chapple-McGruder, Theresa; Handler, Arden

    2011-01-01

    Objectives. We developed a process through which community outreach workers, whose role is not typically that of a trained researcher, could actively participate in collection of qualitative evaluation data. Methods. Outreach workers for a community-based intervention project received training in qualitative research methodology and certification in research ethics. They used a Voice over Internet Protocol phone-in system to provide narrative reports about challenges faced by women they encountered in their outreach activities as well as their own experiences as outreach workers. Results. Qualitative data contributed by outreach workers provided insights not otherwise available to the evaluation team, including details about the complex lives of underserved women at risk for poor pregnancy outcomes and the challenges and rewards of the outreach worker role. Conclusions. Lay health workers can be a valuable asset as part of a research team. Training in research ethics and methods can be tailored to their educational level and preferences, and their insights provide important information and perspectives that may not be accessible via other data collection methods. Challenges encountered in the dual roles of researcher and lay health worker can be addressed in training. PMID:22021290

  10. On determining fluxgate magnetometer spin axis offsets from mirror mode observations

    NASA Astrophysics Data System (ADS)

    Plaschke, Ferdinand; Narita, Yasuhito

    2016-09-01

    In-flight calibration of fluxgate magnetometers that are mounted on spacecraft involves finding their outputs in vanishing ambient fields, the so-called magnetometer offsets. If the spacecraft is spin-stabilized, then the spin plane components of these offsets can be determined relatively easily, as they modify the spin tone content in the de-spun magnetic field data. The spin axis offset, however, is more difficult to determine; usually, Alfvénic fluctuations in the solar wind are used for this purpose. We propose a novel method to determine the spin axis offset: the mirror mode method. The method is based on the assumption that mirror mode fluctuations are highly compressive, such that the maximum variance direction is aligned with the mean magnetic field. Mirror mode fluctuations are typically found in the Earth's magnetosheath region. We introduce the method and provide a first estimate of its accuracy based on magnetosheath observations by the THEMIS-C spacecraft. We find that 20 h of magnetosheath measurements may already be sufficient to obtain high-accuracy spin axis offsets, with uncertainties on the order of a few tenths of a nanotesla, if offset stability can be assumed.
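
    A hedged sketch of the core idea: because the fluctuation covariance is unaffected by a constant offset, the maximum-variance direction can be computed once, and the spin-axis offset chosen so that the corrected mean field aligns with it. The brute-force scan below is only an illustration, not the authors' estimator or its uncertainty analysis.

```python
# Sketch: choose the spin-axis (z) offset so that the corrected mean field is
# (anti-)parallel to the maximum-variance direction of the fluctuations, which
# for compressive mirror modes should lie along the mean field.
import numpy as np

def spin_axis_offset_mirror_mode(B, candidates):
    """B: (N, 3) magnetosheath samples in nT; candidates: trial z-offsets in nT."""
    w, v = np.linalg.eigh(np.cov(B.T))      # covariance is offset-independent
    e_max = v[:, np.argmax(w)]              # maximum-variance direction
    m0 = B.mean(axis=0)
    scores = []
    for o in candidates:
        m = m0 - np.array([0.0, 0.0, o])    # corrected mean field
        scores.append(abs(np.dot(m / np.linalg.norm(m), e_max)))
    return candidates[int(np.argmax(scores))]

# offset_nT = spin_axis_offset_mirror_mode(B_samples, np.linspace(-5.0, 5.0, 401))
```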

  11. Shape classification of wear particles by image boundary analysis using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Yuan, Wei; Chin, K. S.; Hua, Meng; Dong, Guangneng; Wang, Chunhui

    2016-05-01

    The shape features of wear particles generated at the wear track usually contain plenty of information about the wear state of a machine's operating condition. Techniques to quickly identify wear particle types, so as to respond to the machine operation and prolong the machine's life, appear to be lacking and are yet to be established. To bridge rapid off-line feature recognition with on-line wear mode identification, this paper presents a new radial concave deviation (RCD) method that mainly involves the use of the particle boundary signal to analyze wear particle features. The signal output from the RCDs subsequently facilitates the determination of several other feature parameters, typically relevant to the shape and size of the wear particle. Debris feature and type are identified through the use of various classification methods, such as linear discriminant analysis, quadratic discriminant analysis, the naïve Bayesian method, and the classification and regression tree method (CART). The average errors of training and testing via ten-fold cross-validation suggest that CART is a highly suitable approach for classifying and analyzing particle features. Furthermore, the results of the wear debris analysis enable the maintenance team to diagnose faults appropriately.
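
    The classification stage described here can be sketched with scikit-learn-style tooling; a generic radial boundary signature stands in for the paper's RCD features, and a CART-style tree is scored with ten-fold cross-validation. All names and shapes below are illustrative assumptions, not the authors' code.

```python
# Sketch of the classification stage: a generic radial boundary signature stands
# in for the radial concave deviation (RCD) features, and a CART-style decision
# tree is evaluated with ten-fold cross-validation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def radial_signature(boundary_xy, n_bins=36):
    """Centroid distance sampled over n_bins angular bins, normalized to max radius."""
    centered = boundary_xy - boundary_xy.mean(axis=0)
    angles = np.arctan2(centered[:, 1], centered[:, 0])
    radii = np.hypot(centered[:, 0], centered[:, 1])
    bins = np.digitize(angles, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    sig = np.array([radii[bins == b].mean() if np.any(bins == b) else 0.0
                    for b in range(n_bins)])
    return sig / (sig.max() + 1e-12)

# Hypothetical usage with a list of particle boundaries and wear-type labels:
# X = np.vstack([radial_signature(b) for b in boundaries])
# scores = cross_val_score(DecisionTreeClassifier(), X, labels, cv=10)
# print(scores.mean())
```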

  12. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

    In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can make inspecting an entire structure costly and time consuming. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The system setup comprises a high-speed camera and a line laser, which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.

  13. Video redaction: a survey and comparison of enabling technologies

    NASA Astrophysics Data System (ADS)

    Sah, Shagan; Shringi, Ameya; Ptucha, Raymond; Burry, Aaron; Loce, Robert

    2017-09-01

    With the prevalence of video recordings from smart phones, dash cams, body cams, and conventional surveillance cameras, privacy protection has become a major concern, especially in light of legislation such as the Freedom of Information Act. Video redaction is used to obfuscate sensitive and personally identifiable information. Today's typical workflow involves simple detection, tracking, and manual intervention. Automated methods rely on accurate detection mechanisms being paired with robust tracking methods across the video sequence to ensure the redaction of all sensitive information while minimizing spurious obfuscations. Recent studies have explored the use of convolutional neural networks and recurrent neural networks for object detection and tracking. The present paper reviews the redaction problem and compares a few state-of-the-art detection, tracking, and obfuscation methods as they relate to redaction. The comparison introduces an evaluation metric that is specific to video redaction performance. The metric can be evaluated in a manner that allows balancing the penalty for false negatives and false positives according to the needs of a particular application, thereby assisting in the selection of component methods and their associated hyperparameters such that the redacted video has fewer frames that require manual review.
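
    The idea of a redaction metric with tunable penalties can be illustrated with a tiny weighted-cost sketch; the weights and per-frame counts below are hypothetical and are not the paper's metric definition.

```python
# Illustrative weighted redaction cost per frame: false negatives (sensitive
# regions left visible) are usually penalized more heavily than false positives
# (spurious obfuscations). The weights are hypothetical, not the paper's metric.
def redaction_cost(frames, w_fn=5.0, w_fp=1.0):
    """frames: iterable of (false_negatives, false_positives) counts per frame."""
    frames = list(frames)
    if not frames:
        return 0.0
    return sum(w_fn * fn + w_fp * fp for fn, fp in frames) / len(frames)

# Example: three frames, one missed face in frame 2, one spurious blur in frame 3.
print(redaction_cost([(0, 0), (1, 0), (0, 1)]))   # -> 2.0
```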

  14. Longitudinal Relationships Between Resources, Motivation, and Functioning

    PubMed Central

    Emery, Lisa; Neupert, Shevaun D.

    2012-01-01

    Objectives. We investigated how fluctuations and linear changes in health and cognitive resources influence the motivation to engage in complex cognitive activity, and the extent to which motivation mediated the relationship between changing resources and cognitively demanding activities. Method. Longitudinal data from 332 adults aged 20–85 years were examined. Motivation was assessed using a composite of Need for Cognition and Personal Need for Structure, along with additional measures of health, sensory functioning, cognitive ability, and self-reported activity engagement. Results. Multilevel modeling revealed that age-typical changes in health, sensory functions, and ability were associated with changes in motivation, with the impact of declining health on motivation being particularly strong in older adulthood. Changes in motivation, in turn, predicted involvement in cognitive and social activities as well as changes in cognitive ability. Finally, motivation was observed to partially mediate the relationship between changes in resources and cognitively demanding activities. Discussion. Our results suggest that motivation may play an important role in determining the course of cognitive change and involvement in cognitively demanding everyday activities in adulthood. PMID:21926400

  15. [Morphological pathology of vessels in granulomatosis with polyangiitis (Wegener's disease)].

    PubMed

    Zerbino, D D; Zimba, E A

    2015-01-01

    To investigate the incidence of injuries in different vascular beds and the morphopathological changes in vessels in granulomatosis with polyangiitis. The morphopathological features of vascular injuries were investigated in 11 deceased patients aged 16-74 years with granulomatosis with polyangiitis. Proliferative and destructive angiitis, with predominant involvement of microcirculatory vessels and with the development of necrosis-prone granulomas in their walls and perivascularly, was established to underlie the clinical manifestations of granulomatosis with polyangiitis. The most typical localization of the pathologic process is the vessels of the upper respiratory tract, lungs, and kidneys. Cardiopulmonary and renal failure are the causes of death in the majority of cases. It should be noted that the vessels of the heart, liver, and gastrointestinal tract are frequently involved in the pathological process. Vascular changes in these organs determine the clinical features of granulomatosis with polyangiitis and lead to a number of fatal complications. Granulomatosis with polyangiitis is a systemic disease with polymorphism of clinical manifestations, which requires in-depth analysis based on current precision patient examination methods, including histopathological study.

  16. The Influence of the Heat-Affected Zone Mechanical Properties on the Behaviour of the Welding in Transverse Plate-to-Tube Joints.

    PubMed

    Lozano, Miguel; Serrano, Miguel A; López-Colina, Carlos; Gayarre, Fernando L; Suárez, Jesús

    2018-02-09

    Eurocode 3 establishes the component method to analytically characterize structural joints between beams and columns. When one of the members involved in the joint is a hollow section (i.e., a tube), there is a lack of information on the specific components present in the joint. There are two different ways to bridge the gap: experimental testing on actual beam-column joints involving tubular sections, or numerical modeling, typically by means of finite element analysis. For this second option, it is necessary to know the actual mechanical properties of the material. Since the joint involves a welding process, there is a concern about how the mechanical properties in the heat-affected zone (HAZ) influence the behavior of the joint. In this work, coupons were extracted from the HAZ of the beam-column joint. The coupons were tested and the results were implemented in the numerical model of the joint, in an attempt to bring it closer to the experimental results of the tested joints.

  17. Polymer Disentanglement during 3D Printing

    NASA Astrophysics Data System (ADS)

    McIlroy, Claire; Olmsted, Peter D.

    Although 3D printing has the potential to transform manufacturing processes, improving the strength of printed parts to rival that of traditionally-manufactured parts remains an underlying issue. The most common method, fused filament fabrication (FFF), involves melting a thermoplastic, followed by layer-by-layer filament extrusion to fabricate a 3D object. The key to ensuring strength at the weld between layers is successful inter-diffusion and re-entanglement of the melt across the interface. Under typical printing conditions the melt experiences high strain rates within the nozzle, which can significantly stretch and orient the polymers. Consequently, inter-diffusion does not occur from an equilibrium state. The printed layer also cools towards the glass transition, which limits inter-diffusion time. We employ a continuum polymer model (Rolie-Poly) that incorporates flow-induced changes in the entanglement density to predict how an amorphous polymer melt is deformed during FFF. The deformation is dominated by the deposition process, which involves a 90 degree turn and transformation from circular to elliptical geometry. Polymers become highly stretched and aligned with the flow direction, which significantly disentangles the melt via convective constraint release.

  18. Relational and systems methodologies for analysing parent-child relationships: an exploration of conflict, support and independence in adolescence and post-adolescence.

    PubMed

    Honess, T M; Lintern, F

    1990-12-01

    Research procedures which constitute a significant reorientation towards relational rather than individually focused methods are critically reviewed. A novel 'paired interview' strategy, consistent with this relational paradigm, is employed in two research studies. The first study involves the analysis of 18 mother-adolescent son/daughter relationships at the time when the young people had just left school at the age of 16. A relationship typology was developed for these 18 pairs and it is demonstrated that the themes of conflict, independence and interdependence can be operationalized using this procedure. The second study involves the analysis of 25 mother and post-adolescent daughter relationships to test the proposition that such relationships are inherently conflicted. Taken together, the two studies demonstrate that mothers and daughters are not chronically conflicted in their relationships. Nevertheless, the potential for conflict may be a significant element in accounting for girls' adolescent development. However, on entering early adulthood, the young women typically reach a state of 'interdependence' with their mothers.

  19. Three-Stage Decision-Making Model under Restricted Conditions for Emergency Response to Ships Not under Control.

    PubMed

    Wu, Bing; Yan, Xinping; Wang, Yang; Zhang, Di; Guedes Soares, C

    2017-12-01

    A ship that is not under control (NUC) is a typical incident that poses serious problems when in confined waters close to shore. The emergency response to NUC ships is to select the best risk control options, which is a challenge in restricted conditions (e.g., time limitation, resource constraint, and information asymmetry), particularly in inland waterway transportation. To enable a quick and effective response, this article develops a three-stage decision-making framework for NUC ship handling. The core of this method is (1) to propose feasible options for each involved entity (e.g., maritime safety administration, NUC ship, and ships passing by) under resource constraint in the first stage, (2) to select the most feasible options by comparing the similarity of the new case and existing cases in the second stage, and (3) to make decisions considering the cooperation between the involved organizations by using a developed Bayesian network in the third stage. Consequently, this work provides a useful tool to achieve well-organized management of NUC ships. © 2017 Society for Risk Analysis.

  20. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  1. Exploring Typical and Atypical Safety Climate Perceptions of Practitioners in the Repair, Maintenance, Minor Alteration and Addition (RMAA) Sector in Hong Kong.

    PubMed

    Hon, Carol K H; Liu, Yulin

    2016-09-22

    The safety of repair, maintenance, minor alteration and addition (RMAA) work is an under-explored area. This study explored the typical and atypical safety climate perceptions of practitioners in the RMAA sector in Hong Kong, based on a self-administered questionnaire survey of 662 local practitioners in the industry. Profile analysis, via multidimensional scaling of the respondents' scores of three safety climate scales, identified one typical perception: high in management commitment to occupational health and safety (OHS) and employee involvement, low in applicability for safety rules and regulations, and low in responsibility for OHS. The respondents were clustered into typical and atypical perception groups according to their safety climate scores' match to the typical perception. A comparison of demographics between the two groups with logistic regression found that work level and direct employer significantly affect their classification. A multivariate analysis of variance of safety performance measures between the two groups indicated that the typical group had a significantly higher level of safety compliance than the atypical group, with no significant difference in safety participation or injury. The significance of this study lies in revealing the typical safety climate perception profile pattern of RMAA works and offering a new perspective of safety climate research.

  2. Exploring Typical and Atypical Safety Climate Perceptions of Practitioners in the Repair, Maintenance, Minor Alteration and Addition (RMAA) Sector in Hong Kong

    PubMed Central

    Hon, Carol K.H.; Liu, Yulin

    2016-01-01

    The safety of repair, maintenance, minor alteration and addition (RMAA) work is an under-explored area. This study explored the typical and atypical safety climate perceptions of practitioners in the RMAA sector in Hong Kong, based on a self-administered questionnaire survey of 662 local practitioners in the industry. Profile analysis, via multidimensional scaling of the respondents’ scores of three safety climate scales, identified one typical perception: high in management commitment to occupational health and safety (OHS) and employee involvement, low in applicability for safety rules and regulations, and low in responsibility for OHS. The respondents were clustered into typical and atypical perception groups according to their safety climate scores’ match to the typical perception. A comparison of demographics between the two groups with logistic regression found that work level and direct employer significantly affect their classification. A multivariate analysis of variance of safety performance measures between the two groups indicated that the typical group had a significantly higher level of safety compliance than the atypical group, with no significant difference in safety participation or injury. The significance of this study lies in revealing the typical safety climate perception profile pattern of RMAA works and offering a new perspective of safety climate research. PMID:27669269

  3. Efficient estimation of the maximum metabolic productivity of batch systems.

    PubMed

    St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2017-01-01

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. The proposed method follows traditional assumptions of dynamic flux balance analysis: first, that internal metabolite fluxes are governed by a pseudo-steady state, and secondly that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to calculate the complete Pareto surface of productivity as a function of yield. We apply this method to succinate production in two engineered microbial hosts, Escherichia coli and Actinobacillus succinogenes, and demonstrate that maximum productivities can be more than doubled under dynamic control regimes. The maximum theoretical yield is a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. We present a robust, efficient method to calculate the maximum theoretical productivity: a metric that will similarly help guide and evaluate the development of dynamic microbial bioconversions. Our results demonstrate that nearly optimal yields and productivities can be achieved with only two discrete flux stages, indicating that near-theoretical productivities might be achievable in practice.
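
    The productivity gain from allowing a small number of flux stages can be illustrated with a toy two-stage batch calculation; all kinetics and parameter values below are hypothetical, and the sketch is not the paper's dynamic flux balance analysis or its collocation scheme.

```python
# Toy two-stage batch: a growth phase followed by a production phase.
# Productivity = final titer / batch time. All parameters are hypothetical;
# the paper computes the true optimum with dynamic flux balance analysis.
import numpy as np

def two_stage_productivity(t_switch, t_end=24.0, dt=0.01,
                           mu=0.5, q_growth=0.1, q_prod=1.0):
    X, P = 0.05, 0.0                      # biomass (g/L), product (g/L)
    for t in np.arange(0.0, t_end, dt):
        growing = t < t_switch
        X += (mu * X if growing else 0.0) * dt
        P += (q_growth if growing else q_prod) * X * dt
    return P / t_end                      # volumetric productivity (g/L/h)

# Scan the switch time to see the trade-off between making biomass and product.
best = max(np.linspace(0.0, 24.0, 49), key=two_stage_productivity)
print(best, two_stage_productivity(best))
```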

  4. Full-Body CT Scans - What You Need to Know

    MedlinePlus

    ... Medical Imaging Medical X-ray Imaging Full-Body CT Scans - What You Need to Know Share Tweet ... new service for health-conscious people: "Whole-body CT screening." This typically involves scanning the body from ...

  5. Coherence, Charging, and Spin Effects in Quantum Dots and Point Contacts

    DTIC Science & Technology

    2001-12-01

    requires changing the direction of the external field. Considering the typical fields involved (several tesla) and the high-inductance superconducting ... [Figures: QPC nonlinear conductance; nonlinear transconductance colorscales]

  6. Genetics Home Reference: recombinant 8 syndrome

    MedlinePlus

    ... with a change in chromosome 8 called an inversion . An inversion involves the breakage of a chromosome in two ... typically not lost as a result of this inversion in chromosome 8 , so people usually do not ...

  7. [Reducing maternal parenting stress of children with autism spectrum disorder: father's involvement].

    PubMed

    Hu, C C; Li, Y; Zhou, B R; Liu, C X; Li, C Y; Zhang, Y; Xu, Q; Xu, X

    2017-05-04

    Objective: To explore the relationship between fathers' nursing time and maternal parenting stress of children with autism spectrum disorder (ASD). Method: Mothers of 98 ASD children who were first diagnosed in the Department of Child Health Care, Children's Hospital of Fudan University from June 2015 to January 2016 were included in the ASD group, with mothers of 92 typical children from a community maternal and child health hospital and a kindergarten in the control group. Parenting stress, parents' nursing time and other related factors were cross-sectionally analyzed. Interviews were conducted with the following tools: the Parental Stress Index-Short Form (PSI-SF) for maternal parenting stress, and a self-made General Parenting Information Questionnaire for the nursing time of both parents and other related factors. The relationships were analyzed by multiple linear regression analysis and the Wilcoxon rank-sum test. Result: Maternal parenting stress of ASD children had a significant negative correlation with fathers' nursing time in the total score of parenting stress, the PCDI domain and the PD domain (t = -2.76, -2.98, -2.79; P = 0.007, 0.004, 0.006), and for the PD domain the model also included family annual income and mothers' nursing time (R² = 0.22, 0.24, 0.25); no such correlation with fathers' nursing time was found in the control group (P = 0.22, 0.42, 0.06). The Wilcoxon rank-sum test showed that in the 62 (63.3%) double-income ASD families and 72 (78.3%) double-income typical families, there were significant differences between ASD fathers', ASD mothers' and typical fathers' nursing time (2.0 (0.5, 2.1) vs. 3.5 (2.4, 6.0) vs. 3.0 (2.0, 4.7) h; t = -86.32, -49.65; all P < 0.01). Conclusion: Lack of fathers' involvement was common in the families of ASD children. Increasing these fathers' nursing time, as well as their enthusiasm and initiative in the family intervention, could relieve maternal parenting stress and improve the intervention pattern for ASD children.

  8. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    NASA Astrophysics Data System (ADS)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-09-01

    Explosive welding involves processes like the detonation of the explosive, the impact of metal structures and strong fluid-structure interaction, yet the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adapting technique which can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics in explosive welding, including the explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
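
    Kernel gradient correction is a standard SPH ingredient; the sketch below shows the usual first-order correction matrix for a 2D cubic spline kernel. It is the generic textbook construction under assumed parameters, not the authors' explosive-welding solver.

```python
# Generic SPH kernel gradient correction in 2D: the corrected gradient is
# L_i @ grad_W_ij, with L_i the inverse of sum_j V_j * outer(grad_W_ij, x_j - x_i).
import numpy as np

def cubic_spline_grad(r_vec, h):
    """Gradient of the 2D cubic spline kernel at separation r_vec."""
    r = np.linalg.norm(r_vec)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    if q < 1e-12 or q >= 2.0:
        return np.zeros(2)
    dw_dq = -3.0 * q + 2.25 * q**2 if q < 1.0 else -0.75 * (2.0 - q) ** 2
    return sigma * dw_dq / h * (r_vec / r)

def correction_matrix(x_i, neighbors, volumes, h):
    """L_i such that L_i @ grad_W restores first-order consistency."""
    A = np.zeros((2, 2))
    for x_j, V_j in zip(neighbors, volumes):
        gw = cubic_spline_grad(x_j - x_i, h)
        A += V_j * np.outer(gw, x_j - x_i)
    return np.linalg.inv(A)
```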

  9. Experimental Determination of Dynamical Lee-Yang Zeros

    NASA Astrophysics Data System (ADS)

    Brandner, Kay; Maisi, Ville F.; Pekola, Jukka P.; Garrahan, Juan P.; Flindt, Christian

    2017-05-01

    Statistical physics provides the concepts and methods to explain the phase behavior of interacting many-body systems. Investigations of Lee-Yang zeros—complex singularities of the free energy in systems of finite size—have led to a unified understanding of equilibrium phase transitions. The ideas of Lee and Yang, however, are not restricted to equilibrium phenomena. Recently, Lee-Yang zeros have been used to characterize nonequilibrium processes such as dynamical phase transitions in quantum systems after a quench or dynamic order-disorder transitions in glasses. Here, we experimentally realize a scheme for determining Lee-Yang zeros in such nonequilibrium settings. We extract the dynamical Lee-Yang zeros of a stochastic process involving Andreev tunneling between a normal-state island and two superconducting leads from measurements of the dynamical activity along a trajectory. From the short-time behavior of the Lee-Yang zeros, we predict the large-deviation statistics of the activity which is typically difficult to measure. Our method paves the way for further experiments on the statistical mechanics of many-body systems out of equilibrium.

  10. Digital lattice gauge theories

    NASA Astrophysics Data System (ADS)

    Zohar, Erez; Farace, Alessandro; Reznik, Benni; Cirac, J. Ignacio

    2017-02-01

    We propose a general scheme for a digital construction of lattice gauge theories with dynamical fermions. In this method, the four-body interactions arising in models with 2 +1 dimensions and higher are obtained stroboscopically, through a sequence of two-body interactions with ancillary degrees of freedom. This yields stronger interactions than the ones obtained through perturbative methods, as typically done in previous proposals, and removes an important bottleneck in the road towards experimental realizations. The scheme applies to generic gauge theories with Lie or finite symmetry groups, both Abelian and non-Abelian. As a concrete example, we present the construction of a digital quantum simulator for a Z3 lattice gauge theory with dynamical fermionic matter in 2 +1 dimensions, using ultracold atoms in optical lattices, involving three atomic species, representing the matter, gauge, and auxiliary degrees of freedom, that are separated in three different layers. By moving the ancilla atoms with a proper sequence of steps, we show how we can obtain the desired evolution in a clean, controlled way.

  11. B-spline tight frame based force matching method

    NASA Astrophysics Data System (ADS)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for the rapid investigation of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, which makes additional use of the B-spline wavelet frame transform in order to preserve the important features of the force functions. The B-spline tight frame system has a simple explicit expression which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, three- and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.
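
    The ℓ1-regularized least-squares step can be sketched as follows; a simple hat-function basis stands in for the B-spline tight frame used in the paper, and the reference distances and mean forces are assumed inputs from an all-atom simulation.

```python
# Sketch of the l1-regularized least-squares force fit. A simple hat-function
# basis stands in for the paper's B-spline tight frame; r_samples/f_samples are
# pair distances and reference mean forces from an all-atom simulation.
import numpy as np
from sklearn.linear_model import Lasso

def hat_basis(r, centers, width):
    """Piecewise-linear (hat) basis functions evaluated at distances r."""
    return np.maximum(0.0, 1.0 - np.abs(r[:, None] - centers[None, :]) / width)

def fit_force_function(r_samples, f_samples, n_basis=40, alpha=1e-3):
    centers = np.linspace(r_samples.min(), r_samples.max(), n_basis)
    width = centers[1] - centers[0]
    Phi = hat_basis(r_samples, centers, width)
    model = Lasso(alpha=alpha, fit_intercept=False).fit(Phi, f_samples)
    # Return a callable force function built from the sparse coefficients.
    return lambda r: hat_basis(np.atleast_1d(r), centers, width) @ model.coef_
```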

  12. Precise on-machine extraction of the surface normal vector using an eddy current sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun

    2016-11-01

    To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangularly distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on the measurement results, as well as of the relative positions of the EC sensors, is included. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into a computer numerical control (CNC) machine tool spindle and/or a robot end effector. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces, using the developed approach and system.
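
    The geometric core of a three-sensor normal measurement can be sketched with a plane construction and a cross product; the sensor layout and readings below are hypothetical, and the paper's calibration of inclination and coupling effects is omitted.

```python
# Sketch of the geometric core: three non-contact displacement readings at known
# sensor positions define three surface points, and the unit normal follows from
# a cross product. Calibration of tilt/coupling effects is omitted.
import numpy as np

def surface_normal(sensor_xy, distances):
    """sensor_xy: (3, 2) in-plane sensor positions; distances: 3 EC readings."""
    pts = np.column_stack([sensor_xy, -np.asarray(distances)])  # measured points
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    n /= np.linalg.norm(n)
    return n if n[2] > 0 else -n          # orient towards the sensor array

# Equilateral layout with 30 mm side length (hypothetical numbers):
sensors = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.98]])
print(surface_normal(sensors, [5.0, 5.5, 5.2]))
```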

  13. Application of the AHP method in modeling the trust and reputation of software agents

    NASA Astrophysics Data System (ADS)

    Zytniewski, Mariusz; Klementa, Marek; Skorupka, Dariusz; Stanek, Stanislaw; Duchaczek, Artur

    2016-06-01

    Given the unique characteristics of cyberspace and, in particular, the number of inherent security threats, communication between software agents becomes a highly complex issue and a major challenge that, on the one hand, needs to be continuously monitored and, on the other, awaits new solutions addressing its vulnerabilities. An approach that has recently come into view mimics mechanisms typical of social systems and is based on trust and reputation, which assist agents in deciding which other agents to interact with. The paper offers an enhancement to existing trust and reputation models, involving the application of the AHP method that is widely used for decision support in social systems, notably for risk analysis. To this end, it is proposed to expand the underlying conceptual basis by including such notions as self-trust and social trust, and to apply these to software agents. The discussion is concluded with an account of an experiment aimed at testing the effectiveness of the proposed solution.
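
    The AHP step itself is standard: weights are taken from the principal eigenvector of a pairwise comparison matrix and checked with Saaty's consistency ratio. The sketch below uses made-up trust attributes and does not reproduce the paper's reputation model.

```python
# Standard AHP weight derivation: principal eigenvector of the pairwise
# comparison matrix, plus Saaty's consistency ratio. The compared trust
# attributes are made up; the paper's full reputation model is not reproduced.
import numpy as np

def ahp_weights(pairwise):
    w_vals, w_vecs = np.linalg.eig(pairwise)
    k = np.argmax(w_vals.real)
    weights = np.abs(w_vecs[:, k].real)
    weights /= weights.sum()
    n = pairwise.shape[0]
    ci = (w_vals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return weights, ci / ri                        # weights, consistency ratio

# Hypothetical comparison of three trust attributes (e.g. self-trust,
# social trust, past-interaction reputation):
M = np.array([[1.0, 3.0, 5.0],
              [1 / 3.0, 1.0, 2.0],
              [1 / 5.0, 1 / 2.0, 1.0]])
w, cr = ahp_weights(M)
print(w, cr)   # a consistency ratio below ~0.1 is usually considered acceptable
```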

  14. Multimodality Imaging in Cardiooncology

    PubMed Central

    Pizzino, Fausto; Vizzari, Giampiero; Qamar, Rubina; Bomzer, Charles; Carerj, Scipione; Khandheria, Bijoy K.

    2015-01-01

    Cardiotoxicity represents a rising problem influencing prognosis and quality of life of chemotherapy-treated patients. Anthracyclines and trastuzumab are the drugs most commonly associated with development of a cardiotoxic effect. Heart failure, myocardial ischemia, hypertension, myocarditis, and thrombosis are typical manifestation of cardiotoxicity by chemotherapeutic agents. Diagnosis and monitoring of cardiac side-effects of cancer treatment is of paramount importance. Echocardiography and nuclear medicine methods are widely used in clinical practice and left ventricular ejection fraction is the most important parameter to asses myocardial damage secondary to chemotherapy. However, left ventricular ejection decrease is a delayed phenomenon, occurring after a long stage of silent myocardial damage that classic imaging methods are not able to detect. New imaging techniques including three-dimensional echocardiography, speckle tracking echocardiography, and cardiac magnetic resonance have demonstrated high sensitivity in detecting the earliest alteration of left ventricular function associated with future development of chemotherapy-induced cardiomyopathy. Early diagnosis of cardiac involvement in cancer patients can allow for timely and adequate treatment management and the introduction of cardioprotective strategies. PMID:26300915

  15. An effective method to increase bandwidth of EIK at 0.34 THz

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Wang, Guangqiang; Wang, Dongyang

    2018-02-01

    To increase the bandwidth of the Extended Interaction Klystron (EIK) at 0.34 THz, a method of staggered tuning of the cavity configuration is proposed. Based on an analysis of the phase relationship between the gap voltage and the bunched beam, the buncher cavities in the EIK are staggered-tuned to achieve various resonance frequencies, which helps flatten the gain response of the whole device. The characteristics of output cavities with different numbers of gaps are then studied, and the start current for the self-oscillation mode is also considered, leading to the optimum number of gaps to enhance the interaction and avoid instability. By comparing the performance of various typical stagger-tuned models, the final configuration is confirmed. Particle-in-cell simulation is then applied to study the performance of the optimised structure, whose peak gain is 34.8 dB and whose -3 dB bandwidth reaches about 500 MHz, double that of the synchronously tuned structure.

  16. [Peripheral facial paralysis: the role of physical medicine and rehabilitation].

    PubMed

    Matos, Catarina

    2011-12-01

    Peripheral facial paralysis (PFP) is a consequence of a peripheral neuronal lesion of the facial nerve (FN). It can be either primary (Bell's palsy) or secondary. The classical clinical presentation typically involves both the upper and lower parts of the hemiface. However, there may be other symptoms (e.g., xerophthalmia, hyperacusis, changes in phonation and deglutition) that one should keep in mind. Clinical evaluation includes a rigorous assessment of muscle tone and sensitivity in the FN territory. Some useful instruments allow better objectivity in the patients' evaluation (House-Brackmann System, Facial Grading System, functional evaluation). There are clear criteria for referral to Physical Medicine and Rehabilitation. Treatment of Bell's palsy may include pharmacotherapy, neuromuscular training (NMT), physical methods and surgery. In the NMT field, the various treatment techniques are systematized. Therapeutic strategies should be problem-oriented and adjusted to the patient's symptoms and signs. Physical methods are reviewed. In about 15-20% of patients, permanent sequelae persist after 3 months of evolution. PFP is commonly a multidisciplinary condition; it is therefore important to review the strategies that Physical Medicine and Rehabilitation may offer.

  17. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
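
    The essential subspace referred to here is conventionally obtained by principal component analysis of the fitted coordinate covariance; the sketch below shows that generic construction only, not the TEE-REX replica-exchange machinery.

```python
# Generic essential dynamics: principal component analysis of the coordinate
# covariance of a (fitted) trajectory. This is not the TEE-REX implementation,
# just the construction of the essential subspace it couples to a higher T.
import numpy as np

def essential_subspace(traj, n_modes=5):
    """traj: (n_frames, 3 * n_atoms) coordinates already fitted to a reference."""
    X = traj - traj.mean(axis=0)
    cov = X.T @ X / (len(traj) - 1)
    w, v = np.linalg.eigh(cov)
    order = np.argsort(w)[::-1]
    return w[order][:n_modes], v[:, order][:, :n_modes]   # eigenvalues, modes

# eigvals, modes = essential_subspace(fitted_coordinates, n_modes=5)
```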

  18. Planetary Protection Considerations For Exomars Meteorological Instrumentation.

    NASA Astrophysics Data System (ADS)

    Camilletti, Adam

    2007-10-01

    Planetary protection requirements for Oxford University's contribution to the upcoming ESA ExoMars mission are discussed, and the current methods being used to fulfil these requirements are detailed and reviewed. Oxford University is supplying temperature and wind sensors to the mission, and since these will be exposed to the Martian environment there is a requirement that they are sterilised to the stringent COSPAR standards adhered to by ESA. Typically, dry heat microbial reduction (DHMR) is used to reduce spacecraft bioburden, but the high temperatures involved are not compatible with some hardware elements. Alternative, low-temperature sterilisation methods are reviewed and their applicability to spacecraft hardware discussed. The use of a commercially available, bench-top endotoxin tester in planetary protection is also discussed and data from preliminary tests performed at Oxford are presented. These devices, which utilise the immune response of horseshoe crabs to the presence of endotoxin, have the potential to reduce the time taken to determine bioburden by removing the need for conventional assaying, a lengthy and sometimes expensive process.

  19. A method of boundary equations for unsteady hyperbolic problems in 3D

    NASA Astrophysics Data System (ADS)

    Petropavlovsky, S.; Tsynkov, S.; Turkel, E.

    2018-07-01

    We consider interior and exterior initial boundary value problems for the three-dimensional wave (d'Alembert) equation. First, we reduce a given problem to an equivalent operator equation with respect to unknown sources defined only at the boundary of the original domain. In doing so, the Huygens' principle enables us to obtain the operator equation in a form that involves only finite and non-increasing pre-history of the solution in time. Next, we discretize the resulting boundary equation and solve it efficiently by the method of difference potentials (MDP). The overall numerical algorithm handles boundaries of general shape using regular structured grids with no deterioration of accuracy. For long simulation times it offers sub-linear complexity with respect to the grid dimension, i.e., is asymptotically cheaper than the cost of a typical explicit scheme. In addition, our algorithm allows one to share the computational cost between multiple similar problems. On multi-processor (multi-core) platforms, it benefits from what can be considered an effective parallelization in time.
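
    For reference, the governing equation and the property the method exploits can be stated compactly (a standard formulation; c denotes the wave speed and Omega the spatial domain):

```latex
% 3D wave (d'Alembert) equation for u(x, t):
\frac{\partial^2 u}{\partial t^2} - c^2 \Delta u = 0, \qquad x \in \Omega \subset \mathbb{R}^3, \; t > 0 .
% Huygens' principle in 3D: the solution at (x_0, t_0) depends on the data only
% on the back characteristic cone |x - x_0| = c (t_0 - t), which is why the
% boundary operator equation needs only a finite, non-increasing pre-history
% of the boundary sources in time.
```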

  20. BetaScint{trademark} fiber-optic sensor for detecting strontium-90 and uranium-238 in soil. Innovative technology summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-01

    Accurate measurements of radioactivity in soils contaminated with Strontium-90 (Sr-90) or Uranium-238 (U-238) are essential for many DOE site remediation programs. These crucial measurements determine if excavation and soil removal is necessary, where remediation efforts should be focused, and/or if a site has reached closure. Measuring soil contamination by standard EPA laboratory methods typically takes a week (accelerated analytical test turnaround) or a month (standard analytical test turnaround). The time delay extends to operations involving heavy excavation equipment and associated personnel, which are the main costs of remediation. This report describes an application of the BetaScint{trademark} fiber-optic sensor that measures Sr-90 or U-238 contamination in soil samples on site in about 20 minutes, at a much lower cost than time-consuming laboratory methods, to greatly facilitate remediation. This report describes the technology, its performance, its uses, cost, regulatory and policy issues, and lessons learned.

  1. Methods for extracting aerodynamic accelerations from Orbiter High Resolution Accelerometer Package flight data

    NASA Technical Reports Server (NTRS)

    Thompson, J. M.; Russell, J. W.; Blanchard, R. C.

    1987-01-01

    This report presents a process for extracting the aerodynamic accelerations of the Shuttle Orbiter Vehicle from the High Resolution Accelerometer Package (HiRAP) flight data during reentry. The methods for obtaining low-level aerodynamic accelerations, principally in the rarefied flow regime, are applied to 10 Orbiter flights. The extraction process is presented using data obtained from Space Transportation System Flight 32 (Mission 61-C) as a typical example. This process involves correcting the HiRAP measurements for the effects of temperature bias and instrument offset from the Orbiter center of gravity, and removing acceleration data during times they are affected by thruster firings. The corrected data are then made continuous and smooth and are further enhanced by refining the temperature bias correction and removing effects of the auxiliary power unit actuation. The resulting data are the current best estimate of the Orbiter aerodynamic accelerations during reentry and will be used for further analyses of the Orbiter aerodynamics and the upper atmosphere characteristics.

  2. Nonlinear dynamics analysis of the spur gear system for railway locomotive

    NASA Astrophysics Data System (ADS)

    Wang, Junguo; He, Guangyue; Zhang, Jie; Zhao, Yongxiang; Yao, Yuan

    2017-02-01

    Considering factors such as backlash nonlinearity, static transmission error and time-varying meshing stiffness, a three-degree-of-freedom torsional vibration model of the spur gear transmission system of a typical locomotive is developed, in which the wheel/rail adhesion torque is treated as an uncertain but bounded parameter. Meanwhile, the Ishikawa method is used for the analysis and calculation of the time-varying mesh stiffness of the gear pair in the meshing process. With the help of bifurcation diagrams, phase plane diagrams, Poincaré maps, time domain response diagrams and amplitude-frequency spectra, the effects of the pinion speed and stiffness on the dynamic behavior of the gear transmission system for the locomotive are investigated in detail using the numerical integration method. Numerical examples reveal various types of nonlinear phenomena and dynamic evolution mechanisms involving one-period responses, multi-periodic responses, bifurcation and chaotic responses. The results provide useful information for the dynamic design and vibration control of the gear transmission system for railway locomotives.
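
    The backlash nonlinearity central to such models is commonly written as a dead-zone function; the sketch below embeds it in a toy single-degree-of-freedom oscillator with a harmonically varying mesh stiffness. All parameters are illustrative, and this is not the paper's three-degree-of-freedom locomotive model.

```python
# Dead-zone backlash nonlinearity and a toy single-DOF gear oscillator.
# Parameters are illustrative; the paper's 3-DOF model with the Ishikawa
# mesh-stiffness calculation is not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp

def backlash(x, b=0.5):
    """Dead zone of half-width b: no restoring force while the teeth are separated."""
    return np.where(x > b, x - b, np.where(x < -b, x + b, 0.0))

def rhs(t, y, zeta=0.05, k0=1.0, k1=0.3, omega_mesh=2.0, f0=0.1, f1=0.2):
    x, v = y
    k = k0 + k1 * np.cos(omega_mesh * t)          # time-varying mesh stiffness
    force = f0 + f1 * np.cos(omega_mesh * t)      # static plus dynamic excitation
    return [v, force - 2.0 * zeta * v - k * backlash(x)]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.01)
# sol.y[0] can then be inspected with phase portraits, Poincare maps, FFTs, etc.
```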

  3. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
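
    For concreteness, the DerSimonian and Laird moment estimator mentioned in the abstract can be written in a few lines (standard formula; the example numbers are hypothetical):

```python
# DerSimonian-Laird moment estimator of the between-study variance tau^2
# (standard formula; study effects y_i with within-study variances v_i).
import numpy as np

def dersimonian_laird_tau2(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_bar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_bar) ** 2)             # Cochran's Q
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / denom)

# Example with three hypothetical studies:
print(dersimonian_laird_tau2([0.2, 0.5, 0.9], [0.04, 0.05, 0.06]))
```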

  4. Neural Correlates of Biased Responses: The Negative Method Effect in the Rosenberg Self-Esteem Scale Is Associated with Right Amygdala Volume.

    PubMed

    Wang, Yinan; Kong, Feng; Huang, Lijie; Liu, Jia

    2016-10-01

    Self-esteem is a widely studied construct in psychology that is typically measured by the Rosenberg Self-Esteem Scale (RSES). However, a series of cross-sectional and longitudinal studies have suggested that a simple and widely used unidimensional factor model does not provide an adequate explanation of RSES responses due to method effects. To identify the neural correlates of the method effect, we sought to determine whether and how method effects were associated with the RSES and investigate the neural basis of these effects. Two hundred and eighty Chinese college students (130 males; mean age = 22.64 years) completed the RSES and underwent magnetic resonance imaging (MRI). Behaviorally, method effects were linked to both positively and negatively worded items in the RSES. Neurally, the right amygdala volume negatively correlated with the negative method factor, while the hippocampal volume positively correlated with the general self-esteem factor in the RSES. The neural dissociation between the general self-esteem factor and negative method factor suggests that there are different neural mechanisms underlying them. The amygdala is involved in modulating negative affectivity; therefore, the current study sheds light on the nature of method effects that are related to self-report with a mix of positively and negatively worded items. © 2015 Wiley Periodicals, Inc.

  5. Geophysical methods for monitoring soil stabilization processes

    NASA Astrophysics Data System (ADS)

    Saneiyan, Sina; Ntarlagiannis, Dimitrios; Werkema, D. Dale; Ustra, Andréa

    2018-01-01

    Soil stabilization involves methods used to turn unconsolidated and unstable soil into a stiffer, consolidated medium that could support engineered structures, alter permeability, change subsurface flow, or immobilize contamination through mineral precipitation. Among the variety of available methods carbonate precipitation is a very promising one, especially when it is being induced through common soil borne microbes (MICP - microbial induced carbonate precipitation). Such microbial mediated precipitation has the added benefit of not harming the environment as other methods can be environmentally detrimental. Carbonate precipitation, typically in the form of calcite, is a naturally occurring process that can be manipulated to deliver the expected soil strengthening results or permeability changes. This study investigates the ability of spectral induced polarization and shear-wave velocity for monitoring calcite driven soil strengthening processes. The results support the use of these geophysical methods as soil strengthening characterization and long term monitoring tools, which is a requirement for viable soil stabilization projects. Both tested methods are sensitive to calcite precipitation, with SIP offering additional information related to long term stability of precipitated carbonate. Carbonate precipitation has been confirmed with direct methods, such as direct sampling and scanning electron microscopy (SEM). This study advances our understanding of soil strengthening processes and permeability alterations, and is a crucial step for the use of geophysical methods as monitoring tools in microbial induced soil alterations through carbonate precipitation.

  6. SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas

    2016-01-10

    We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that at least some, if not all, Band-like GRB spectra with typical Band parameter values can indeed be explained by synchrotron radiation.
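
    For readers unfamiliar with the empirical form referred to above, the sketch below evaluates the standard Band photon spectrum, a smoothly broken power law. The normalization, the 100 keV pivot energy, and the "typical" parameter values in the example are conventions and assumptions for illustration only; they are not fit results for GRB 130606B.

      import numpy as np

      def band_function(E, A, alpha, beta, E0):
          """Empirical Band photon spectrum N(E); E and E0 in keV.

          Below the break, a power law with exponential cutoff; above it,
          a steeper power law joined at E_break = (alpha - beta) * E0.
          """
          E = np.asarray(E, float)
          E_break = (alpha - beta) * E0
          low = A * (E / 100.0) ** alpha * np.exp(-E / E0)
          high = (A * ((alpha - beta) * E0 / 100.0) ** (alpha - beta)
                  * np.exp(beta - alpha) * (E / 100.0) ** beta)
          return np.where(E < E_break, low, high)

      # Illustrative "typical" parameters: alpha ~ -1, beta ~ -2.3, Epeak ~ 300 keV
      alpha, beta, Epeak = -1.0, -2.3, 300.0
      E0 = Epeak / (2.0 + alpha)           # Epeak = (2 + alpha) * E0
      spectrum = band_function(np.logspace(1, 4, 50), A=0.01,
                               alpha=alpha, beta=beta, E0=E0)

    The low-energy index alpha, high-energy index beta, and peak energy Epeak = (2 + alpha) E0 are the parameters usually quoted when GRB spectra are described as having "typical Band" values.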

  7. Using Multiple Control Groups and Matching to Address Unobserved Biases in Comparative Effectiveness Research: An Observational Study of the Effectiveness of Mental Health Parity.

    PubMed

    Yoon, Frank B; Huskamp, Haiden A; Busch, Alisa B; Normand, Sharon-Lise T

    2011-06-21

    Studies of large policy interventions typically do not involve randomization. Adjustments, such as matching, can remove the bias due to observed covariates, but residual confounding remains a concern. In this paper we introduce two analytical strategies to bolster inferences of the effectiveness of policy interventions based on observational data. First, we identify how study groups may differ and then select a second comparison group on this source of difference. Second, we match subjects using a strategy that finely balances the distributions of key categorical covariates and stochastically balances on other covariates. An observational study of the effect of parity on the severely ill subjects enrolled in the Federal Employees Health Benefits (FEHB) Program illustrates our methods.
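
    The record above pairs multiple control groups with a matching strategy that finely balances key categorical covariates. The sketch below is a much simpler stand-in, greedy 1:1 nearest-neighbor matching on an estimated propensity score, shown only to make the matching idea concrete; it does not implement the fine-balance or stochastic-balance constraints the authors describe, and the scores are hypothetical.

      def nearest_neighbor_match(ps_treated, ps_control):
          """Greedy 1:1 nearest-neighbor matching on an estimated propensity score.

          Returns, for each treated subject, the index of the matched control,
          using each control at most once.
          """
          available = list(range(len(ps_control)))
          matches = []
          for p in ps_treated:
              j = min(available, key=lambda i: abs(ps_control[i] - p))
              matches.append(j)
              available.remove(j)
          return matches

      # Hypothetical propensity scores
      treated = [0.61, 0.45, 0.72]
      controls = [0.40, 0.58, 0.75, 0.50, 0.66]
      pairs = nearest_neighbor_match(treated, controls)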

  8. Epoxy Resin Composite Based on Functional Hybrid Fillers

    PubMed Central

    Oleksy, Mariusz; Szwarc-Rzepka, Karolina; Heneczkowski, Maciej; Oliwa, Rafał; Jesionowski, Teofil

    2014-01-01

    A study was carried out involving the filling of epoxy resin (EP) with bentonites and silica modified with polyhedral oligomeric silsesquioxane (POSS). The effect of the homogenization method and the type of filler on the functional properties of the composites was determined. The filler content ranged from 1.5% to 4.5% by mass. The basic mechanical properties of the hybrid composites were found to improve; in particular, tensile strength increased by 44% and Charpy impact strength by 93%. The developed hybrid composites had characteristics typical of polymer nanocomposites modified by clays, with a fine plate morphology of brittle fractures observed by SEM, absence of a plate separation peak in Wide Angle X-ray Scattering (WAXS) curves, and an exfoliated structure observed by TEM. PMID:28788177

  9. Core excitation effects on oscillator strengths for transitions in four electron atomic systems

    NASA Astrophysics Data System (ADS)

    Chang, T. N.; Luo, Yuxiang

    2007-06-01

    By including explicitly the electronic configurations with two and three simultaneously excited electronic orbitals, we have extended the BSCI (B-spline based configuration interaction) method [1] to estimate directly the effect of inner-shell core excitation on oscillator strengths for transitions in four-electron atomic systems. We will present explicitly the change in oscillator strengths due to core excitations, especially for transitions involving doubly excited states and those with very small oscillator strengths. The length and velocity results typically agree to within 1% or better. [1] Tu-nan Chang, in Many-body Theory of Atomic Structure and Photoionization, edited by T. N. Chang (World Scientific, Singapore, 1993), p. 213-47; and T. N. Chang and T. K. Fang, Radiation Physics and Chemistry 70, 173-190 (2004).

  10. Engineer Medium and Feed for Modulating N-Glycosylation of Recombinant Protein Production in CHO Cell Culture.

    PubMed

    Fan, Yuzhou; Kildegaard, Helene Faustrup; Andersen, Mikael Rørdam

    2017-01-01

    Chinese hamster ovary (CHO) cells have become the primary expression system for the production of complex recombinant proteins due to their long-term success in industrial-scale production and their ability to generate protein N-glycans similar to those of humans. Control and optimization of protein N-glycosylation are crucial, as the structure of N-glycans can largely influence both the biological and physicochemical properties of recombinant proteins. Protein N-glycosylation in CHO cell culture can be controlled and tuned by engineering the medium, feed, and culture process, as well as genetic elements of the cell. In this chapter, we will focus on how to carry out experiments for N-glycosylation modulation through medium and feed optimization. The workflow and typical methods involved in the experiment process will be presented.

  11. Comparison of a video-based assessment and a multiple stimulus assessment to identify preferred jobs for individuals with significant intellectual disabilities.

    PubMed

    Horrocks, Erin L; Morgan, Robert L

    2009-01-01

    The authors compare two methods of identifying job preferences for individuals with significant intellectual disabilities. Three individuals with intellectual disabilities between the ages of 19 and 21 participated in a video-based preference assessment and a multiple stimulus without replacement (MSWO) assessment. Stimulus preference assessment procedures typically involve giving participants access to the selected stimuli to increase the probability that participants will associate the selected choice with the actual stimuli. Although individuals did not have access to the selected stimuli in the video-based assessment, results indicated that both assessments identified the same highest preference job for all participants. Results are discussed in terms of using a video-based assessment to accurately identify job preferences for individuals with developmental disabilities.

  12. The thermo-optical behavior of turbid composite laminates under highly energetic laser irradiations

    NASA Astrophysics Data System (ADS)

    Allheily, Vadim; Merlat, Lionel; Lacroix, Fabrice; Eichhorn, Alfred; L'Hostis, Gildas

    2017-01-01

    Having first emerged in the military domain and now spreading to the civilian area, unmanned air vehicles constitute a growing threat to today's society. In this respect, novel laser weapons are considered to counter this menace, and the vulnerability of typical aeronautic materials under 1.07 μm-wavelength irradiation is also investigated. In this paper, Kubelka-Munk optical parameters of laminated glass fiber-reinforced plastic composites are first assessed to build up a basic analytical interaction model involving internal refraction and reflection as well as the scattering effect due to the presence of glass fibers. Moreover, a thermo-gravimetric analysis is carried out, and the kinetic parameters of the decomposition reaction extracted from this test with the Friedman method are verified through a comparison with experimental measurements.
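
    As a brief aside on the optical framework named above, the sketch below evaluates the standard Kubelka-Munk two-flux relation between the absorption-to-scattering ratio K/S and the diffuse reflectance of an optically thick layer, together with its inverse (the remission function). The coefficient values are placeholders, not parameters measured for the laminates in this study.

      import numpy as np

      def km_reflectance_infinite(K, S):
          """Diffuse reflectance R_inf of an optically thick layer (Kubelka-Munk).

          Uses the two-flux result K/S = (1 - R_inf)^2 / (2 * R_inf), solved for R_inf.
          """
          ratio = K / S
          return 1.0 + ratio - np.sqrt(ratio ** 2 + 2.0 * ratio)

      def km_ratio_from_reflectance(R_inf):
          """Inverse relation: recover K/S from a measured R_inf (remission function)."""
          return (1.0 - R_inf) ** 2 / (2.0 * R_inf)

      # Placeholder coefficients (per unit thickness) at a single wavelength
      R = km_reflectance_infinite(K=0.8, S=12.0)
      check = km_ratio_from_reflectance(R)   # recovers ~0.8 / 12.0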

  13. Effect of Co on Discontinuous Precipitation Transformation with TCP Phase in Ni-based Alloy Containing Re

    NASA Astrophysics Data System (ADS)

    Shi, Qianying; An, Ning; Huo, Jiajie; Zheng, Yunrong; Feng, Qiang

    2017-05-01

    The effect of Co on discontinuous precipitation (DP) transformation involving the formation of topologically close-packed (TCP) phase was investigated in three Ni-Cr-Re model alloys containing different levels of Co. One typical TCP phase, σ, was generated within DP cellular colonies along the migrating grain boundaries in experimental alloys during aging treatment. As a result of the increased solubility of Re in the γ matrix and enlarged interlamellar spacing of σ precipitates inside of growing DP colonies, Co addition suppressed the formation of σ phase and associated DP colonies. This study suggests that Co could potentially serve as a microstructural stabilizer in Re-containing Ni-base superalloys, which provides an alternative method for the composition optimization of superalloys.

  14. Modelling multiple sources of dissemination bias in meta-analysis.

    PubMed

    Bowden, Jack; Jackson, Dan; Thompson, Simon G

    2010-03-30

    Asymmetry in the funnel plot for a meta-analysis suggests the presence of dissemination bias. This may be caused by publication bias through the decisions of journal editors, by selective reporting of research results by authors or by a combination of both. Typically, study results that are statistically significant or have larger estimated effect sizes are more likely to appear in the published literature, hence giving a biased picture of the evidence-base. Previous statistical approaches for addressing dissemination bias have assumed only a single selection mechanism. Here we consider a more realistic scenario in which multiple dissemination processes, involving both the publishing authors and journals, are operating. In practical applications, the methods can be used to provide sensitivity analyses for the potential effects of multiple dissemination biases operating in meta-analysis.
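
    The record above concerns selection models for multiple dissemination processes. As a far simpler, related diagnostic (not the authors' model), the sketch below implements Egger's regression test for funnel-plot asymmetry: the standardized effect is regressed on precision, and an intercept far from zero signals small-study or dissemination bias. The study data are hypothetical.

      import numpy as np

      def egger_test(effects, std_errors):
          """Egger's regression test for funnel-plot asymmetry.

          Regresses the standardized effect (effect / SE) on precision (1 / SE);
          returns the intercept estimate and its t-value.
          """
          se = np.asarray(std_errors, float)
          y = np.asarray(effects, float) / se
          x = 1.0 / se
          X = np.column_stack([np.ones_like(x), x])        # intercept + slope
          coef, residuals, _, _ = np.linalg.lstsq(X, y, rcond=None)
          n, p = X.shape
          sigma2 = residuals[0] / (n - p)
          cov = sigma2 * np.linalg.inv(X.T @ X)
          intercept, se_intercept = coef[0], np.sqrt(cov[0, 0])
          return intercept, intercept / se_intercept

      # Hypothetical meta-analysis: eight studies
      est, t = egger_test([0.42, 0.35, 0.30, 0.55, 0.20, 0.48, 0.60, 0.25],
                          [0.20, 0.15, 0.10, 0.25, 0.08, 0.22, 0.30, 0.12])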

  15. APPLIED ORIGAMI. Origami of thick panels.

    PubMed

    Chen, Yan; Peng, Rui; You, Zhong

    2015-07-24

    Origami patterns, including the rigid origami patterns in which flat inflexible sheets are joined by creases, are primarily created for zero-thickness sheets. In order to apply them to fold structures such as roofs, solar panels, and space mirrors, for which thickness cannot be disregarded, various methods have been suggested. However, they generally involve adding materials to or offsetting panels away from the idealized sheet without altering the kinematic model used to simulate folding. We develop a comprehensive kinematic synthesis for rigid origami of thick panels that differs from the existing kinematic model but is capable of reproducing motions identical to that of zero-thickness origami. The approach, proven to be effective for typical origami, can be readily applied to fold real engineering structures. Copyright © 2015, American Association for the Advancement of Science.

  16. Measure Guideline. Steam System Balancing and Tuning for Multifamily Residential Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jayne; Ludwig, Peter; Brand, Larry

    2013-04-01

    This guideline provides building owners, professionals involved in multifamily audits, and contractors insights for improving the balance and tuning of steam systems. It provides readers an overview of one-pipe steam heating systems, guidelines for evaluating steam systems, typical costs and savings, and guidelines for ensuring quality installations. It also directs readers to additional resources for details not included here. Measures for balancing a distribution system that are covered include replacing main line vents and upgrading radiator vents. Also included is a discussion on upgrading boiler controls and the importance of tuning the settings on new or existing boiler controls. The guideline focuses on one-pipe steam systems, though many of the assessment methods can be generalized to two-pipe steam systems.

  17. General aviation crash safety program at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.

    1976-01-01

    The purpose of the crash safety program is to support development of the technology to define and demonstrate new structural concepts for improved crash safety and occupant survivability in general aviation aircraft. The program involves three basic areas of research: full-scale crash simulation testing, nonlinear structural analyses necessary to predict failure modes and collapse mechanisms of the vehicle, and evaluation of energy absorption concepts for specific component design. Both analytical and experimental methods are being used to develop expertise in these areas. Analyses include both simplified procedures for estimating energy absorption capabilities and more complex computer programs for analysis of general airframe response. Full-scale tests of typical structures as well as tests on structural components are being used to verify the analyses and to demonstrate improved design concepts.

  18. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, M.W.; George, W.A.; Pai, R.Y.

    1985-08-13

    A technique is disclosed for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support. 6 figs.

  19. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
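
    The preprocessing idea described above can be made concrete with a generic linear inverse problem d = A m + noise: a truncated singular value decomposition of the mode kernel matrix compresses the many-mode data vector into a handful of effective observations before any expensive OLA-style inversion is attempted. The matrix sizes, noise level, and truncation tolerance below are arbitrary stand-ins, not properties of a real helioseismic mode set.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy forward problem: many modes (rows), few model parameters (columns)
      n_modes, n_params = 2000, 50
      A = rng.standard_normal((n_modes, n_params))
      m_true = rng.standard_normal(n_params)
      d = A @ m_true + 0.01 * rng.standard_normal(n_modes)

      # Truncated SVD of the kernel matrix
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      r = int(np.sum(s > 1e-3 * s[0]))          # keep singular values above a tolerance

      # Compressed problem: r effective data instead of n_modes observations
      d_reduced = U[:, :r].T @ d                # rotated, reduced data vector
      A_reduced = np.diag(s[:r]) @ Vt[:r, :]    # reduced kernel (r x n_params)

      # Any subsequent inversion now works on the much smaller system
      m_est, *_ = np.linalg.lstsq(A_reduced, d_reduced, rcond=None)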

  20. New Multi-HAzard and MulTi-RIsk Assessment MethodS for Europe (MATRIX): A research program towards mitigating multiple hazards and risks in Europe

    NASA Astrophysics Data System (ADS)

    Fleming, K. M.; Zschau, J.; Gasparini, P.; Modaressi, H.; Matrix Consortium

    2011-12-01

    Scientists, engineers, civil protection and disaster managers typically treat natural hazards and risks individually. This leads to the situation where the frequent causal relationships between the different hazards and risks, e.g., earthquakes and volcanos, or floods and landslides, are ignored. Such an oversight may potentially lead to inefficient mitigation planning. As part of their efforts to confront this issue, the European Union, under its FP7 program, is supporting the New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe or MATRIX project. The focus of MATRIX is on natural hazards, in particular earthquakes, landslides, volcanos, wild fires, storms and fluvial and coastal flooding. MATRIX will endeavour to develop methods and tools to tackle multi-type natural hazards and risks within a common framework, focusing on methodologies that are suited to the European context. The work will involve an assessment of current single-type hazard and risk assessment methodologies, including a comparison and quantification of uncertainties and harmonization of single-type methods, examining the consequence of cascade effects within a multi-hazard environment, time-dependent vulnerability, decision making and support for multi-hazard mitigation and adaption, and a series of test cases. Three test sites are being used to assess the methods developed within the project (Naples, Cologne, and the French West Indies), as well as a "virtual city" based on a comprehensive IT platform that will allow scenarios not represented by the test cases to be examined. In addition, a comprehensive dissemination program that will involve national platforms for disaster management, as well as various outreach activities, will be undertaken. The MATRIX consortium consists of ten research institutions (nine European and one Canadian), an end-user (i.e., one of the European national platforms for disaster reduction) and a partner from industry.

  1. Wind energy development: methods for assessing risks to birds and bats pre-construction

    USGS Publications Warehouse

    Katzner, Todd E.; Bennett, Victoria; Miller, Tricia A.; Duerr, Adam E.; Braham, Melissa A.; Hale, Amanda

    2016-01-01

    Wind power generation is rapidly expanding. Although wind power is a low-carbon source of energy, it can negatively impact birds and bats, either directly through fatality or indirectly by displacement or habitat loss. Pre-construction risk assessment at wind facilities within the United States is usually required only on public lands. When conducted, it generally involves a 3-tier process, with each step leading to more detailed and rigorous surveys. Preliminary site assessment (U.S. Fish and Wildlife Service, Tier 1) is usually conducted remotely and involves evaluation of existing databases and published materials. If potentially at-risk wildlife are present and the developer wishes to continue the development process, then on-site surveys are conducted (Tier 2) to verify the presence of those species and to assess site-specific features (e.g., topography, land cover) that may influence risk from turbines. The next step in the process (Tier 3) involves quantitative or scientific studies to assess the potential risk of the proposed project to wildlife. Typical Tier-3 research may involve acoustic, aural, observational, radar, capture, tracking, or modeling studies, all designed to understand details of risk to specific species or groups of species at the given site. Our review highlights several features lacking from many risk assessments, particularly the paucity of before-after-control-impact (BACI) studies involving modeling and a lack of understanding of cumulative effects of wind facilities on wildlife. Both are essential to understand effective designs for pre-construction monitoring, and both would help expand risk assessment beyond eagles.

  2. Teacher training: the quick fix that stuck.

    PubMed

    Coldevin, G

    1988-01-01

    Of all the strategies for coping with teacher shortages that have been attempted, the effort that has gained the most ground in the past 20 years involves recruiting untrained or unqualified teachers, pressing them into service, and then bringing them up to pedagogical or academic certification through distance education. The benefits of this approach include: for teachers, home-based study means attaining professional certification or academic upgrading without interrupting earnings; in-service training obviates the problem of finding substitute teachers who may have poorer qualifications than the teachers they are replacing; in-situ training reduces the possibility of urban migration that may result when college-based trainees do not want to return to their rural posts; and larger numbers of teachers can be served at 1 time and at lower costs than with campus-based instruction. According to the available studies, training costs for distance education students are typically 1/4 to 1/2 as expensive as conventional instruction. At least 50 3rd world countries are estimated to be involved in some form of distance teacher training. Mostly, countries combine correspondence with broadcasting and other support media, correspondence with occasional face-to-face sessions, and the 3-way combination of print/broadcasting/occasional residential teaching approaches. The broadcast medium typically is radio, but some programs, particularly programs offered by distance education universities, also use television. Tutorial counseling and local resource center support vary considerably and represent the most pressing challenges to the bulk of operations. 4 systems initiated during the 1980s in Pakistan, Burma, Zimbabwe, and Kenya are presented to illustrate the need for distance education training and the diversity of distance education methods.

  3. Full-field modal analysis during base motion excitation using high-speed 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2017-10-01

    In recent years, many efforts have been made to exploit full-field measurement optical techniques for modal identification. Three-dimensional digital image correlation using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves the base motion excitation of a structural element instead of force excitation. In this case, the relationship between response and excitation is typically based on displacements, which are known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge joint. Full-field transmissibility functions were obtained through the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large number of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained by employing traditional accelerometers, analytical models and finite element method analyses. The comparison was performed using the modal assurance criterion as a quantitative indicator. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
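
    As a rough sketch of the signal-processing step described above (and not the authors' full DIC pipeline), the code below estimates a transmissibility function between a measured base motion and one response point from Welch auto- and cross-spectra. The sampling rate, window length, and the synthetic resonant filter standing in for the structural response are all assumptions.

      import numpy as np
      from scipy.signal import csd, welch, lfilter

      def transmissibility(base, response, fs, nperseg=1024):
          """H1-type estimate of the transmissibility response / base.

          T(f) = S_br(f) / S_bb(f), with S_br the cross-spectrum between base
          and response and S_bb the auto-spectrum of the base motion.
          """
          f, S_bb = welch(base, fs=fs, nperseg=nperseg)
          _, S_br = csd(base, response, fs=fs, nperseg=nperseg)
          return f, S_br / S_bb

      # Synthetic example: random base motion and a resonant stand-in response
      fs, duration = 2048.0, 20.0
      t = np.arange(0, duration, 1.0 / fs)
      base = np.random.default_rng(1).standard_normal(t.size)
      # In a real test, `response` would come from DIC displacements; here it is
      # just the base signal passed through an arbitrary lightly damped IIR filter.
      b, a = [0.05], [1.0, -1.9, 0.9405]
      response = lfilter(b, a, base)
      f, T = transmissibility(base, response, fs)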

  4. Effective Team Performance in Military Environments

    DTIC Science & Technology

    1986-12-01

    Salas, E. (in press). Personality and group effectiveness. Personality and Social Psychology Review. Gottfredson, G. D., Holland, J. L., & Ogawa, D. K. ... THEORETICAL, and require imagination, intelligence, and sensitivity to physical and intellectual problems. Typical groups involved in Investigative

  5. Watershed and Economic Data InterOperability (WEDO) System

    EPA Science Inventory

    Hydrologic modeling is essential for environmental, economic, and human health decision-making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in p...

  6. Crashworthiness studies of locomotive wide nose short hood designs

    DOT National Transportation Integrated Search

    1999-11-01

    This paper investigates the parameters that influence the structural response of typical wide nose locomotive short hoods involved in offset collisions. This accident scenario was chosen based upon the railway collision that occurred in Selma, North ...

  7. 21 CFR 866.3870 - Trypanosoma spp. serological reagents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... by fever, chills, headache, and vomiting. Central nervous system involvement produces typical.... Chagas disease, an acute form of trypanosomiasis in children, most seriously affects the central nervous system and heart muscle. (b) Classification. Class I (general controls). ...

  8. 21 CFR 866.3870 - Trypanosoma spp. serological reagents.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... by fever, chills, headache, and vomiting. Central nervous system involvement produces typical.... Chagas disease, an acute form of trypanosomiasis in children, most seriously affects the central nervous system and heart muscle. (b) Classification. Class I (general controls). ...

  9. 21 CFR 866.3870 - Trypanosoma spp. serological reagents.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... by fever, chills, headache, and vomiting. Central nervous system involvement produces typical.... Chagas disease, an acute form of trypanosomiasis in children, most seriously affects the central nervous system and heart muscle. (b) Classification. Class I (general controls). ...

  10. 21 CFR 866.3870 - Trypanosoma spp. serological reagents.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... by fever, chills, headache, and vomiting. Central nervous system involvement produces typical.... Chagas disease, an acute form of trypanosomiasis in children, most seriously affects the central nervous system and heart muscle. (b) Classification. Class I (general controls). ...

  11. 21 CFR 866.3870 - Trypanosoma spp. serological reagents.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... by fever, chills, headache, and vomiting. Central nervous system involvement produces typical.... Chagas disease, an acute form of trypanosomiasis in children, most seriously affects the central nervous system and heart muscle. (b) Classification. Class I (general controls). ...

  12. Strategies for Involvement in Wildlife Issues.

    ERIC Educational Resources Information Center

    Kennedy, Carolyn L.; Nye, Donna I.

    1984-01-01

    Highlights a workshop with children focusing on the treatment of controversial issues and guidelines typically provided by school districts or organizations. Activities designed to stimulate discussion on three issues (endangered species, acid rain, and predator control) are included. (BC)

  13. Liability for Off-Campus Injuries.

    ERIC Educational Resources Information Center

    Zirkel, Perry A.; Gluckman, Ivan B.

    1984-01-01

    Liability in cases involving students injured off school property generally hinges on whether districts fail to exercise due care in supervising students while on school premises. Typical activities that may occasion liability for negligence and possible defenses are listed. (MJL)

  14. Carbonic Anhydrase Catalysis: An Experiment on Enzyme Kinetics.

    ERIC Educational Resources Information Center

    Spyridis, Greg T.; And Others

    1985-01-01

    Describes an undergraduate enzyme kinetics experiment which uses bovine erythrocyte carbonic anhydrase, a very stable enzyme commercially available in lyophilized form. Includes background information, reactions involved, procedures used, and the calculation of typical results obtained. (JN)

  15. Brownian Motion--a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Kruglak, Haym

    1988-01-01

    Introduces an experiment involving the observation of Brownian motion for college students. Describes the apparatus, experimental procedures, data analysis and results, and error analysis. Lists experimental techniques used in the experiment. Provides a circuit diagram, typical data, and graphs. (YP)

  16. Distraction and drowsiness in motorcoach drivers : research brief.

    DOT National Transportation Integrated Search

    2016-11-01

    Motorcoach crashes, when they occur, can involve multiple injuries and deaths, beyond what is typically experienced in light vehicle crashes. Driver error is often cited as a factor in these crashes, with distraction and drowsiness being primary co...

  17. IVHS Institutional Issues And Case Studies: Westchester Commuter Central Case Study

    DOT National Transportation Integrated Search

    1997-01-01

    Shared resource projects are public-private arrangements that involve sharing public property such as rights-of-way and private resources such as telecommunications capacity and expertise. Typically, private telecommunications providers are granted a...

  18. Experiments with Disposable Hypodermic Syringes.

    ERIC Educational Resources Information Center

    Clayton, G. T.; And Others

    1988-01-01

    Lists five experiments or demonstrations involving hypodermic syringes. The titles of experiments are Boyle's Law, Charles' Law, Atmospheric Pressure, Expansion of Gases, and Boiling at Reduced Pressure. Provides a list of materials, the typical data, and graphs where appropriate. (YP)

  19. Predictive modeling of developmental toxicity using EPA’s Virtual Embryo

    EPA Science Inventory

    Standard practice in prenatal developmental toxicology involves testing chemicals in pregnant laboratory animals of two species, typically rats and rabbits, exposed during organogenesis and evaluating for fetal growth retardation, structural malformations, and prenatal death just...

  20. Watershed and Economic Data InterOperability (WEDO) System (presentation)

    EPA Science Inventory

    Hydrologic modeling is essential for environmental, economic, and human health decision- making. However, sharing of modeling studies is limited within the watershed modeling community. Distribution of hydrologic modeling research typically involves publishing summarized data in ...
