Sample records for analytical test function

  1. Statistical correlation analysis for comparing vibration data from test and analysis

    NASA Technical Reports Server (NTRS)

    Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.

    1986-01-01

A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The treatment of symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory was verified on small classical structures.

  2. Colorimetric and Fluorescent Biosensors Based on Directed Assembly of Nanomaterials with Functional DNA

    NASA Astrophysics Data System (ADS)

    Liu, Juewen; Lu, Yi

This chapter reviews recent progress at the interface between functional nucleic acids and nanoscale science and technology, and its analytical applications. In particular, the use of metallic nanoparticles as the color-reporting groups for the action (binding, catalysis, or both) of aptamers, DNAzymes, and aptazymes is described in detail. Because metallic nanoparticles possess high extinction coefficients and distance-dependent optical properties, they allow highly sensitive detection with minimal consumption of materials. The combination of quantum dots (QDs) with functional nucleic acids as fluorescent sensors is also described. The chapter starts with the design of colorimetric and fluorescent sensors responsive to single analytes, followed by sensors responsive to multiple analytes with controllable cooperativity and multiplex detection using both colorimetric and fluorescent signals in one pot. It ends by transferring solution-based detections into litmus-paper-type tests, making them generally applicable and usable for a wide range of on-site and real-time analytical applications such as household tests, environmental monitoring, and clinical diagnostics.

  3. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    This paper presents a comparison of analysis and flight test data for a drone aircraft equipped with an active flutter suppression system. Emphasis is placed on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are presented for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. In addition to presenting the mathematical models and a brief description of existing analytical techniques, an alternative analytical technique for obtaining closed-loop results is presented.

  4. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  5. Advances in functional brain imaging technology and developmental neuro-psychology: their applications in the Jungian analytic domain.

    PubMed

    Petchkovsky, Leon

    2017-06-01

Analytical psychology shares with many other psychotherapies the important task of repairing the consequences of developmental trauma. The majority of analytic patients come from compromised early developmental backgrounds: they may have experienced neglect, abuse, or failures of empathic resonance from their carers. Functional brain imaging techniques, including the quantitative electroencephalogram (QEEG) and functional magnetic resonance imaging (fMRI), allow us to track mental processes in ways beyond verbal reportage and introspection. This independent perspective is useful for developing new psychodynamic hypotheses, testing current ones, providing diagnostic markers, and monitoring treatment progress. Jung, with the Word Association Test, grasped these principles 100 years ago. Brain imaging techniques have contributed to powerful recent advances in our understanding of neurodevelopmental processes in the first three years of life. If adequate nurturance is compromised, a range of difficulties may emerge. This has important implications for how we understand and treat our psychotherapy clients. The paper provides an overview of functional brain imaging and advances in developmental neuropsychology, and looks at applications of some of these findings (including neurofeedback) in the Jungian psychotherapy domain. © 2017, The Society of Analytical Psychology.

  6. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    PubMed

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that begin well before work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  7. Analyte species and concentration identification using differentially functionalized microcantilever arrays and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J

    2006-01-01

In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
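The array-plus-ANN approach in this record can be illustrated with a minimal back-propagation network trained on invented cantilever response patterns (the fingerprints, noise level, and network size below are assumptions for illustration, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response "fingerprints" of a 10-cantilever array to two
# analytes (values invented for illustration; not data from the paper).
fingerprints = np.array([
    [0.9, 0.1, 0.4, 0.8, 0.2, 0.7, 0.3, 0.5, 0.6, 0.1],  # analyte A
    [0.2, 0.8, 0.6, 0.1, 0.9, 0.3, 0.7, 0.4, 0.2, 0.9],  # analyte B
])

# Training set: noisy replicate exposures of each analyte
n_per_class = 50
X = np.vstack([fp + 0.05 * rng.standard_normal((n_per_class, 10))
               for fp in fingerprints])
y = np.repeat([0, 1], n_per_class)
T = np.eye(2)[y]                          # one-hot targets

# One hidden layer with sigmoid units, trained by plain back-propagation
W1 = 0.5 * rng.standard_normal((10, 6)); b1 = np.zeros(6)
W2 = 0.5 * rng.standard_normal((6, 2));  b2 = np.zeros(2)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    H = sig(X @ W1 + b1)                  # forward pass
    O = sig(H @ W2 + b2)
    dO = (O - T) * O * (1 - O)            # output delta (squared-error loss)
    dH = (dO @ W2.T) * H * (1 - H)        # hidden delta
    W2 -= H.T @ dO / len(X); b2 -= dO.mean(axis=0)
    W1 -= X.T @ dH / len(X); b1 -= dH.mean(axis=0)

pred = sig(sig(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == y).mean()             # near 1.0 on this well-separated set
```

Extending the output layer with a concentration unit, as the paper does for mixtures, would follow the same gradient scheme.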

  8. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test for the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
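The two estimators compared in this abstract can be sketched for an i.i.d. (white-noise) series; with zero-padding, the FFT route reproduces the direct moving-average estimate, and the lag > 0 values scatter on the order of 1/√N, consistent with the premise that an uncorrelated series shows no significant autocorrelation (series length and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
x = rng.standard_normal(N)                # i.i.d. series: no true correlations
x = x - x.mean()

def acf_direct(x, nlags):
    """Direct (moving-average style) autocorrelation estimate."""
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) / c0
                     for k in range(nlags + 1)])

def acf_fft(x):
    """FFT estimate; zero-padding to 2N makes the circular correlation linear."""
    f = np.fft.rfft(x, n=2 * len(x))
    acov = np.fft.irfft(f * np.conj(f))[:len(x)] / len(x)
    return acov / acov[0]

r_direct = acf_direct(x, 50)
r_fft = acf_fft(x)[:51]

max_diff = np.max(np.abs(r_direct - r_fft))   # the two estimators agree
max_lag = np.max(np.abs(r_direct[1:]))        # lag > 0 values ~ O(1/sqrt(N))
```

The paper's point is about the *variance* of such estimates across realizations, which this single-realization sketch only hints at.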

  9. Biologic variability and correlation of platelet function testing in healthy dogs.

    PubMed

    Blois, Shauna L; Lang, Sean T; Wood, R Darren; Monteith, Gabrielle

    2015-12-01

Platelet function tests are influenced by biologic variability, including inter-individual (CVG) and intra-individual (CVI), as well as analytic (CVA) variability. Variability in canine platelet function testing is unknown, but if excessive, would make it difficult to interpret serial results. Additionally, the correlation between platelet function tests is poor in people, but not well described in dogs. The aims were to: (1) identify the effect of variation in preanalytic factors (venipuncture, elapsed time until analysis) on platelet function tests; (2) calculate analytic and biologic variability of adenosine diphosphate (ADP)- and arachidonic acid (AA)-induced thromboelastograph platelet mapping (TEG-PM); ADP-, AA-, and collagen-induced whole blood platelet aggregometry (WBA); and collagen/ADP and collagen/epinephrine platelet function analysis (PFA-CADP, PFA-CEPI); and (3) determine the correlation between these variables. In this prospective observational trial, platelet function was measured once every 7 days, for 4 consecutive weeks, in 9 healthy dogs. In addition, CBC, TEG-PM, WBA, and PFA were performed. Overall coefficients of variability ranged from 13.3% to 87.8% for the platelet function tests. Biologic variability was highest for AA-induced maximum amplitude generated during TEG-PM (MAAA; CVG = 95.3%, CVI = 60.8%). Use of population-based reference intervals (RI) was determined appropriate only for PFA-CADP (index of individuality = 10.7). There was poor correlation between most platelet function tests. Use of population-based RI appears inappropriate for most platelet function tests, and the tests correlate poorly with one another. Future studies on biologic variability and correlation of platelet function tests should be performed in dogs with platelet dysfunction and in dogs treated with antiplatelet therapy. © 2015 American Society for Veterinary Clinical Pathology.
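The index of individuality used above is conventionally computed from the three variance components. A small sketch using the abstract's CVG and CVI for MAAA together with an assumed CVA of 20% (the actual CVA is not restated here):

```python
import math

def index_of_individuality(cv_i, cv_a, cv_g):
    """Harris index: sqrt(CVI^2 + CVA^2) / CVG. Values well above ~1.4
    support population-based reference intervals; values below ~0.6 favour
    subject-based (serial) interpretation."""
    return math.sqrt(cv_i**2 + cv_a**2) / cv_g

# CVG and CVI for MAAA from the abstract; CVA = 20% is an assumed figure
ii = index_of_individuality(cv_i=60.8, cv_a=20.0, cv_g=95.3)
```

With these inputs the index is about 0.67, i.e. below the conventional 1.4 threshold, which matches the abstract's conclusion that population-based RIs are inappropriate for this analyte.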

  10. Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits.

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-03-20

Diagnostic decisions based on decision limits from medical guidelines differ from the majority of clinical decisions because of the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions, where much other information is taken into account. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. The use of sharp decision limits does not account for biological variation, whose effects are closely connected with those of analytical performance. Such relationships are investigated for the guidelines on HbA1c in the diagnosis of diabetes and on serum cholesterol in assessing the risk of coronary heart disease. A second sampling at diagnosis dramatically reduces the effects of analytical quality: imprecision up to 3 to 5% has minimal influence for two independent samplings, whereas the reduction in the effect of bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and for cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests ordered according to clinical procedure, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the increase in requests with increasing serum calcium concentration. In consequence of the difficulties with biological variation and the practical utilization of the concentration dependence of follow-up test frequency already in use, a probability function for diagnosis as a function of the key analyte is proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
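The dependence of the false-positive rate on a positive analytical bias can be sketched with a simple Gaussian model; the HbA1c-like population parameters below are illustrative assumptions, not figures from the paper:

```python
import math

def false_positive_rate(mu, sd_biol, cutoff, bias_pct=0.0, cv_a_pct=0.0):
    """Fraction of non-diseased subjects measured above a sharp decision
    limit, assuming Gaussian biological variation plus Gaussian analytical
    imprecision and a proportional analytical bias."""
    bias = mu * bias_pct / 100.0
    sd = math.sqrt(sd_biol**2 + (mu * cv_a_pct / 100.0)**2)
    z = (cutoff - (mu + bias)) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# HbA1c-like illustration: healthy mean 5.4%, biological SD 0.25%, limit 6.5%
fp_unbiased = false_positive_rate(5.4, 0.25, 6.5)
fp_biased = false_positive_rate(5.4, 0.25, 6.5, bias_pct=2.0)
```

With these invented parameters, a 2% positive bias raises the false-positive fraction several-fold; the exact factor depends on the population distribution assumed, which is why the paper's "doubling" figure is specific to its own assumptions.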

  11. Reprint of "Influence of analytical bias and imprecision on the number of false positive results using Guideline-Driven Medical Decision Limits".

    PubMed

    Hyltoft Petersen, Per; Klee, George G

    2014-05-15

Diagnostic decisions based on decision limits from medical guidelines differ from the majority of clinical decisions because of the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions, where much other information is taken into account. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". The effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. The use of sharp decision limits does not account for biological variation, whose effects are closely connected with those of analytical performance. Such relationships are investigated for the guidelines on HbA1c in the diagnosis of diabetes and on serum cholesterol in assessing the risk of coronary heart disease. A second sampling at diagnosis dramatically reduces the effects of analytical quality: imprecision up to 3 to 5% has minimal influence for two independent samplings, whereas the reduction in the effect of bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and for cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests ordered according to clinical procedure, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the increase in requests with increasing serum calcium concentration. In consequence of the difficulties with biological variation and the practical utilization of the concentration dependence of follow-up test frequency already in use, a probability function for diagnosis as a function of the key analyte is proposed. Copyright © 2014. Published by Elsevier B.V.

  12. Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach

    ERIC Educational Resources Information Center

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…

  13. Appendix D : FIB-54 tests.

    DOT National Transportation Integrated Search

    2013-03-01

    Confinement reinforcement is placed near the end of pretensioned concrete I-girders to : enclose prestressing strands in the bottom flange. Experimental and analytical test programs : were conducted to investigate the function of confinement reinforc...

  14. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  15. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross-orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross-orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross-orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
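The cross-orthogonality check described in this abstract can be sketched on a toy 4-DOF system; the mass matrix, mode shapes, and noise level below are invented for illustration. A cross-orthogonality matrix close to the identity indicates good test/analysis correlation:

```python
import numpy as np

# Toy 4-DOF system: diagonal mass matrix and two "analytical" mode shapes
M = np.diag([2.0, 1.0, 1.0, 2.0])
phi_fem = np.array([[0.5,  0.5],
                    [0.7, -0.2],
                    [0.7,  0.2],
                    [0.5, -0.5]])
for j in range(phi_fem.shape[1]):        # mass-normalize: phi^T M phi = I
    phi_fem[:, j] /= np.sqrt(phi_fem[:, j] @ M @ phi_fem[:, j])

# "Test-acquired" shapes: analytical shapes plus small measurement noise
rng = np.random.default_rng(2)
phi_test = phi_fem + 0.02 * rng.standard_normal(phi_fem.shape)
for j in range(phi_test.shape[1]):
    phi_test[:, j] /= np.sqrt(phi_test[:, j] @ M @ phi_test[:, j])

# Cross-orthogonality matrix: near-identity indicates good correlation
XOR = phi_test.T @ M @ phi_fem
diag_min = np.min(np.abs(np.diag(XOR)))
offdiag_max = np.max(np.abs(XOR - np.diag(np.diag(XOR))))
```

Typical acceptance criteria in practice (e.g. diagonal terms above 0.9, off-diagonal below 0.1) vary by program and are not taken from this thesis.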

  16. The effects of display and autopilot functions on pilot workload for Single Pilot Instrument Flight Rule (SPIFR) operations

    NASA Technical Reports Server (NTRS)

    Hoh, Roger H.; Smith, James C.; Hinton, David A.

    1987-01-01

An analytical and experimental research program was conducted to develop criteria for pilot interaction with advanced controls and displays in single pilot instrument flight rules (SPIFR) operations. The analytic phase reviewed fundamental considerations for pilot workload, taking into account existing data, and used that data to develop a divided-attention SPIFR pilot workload model. The pilot model was utilized to interpret the two experimental phases. The first experimental phase was a flight test program that evaluated pilot workload in the presence of current and near-term displays and autopilot functions. The second experiment was conducted on a King Air simulator, investigating the effects of co-pilot functions in the presence of very high SPIFR workload. The results indicate that the simplest displays tested were marginal for SPIFR operations. A moving map display aided the most in mental orientation, but had inherent deficiencies as a stand-alone replacement for an HSI. Autopilot functions were highly effective for reducing pilot workload. The simulator tests showed that extremely high workload situations can be adequately handled when co-pilot functions are provided.

  17. WRAP-RIB antenna technology development

    NASA Technical Reports Server (NTRS)

    Freeland, R. E.; Garcia, N. F.; Iwamoto, H.

    1985-01-01

The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large enough that they address the same basic problems of design, fabrication, assembly, and test as the full-scale systems, which were set at 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests, and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management, and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib material characterizations, and manufacturing imperfection assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications, including mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.

  18. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.

  19. Apparent hyperthyroidism caused by biotin-like interference from IgM anti-streptavidin antibodies.

    PubMed

    Lam, Leo; Bagg, Warwick; Smith, Geoff; Chiu, Weldon; Middleditch, Martin James; Lim, Julie Ching-Hsia; Kyle, Campbell Vance

    2018-05-29

Exclusion of analytical interference is important when there is discrepancy between clinical and laboratory findings. However, interferences on immunoassays are often mistaken as isolated laboratory artefacts. We characterized and report the mechanism of a rare cause of interference in two patients that caused erroneous thyroid function tests, and also affects many other biotin-dependent immunoassays. Patient 1 was a 77-year-old female with worsening fatigue while taking carbimazole over several years. Her thyroid function tests, however, were not suggestive of hypothyroidism. Patient 2 was a 25-year-old female also prescribed carbimazole for apparent primary hyperthyroidism. Despite an elevated FT4, the lowest TSH on record was 0.17 mIU/L. In both cases, thyroid function tests performed by an alternative method were markedly different. Further characterization of both patients' serum demonstrated analytical interference on many immunoassays using the biotin-streptavidin interaction. Sandwich assays (e.g. TSH, FSH, TNT, beta-HCG) were falsely low, while competitive assays (e.g. FT4, FT3, TBII) were falsely high. Pre-incubation of serum with streptavidin microparticles removed the analytical interference, initially suggesting the cause of interference was biotin; however, neither patient had been taking biotin. Instead, a ~100 kDa IgM immunoglobulin with high affinity to streptavidin was isolated from each patient's serum. The findings confirm IgM anti-streptavidin antibodies as the cause of analytical interference. We describe two patients with apparent hyperthyroidism as a result of analytical interference caused by IgM anti-streptavidin antibodies. Analytical interference identified on one immunoassay should raise the possibility of other affected results. Characterization of interference may help to identify other potentially affected immunoassays.
In the case of anti-streptavidin antibodies, the pattern of interference mimics that due to biotin ingestion; however, the degree of interference varies between individual assays and between patients.

  20. Dynamic Response of Layered TiB/Ti Functionally Graded Material Specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrd, Larry; Beberniss, Tim; Chapman, Ben

    2008-02-15

This paper covers the dynamic response of rectangular (25.4 × 101.6 × 3.175 mm) specimens manufactured from layers of TiB/Ti. The layers contained volume fractions of TiB that varied from 0 to 85% and thus formed a functionally graded material. Witness samples of the 85% TiB material were also tested to provide a baseline for the statistical variability of the test techniques. Static and dynamic tests were performed to determine the in situ material properties and fundamental frequencies. Damping in the material/fixture was also found from the dynamic response. These tests were simulated using composite beam theory, which gave an analytical solution, and using finite element analysis. The response of the 85% TiB specimens was found to be much more uniform than that of the functionally graded material, and the dynamic response more uniform than the static response. A least-squares analysis of the data using the analytical solutions was used to determine the elastic modulus and Poisson's ratio of each layer. These results were used to model the response in the finite element analysis. The results indicate that current analytical and numerical methods for modeling the material give similar and adequate predictions for natural frequencies if the measured property values were used. The models did not agree as well if the properties from the manufacturer or those of Hill and Linn were used.

  21. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several application examples are given to explore transport behaviors rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
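One classic special case that generalized solutions of this kind reduce to is the Ogata-Banks result for a constant-concentration inlet; a brief sketch (the parameter values are arbitrary, and this is the textbook special case, not the paper's generalized GITT solution):

```python
import math

def ade_constant_input(x, t, v, D, c0=1.0):
    """Ogata-Banks solution: 1-D advection-dispersion in a semi-infinite
    domain with a constant concentration c0 held at x = 0 for t > 0."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# Behind the advancing front c -> c0; far ahead of it c -> 0
c_behind = ade_constant_input(x=0.1, t=4.0, v=0.5, D=0.05)
c_ahead = ade_constant_input(x=4.0, t=1.0, v=0.5, D=0.05)
```

The paper's periodic-input and zero-order-production cases replace the constant boundary term with the corresponding time-dependent functions.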

  22. Variation in the modal parameters of space structures

    NASA Technical Reports Server (NTRS)

    Crawley, Edward F.; Barlow, Mark S.; Van Schoor, Marthinus C.; Bicos, Andrew S.

    1992-01-01

An analytic and experimental study of gravity and suspension influences on space structural test articles is presented. A modular test article comprising deployable, erectable, and rotary modules was assembled into three one- and two-dimensional structures. The two deployable modules utilized cable diagonal bracing rather than rigid cross members; within a bay of one of the deployable modules, the cable preload was adjustable. A friction lock was used on the alpha joint to either allow or prohibit rotary motion. Suspension systems with plunge fundamentals of 1, 2, and 5 Hz were used for ground testing to evaluate the influence of suspension stiffness. Assembly and reassembly testing was performed, as was testing on two separate shipsets at two test sites. Trends and statistical variances in modal parameters are presented as a function of force amplitude, joint preload, reassembly, shipset, and suspension. Linear finite element modeling of each structure provided analytical results for 0-g unsuspended and 1-g suspended models, which are correlated with the experimental results.

  3. An Analytical Model for Two-Order Asperity Degradation of Rock Joints Under Constant Normal Stiffness Conditions

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Wu, Wei; Li, Bo

    2018-05-01

    Jointed rock masses during underground excavation are commonly located under the constant normal stiffness (CNS) condition. This paper presents an analytical formulation to predict the shear behaviour of rough rock joints under the CNS condition. The dilatancy and deterioration of two-order asperities are quantified by considering the variation of normal stress. We separately consider the dilation angles of waviness and unevenness, which decrease to zero as the normal stress approaches the transitional stress. The sinusoidal function naturally yields the decay of dilation angle as a function of relative normal stress. We assume that the magnitude of transitional stress is proportional to the square root of asperity geometric area. The comparison between the analytical prediction and experimental data shows the reliability of the analytical model. All the parameters involved in the analytical model possess explicit physical meanings and are measurable from laboratory tests. The proposed model is potentially practicable for assessing the stability of underground structures at various field scales.
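    As a hedged illustration of the decay described above, one sinusoidal form consistent with the abstract is a dilation angle that falls from d0 at zero normal stress to zero at the transitional stress; the paper's exact expression may differ, and all names here are illustrative:

```python
import math

def dilation_angle(sigma_n, sigma_t, d0):
    """Illustrative sinusoidal decay of the dilation angle d0 (degrees)
    with relative normal stress sigma_n/sigma_t, reaching zero at the
    transitional stress sigma_t. Hypothetical form, not the paper's."""
    ratio = min(max(sigma_n / sigma_t, 0.0), 1.0)
    return d0 * math.cos(0.5 * math.pi * ratio)
```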

  4. SMA-MAP: a plasma protein panel for spinal muscular atrophy.

    PubMed

    Kobayashi, Dione T; Shi, Jing; Stephen, Laurie; Ballard, Karri L; Dewey, Ruth; Mapes, James; Chung, Brett; McCarthy, Kathleen; Swoboda, Kathryn J; Crawford, Thomas O; Li, Rebecca; Plasterer, Thomas; Joyce, Cynthia; Chung, Wendy K; Kaufmann, Petra; Darras, Basil T; Finkel, Richard S; Sproule, Douglas M; Martens, William B; McDermott, Michael P; De Vivo, Darryl C; Walker, Michael G; Chen, Karen S

    2013-01-01

    Spinal Muscular Atrophy (SMA) presents challenges in (i) monitoring disease activity and predicting progression, (ii) designing trials that allow rapid assessment of candidate therapies, and (iii) understanding molecular causes and consequences of the disease. Validated biomarkers of SMA motor and non-motor function would offer utility in addressing these challenges. Our objectives were (i) to discover additional markers from the Biomarkers for SMA (BforSMA) study using an immunoassay platform, and (ii) to validate the putative biomarkers in an independent cohort of SMA patients collected from a multi-site natural history study (NHS). BforSMA study plasma samples (N = 129) were analyzed by immunoassay to identify new analytes correlating to SMA motor function. These immunoassays included the strongest candidate biomarkers identified previously by chromatography. We selected 35 biomarkers to validate in an independent cohort of SMA type 1, 2, and 3 samples (N = 158) from an SMA NHS. The putative biomarkers were tested for association to multiple motor scales and to pulmonary function, neurophysiology, strength, and quality of life measures. We implemented a Tobit model to predict SMA motor function scores. Twelve of the 35 putative SMA biomarkers were significantly associated (p<0.05) with motor function, with a 13th analyte being nearly significant. Several other analytes were associated with non-motor SMA outcome measures. From these 35 biomarkers, 27 analytes were selected for inclusion in a commercial panel (SMA-MAP) for association with motor and other functional measures. Discovery and validation using independent cohorts yielded a set of SMA biomarkers significantly associated with motor function and other measures of SMA disease activity. A commercial SMA-MAP biomarker panel was generated for further testing in other SMA collections and interventional trials. Future work includes evaluating the panel in other neuromuscular diseases, for pharmacodynamic responsiveness to experimental SMA therapies, and for predicting functional changes over time in SMA patients.

  5. Predicting Differential Item Functioning in Cross-Lingual Testing: The Case of a High Stakes Test in the Kyrgyz Republic

    ERIC Educational Resources Information Center

    Drummond, Todd W.

    2011-01-01

    Cross-lingual tests are assessment instruments created in one language and adapted for use with another language group. Practitioners and researchers use cross-lingual tests for various descriptive, analytical and selection purposes both in comparative studies across nations and within countries marked by linguistic diversity (Hambleton, 2005).…

  6. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  7. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
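    The forward model underlying this inversion is the van de Hulst anomalous-diffraction extinction efficiency for a sphere, to which the phase-shift limit of 5π/2 refers. A short sketch, assuming a real refractive index near unity (names are illustrative):

```python
import math

def ada_extinction_efficiency(rho):
    """van de Hulst anomalous-diffraction extinction efficiency,
    Q(rho) = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)),
    where rho is the phase-shift parameter (rho -> 0 gives Q -> 0,
    rho -> infinity gives the geometric-optics limit Q -> 2)."""
    if rho == 0:
        return 0.0
    return (2.0 - (4.0 / rho) * math.sin(rho)
            + (4.0 / rho ** 2) * (1.0 - math.cos(rho)))
```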

  8. Development tests for the 2.5 megawatt Mod-2 wind turbine generator

    NASA Technical Reports Server (NTRS)

    Andrews, J. S.; Baskin, J. M.

    1982-01-01

    The 2.5 megawatt MOD-2 wind turbine generator test program is discussed. The development of the 2.5 megawatt MOD-2 wind turbine generator included an extensive program of testing which encompassed verification of analytical procedures, component development, and integrated system verification. The test program was intended to assure achievement of the thirty-year design operational life of the wind turbine system as well as to minimize costly design modifications which would otherwise have been required during on-site system testing. Computer codes were modified, the fatigue life of structural and dynamic components was verified, mechanical and electrical components and subsystems were functionally checked and modified where necessary to meet system specifications, and measured dynamic responses of coupled systems confirmed analytical predictions.

  9. Beyond Zipf's Law: The Lavalette Rank Function and Its Properties.

    PubMed

    Fontanelli, Oscar; Miramontes, Pedro; Yang, Yaning; Cocho, Germinal; Li, Wentian

    Although Zipf's law is widespread in natural and social data, one often encounters situations where one or both ends of the ranked data deviate from the power-law function. Previously we proposed the Beta rank function to improve the fitting of data which does not follow a perfect Zipf's law. Here we show that when the two parameters in the Beta rank function have the same value, giving the Lavalette rank function, the probability density function can be derived analytically. We also show both computationally and analytically that the Lavalette distribution is approximately equal, though not identical, to the lognormal distribution. We illustrate the utility of the Lavalette rank function in several datasets. We also address three analysis issues: statistical testing of the Lavalette fitting function, comparison between Zipf's law and the lognormal distribution through the Lavalette function, and comparison between the lognormal and Lavalette distributions.
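    The two rank functions named above can be written down directly: the Beta rank function is f(r) = A(N+1-r)^b / r^a, and setting a = b gives the Lavalette form f(r) = A((N+1-r)/r)^b. A minimal sketch (parameter values are illustrative):

```python
def beta_rank(r, N, A, a, b):
    """Beta rank function: f(r) = A * (N + 1 - r)**b / r**a
    for ranks r = 1..N."""
    return A * (N + 1 - r) ** b / r ** a

def lavalette(r, N, A, b):
    """Lavalette rank function: the Beta rank function with a == b,
    f(r) = A * ((N + 1 - r) / r)**b."""
    return A * ((N + 1 - r) / r) ** b
```

With a = b the two expressions coincide rank for rank, which is the reduction the abstract exploits to derive the density analytically.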

  10. A Factor Analytic and Regression Approach to Functional Age: Potential Effects of Race.

    ERIC Educational Resources Information Center

    Colquitt, Alan L.; And Others

    Factor analysis and multiple regression are two major approaches used to look at functional age, which takes account of the extensive variation in the rate of physiological and psychological maturation throughout life. To examine the role of racial or cultural influences on the measurement of functional age, a battery of 12 tests concentrating on…

  11. A Guide for Setting the Cut-Scores to Minimize Weighted Classification Errors in Test Batteries

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy when several retakes are allowed for…

  12. Alternative analytical forms to model diatomic systems based on the deformed exponential function.

    PubMed

    da Fonsêca, José Erinaldo; de Oliveira, Heibbe Cristhian B; da Cunha, Wiliam Ferreira; Gargano, Ricardo

    2014-07-01

    Using a deformed exponential function and the molecular-orbital theory for the simplest molecular ion, two new analytical functions are proposed to represent the potential energy of ground-state diatomic systems. The quality of these new forms was tested by fitting the ab initio electronic energies of the systems LiH, LiNa, NaH, RbH, KH, H2, Li2, K2, H2(+), BeH(+), and Li2(+). From these fits, it was verified that these new proposals are able to adequately describe homonuclear, heteronuclear and cationic diatomic systems with good accuracy. Vibrational spectroscopic constant results obtained from these two proposals are in good agreement with experimental data.
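    As background to the fitting described above, the deformed exponential commonly used in such constructions is exp_d(x) = (1 + d·x)^(1/d), which recovers the ordinary exponential in the limit d → 0. A sketch of the building block only; the paper's exact parameterization of the potential is not reproduced here:

```python
import math

def deformed_exp(x, d):
    """Deformed exponential (1 + d*x)**(1/d) for d != 0; recovers
    exp(x) as d -> 0. Returns 0 where the base is non-positive, the
    usual cutoff convention for this family."""
    if abs(d) < 1e-12:
        return math.exp(x)
    base = 1.0 + d * x
    return base ** (1.0 / d) if base > 0 else 0.0
```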

  13. Experimental issues related to frequency response function measurements for frequency-based substructuring

    NASA Astrophysics Data System (ADS)

    Nicgorski, Dana; Avitabile, Peter

    2010-07-01

    Frequency-based substructuring is a very popular approach for the generation of system models from component measured data. Analytically the approach has been shown to produce accurate results. However, implementation with actual test data can cause difficulties and degrade the system response prediction. In order to produce good results, extreme care is needed in the measurement of the drive-point and transfer impedances of the structure, as well as in satisfying all the conditions for a linear time-invariant system. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models to replicate the problems typically encountered. The use of analytically simulated issues helps to clearly see the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems observed and provides guidance and recommendations for data to be used for this modeling approach.
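    The sensitivity described above can be seen in the simplest frequency-based substructuring operation: rigidly coupling two components at one coincident DOF combines their drive-point receptances "in parallel", so small errors in either measured FRF pass directly into the system prediction. A minimal sketch (a one-DOF special case, not the paper's full formulation):

```python
def couple_receptance(Ha, Hb):
    """Rigidly couple two components at one coincident DOF: the coupled
    drive-point receptance is Ha*Hb / (Ha + Hb), the 'parallel'
    combination of the component drive-point receptances."""
    return (Ha * Hb) / (Ha + Hb)
```

For example, coupling two grounded springs with receptances 1/k1 and 1/k2 returns 1/(k1 + k2), the parallel-spring result; for two grounded spring-mass components, 1/(k1 - m1*w**2) and 1/(k2 - m2*w**2) couple to 1/((k1 + k2) - (m1 + m2)*w**2).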

  14. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  15. Coding, testing and documentation of processors for the flight design system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The general functional design and implementation of processors for a space flight design system are briefly described. Discussions of a basetime initialization processor; conic, analytical, and precision coasting flight processors; and an orbit lifetime processor are included. The functions of several utility routines are also discussed.

  16. Morphology predicts species' functional roles and their degree of specialization in plant-frugivore interactions.

    PubMed

    Dehling, D Matthias; Jordano, Pedro; Schaefer, H Martin; Böhning-Gaese, Katrin; Schleuning, Matthias

    2016-01-27

    Species' functional roles in key ecosystem processes such as predation, pollination or seed dispersal are determined by the resource use of consumer species. An interaction between resource and consumer species usually requires trait matching (e.g. a congruence in the morphologies of interaction partners). Species' morphology should therefore determine species' functional roles in ecological processes mediated by mutualistic or antagonistic interactions. We tested this assumption for Neotropical plant-bird mutualisms. We used a new analytical framework that assesses a species' functional role based on the analysis of the traits of its interaction partners in a multidimensional trait space. We employed this framework to test (i) whether there is correspondence between the morphology of bird species and their functional roles and (ii) whether morphologically specialized birds fulfil specialized functional roles. We found that morphological differences between bird species reflected their functional differences: (i) bird species with different morphologies foraged on distinct sets of plant species and (ii) morphologically distinct bird species fulfilled specialized functional roles. These findings encourage further assessments of species' functional roles through the analysis of their interaction partners, and the proposed analytical framework facilitates a wide range of novel analyses for network and community ecology. © 2016 The Author(s).

  17. Wind Tunnel Database Development using Modern Experiment Design and Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2003-01-01

    A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane, called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane with a 7 ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test, to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and analytical orthogonal blocking were developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
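    The orthogonal modeling functions mentioned above can be generated from ordinary polynomial regressors by Gram-Schmidt orthogonalization, after which each model parameter is estimated independently as (p·z)/(p·p). A sketch in plain Python (columns as sample lists; an illustration of the idea, not the authors' implementation):

```python
def orthogonalize(columns):
    """Generate orthogonal modeling functions from candidate regressor
    columns (each a list of samples) by modified Gram-Schmidt: subtract
    from each new column its projection onto every previous orthogonal
    column."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    ortho = []
    for col in columns:
        p = list(col)
        for q in ortho:
            c = dot(q, p) / dot(q, q)
            p = [a - c * b for a, b in zip(p, q)]
        ortho.append(p)
    return ortho
```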

  18. Functionally Graded Adhesives for Composite Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    Adhesives with functionally graded material properties are being considered for use in adhesively bonded joints to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impact the effectiveness of the grading. An analytical study is conducted to address these three concerns. An enhanced joint finite element, which uses an analytical formulation to obtain exact shape functions, is used to model the joint. Furthermore, proof of concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint.

  19. An analytically solvable three-body break-up model problem in hyperspherical coordinates

    NASA Astrophysics Data System (ADS)

    Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.

    2012-10-01

    An analytically solvable S-wave model for three particles break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation where the driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles coordinates. The closed form solution is derived in hyperspherical coordinates leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coordinates' coupling is completely different, the model provides an alternative test to that given by the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark to test numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce to high accuracy the analytical results. In addition, we generalized the model, generating an approximate wave function possessing the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic and the asymptotic), and to analyze how far one has to go to extract the transition amplitude.

  20. High-performance space shuttle auxiliary propellant valve system

    NASA Technical Reports Server (NTRS)

    Smith, G. M.

    1973-01-01

    Several potential valve closures for the space shuttle auxiliary propulsion system (SS/APS) were investigated analytically and experimentally in a modeling program. The most promising of these were analyzed and experimentally evaluated in a full-size functional valve test fixture of novel design. The engineering investigations conducted for both model and scale evaluations of the SS/APS valve closures and functional valve fixture are described. Preliminary designs, laboratory tests, and overall valve test fixture designs are presented, and a final recommended flightweight SS/APS valve design is presented.

  1. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
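    For context, a generic incompressible wall function solves the log law u/u_tau = (1/kappa) ln(y u_tau / nu) + B for the friction velocity instead of resolving the viscous sublayer; Barnwell's compressible defect wall-function method is more involved, so the Newton-iteration sketch below is a hedged illustration of the idea only:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.0):
    """Solve the log law u/u_tau = (1/kappa)*ln(y*u_tau/nu) + B for the
    friction velocity u_tau by Newton iteration, given the velocity u at
    wall distance y and kinematic viscosity nu. The residual
    f(u_tau) = u/u_tau - log-law RHS is convex and decreasing, so
    Newton from a small positive guess converges monotonically."""
    ut = max(1e-6, math.sqrt(nu * u / y))  # viscous-sublayer-style guess
    for _ in range(50):
        f = u / ut - (math.log(y * ut / nu) / kappa + B)
        df = -u / ut ** 2 - 1.0 / (kappa * ut)
        step = f / df
        ut -= step
        if abs(step) < 1e-12 * ut:
            break
    return ut
```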

  2. Ground Test of the Urine Processing Assembly for Accelerations and Transfer Functions

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Almond, Deborah F. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of the ground test of the urine processing assembly for accelerations and transfer functions. Details are given on the test setup, test data, data analysis, analytical results, and microgravity assessment. The conclusions of the tests include the following: (1) the single input/multiple output method is useful if the data is acquired by tri-axial accelerometers and inputs can be considered uncorrelated; (2) tying coherence with the matrix yields higher confidence in results; (3) the WRS#2 rack ORUs need to be isolated; and (4) future work includes a plan for characterizing performance of isolation materials.

  3. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices by mandating (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs (probability of error detection versus magnitude of error). Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
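    The recommended screening and error-detection rules above translate almost line for line into code. A sketch under one literal reading (for sg/si above 1.5 the screening-rule violation is treated here as signaling systematic error, although the paper states it may signal either error type):

```python
def evaluate_pt_results(sdi, sg_si_ratio):
    """Apply the recommended screening rules to five proficiency-test
    results expressed as Standard Deviation Indexes (SDIs).
    Screening rule: two or more observations beyond the same +/-1 SDI
    limit; only then are the systematic/random checks applied."""
    screen = (sum(1 for x in sdi if x > 1.0) >= 2 or
              sum(1 for x in sdi if x < -1.0) >= 2)
    if not screen:
        return {"systematic": False, "random": False}
    mean = sum(sdi) / len(sdi)
    rng = max(sdi) - min(sdi)
    if sg_si_ratio <= 1.5:
        systematic = abs(mean) > 1.0
        random_err = any(abs(x) > 3.0 for x in sdi) or rng > 4.0
    else:
        # Screening-rule violation itself signals error at high sg/si
        # (interpreted here as systematic; the paper says "systematic
        # or random").
        systematic = True
        random_err = any(abs(x) > 1.5 for x in sdi) or rng > 3.0
    return {"systematic": systematic, "random": random_err}
```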

  4. Derived Transformation of Children's Pregambling Game Playing

    PubMed Central

    Dymond, Simon; Bateman, Helena; Dixon, Mark R

    2010-01-01

    Contemporary behavior-analytic perspectives on gambling emphasize the impact of verbal relations, or derived relational responding and the transformation of stimulus functions, on the initiation and maintenance of gambling. Approached in this way, it is possible to undertake experimental analysis of the role of verbal/mediational variables in gambling behavior. The present study therefore sought to demonstrate the ways new stimuli could come to have functions relevant to gambling without those functions being trained directly. Following a successful derived-equivalence-relations test, a simulated board game established high- and low-roll functions for two concurrently presented dice labelled with members of the derived relations. During the test for derived transformation, children were reexposed to the board game with dice labelled with indirectly related stimuli. All participants except 1 who passed the equivalence relations test selected the die that was indirectly related to the trained high-roll die more often than the die that was indirectly related to low-roll die, despite the absence of differential outcomes. All participants except 3 also gave the derived high-roll die higher liking ratings than the derived low-roll die. The implications of the findings for behavior-analytic research on gambling and the development of verbally-based interventions for disordered gambling are discussed. PMID:21541176

  5. Derived transformation of children's pregambling game playing.

    PubMed

    Dymond, Simon; Bateman, Helena; Dixon, Mark R

    2010-11-01

    Contemporary behavior-analytic perspectives on gambling emphasize the impact of verbal relations, or derived relational responding and the transformation of stimulus functions, on the initiation and maintenance of gambling. Approached in this way, it is possible to undertake experimental analysis of the role of verbal/mediational variables in gambling behavior. The present study therefore sought to demonstrate the ways new stimuli could come to have functions relevant to gambling without those functions being trained directly. Following a successful derived-equivalence-relations test, a simulated board game established high- and low-roll functions for two concurrently presented dice labelled with members of the derived relations. During the test for derived transformation, children were reexposed to the board game with dice labelled with indirectly related stimuli. All participants except 1 who passed the equivalence relations test selected the die that was indirectly related to the trained high-roll die more often than the die that was indirectly related to low-roll die, despite the absence of differential outcomes. All participants except 3 also gave the derived high-roll die higher liking ratings than the derived low-roll die. The implications of the findings for behavior-analytic research on gambling and the development of verbally-based interventions for disordered gambling are discussed.

  6. Experimental and analytical determination of characteristics affecting light aircraft landing-gear dynamics

    NASA Technical Reports Server (NTRS)

    Fasanella, E. L.; Mcgehee, J. R.; Pappas, M. S.

    1977-01-01

    An experimental and analytical investigation was conducted to determine which characteristics of a light aircraft landing gear influence gear dynamic behavior significantly. The investigation focused particularly on possible modification for load control. Pseudostatic tests were conducted to determine the gear fore-and-aft spring constant, axial friction as a function of drag load, brake pressure-torque characteristics, and tire force-deflection characteristics. To study dynamic tire response, vertical drops were conducted at impact velocities of 1.2, 1.5, and 1.8 m/s onto a level surface; to determine axial-friction effects, a second series of vertical drops was made at 1.5 m/s onto surfaces inclined 5 deg and 10 deg to the horizontal. An average dynamic axial-friction coefficient of 0.15 was obtained by comparing analytical data with inclined-surface drop test data. Dynamic strut bending and associated axial friction were found to be severe for the drop tests on the 10 deg surface.

  7. Analytical and numerical analyses of an unconfined aquifer test considering unsaturated zone characteristics

    USGS Publications Warehouse

    Moench, A.F.

    2008-01-01

    A 7-d, constant rate aquifer test conducted by University of Waterloo researchers at Canadian Forces Base Borden in Ontario, Canada, is useful for advancing understanding of fluid flow processes in response to pumping from an unconfined aquifer. Measured data include not only drawdown in the saturated zone but also volumetric soil moisture measured at various times and distances from the pumped well. Analytical analyses were conducted with the model published in 2001 by Moench and colleagues, which allows for gradual drainage but does not include unsaturated zone characteristics, and the model published in 2006 by Mathias and Butler, which assumes that moisture retention and relative hydraulic conductivity (RHC) in the unsaturated zone are exponential functions of pressure head. Parameters estimated with either model yield good matches between measured and simulated drawdowns in piezometers. Numerical analyses were conducted with two versions of VS2DT: one that uses traditional Brooks and Corey functional relations and one that uses a RHC function introduced in 2001 by Assouline that includes an additional parameter that accounts for soil structure and texture. The analytical model of Mathias and Butler and numerical model of VS2DT with the Assouline model both show that the RHC function must contain a fitting parameter that is different from that used in the moisture retention function. Results show the influence of field-scale heterogeneity and suggest that the RHC at the Borden site declines more rapidly with elevation above the top of the capillary fringe than would be expected if the parameters were to reflect local- or core-scale soil structure and texture.
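    The exponential unsaturated-zone model referred to above (Mathias and Butler) takes relative hydraulic conductivity as an exponential function of pressure head, i.e. the classical Gardner form. A minimal sketch with illustrative parameter names:

```python
import math

def gardner_relative_k(h, alpha):
    """Gardner-type exponential relative hydraulic conductivity:
    K_r(h) = exp(alpha * h) for pressure head h <= 0 (h in the
    unsaturated zone is negative), with K_r = 1 at saturation."""
    return 1.0 if h >= 0 else math.exp(alpha * h)
```

Larger alpha makes K_r fall off more steeply above the capillary fringe, which is the behavior the Borden analysis found at field scale.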

  8. Synthesis of novel monomeric graphene quantum dots and corresponding nanocomposite with molecularly imprinted polymer for electrochemical detection of an anticancerous ifosfamide drug.

    PubMed

    Bali Prasad, Bhim; Kumar, Anil; Singh, Ragini

    2017-08-15

    This paper reports a typical synthesis of a nanocomposite of functionalized graphene quantum dots and imprinted polymer at the surface of a screen-printed carbon electrode using N-acryloyl-4-aminobenzamide, as a functional monomer, and an anticancerous drug, ifosfamide, as a print molecule (test analyte). Herein, graphene quantum dots in the nanocomposite practically induced the electrocatalytic activity by lowering the oxidation overpotential of the test analyte and thereby amplifying electronic transmission, without any interfacial barrier between the film and the electrode surface. The differential pulse anodic stripping signal at the functionalized graphene quantum dots based imprinted sensor was realized to be about 3- and 7-fold higher as compared to the traditionally made imprinted polymers prepared in the presence and the absence of graphene quantum dots (un-functionalized), respectively. This may be attributed to a pertinent synergism between the positively charged functionalized graphene quantum dots in the film and the target analyte toward the enhancement of electro-conductivity of the film and thereby the electrode kinetics. In fact, the covalent attachment of graphene quantum dots with N-acryloyl-4-aminobenzamide molecules might exert an extended conjugation at their interface, facilitating electrical conduction and rendering channelized pathways for electron transport. The proposed sensor is practically applicable to the ultratrace evaluation of ifosfamide in real (biological/pharmaceutical) samples with a detection limit as low as 0.11 ng/mL (S/N=3), without any matrix effect, cross-reactivity, and false-positives. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Statistically Qualified Neuro-Analytic System and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  10. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. 
With the slow development of the tamponade, the SBM model is seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
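
    A minimal sketch of the similarity-based-modeling idea described above, assuming a Gaussian kernel and a small memory of hypothetical two-variable "normal" observations (the actual SBM implementation is more elaborate):

```python
import math

def similarity(a, b, h=1.0):
    """Gaussian kernel similarity between two observation vectors."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * h * h))

def sbm_estimate(memory, obs, h=1.0):
    """Similarity-weighted estimate of the expected (normal-state) observation."""
    w = [similarity(m, obs, h) for m in memory]
    s = sum(w)
    return [sum(wi * m[j] for wi, m in zip(w, memory)) / s
            for j in range(len(obs))]

def residual_norm(memory, obs, h=1.0):
    """Distance between the observation and its normal-state estimate; a
    persistent growth in this residual flags incipient deterioration."""
    est = sbm_estimate(memory, obs, h)
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)))

# Memory of "normal" states: hypothetical (heart-rate-like, pressure-like) pairs
memory = [(70.0, 90.0), (72.0, 88.0), (68.0, 92.0), (75.0, 91.0)]
print(residual_norm(memory, (71.0, 90.0), h=5.0))   # near-normal: small residual
print(residual_norm(memory, (71.0, 70.0), h=5.0))   # drifting variable: large residual
```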

  11. Analytical gradients for subsystem density functional theory within the Slater-function-based Amsterdam density functional program.

    PubMed

    Schlüns, Danny; Franchini, Mirko; Götz, Andreas W; Neugebauer, Johannes; Jacob, Christoph R; Visscher, Lucas

    2017-02-05

    We present a new implementation of analytical gradients for subsystem density-functional theory (sDFT) and frozen-density embedding (FDE) in the Amsterdam Density Functional program (ADF). The underlying theory and necessary expressions for the implementation are derived and discussed in detail for various FDE and sDFT setups. The parallel implementation is numerically verified, and geometry optimizations with different functional combinations (LDA/TF and PW91/PW91k) are conducted and compared to reference data. Our results confirm that sDFT-LDA/TF yields good equilibrium distances for the systems studied here (mean absolute deviation: 0.09 Å) compared to reference wave-function theory results. However, sDFT-PW91/PW91k quite consistently yields smaller equilibrium distances (mean absolute deviation: 0.23 Å). The flexibility of our new implementation is demonstrated for an HCN-trimer test system, for which several different setups are applied. © 2016 Wiley Periodicals, Inc.

  12. Checking Equity: Why Differential Item Functioning Analysis Should Be a Routine Part of Developing Conceptual Assessments

    ERIC Educational Resources Information Center

    Martinková, Patricia; Drabinová, Adéla; Liaw, Yuan-Ling; Sanders, Elizabeth A.; McFarland, Jenny L.; Price, Rebecca M.

    2017-01-01

    We provide a tutorial on differential item functioning (DIF) analysis, an analytic method useful for identifying potentially biased items in assessments. After explaining a number of methodological approaches, we test for gender bias in two scenarios that demonstrate why DIF analysis is crucial for developing assessments, particularly because…
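
    One classical DIF screen covered in such tutorials is the Mantel-Haenszel common odds ratio across ability strata. A minimal sketch on hypothetical item-response counts (not data from the article):

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across ability strata.

    Each stratum is (a, b, c, d):
      a = reference group correct,  b = reference group incorrect,
      c = focal group correct,      d = focal group incorrect.
    Values near 1 suggest no DIF; values far from 1 flag the item.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical item: same odds of success in both groups at each ability level
fair = [(30, 10, 15, 5), (20, 20, 10, 10), (10, 30, 5, 15)]
# Hypothetical item favouring the reference group at every ability level
biased = [(35, 5, 10, 10), (30, 10, 5, 15), (20, 20, 2, 18)]
print(mantel_haenszel_or(fair))    # close to 1: no DIF signal
print(mantel_haenszel_or(biased))  # well above 1: potential DIF
```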

  13. First Definition of Reference Intervals of Liver Function Tests in China: A Large-Population-Based Multi-Center Study about Healthy Adults

    PubMed Central

    Zhang, Chuanbao; Guo, Wei; Huang, Hengjian; Ma, Yueyun; Zhuang, Junhua; Zhang, Jie

    2013-01-01

    Background Reference intervals of liver function tests are very important for the screening, diagnosis, treatment, and monitoring of liver diseases. We aim to establish common reference intervals of liver function tests specifically for the Chinese adult population. Methods A total of 3210 individuals (20–79 years) were enrolled in six representative geographical regions in China. Analytes of ALT, AST, GGT, ALP, total protein, albumin and total bilirubin were measured using three analytical systems mainly used in China. The newly established reference intervals were based on the results of traceability or multiple systems, and then validated in 21 large hospitals located nationwide and qualified by the National External Quality Assessment (EQA) scheme of China. Results We established reference intervals of the seven liver function tests for the Chinese adult population and found apparent variances of reference values for the partitioning variables of gender (ALT, GGT, total bilirubin), age (ALP, albumin) and region (total protein). More than 86% of the 21 laboratories passed the validation in all subgroups of reference intervals, and overall about 95.3% to 98.8% of the 1220 validation results fell within the range of the new reference intervals for all liver function tests. In comparison with the currently recommended reference intervals in China, the single-sided observed proportions of out-of-range reference values from our study deviated significantly from the nominal 2.5% for most of the tests, such as total bilirubin (15.2%), ALP (0.2%) and albumin (0.0%). Most of the reference intervals in our study were also clearly different from those of other races. Conclusion The previously recommended reference intervals are no longer applicable to the current Chinese population. 
We have established common reference intervals of liver function tests that are defined specifically for Chinese population and can be universally used among EQA-approved laboratories located all over China. PMID:24058449
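
    The core computation behind such reference intervals is nonparametric: sort the healthy-sample results and take the central 95%. A minimal sketch with hypothetical ALT values (real studies use hundreds of subjects per partition):

```python
def reference_interval(values, lower=0.025, upper=0.975):
    """Nonparametric reference interval: central 95% of a healthy sample."""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        # rank-based percentile with linear interpolation between order statistics
        r = p * (n - 1)
        lo, hi = int(r), min(int(r) + 1, n - 1)
        return xs[lo] + (r - lo) * (xs[hi] - xs[lo])
    return pct(lower), pct(upper)

# Hypothetical ALT results (U/L) from a healthy reference sample
alt = [8, 10, 12, 14, 15, 16, 17, 18, 19, 20,
       21, 22, 23, 24, 25, 26, 28, 30, 33, 40]
print(reference_interval(alt))
```

    Validation of the kind described above then checks what fraction of results from an independent healthy group falls inside this interval.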

  14. Hantush Well Function revisited

    NASA Astrophysics Data System (ADS)

    Veling, E. J. M.; Maas, C.

    2010-11-01

    In this paper, we comment on some recent numerical and analytical work to evaluate the Hantush Well Function. We correct an expression found in a Comment by Nadarajah [Nadarajah, S., 2007. A comment on numerical evaluation of Theis and Hantush-Jacob well functions. Journal of Hydrology 338, 152-153] to a paper by Prodanoff et al. [Prodanoff, J.A., Mansur, W.J., Mascarenhas, F.C.B., 2006. Numerical evaluation of Theis and Hantush-Jacob well functions. Journal of Hydrology 318, 173-183]. We subsequently derive another analytic representation based on a generalized hypergeometric function in two variables, and from the hydrological literature we cite an analytic representation by Hunt [Hunt, B., 1977. Calculation of the leaky aquifer function. Journal of Hydrology 33, 179-183]. We have implemented both representations and compared the results. With a convergence accelerator, Hunt's representation of the Hantush Well Function is efficient and accurate. While checking our implementations we found that Bear's table of the Hantush Well Function [Bear, J., 1979. Hydraulics of Groundwater. McGraw-Hill, New York, Table 8-6] contains a number of typographical errors that are not present in the original table published by Hantush [Hantush, M.S., 1956. Analysis of data from pumping tests in leaky aquifers. Transactions, American Geophysical Union 37, 702-714]. Finally, we offer a very fast approximation with a maximum relative error of 0.0033 for the parameter range in the table given by Bear.
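
    For orientation, the Hantush Well Function being tabulated and approximated above is the integral W(u, r/B) = ∫_u^∞ exp(-y - (r/B)²/(4y)) / y dy. A straightforward (unaccelerated) numerical evaluation can be sketched as:

```python
import math

def hantush_w(u, r_B, n=4000, t_max=30.0):
    """Hantush leaky-aquifer well function
    W(u, r/B) = integral_u^inf exp(-y - (r/B)^2/(4y)) / y dy,
    evaluated by trapezoidal quadrature after the substitution y = u*exp(t),
    which removes the 1/y weight from the integrand."""
    b2 = (r_B ** 2) / 4.0
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        y = u * math.exp(i * h)
        f = math.exp(-y - b2 / y)
        total += f if 0 < i < n else 0.5 * f   # trapezoid endpoint weights
    return total * h

# r/B = 0 reduces W to the Theis well function E1(u); E1(1) ~ 0.2194
print(hantush_w(1.0, 0.0))
# For u -> 0, W(u, r/B) -> 2*K0(r/B); 2*K0(1) ~ 0.8420
print(hantush_w(1e-9, 1.0))
```

    The two limits shown (Theis function at r/B = 0, modified Bessel function 2·K0(r/B) as u → 0) are standard checks for any implementation.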

  15. Towards an Analytical Age-Dependent Model of Contrast Sensitivity Functions for an Ageing Society

    PubMed Central

    Joulan, Karine; Brémond, Roland

    2015-01-01

    The Contrast Sensitivity Function (CSF) describes how the visibility of a grating depends on the stimulus spatial frequency. Many published CSF data have demonstrated that contrast sensitivity declines with age. However, an age-dependent analytical model of the CSF is not available to date. In this paper, we propose such an analytical CSF model based on visual mechanisms, taking into account the age factor. To this end, we have extended an existing model from Barten (1999), taking into account the dependencies of this model's optical and physiological parameters on age. Age-dependent models of the cones and ganglion cells densities, the optical and neural MTF, and optical and neural noise are proposed, based on published data. The proposed age-dependent CSF is finally tested against available experimental data, with fair results. Such an age-dependent model may be beneficial when designing real-time age-dependent image coding and display applications. PMID:26078994

  16. What is the best strategy for investigating abnormal liver function tests in primary care? Implications from a prospective study.

    PubMed

    Lilford, Richard J; Bentham, Louise M; Armstrong, Matthew J; Neuberger, James; Girling, Alan J

    2013-06-20

    Evaluation of predictive value of liver function tests (LFTs) for the detection of liver-related disease in primary care. A prospective observational study. 11 UK primary care practices. Patients (n=1290) with an abnormal eight-panel LFT (but no previously diagnosed liver disease). Patients were investigated by recording clinical features, and repeating LFTs, specific tests for individual liver diseases, and abdominal ultrasound scan. Patients were characterised as having: hepatocellular disease; biliary disease; tumours of the hepato-biliary system and none of the above. The relationship between LFT results and disease categories was evaluated by stepwise regression and logistic discrimination, with adjustment for demographic and clinical factors. True and False Positives generated by all possible LFT combinations were compared with a view towards optimising the choice of analytes in the routine LFT panel. Regression methods showed that alanine aminotransferase (ALT) was associated with hepatocellular disease (32 patients), while alkaline phosphatase (ALP) was associated with biliary disease (12 patients) and tumours of the hepatobiliary system (9 patients). A restricted panel of ALT and ALP was an efficient choice of analytes, comparing favourably with the complete panel of eight analytes, provided that 48 False Positives can be tolerated to obtain one additional True Positive. Repeating a complete panel in response to an abnormal reading is not the optimal strategy. The LFT panel can be restricted to ALT and ALP when the purpose of testing is to exclude liver disease in primary care.
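
    The panel-restriction argument can be illustrated by enumerating analyte subsets and counting true and false positives on hypothetical screening records (the numbers below are invented for illustration, not the study's data):

```python
from itertools import combinations

# Hypothetical screening records: (abnormal analytes, liver disease confirmed?)
patients = [
    ({"ALT"}, True), ({"ALT", "AST"}, True), ({"ALP"}, True),
    ({"GGT"}, False), ({"AST"}, False), ({"TP"}, False),
    ({"ALT", "GGT"}, False), ({"ALP", "BIL"}, True), ({"ALB"}, False),
]
ANALYTES = ["ALT", "AST", "GGT", "ALP", "BIL", "TP", "ALB", "GLOB"]

def screen(panel):
    """(true positives, false positives) when a patient is flagged
    whenever any analyte in the panel is abnormal."""
    tp = sum(1 for abn, ill in patients if abn & panel and ill)
    fp = sum(1 for abn, ill in patients if abn & panel and not ill)
    return tp, fp

# Among all two-analyte panels, pick the one with most TPs, then fewest FPs
panels = [set(p) for p in combinations(ANALYTES, 2)]
best = max(panels, key=lambda p: (screen(p)[0], -screen(p)[1]))
print(sorted(best), screen(best))
```

    On this toy data the ALT + ALP pair comes out best, mirroring the study's conclusion that the panel can be restricted to those two analytes.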

  17. Analytic algorithms for determining radiative transfer optical properties of ocean waters.

    PubMed

    Kaskas, Ayse; Güleçyüz, Mustafa C; Tezcan, Cevdet; McCormick, Norman J

    2006-10-10

    A synthetic model for the scattering phase function is used to develop simple algebraic equations, valid for any water type, for evaluating the ratio of the backscattering to absorption coefficients of spatially uniform, very deep waters with data from upward and downward planar irradiances and the remotely sensed reflectance. The phase function is a variable combination of a forward-directed Dirac delta function plus isotropic scattering, which is an elementary model for strongly forward scattering such as that encountered in oceanic optics applications. The incident illumination at the surface is taken to be diffuse plus a collimated beam. The algorithms are compared with other analytic correlations that were previously derived from extensive numerical simulations, and they are also numerically tested with forward problem results computed with a modified FN method.

  18. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.

  19. Analytical and Experimental Studies of Leak Location and Environment Characterization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael; Abel, Joshua; Autrey, David; Blackmon, Rebecca; Bond, Tim; Brown, Martin; Buffington, Jesse; Cheng, Edward; DeLatte, Danielle; Garcia, Kelvin; hide

    2014-01-01

    The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to detect NH3 coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lb-mass/yr. to about 1 lb-mass/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.
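
    A minimal stand-in for the effusive part of such a plume model is the free-molecular cosine-law point source, in which the flux falls off as cos θ / r². The leak rate used below is hypothetical:

```python
import math

def effusive_flux(Q, r, theta):
    """Molecular flux (molecules m^-2 s^-1) at distance r and angle theta from
    an effusive point leak emitting Q molecules/s into a half-space, assuming
    a cosine-law angular distribution: Phi = Q * cos(theta) / (pi * r^2).
    The 1/pi factor normalizes the cosine distribution over the hemisphere."""
    if abs(theta) >= math.pi / 2:
        return 0.0  # behind the leak plane
    return Q * math.cos(theta) / (math.pi * r ** 2)

Q = 1e18  # hypothetical leak rate, molecules/s
print(effusive_flux(Q, 1.0, 0.0))          # on-axis at 1 m
print(effusive_flux(Q, 2.0, 0.0))          # inverse-square falloff with distance
print(effusive_flux(Q, 1.0, math.pi / 3))  # 60 deg off-axis: half the on-axis flux
```

    The strong cos θ dependence is what gives the instrument its "directionality": sweeping the RGA across a suspected leak produces a flux peak when the inlet faces the source.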

  20. Analytical and Experimental Studies of Leak Location and Environment Characterization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Abel, Joshua C.; Autrey, David; Blackmon, Rebecca; Bond, Tim; Brown, Martin; Buffington, Jesse; Cheng, Edward; DeLatte, Danielle; Garcia, Kelvin; hide

    2014-01-01

    The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to detect NH3 coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lb-mass/yr to about 1 lb-mass/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.

  1. openECA Platform and Analytics Alpha Test Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  2. Functional Alterations in Neural Substrates of Geometric Reasoning in Adults with High-Functioning Autism

    PubMed Central

    Yamada, Takashi; Ohta, Haruhisa; Watanabe, Hiromi; Kanai, Chieko; Tani, Masayuki; Ohno, Taisei; Takayama, Yuko; Iwanami, Akira; Kato, Nobumasa; Hashimoto, Ryuichiro

    2012-01-01

    Individuals with autism spectrum condition (ASC) are known to excel in some perceptual cognitive tasks, but such developed functions have been often regarded as “islets of abilities” that do not significantly contribute to broader intellectual capacities. However, recent behavioral studies have reported that individuals with ASC have advantages for performing Raven's (Standard) Progressive Matrices (RPM/RSPM), a standard neuropsychological test for general fluid intelligence, raising the possibility that ASC's cognitive strength can be utilized for more general purposes like novel problem solving. Here, the brain activity of 25 adults with high-functioning ASC and 26 matched normal controls (NC) was measured using functional magnetic resonance imaging (fMRI) to examine neural substrates of geometric reasoning during the engagement of a modified version of the RSPM test. Among the frontal and parietal brain regions involved in fluid intelligence, ASC showed larger activation in the left lateral occipitotemporal cortex (LOTC) during an analytic condition with moderate difficulty than NC. Activation in the left LOTC and ventrolateral prefrontal cortex (VLPFC) increased with task difficulty in NC, whereas such modulation of activity was absent in ASC. Furthermore, functional connectivity analysis revealed a significant reduction of activation coupling between the left inferior parietal cortex and the right anterior prefrontal cortex during both figural and analytic conditions in ASC. These results indicate an altered pattern of functional specialization and integration in the neural system for geometric reasoning in ASC, which may explain its atypical cognitive pattern, including performance on the Raven's Matrices test. PMID:22912831

  3. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
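
    The recommended criterion can be sketched as follows: fit a fourth-order polynomial to the glottal-area-waveform envelope and measure the time it spends between 32.2% and 67.8% of the saturation amplitude. The envelope below is a synthetic logistic onset, not recorded data, and the quartic fit is implemented from scratch via normal equations:

```python
import math

def fit_quartic(x, y):
    """Least-squares 4th-order polynomial fit via normal equations,
    solved with Gaussian elimination and partial pivoting."""
    n = 5
    S = [sum(xi ** k for xi in x) for k in range(2 * n - 1)]
    A = [[S[i + j] for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, n))) / A[r][r]
    return c  # coefficients c0..c4

def voice_onset_time(t, env, lo=0.322, hi=0.678):
    """Time the quartic envelope approximation spends between the lo and hi
    fractions of the saturation amplitude."""
    m, w = (t[0] + t[-1]) / 2.0, (t[-1] - t[0]) / 2.0
    x = [(ti - m) / w for ti in t]               # rescale to [-1, 1] for conditioning
    c = fit_quartic(x, env)
    fit = [sum(ck * xi ** k for k, ck in enumerate(c)) for xi in x]
    sat = max(fit)
    t_lo = next(ti for ti, f in zip(t, fit) if f >= lo * sat)
    t_hi = next(ti for ti, f in zip(t, fit) if f >= hi * sat)
    return t_hi - t_lo

# Synthetic logistic onset (hypothetical, unit time constant): for this shape
# the 32.2-67.8 % interval is roughly 1.4-1.5 time constants wide.
N = 600
t = [-3.0 + 6.0 * i / (N - 1) for i in range(N)]
env = [1.0 / (1.0 + math.exp(-ti)) for ti in t]
print(voice_onset_time(t, env))
```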

  4. Improvement of a Pneumatic Control Valve with Self-Holding Function

    NASA Astrophysics Data System (ADS)

    Dohta, Shujiro; Akagi, Tetsuya; Kobayashi, Wataru; Shimooka, So; Masago, Yusuke

    2017-10-01

    The purpose of this study is to develop a small-sized, lightweight, low-cost control valve with low energy consumption and to apply it to an assistive system. We have previously developed several control valves: a tiny on/off valve using a vibration motor and an on/off valve with a self-holding function. We have also proposed and tested a digital servo valve with a self-holding function using permanent magnets and a small-sized servo motor. In this paper, in order to improve the valve, an analytical model of the digital servo valve is proposed, and the results simulated using the analytical model with identified parameters are compared with the experimental results. The improved digital servo valve was then designed based on the calculated results and tested. As a result, we realized a digital servo valve that can control the flow rate more precisely while maintaining its volume and weight compared with the previous valve. As an application of the improved valve, a position control system for a rubber artificial muscle was built and position control was performed successfully.

  5. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    PubMed

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
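
    The PRNT endpoint that heparin interferes with is typically computed by interpolating the serum dilution giving 50% plaque reduction. A minimal sketch with hypothetical plaque counts (linear interpolation on log10 dilution):

```python
import math

def prnt50(dilutions, plaques, control):
    """PRNT50 titer: reciprocal serum dilution giving 50% plaque reduction,
    interpolated linearly on log10(dilution) between the bracketing points."""
    reduction = [1.0 - p / control for p in plaques]
    pairs = list(zip(dilutions, reduction))
    for (d1, r1), (d2, r2) in zip(pairs, pairs[1:]):
        if r1 >= 0.5 > r2:          # neutralization falls through 50% here
            x1, x2 = math.log10(d1), math.log10(d2)
            x = x1 + (0.5 - r1) * (x2 - x1) / (r2 - r1)
            return 10 ** x
    return None                     # 50% endpoint not bracketed

# Hypothetical plaque counts at serial dilutions; virus-only control = 40 plaques
print(prnt50([10, 40, 160, 640], [2, 10, 30, 38], 40))   # titer of about 80
```

    Residual heparin suppresses plaque counts nonspecifically, inflating apparent reduction and hence the titer, which is why the ECT pre-treatment above matters.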

  6. CONSTRUCTING AND DERIVING RECIPROCAL TRIGONOMETRIC RELATIONS: A FUNCTIONAL ANALYTIC APPROACH

    PubMed Central

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509

  7. Constructing and deriving reciprocal trigonometric relations: a functional analytic approach.

    PubMed

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires.

  8. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE PAGES

    Luce, Timothy C.

    2017-02-23

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
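
    A superellipse boundary of the kind used as the basis function can be generated parametrically; this is a generic sketch (the paper's constraint-solving machinery is not reproduced, and the half-axes and exponent below are arbitrary):

```python
import math

def superellipse_boundary(a, b, n, num=256):
    """Points (R, Z) on the superellipse |R/a|^n + |Z/b|^n = 1, using the
    parameterization R = a*sign(cos t)*|cos t|^(2/n),
                     Z = b*sign(sin t)*|sin t|^(2/n)."""
    pts = []
    for i in range(num):
        t = 2 * math.pi * i / num
        c, s = math.cos(t), math.sin(t)
        R = a * math.copysign(abs(c) ** (2.0 / n), c)
        Z = b * math.copysign(abs(s) ** (2.0 / n), s)
        pts.append((R, Z))
    return pts

# n = 2 gives an ellipse; larger n squares off the corners of the boundary
pts = superellipse_boundary(a=2.0, b=3.5, n=3.0)
# every generated point satisfies the implicit boundary equation
err = max(abs(abs(R / 2.0) ** 3 + abs(Z / 3.5) ** 3 - 1.0) for R, Z in pts)
print(err)
```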

  9. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luce, Timothy C.

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  10. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH ANALYTIC ELEMENT GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...

  11. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
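The sequential probability ratio test mentioned for validation can be sketched in a few lines. The patent's actual likelihood function is model-specific; the code below is a generic Wald SPRT for detecting a Gaussian mean shift with known variance, and is purely illustrative:

```python
import math

def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test between N(mu0, sigma**2)
    and N(mu1, sigma**2) with known sigma.

    Returns ('H0' | 'H1' | 'continue', number_of_samples_used).
    """
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'continue', len(samples)
```

The test stops as soon as the accumulated evidence crosses either threshold, which is what makes it attractive for on-line monitoring.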

  12. Understanding Rasch Measurement: Rasch Techniques for Detecting Bias in Performance Assessments: An Example Comparing the Performance of Native and Non-native Speakers on a Test of Academic English.

    ERIC Educational Resources Information Center

    Elder, Catherine; McNamara, Tim; Congdon, Peter

    2003-01-01

    Used Rasch analytic procedures to study item bias or differential item functioning in both dichotomous and scalar items on a test of English for academic purposes. Results for 139 college students on a pilot English language test model the approach and illustrate the measurement challenges posed by a diagnostic instrument to measure English…

  13. Peculiarities of the momentum distribution functions of strongly correlated charged fermions

    NASA Astrophysics Data System (ADS)

    Larkin, A. S.; Filinov, V. S.; Fortov, V. E.

    2018-01-01

    A new numerical version of the Wigner approach to the quantum thermodynamics of strongly coupled particle systems has been developed for extreme conditions, when analytical approximations based on different kinds of perturbation theory cannot be applied. An explicit analytical expression for the Wigner function has been obtained in the linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential that depends on the coordinates, momenta, and degeneracy parameter of the particles and takes into account Pauli blocking of fermions. A new quantum Monte Carlo method for calculating average values of arbitrary quantum operators has been developed. Calculations of the momentum distribution functions and the pair correlation functions of a degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell-Boltzmann and Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high-energy quantum 'tails'.
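The contrast with the Maxwell-Boltzmann distribution can be illustrated for a single energy level. This is not the paper's Wigner/Monte Carlo machinery, only a minimal sketch of Pauli blocking: the Fermi-Dirac occupation saturates at one below the chemical potential, where the Boltzmann form exceeds it freely, while the two agree in the high-energy tail:

```python
import math

def fermi_occupation(eps, mu, T):
    """Fermi-Dirac occupation of a level at energy eps."""
    return 1.0 / (math.exp((eps - mu) / T) + 1.0)

def boltzmann_occupation(eps, mu, T):
    """Maxwell-Boltzmann occupation with the same mu and T."""
    return math.exp(-(eps - mu) / T)

# Degenerate regime: temperature well below the chemical potential.
mu, T = 1.0, 0.1
deep, tail = 0.5, 2.0  # energies below and well above mu
```

Below mu the Boltzmann occupation exceeds one, which is unphysical for fermions; the Fermi form caps it, the single-level analogue of the interparticle and statistics effects the paper resolves at small momenta.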

  14. Electromagnetic Compatibility Testing Studies

    NASA Technical Reports Server (NTRS)

    Trost, Thomas F.; Mitra, Atindra K.

    1996-01-01

    This report discusses results on analytical models and on measurement and simulation of statistical properties from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna to D-dot sensor, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation, employing a rectangular cavity with a moving wall, shows agreement with the measurements. The lowest frequency at which the theoretical spatial correlation function is valid in our chamber was determined to be considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions are made for future studies related to EMC testing.

  15. Testing and Validation of the Dynamic Inertia Measurement Method

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.

  16. 42 CFR 493.845 - Standard; Toxicology.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  17. 42 CFR 493.851 - Standard; Hematology.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  18. 42 CFR 493.843 - Standard; Endocrinology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  19. 42 CFR 493.845 - Standard; Toxicology.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  20. 42 CFR 493.845 - Standard; Toxicology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  1. 42 CFR 493.851 - Standard; Hematology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  2. 42 CFR 493.843 - Standard; Endocrinology.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  3. 42 CFR 493.843 - Standard; Endocrinology.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  4. 42 CFR 493.851 - Standard; Hematology.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...

  5. Numerical implementation of complex orthogonalization, parallel transport on Stiefel bundles, and analyticity

    NASA Astrophysics Data System (ADS)

    Avitabile, Daniele; Bridges, Thomas J.

    2010-06-01

    Numerical integration of complex linear systems of ODEs depending analytically on an eigenvalue parameter is considered. Complex orthogonalization, which is required to stabilize the numerical integration, results in non-analytic systems. It is shown that properties of eigenvalues are still efficiently recoverable by extracting information from a non-analytic characteristic function. The orthonormal systems are constructed using the geometry of Stiefel bundles. Different forms of continuous orthogonalization in the literature are shown to correspond to different choices of connection one-form on the Stiefel bundle. For the numerical integration, Gauss-Legendre Runge-Kutta algorithms are the principal choice for preserving orthogonality, and performance results are shown for a range of GLRK methods. The theory and methods are tested by application to example boundary value problems, including the Orr-Sommerfeld equation in hydrodynamic stability.
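The discrete analogue of the re-orthonormalization that stabilizes such integrations is modified Gram-Schmidt over complex vectors. The sketch below is minimal and does not reproduce the paper's continuous orthogonalization or its connection one-forms:

```python
def modified_gram_schmidt(vectors):
    """Orthonormalize complex vectors (lists of complex) with respect to
    the Hermitian inner product <u, v> = sum(conj(u_i) * v_i), using the
    numerically stable modified Gram-Schmidt recursion."""
    def inner(u, v):
        return sum(a.conjugate() * b for a, b in zip(u, v))

    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = inner(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = abs(inner(w, w)) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

q = modified_gram_schmidt([[1 + 1j, 0j], [1j, 2 + 0j]])
```

Modified Gram-Schmidt subtracts each projection from the working vector immediately, which is what makes it stabler than the classical variant for nearly dependent columns.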

  6. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  7. Analytical approximations to seawater optical phase functions of scattering

    NASA Astrophysics Data System (ADS)

    Haltrin, Vladimir I.

    2004-11-01

    This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. The three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.

  8. Comparison of Commercial Electromagnetic Interference Test Techniques to NASA Electromagnetic Interference Test Techniques

    NASA Astrophysics Data System (ADS)

    Smith, V.

    2000-11-01

    This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.

  9. Comparison of Commercial Electromagnetic Interference Test Techniques to NASA Electromagnetic Interference Test Techniques

    NASA Technical Reports Server (NTRS)

    Smith, V.; Minor, J. L. (Technical Monitor)

    2000-01-01

    This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.

  10. Analytical and experimental studies of leak location and environment characterization for the international space station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woronowicz, Michael; Blackmon, Rebecca; Brown, Martin

    2014-12-09

    The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to demonstrate the ability to detect NH{sub 3} coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lb{sub m}/yr to about 1 lb{sub m}/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.

  11. Latent structure of the Wisconsin Card Sorting Test: a confirmatory factor analytic study.

    PubMed

    Greve, Kevin W; Stickle, Timothy R; Love, Jeffrey M; Bianchini, Kevin J; Stanford, Matthew S

    2005-05-01

    The present study represents the first large scale confirmatory factor analysis of the Wisconsin Card Sorting Test (WCST). The results generally support the three factor solutions reported in the exploratory factor analysis literature. However, only the first factor, which reflects general executive functioning, is statistically sound. The secondary factors, while likely reflecting meaningful cognitive abilities, are less stable except when all subjects complete all 128 cards. It is likely that having two discontinuation rules for the WCST has contributed to the varied factor analytic solutions reported in the literature and early discontinuation may result in some loss of useful information. Continued multivariate research will be necessary to better clarify the processes underlying WCST performance and their relationships to one another.

  12. The extended Lennard-Jones potential energy function: A simpler model for direct-potential-fit analysis

    NASA Astrophysics Data System (ADS)

    Hajigeorgiou, Photos G.

    2016-12-01

    An analytical model for the diatomic potential energy function that was recently tested as a universal function (Hajigeorgiou, 2010) has been further modified and tested as a suitable model for direct-potential-fit analysis. Applications are presented for the ground electronic states of three diatomic molecules: oxygen, carbon monoxide, and hydrogen fluoride. The adjustable parameters of the extended Lennard-Jones potential model are determined through nonlinear regression by fits to calculated rovibrational energy term values or experimental spectroscopic line positions. The model is shown to lead to reliable, compact and simple representations for the potential energy functions of these systems and could therefore be classified as a suitable and attractive model for direct-potential-fit analysis.
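The extended Lennard-Jones form can be sketched directly. The exact expansion variable used by the author is not given in the abstract, so the reduced coordinate z = (r - re)/(r + re) below is an assumption for illustration only:

```python
import math

def extended_lj(r, De, re, coeffs):
    """Extended Lennard-Jones form V(r) = De * (1 - (re/r)**n(r))**2.

    The distance-dependent exponent n(r) is expanded as a polynomial in
    the reduced coordinate z = (r - re)/(r + re); this expansion variable
    is an assumed, illustrative choice, not taken from the paper.
    """
    z = (r - re) / (r + re)
    n = sum(c * z ** j for j, c in enumerate(coeffs))
    return De * (1.0 - (re / r) ** n) ** 2

# Illustrative well depth and equilibrium distance (arbitrary units).
De, re = 4.0, 1.2
```

In a direct-potential-fit analysis, De, re, and the exponent coefficients would be the adjustable parameters determined by nonlinear regression against rovibrational term values or spectroscopic line positions.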

  13. In Vivo Analytical Performance of Nitric Oxide-Releasing Glucose Biosensors

    PubMed Central

    2015-01-01

    The in vivo analytical performance of percutaneously implanted nitric oxide (NO)-releasing amperometric glucose biosensors was evaluated in swine for 10 d. Needle-type glucose biosensors were functionalized with NO-releasing polyurethane coatings designed to release similar total amounts of NO (3.1 μmol cm–2) for rapid (16.0 ± 4.4 h) or slower (>74.6 ± 16.6 h) durations and remain functional as outer glucose sensor membranes. Relative to controls, NO-releasing sensors were characterized with improved numerical accuracy on days 1 and 3. Furthermore, the clinical accuracy and sensitivity of rapid NO-releasing sensors were superior to control and slower NO-releasing sensors at both 1 and 3 d implantation. In contrast, the slower, extended, NO-releasing sensors were characterized by shorter sensor lag times (<4.2 min) in response to intravenous glucose tolerance tests versus burst NO-releasing and control sensors (>5.8 min) at 3, 7, and 10 d. Collectively, these results highlight the potential for NO release to enhance the analytical utility of in vivo glucose biosensors. Initial results also suggest that this analytical performance benefit is dependent on the NO-release duration. PMID:24984031

  14. In vivo analytical performance of nitric oxide-releasing glucose biosensors.

    PubMed

    Soto, Robert J; Privett, Benjamin J; Schoenfisch, Mark H

    2014-07-15

    The in vivo analytical performance of percutaneously implanted nitric oxide (NO)-releasing amperometric glucose biosensors was evaluated in swine for 10 d. Needle-type glucose biosensors were functionalized with NO-releasing polyurethane coatings designed to release similar total amounts of NO (3.1 μmol cm(-2)) for rapid (16.0 ± 4.4 h) or slower (>74.6 ± 16.6 h) durations and remain functional as outer glucose sensor membranes. Relative to controls, NO-releasing sensors were characterized with improved numerical accuracy on days 1 and 3. Furthermore, the clinical accuracy and sensitivity of rapid NO-releasing sensors were superior to control and slower NO-releasing sensors at both 1 and 3 d implantation. In contrast, the slower, extended, NO-releasing sensors were characterized by shorter sensor lag times (<4.2 min) in response to intravenous glucose tolerance tests versus burst NO-releasing and control sensors (>5.8 min) at 3, 7, and 10 d. Collectively, these results highlight the potential for NO release to enhance the analytical utility of in vivo glucose biosensors. Initial results also suggest that this analytical performance benefit is dependent on the NO-release duration.

  15. Validation of the enthalpy method by means of analytical solution

    NASA Astrophysics Data System (ADS)

    Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika

    2014-05-01

    Numerical simulations have moved in recent years from describing the cold-temperate transition surface (CTS) towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g., representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The method is rather new in glaciology and, to our knowledge, has not been verified and validated against analytical solutions. Unfortunately, analytical solutions are still lacking for sufficiently complex thermo-mechanically coupled polythermal ice flow. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the thermal situation at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients: a formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.

  16. The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron

    2015-03-01

    In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real space and how this serves as a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of the correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation. Finally, we discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. Supported by DGE-1321846 and DE-FG02-08ER46496.
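The exact solvability mentioned above can be made concrete: at half filling, the symmetric dimer's singlet ground-state energy has the closed form E0 = (U - sqrt(U^2 + 16 t^2))/2. The sketch below verifies this against the singlet-sector Hamiltonian; it is a standard exact-diagonalization result, not the authors' functional parameterization:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (nested lists)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def hubbard_dimer_ground_energy(t, U):
    """Exact singlet ground-state energy of the symmetric two-site
    Hubbard model at half filling."""
    return 0.5 * (U - math.sqrt(U * U + 16.0 * t * t))

def singlet_hamiltonian(t, U):
    """Half-filled singlet-sector Hamiltonian in the basis
    {|ud, 0>, |0, ud>, (|u, d> + |d, u>)/sqrt(2)}."""
    s = math.sqrt(2.0) * t
    return [[U, 0.0, -s],
            [0.0, U, -s],
            [-s, -s, 0.0]]

t, U = 1.0, 4.0
E0 = hubbard_dimer_ground_energy(t, U)
H = singlet_hamiltonian(t, U)
# If E0 is an eigenvalue, det(H - E0*I) vanishes up to rounding.
shifted = [[H[i][j] - (E0 if i == j else 0.0) for j in range(3)]
           for i in range(3)]
```

The U -> infinity and t -> 0 limits of E0 reproduce the expected localization of the two electrons, which is the dimer analogue of stretched H2.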

  17. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike in the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumference. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc) on which a null normal derivative of the potential is imposed, with the dipole distribution expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  18. 42 CFR 493.959 - Immunohematology.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...

  19. 42 CFR 493.959 - Immunohematology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...

  20. 42 CFR 493.959 - Immunohematology.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...

  1. 42 CFR 493.959 - Immunohematology.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...

  2. Target analyte quantification by isotope dilution LC-MS/MS directly referring to internal standard concentrations--validation for serum cortisol measurement.

    PubMed

    Maier, Barbara; Vogeser, Michael

    2013-04-01

    Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the ratio of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed by LC-MS/MS in a comparative validation protocol for cortisol as an exemplary analyte. Accuracy and reproducibility were compared between quantification involving either a six-point external calibration function or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte, without the need for a time-consuming multi-point calibration series, is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory, where short turnaround times often have high priority.
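The ratio principle behind DIDA reduces to a one-line computation. The function name and the optional response factor below are illustrative additions, not part of the validated clinical protocol:

```python
def dida_concentration(area_analyte, area_istd, conc_istd, response_factor=1.0):
    """Direct isotope dilution analysis (DIDA): target concentration from
    the peak-area ratio of analyte to its stable-isotope-labelled
    internal standard.

    response_factor corrects for any difference in detector response
    between labelled and unlabelled forms (1.0 if they respond equally);
    this optional parameter is an illustrative assumption.
    """
    return response_factor * (area_analyte / area_istd) * conc_istd

# Example: cortisol peak area 2.4e5 counts, labelled-standard area
# 1.2e5 counts, internal standard spiked at 50 nmol/L.
conc = dida_concentration(2.4e5, 1.2e5, 50.0)  # 100 nmol/L
```

Because both areas come from the same injection, run-to-run variations in injection volume and ionization efficiency largely cancel, which is what makes the single-ratio calculation competitive with multi-point calibration.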

  3. Safety and quality of food contact materials. Part 1: evaluation of analytical strategies to introduce migration testing into good manufacturing practice.

    PubMed

    Feigenbaum, A; Scholler, D; Bouquant, J; Brigot, G; Ferrier, D; Franzl, R; Lillemarktt, L; Riquet, A M; Petersen, J H; van Lierop, B; Yagoubi, N

    2002-02-01

    The results of a research project (EU AIR Research Programme CT94-1025) aimed at introducing control of migration into good manufacturing practice and into enforcement work are reported. Representative polymer classes were defined on the basis of chemical structure, technological function, migration behaviour and market share. These classes were characterized by analytical methods. Analytical techniques were investigated for identification of potential migrants. High-temperature gas chromatography was shown to be a powerful method, and 1H-nuclear magnetic resonance provided a convenient fingerprint of plastic materials. Volatile compounds were characterized by headspace techniques, where it was shown to be essential to differentiate volatile compounds desorbed from the sample from those generated during the thermal desorption itself. For metal trace analysis, microwave mineralization followed by atomic absorption was employed. These different techniques were introduced into a systematic testing scheme that is envisaged as being suitable both for industrial control and for enforcement laboratories. Guidelines will be proposed in the second part of this paper.

  4. New Analytical Solution of the Equilibrium Ampere's Law Using the Walker's Method: a Didactic Example

    NASA Astrophysics Data System (ADS)

    Sousa, A. N. Laurindo; Ojeda-González, A.; Prestes, A.; Klausner, V.; Caritá, L. A.

    2018-02-01

    This work aims to demonstrate the analytical solution of the Grad-Shafranov (GS) equation, or generalized Ampere's law, which is important in studies of self-consistent 2.5-D solutions for current sheet structures. A detailed mathematical development is presented to obtain the generating function as shown by Walker (RSPSA 91, 410, 1915). We study the general solution of the GS equation in terms of Walker's generating function in detail, without omitting any step. Walker's generating function g(ζ) is written in a new way, as the tangent of an unspecified function K(ζ). With this choice, the general solution of the GS equation is expressed as exp(-2Ψ) = 4|K′(ζ)|²/cos²[K(ζ) - K(ζ*)]. In order to investigate whether our proposal simplifies the mathematical effort of finding new generating functions, we use Harris's solution as a test, in this case K(ζ) = arctan(exp(iζ)). One purpose of the article is thus to present a review of Harris's solution. In an attempt to find a simplified solution, we propose a new way to write the GS solution using g(ζ) = tan(K(ζ)). We also present a new analytical solution to the equilibrium Ampere's law using g(ζ) = cosh(bζ), which includes a generalization of the Harris model and presents isolated magnetic islands.
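    As a quick consistency check (an addition to the abstract, assuming Walker's classical statement of the solution, which the substitution g = tan K connects to the form quoted above), the Harris sheet follows directly from the generating function g(ζ) = tan K(ζ) = e^{iζ}:

```latex
% Walker's form of the GS solution:
%   e^{-2\Psi} = \frac{4\,|g'(\zeta)|^2}{\bigl(1 + |g(\zeta)|^2\bigr)^2},
% evaluated for the Harris choice g(\zeta) = e^{i\zeta}, with \zeta = x + iy:
|g'(\zeta)|^2 = |i\,e^{i\zeta}|^2 = e^{-2y}, \qquad |g(\zeta)|^2 = e^{-2y},
\quad\Longrightarrow\quad
e^{-2\Psi} = \frac{4\,e^{-2y}}{(1 + e^{-2y})^2} = \frac{1}{\cosh^2 y},
```

    i.e. Ψ = ln cosh y, the Harris current sheet.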

  6. Testing a path-analytic mediation model of how motivational enhancement physiotherapy improves physical functioning in pain patients.

    PubMed

    Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn

    2014-12-01

    Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics, and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model examining how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves the physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well: χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase the outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on the rehabilitation outcomes of patients with chronic LBP.

  7. The use of an analytic Hamiltonian matrix for solving the hydrogenic atom

    NASA Astrophysics Data System (ADS)

    Bhatti, Mohammad

    2001-10-01

    The non-relativistic Hamiltonian corresponding to the Schrödinger equation is converted into an analytic Hamiltonian matrix using kth-order B-spline functions. The Galerkin method is applied to the solution of the Schrödinger equation for bound states of hydrogen-like systems. The program Mathematica is used to create analytic matrix elements; exact integration is performed over the knot sequence of the B-splines, and the resulting generalized eigenvalue problem is solved on a specified numerical grid. The complete basis set and the energy spectrum are obtained for the Coulomb potential for hydrogenic systems with Z less than 100, using B-splines of order eight. A further application tests the Thomas-Reiche-Kuhn sum rule for hydrogenic systems.
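    The generalized eigenvalue problem that such a Galerkin discretization produces can be illustrated on a much smaller scale. The sketch below uses four s-type Gaussians (a standard textbook variational basis with analytic matrix elements, not the paper's B-splines) for the hydrogen ground state:

```python
import numpy as np
from scipy.linalg import eigh

# Variational (Galerkin) treatment of the hydrogen atom as a generalized
# eigenvalue problem H c = E S c in a small non-orthogonal basis of
# s-type Gaussians exp(-a r^2).  All matrix elements are analytic.
alphas = np.array([13.00773, 1.962079, 0.444529, 0.1219492])
a = alphas[:, None] + alphas[None, :]                       # a_i + a_j
S = (np.pi / a) ** 1.5                                      # overlap
T = 3.0 * alphas[:, None] * alphas[None, :] * np.pi ** 1.5 / a ** 2.5  # kinetic
V = -2.0 * np.pi / a                                        # nuclear attraction, Z = 1

E = eigh(T + V, S, eigvals_only=True)
print(E[0])  # ~ -0.4993 hartree (exact ground state: -0.5)
```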

  8. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    PubMed

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
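    Classical least-squares mixture analysis of this kind models a measured spectrum as a linear combination of known pure-component spectra. A minimal sketch with synthetic Gaussian "spectra" standing in for real Raman references (all shapes and coefficients hypothetical):

```python
import numpy as np

# CLS mixture analysis: recover the mixing coefficients of known
# pure-component spectra from a noisy mixture by ordinary least squares.
x = np.linspace(0.0, 100.0, 500)
ref1 = np.exp(-((x - 30.0) ** 2) / 20.0)   # pure component 1
ref2 = np.exp(-((x - 70.0) ** 2) / 20.0)   # pure component 2
K = np.column_stack([ref1, ref2])          # reference matrix (channels x components)

true_c = np.array([0.6, 0.3])              # true mixing coefficients
rng = np.random.default_rng(0)
mixture = K @ true_c + rng.normal(0.0, 0.01, x.size)  # noisy mixture spectrum

c_hat, *_ = np.linalg.lstsq(K, mixture, rcond=None)
print(np.round(c_hat, 2))                  # close to [0.6, 0.3]
```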

  9. The Function Acquisition Speed Test (FAST): A Behavior Analytic Implicit Test for Assessing Stimulus Relations

    ERIC Educational Resources Information Center

    O'Reilly, Anthony; Roche, Bryan; Ruiz, Maria; Tyndall, Ian; Gavin, Amanda

    2012-01-01

    Subjects completed a baseline stimulus matching procedure designed to produce two symmetrical stimulus relations: A1-B1 and A2-B2. Using A1, B1, and two novel stimuli, subjects were then trained to produce a common key-press response for two stimuli and a second key-press response for two further stimuli across two blocks of response training.…

  10. Visual/Verbal-Analytic Reasoning Bias as a Function of Self-Reported Autistic-Like Traits: A Study of Typically Developing Individuals Solving Raven's Advanced Progressive Matrices

    ERIC Educational Resources Information Center

    Fugard, Andrew J. B.; Stewart, Mary E.; Stenning, Keith

    2011-01-01

    People with autism spectrum condition (ASC) perform well on Raven's matrices, a test which loads highly on the general factor in intelligence. However, the mechanisms supporting enhanced performance on the test are poorly understood. Evidence is accumulating that milder variants of the ASC phenotype are present in typically developing individuals,…

  11. Method of and apparatus for determining the similarity of a biological analyte from a model constructed from known biological fluids

    DOEpatents

    Robinson, Mark R.; Ward, Kenneth J.; Eaton, Robert P.; Haaland, David M.

    1990-01-01

    The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption versus wavelength function.

  12. Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions

    DTIC Science & Technology

    2014-12-05

    test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with...the smaller, less complex munition systems . 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates...assets than the Analytical S3 Test approach to establish the safety margin of the system . This approach is generally applicable to small munitions

  13. Well test mathematical model for fractures network in tight oil reservoirs

    NASA Astrophysics Data System (ADS)

    Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming

    2018-02-01

    Well testing, especially build-up testing, has been applied widely in the development of tight oil reservoirs, since it is the only available low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually show significant deviation in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed; at large scale it exhibits the features of a composite model. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform and imaginary-argument (modified) Bessel functions, and this method is verified by comparison with an existing analytical solution. The match to typical type curves generated from the semi-analytical solution indicates that the proposed physical and mathematical model can describe the type-curve characteristics of typical tight oil reservoirs, which show upwarping at late times rather than parallel lines of slope 1/2 or 1/4. This means the composite model can be used in the pressure interpretation of artificially fractured wells in tight oil reservoirs.

  14. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to the results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRFs) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to the use of the structure interface FRFs for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise, which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical to experimental FRFs is the key to obtaining good agreement of the residual flexibility values.

  15. An Exploration of Function Analysis and Function Allocation in the Commercial Flight Domain

    DTIC Science & Technology

    1991-11-01

    therefore, imperative that designers apply the most effective analytical methods available to optimize the baseline crew system design, prior to the test and...International Airport (LAX) to John F. Kennedy International Airport ( JFK ) in New York. This mission was also selected because a detailed task-timeline (TIL...International Airport (LAX) and terminating at New York International Airport ( JFK ). The weather at LAX is fair with temperature at 60° Fahrenheit

  16. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and are hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
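    A much simpler relative of such methods, Piyavskii's one-dimensional algorithm with piecewise-linear (non-smooth) minorants, illustrates the Lipschitz lower-bounding idea that the authors refine with smooth auxiliary functions and diagonal partitions. A sketch (test function and constant chosen for illustration only):

```python
# Piyavskii-style 1-D global minimization: if K bounds the slope of f,
# then f(x) >= f(x_i) - K*|x - x_i| for every sampled point x_i, so the
# samples define a sawtooth lower bound; the next evaluation is placed
# where that lower bound is smallest.

def piyavskii(f, lo, hi, K, iters=60):
    pts = [(lo, f(lo)), (hi, f(hi))]
    for _ in range(iters):
        pts.sort()
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # The two cones over [x1, x2] intersect at:
            x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * K)
            bound = 0.5 * (f1 + f2) - 0.5 * K * (x2 - x1)
            if best is None or bound < best[0]:
                best = (bound, x)
        pts.append((best[1], f(best[1])))
    return min(pts, key=lambda p: p[1])

x_min, f_min = piyavskii(lambda x: (x - 0.3) ** 2, 0.0, 1.0, K=2.0)
print(x_min, f_min)  # converges toward the global minimum at x = 0.3
```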

  17. Fast computation of the Gauss hypergeometric function with all its parameters complex with application to the Pöschl Teller Ginocchio potential wave functions

    NASA Astrophysics Data System (ADS)

    Michel, N.; Stoitsov, M. V.

    2008-04-01

    The fast computation of the Gauss hypergeometric function ₂F₁ with all its parameters complex is a difficult task. Although the ₂F₁ function verifies numerous analytical properties involving power series expansions whose implementation is apparently immediate, their use is thwarted by instabilities induced by cancellations between very large terms. Furthermore, small areas of the complex plane, in the vicinity of z = e^(±iπ/3), are inaccessible using ₂F₁ power series linear transformations. In order to solve these problems, a generalization of R.C. Forrey's transformation theory has been developed. The latter has been successful in treating the ₂F₁ function with real parameters. As in real-case transformation theory, the large canceling terms occurring in ₂F₁ analytical formulas are rigorously dealt with, but by way of a new method, directly applicable to the complex plane. Taylor series expansions are employed to enter complex areas outside the domain of validity of power series analytical formulas. The proposed algorithm, however, becomes unstable in general when |a|, |b|, |c| are moderate or large. As a physical application, the calculation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential involving ₂F₁ evaluations is considered.
    Program summary
    Program title: hyp_2F1, PTG_wf
    Catalogue identifier: AEAE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 6839
    No. of bytes in distributed program, including test data, etc.: 63 334
    Distribution format: tar.gz
    Programming language: C++, Fortran 90
    Computer: Intel i686
    Operating system: Linux, Windows
    Word size: 64 bits
    Classification: 4.7
    Nature of problem: The Gauss hypergeometric function ₂F₁, with all its parameters complex, is uniquely calculated in the frame of transformation theory with power series summations, thus providing a very fast algorithm. The evaluation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential is treated as a physical application.
    Solution method: The Gauss hypergeometric function ₂F₁ verifies linear transformation formulas allowing consideration of arguments of small modulus, which can then be handled by a power series. They, however, give rise to indeterminate or numerically unstable cases when b-a and c-a-b are equal or close to integers. These are properly dealt with through analytical manipulations of the Lanczos expression providing the Gamma function. The remaining zones of the complex plane uncovered by transformation formulas are dealt with using Taylor expansions of the ₂F₁ function around complex points where linear transformations can be employed. The Pöschl-Teller-Ginocchio potential wave functions are calculated directly with ₂F₁ evaluations.
    Restrictions: The algorithm provides full numerical precision in almost all cases for |a|, |b|, and |c| of the order of one or smaller, but starts to be less precise or unstable when they increase, especially through the imaginary parts of a, b, and c. While it is possible to run the code for moderate or large |a|, |b|, and |c| and obtain satisfactory results for some specified values, the code is very likely to be unstable in this regime.
    Unusual features: Two different codes, one for the hypergeometric function and one for the Pöschl-Teller-Ginocchio potential wave functions, are provided in C++ and Fortran 90 versions.
    Running time: 20,000 ₂F₁ function evaluations take an average of one second.
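    For arguments well inside the unit disk the Gauss series itself is the stable building block; the transformation machinery described in the abstract is what extends coverage to the rest of the plane. A minimal sketch of that building block (not the paper's algorithm):

```python
import cmath

# Gauss hypergeometric 2F1(a, b; c; z) by direct power series, valid for
# |z| < 1 and numerically sensible only well inside the disk.  Consecutive
# terms are related by the ratio (a+n)(b+n) z / ((c+n)(n+1)).

def hyp2f1_series(a, b, c, z, tol=1e-15, max_terms=1000):
    term = 1.0 + 0j
    total = term
    for n in range(max_terms):
        term *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

# Check against the closed form 2F1(1, 1; 2; z) = -log(1 - z)/z.
z = 0.5 + 0.2j
print(abs(hyp2f1_series(1, 1, 2, z) - (-cmath.log(1 - z) / z)))  # ~ 0
```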

  18. Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty test modes are given, along with two-dimensional mode shape plots and tables of GMCs for the test-correlated analytical modes. Two-dimensional mode shape plots for the analytical modes and for the uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.

  19. Analyticity without Differentiability

    ERIC Educational Resources Information Center

    Kirillova, Evgenia; Spindler, Karlheinz

    2008-01-01

    In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…

  20. Culture-Sensitive Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, L.

    2008-01-01

    Functional analytic psychotherapy (FAP) is defined as behavior-analytically conceptualized talk therapy. In contrast to the technique-oriented educational format of cognitive behavior therapy and the use of structural mediational models, FAP depends on the functional analysis of the moment-to-moment stream of interactions between client and…

  1. Mathematical and field analysis of longitudinal reservoir infill

    NASA Astrophysics Data System (ADS)

    Ke, W. T.; Capart, H.

    2016-12-01

    In reservoirs, severe problems are caused by infilled sediment deposits. In the long term, sediment accumulation reduces reservoir storage capacity and flood-control benefits. In the short term, sediment deposits affect water-supply intakes and hydroelectric generation. For reservoir management, it is important to understand the deposition process and then to predict sedimentation in the reservoir. To investigate the behaviour of sediment deposits, we propose a simplified one-dimensional theory, derived from the Exner equation, to predict the longitudinal distribution of sedimentation in idealized reservoirs. The theory models reservoir infill for three scenarios: delta progradation, near-dam bottom deposition, and final infill. These yield three self-similar analytical solutions for the reservoir bed profiles under different boundary conditions, composed of the error function, the complementary error function, and the imaginary error function, respectively. The theory is also computed by a finite volume method to test the analytical solutions. The theoretical and numerical predictions are in good agreement with a one-dimensional small-scale laboratory experiment. As the theory is simple to apply, with analytical solutions and numerical computation, we present applications simulating the long-profile evolution of field reservoirs, focusing on the infill deposit volume and the resulting uplift of the near-dam bottom elevation. The field reservoirs introduced here are Wushe Reservoir, Tsengwen Reservoir, and Mudan Reservoir in Taiwan, Lago Dos Bocas in Puerto Rico, and Sakuma Dam in Japan.
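    The pairing of a finite-volume computation with an erfc-type similarity solution can be sketched on the simplest case: a linearized Exner model reduced to bed diffusion on a half-line (the diffusivity, boundary value, and grid below are hypothetical, not the paper's field values):

```python
import numpy as np
from math import erfc, sqrt

# Linearized Exner model as bed diffusion  d(eta)/dt = nu * d2(eta)/dx2
# on a half-line with a fixed feed elevation at x = 0.  The similarity
# solution is  eta(x, t) = eta0 * erfc(x / (2 * sqrt(nu * t))).
nu, eta0 = 1.0, 1.0
dx, dt = 0.05, 0.001            # explicit scheme: nu*dt/dx^2 = 0.4 < 0.5
x = np.arange(0.0, 10.0, dx)
eta = np.zeros_like(x)
eta[0] = eta0                   # upstream boundary held at eta0

t_end = 1.0
for _ in range(int(t_end / dt)):
    eta[1:-1] += nu * dt / dx**2 * (eta[2:] - 2.0 * eta[1:-1] + eta[:-2])
    eta[0] = eta0               # re-impose the boundary condition

analytic = np.array([eta0 * erfc(xi / (2.0 * sqrt(nu * t_end))) for xi in x])
print(np.max(np.abs(eta - analytic)))  # small discretization error
```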

  2. Comparing methods to combine functional loss and mortality in clinical trials for amyotrophic lateral sclerosis

    PubMed Central

    van Eijk, Ruben PA; Eijkemans, Marinus JC; Rizopoulos, Dimitris

    2018-01-01

    Objective Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R) and conditional survival time. Different treatment scenarios with varying effect sizes were simulated with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed effects (LME) model; 3) omnibus test based on Cox and LME models; 4) composite time-to-6-point decrease or death; 5) combined assessment of function and survival (CAFS); and 6) test based on joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results Both Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies. The composite end point increases false-negative rates among all treatment scenarios. To detect a 15% reduction in ALSFRS-R decline and 34% decline in hazard with 80% power after 18 months, the Cox model requires 524 patients, the LME model 794 patients, the omnibus test 526 patients, the composite end point 1,274 patients, the CAFS 576 patients and the joint model 464 patients. Conclusion Joint models have superior statistical power to analyze simultaneous effects on survival and function and may circumvent pitfalls encountered by other end points. 
Optimizing trial end points is essential, as selecting suboptimal outcomes may disguise important treatment clues. PMID:29593436

  3. Homogeneous partial differential equations for superpositions of indeterminate functions of several variables

    NASA Astrophysics Data System (ADS)

    Asai, Kazuto

    2009-02-01

    We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).

  4. Two approaches to estimating the effect of parenting on the development of executive function in early childhood.

    PubMed

    Blair, Clancy; Raver, C Cybele; Berry, Daniel J

    2014-02-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting, as indicated by higher scores on widely used measures of parenting at both earlier and later time points, predicted more positive gain in executive function at 60 months. Latent change score models, in which parenting and executive function over time were held to standards of longitudinal measurement invariance, provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience.
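    Residualized change analysis of the first kind described amounts to regressing the later score on the earlier score plus the predictor. A minimal sketch on simulated data (all coefficients and noise levels hypothetical):

```python
import numpy as np

# Residualized change: regress executive function at time 2 on its time-1
# level plus the predictor (parenting quality); the predictor's coefficient
# then estimates its association with *change* net of baseline.
rng = np.random.default_rng(1)
n = 1292
parenting = rng.normal(size=n)
ef_t1 = rng.normal(size=n)
ef_t2 = 0.6 * ef_t1 + 0.3 * parenting + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), ef_t1, parenting])
beta, *_ = np.linalg.lstsq(X, ef_t2, rcond=None)
print(np.round(beta[1:], 2))  # close to the simulated [0.6, 0.3]
```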

  5. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  6. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  7. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  8. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  9. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
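    Estimating a linear transfer function from input-output records, as done here for behavioral data, can be sketched in the frequency domain. Synthetic signals are used below (not the study's data), and the system is built by circular convolution so the FFT-ratio estimate is exact; with real, noisy records one would average transfer functions over repeated trials, as the authors do:

```python
import numpy as np

# A linear transfer function H relates input and output spectra, Y = H * X,
# so H can be estimated bin-by-bin as Y / X.
rng = np.random.default_rng(2)
N = 256
x = rng.normal(size=N)                  # input record (e.g. reinforcement signal)
h = np.zeros(N)
h[:3] = [0.5, 0.3, 0.1]                 # impulse response of the synthetic system
H_true = np.fft.fft(h)

y = np.fft.ifft(np.fft.fft(x) * H_true).real   # circularly convolved output
H_est = np.fft.fft(y) / np.fft.fft(x)          # transfer-function estimate

print(np.max(np.abs(H_est - H_true)))   # zero up to round-off
```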

  10. An Analytical Evaluation of Two Common-Odds Ratios as Population Indicators of DIF.

    ERIC Educational Resources Information Center

    Pommerich, Mary; And Others

    The Mantel-Haenszel (MH) statistic for identifying differential item functioning (DIF) commonly conditions on the observed test score as a surrogate for conditioning on latent ability. When the comparison group distributions are not completely overlapping (i.e., are incongruent), the observed score represents different levels of latent ability…
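For reference, the MH statistic builds on a common odds ratio pooled across score strata; a minimal sketch of that estimator (with hypothetical counts, not data from the study) is:

```python
# Mantel-Haenszel common odds ratio across K strata (score levels).
# Each stratum is a 2x2 table (a, b, c, d) with n = a + b + c + d:
#   rows = focal/reference group, columns = item correct/incorrect.
def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical counts at three score levels; an OR near 1 suggests little DIF.
tables = [(20, 10, 18, 12), (30, 15, 28, 17), (25, 20, 24, 21)]
print(round(mantel_haenszel_or(tables), 3))  # prints 1.192
```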

  11. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  12. Dynamic behaviour of a planar micro-beam loaded by a fluid-gap: Analytical and numerical approach in a high frequency range, benchmark solutions

    NASA Astrophysics Data System (ADS)

    Novak, A.; Honzik, P.; Bruneau, M.

    2017-08-01

    Miniaturized vibrating MEMS devices, active (receivers or emitters) or passive devices, and their use for either new applications (hearing, meta-materials, consumer devices, …) or metrological purposes under non-standard conditions, are involved today in several acoustic domains. More in-depth characterisations than the classical ones available until now are needed. In this context, the paper presents analytical and numerical approaches for describing the behaviour of three kinds of planar micro-beams of rectangular shape (suspended rigid or clamped elastic planar beams) loaded by a backing cavity or a fluid-gap, surrounded by very thin slits, and excited by an incident acoustic field. The analytical approach accounts for the coupling between the vibrating structure and the acoustic field in the backing cavity, the thermal and viscous diffusion processes in the boundary layers in the slits and the cavity, the modal behaviour of the vibrating structure, and the non-uniformity of the acoustic field in the backing cavity, which is modelled using an integral formulation with a suitable Green's function. Benchmark solutions are proposed in terms of beam motion (from which the sensitivity, input impedance, and pressure transfer function can be calculated). A numerical implementation (FEM) is provided, against which the analytical results are tested.

  13. High-throughput fabrication and screening improves gold nanoparticle chemiresistor sensor performance.

    PubMed

    Hubble, Lee J; Cooper, James S; Sosa-Pintos, Andrea; Kiiveri, Harri; Chow, Edith; Webster, Melissa S; Wieczorek, Lech; Raguse, Burkhard

    2015-02-09

    Chemiresistor sensor arrays are a promising technology to replace current laboratory-based analysis instrumentation, with the advantage of facile integration into portable, low-cost devices for in-field use. To increase the performance of chemiresistor sensor arrays, a high-throughput fabrication and screening methodology was developed to assess different organothiol-functionalized gold nanoparticle chemiresistors. This high-throughput fabrication and testing methodology was implemented to screen a library consisting of 132 different organothiol compounds as capping agents for functionalized gold nanoparticle chemiresistor sensors. The methodology utilized an automated liquid handling workstation for the in situ functionalization of gold nanoparticle films and subsequent automated analyte testing of sensor arrays using a flow-injection analysis system. To test the methodology we focused on the discrimination and quantitation of benzene, toluene, ethylbenzene, p-xylene, and naphthalene (BTEXN) mixtures in water at low microgram-per-liter concentration levels. The high-throughput methodology identified a sensor array configuration consisting of a subset of organothiol-functionalized chemiresistors which, in combination with random forests analysis, was able to predict individual analyte concentrations with overall root-mean-square errors ranging between 8 and 17 μg/L for mixtures of BTEXN in water at the 100 μg/L concentration. The ability to use a simple sensor array system to quantitate BTEXN mixtures in water at the low μg/L concentration range has direct and significant implications for future environmental monitoring and reporting strategies. In addition, these results demonstrate the advantages of high-throughput screening to improve the performance of gold nanoparticle based chemiresistors for both new and existing applications.

  14. Exact hierarchical clustering in one dimension

    NASA Technical Reports Server (NTRS)

    Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.

    1991-01-01

    The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.
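A Press-Schechter-type function of the kind tested here can be written down compactly. The sketch below evaluates the standard multiplicity function and collapsed-mass fraction, assuming the usual spherical-collapse threshold; the paper's peak-theory modifications are not reproduced.

```python
import math

DELTA_C = 1.686  # spherical-collapse threshold (Einstein-de Sitter value)

def ps_multiplicity(nu):
    """Press-Schechter multiplicity f(nu), with nu = delta_c / sigma(M)."""
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-nu * nu / 2.0)

def mass_fraction_above(nu):
    """PS fraction of mass in collapsed objects above threshold nu,
    erfc(nu / sqrt(2)), including the famous factor of 2."""
    return math.erfc(nu / math.sqrt(2.0))

print(round(mass_fraction_above(0.0), 3))  # prints 1.0 (all mass collapses as nu -> 0)
```

The multiplicity function peaks at nu = 1, i.e. at the characteristic nonlinear mass where sigma(M) equals the collapse threshold.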

  15. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities on a scalable solar sail system, have recently been successfully completed. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.

  16. FAPRS Manual: Manual for the Functional Analytic Psychotherapy Rating Scale

    ERIC Educational Resources Information Center

    Callaghan, Glenn M.; Follette, William C.

    2008-01-01

    The Functional Analytic Psychotherapy Rating Scale (FAPRS) is a behavioral coding system designed to capture those essential client and therapist behaviors that occur during Functional Analytic Psychotherapy (FAP). The FAPRS manual presents the purpose and rules for documenting essential aspects of FAP. The FAPRS codes are exclusive and exhaustive…

  17. Experiences with semiautomatic aerotriangulation on digital photogrammetric stations

    NASA Astrophysics Data System (ADS)

    Kersten, Thomas P.; Stallmann, Dirk

    1995-12-01

    With the development of higher-resolution scanners, faster image-handling capabilities, and higher-resolution screens, digital photogrammetric workstations promise to rival conventional analytical plotters in functionality, i.e. in the degree of automation in data capture and processing, and in accuracy. The availability of high-quality digital image data and inexpensive, high-capacity fast mass storage offers the capability to perform accurate semi-automatic or automatic triangulation of digital aerial photo blocks on digital photogrammetric workstations instead of analytical plotters. In this paper, we present our investigations and results on two photogrammetric triangulation blocks, the OEEPE (European Organisation for Experimental Photogrammetric Research) test block (scale 1:4'000) and a Swiss test block (scale 1:12'000), using digitized images. Twenty-eight images of the OEEPE test block were scanned on the Zeiss/Intergraph PS1, and the digital images were delivered with resolutions of 15 μm and 30 μm, while 20 images of the Swiss test block were scanned on the desktop publishing scanner Agfa Horizon with a resolution of 42 μm and on the PS1 at 15 μm. Measurements in the digital images were performed on the commercial Digital Photogrammetric Station Leica/Helava DPW770 and with basic hardware and software components of the Digital Photogrammetric Station DIPS II, an experimental system of the Institute of Geodesy and Photogrammetry, ETH Zurich. As a reference, the analog images of both photogrammetric test blocks were measured on analytical plotters. On DIPS II, measurements of fiducial marks and of signalized and natural tie points were performed by least-squares template and image matching, while on the DPW770 all points were measured by the cross-correlation technique. The observations were adjusted in a self-calibrating bundle adjustment. The comparisons between these results and the experiences with the functionality of the commercial and the experimental system are presented.
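The cross-correlation point measurement mentioned for the DPW770 can be illustrated with a one-dimensional normalized cross-correlation; the template and signal below are hypothetical, and real image matching works in two dimensions over pixel patches.

```python
import math

def ncc(template, window):
    """Normalized cross-correlation of two equal-length signals."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    st = math.sqrt(sum((t - mt) ** 2 for t in template))
    sw = math.sqrt(sum((w - mw) ** 2 for w in window))
    return num / (st * sw)

def best_match(template, signal):
    """Slide the template over the signal; return the offset with highest NCC."""
    n = len(template)
    scores = [ncc(template, signal[i:i + n]) for i in range(len(signal) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [0, 1, 3, 1, 0]           # hypothetical point signature
signal = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
print(best_match(template, signal))  # prints 3
```

Least-squares matching, used on DIPS II, refines such an integer-offset match to sub-pixel precision by additionally estimating geometric and radiometric distortions.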

  18. Decision analytic models for Alzheimer's disease: state of the art and future directions.

    PubMed

    Cohen, Joshua T; Neumann, Peter J

    2008-05-01

    Decision analytic policy models for Alzheimer's disease (AD) enable researchers and policy makers to investigate questions about the costs and benefits of a wide range of existing and potential screening, testing, and treatment strategies. Such models permit analysts to compare existing alternatives, explore hypothetical scenarios, and test the strength of underlying assumptions in an explicit, quantitative, and systematic way. Decision analytic models can best be viewed as complementing clinical trials both by filling knowledge gaps not readily addressed by empirical research and by extrapolating beyond the surrogate markers recorded in a trial. We identified and critiqued 13 distinct AD decision analytic policy models published since 1997. Although existing models provide useful insights, they also have a variety of limitations. (1) They generally characterize disease progression in terms of cognitive function and do not account for other distinguishing features, such as behavioral symptoms, functional performance, and the emotional well-being of AD patients and caregivers. (2) Many describe disease progression in terms of a limited number of discrete states, thus constraining the level of detail that can be used to characterize both changes in patient status and the relationships between disease progression and other factors, such as residential status, that influence outcomes of interest. (3) They have focused almost exclusively on evaluating drug treatments, thus neglecting other disease management strategies and combinations of pharmacologic and nonpharmacologic interventions. Future AD models should facilitate more realistic and compelling evaluations of various interventions to address the disease. An improved model will allow decision makers to better characterize the disease, to better assess the costs and benefits of a wide range of potential interventions, and to better evaluate the incremental costs and benefits of specific interventions used in conjunction with other disease management strategies.

  19. dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.

    2017-01-01

    This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely the 'Hannover Polynomial' (or "X-expansion") function and the 'Tang-Toennies' and Scoles-Aziz 'HFD' exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling or ²Σ splitting radial strength functions, for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
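As a minimal illustration of one of the potential forms named above, the sketch below evaluates an EMO-type function with a constant exponent coefficient (dPotFit's actual EMO lets the coefficient vary with a reduced radial variable; the parameter values here are hypothetical, not fitted constants).

```python
import math

def emo_potential(r, De, re, beta):
    """Expanded Morse Oscillator with a constant exponent coefficient beta.
    With constant beta the EMO reduces to an ordinary Morse function:
    V(r) = De * (1 - exp(-beta * (r - re)))**2."""
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2

# Hypothetical parameters, roughly in cm^-1 and Angstrom
De, re, beta = 40000.0, 1.2, 2.5
print(emo_potential(re, De, re, beta))  # prints 0.0 (zero energy at the minimum)
```

The function rises to the dissociation energy De as r grows, which is the behavior the fitted analytic forms must reproduce to define all observed levels.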

  20. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

    Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillator block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous, in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitters' concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
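The cosine-oscillator block described above can be sketched directly: a noise-free SBF-like output peaks at the memorized criterion time. The frequency band and criterion below are hypothetical, and the noise terms central to the paper's result are deliberately omitted.

```python
import math

def sbf_output(t, criterion, freqs):
    """Noise-free SBF-like output: similarity between the current phases of
    cosine oscillators and the phases memorized at the criterion time."""
    return sum(math.cos(2 * math.pi * f * (t - criterion)) for f in freqs) / len(freqs)

freqs = [5 + 0.1 * k for k in range(101)]  # hypothetical 5-15 Hz oscillator bank
criterion = 10.0                           # memorized duration, seconds
ts = [i / 100 for i in range(500, 1501)]   # scan 5 s .. 15 s
peak_t = max(ts, key=lambda t: sbf_output(t, criterion, freqs))
print(peak_t)  # prints 10.0
```

In this noise-free form the width of the peak is set by the frequency band alone, independent of the criterion; per the abstract, it is jitter in the memorized criterion that makes the width grow with the timed interval.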

  1. Basic emotion processing and the adolescent brain: Task demands, analytic approaches, and trajectories of changes.

    PubMed

    Del Piero, Larissa B; Saxbe, Darby E; Margolin, Gayla

    2016-06-01

    Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize the diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and non-linear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are made for evaluating gender and biological markers of development beyond chronological age. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Theory for the three-dimensional Mercedes-Benz model of water.

    PubMed

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A

    2009-11-21

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.
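The pair interaction described above (Lennard-Jones plus a Gaussian hydrogen-bonding term) can be sketched in its radial part. The orientation-dependent tetrahedral factor is set to 1 here, and all parameters are hypothetical reduced-unit values, not the paper's.

```python
import math

def lj(r, eps, sigma):
    """Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def hb_radial(r, eps_hb, r_hb, width):
    """Radial part of the hydrogen-bond term: a Gaussian well at the
    hydrogen-bond distance. The MB model multiplies this by a tetrahedral
    orientation factor, taken as 1 in this sketch."""
    return -eps_hb * math.exp(-((r - r_hb) ** 2) / (2.0 * width ** 2))

def pair_energy(r):
    # hypothetical reduced-unit parameters
    return lj(r, eps=0.1, sigma=0.7) + hb_radial(r, eps_hb=1.0, r_hb=1.0, width=0.085)

print(round(pair_energy(1.0), 3))  # prints -1.042 (deep well at the H-bond distance)
```

A Metropolis Monte Carlo loop over positions and orientations of such spheres, in the isothermal-isobaric ensemble, is what supplies the "right answer" the integral-equation theories are tested against.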

  3. Theory for the three-dimensional Mercedes-Benz model of water

    PubMed Central

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-01-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the “right answer,” we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim’s Ornstein–Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation. PMID:19929057

  4. Theory for the three-dimensional Mercedes-Benz model of water

    NASA Astrophysics Data System (ADS)

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-11-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.

  5. Basic emotion processing and the adolescent brain: Task demands, analytic approaches, and trajectories of changes

    PubMed Central

    Del Piero, Larissa B.; Saxbe, Darby E.; Margolin, Gayla

    2016-01-01

    Early neuroimaging studies suggested that adolescents show initial development in brain regions linked with emotional reactivity, but slower development in brain structures linked with emotion regulation. However, the increased sophistication of adolescent brain research has made this picture more complex. This review examines functional neuroimaging studies that test for differences in basic emotion processing (reactivity and regulation) between adolescents and either children or adults. We delineated different emotional processing demands across the experimental paradigms in the reviewed studies to synthesize the diverse results. The methods for assessing change (i.e., analytical approach) and cohort characteristics (e.g., age range) were also explored as potential factors influencing study results. Few unifying dimensions were found to successfully distill the results of the reviewed studies. However, this review highlights the potential impact of subtle methodological and analytic differences between studies, the need for standardized and theory-driven experimental paradigms, and the necessity of analytic approaches that can adequately test the trajectories of developmental change that have recently been proposed. Recommendations for future research highlight connectivity analyses and nonlinear developmental trajectories, which appear to be promising approaches for measuring change across adolescence. Recommendations are made for evaluating gender and biological markers of development beyond chronological age. PMID:27038840

  6. Prediction of the chromatographic retention of acid-base compounds in pH buffered methanol-water mobile phases in gradient mode by a simplified model.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2015-03-13

    Retention of ionizable analytes under gradient elution depends on the pH of the mobile phase, the pKa of the analyte and their evolution along the programmed gradient. In previous work, a model depending on two fitting parameters was recommended because of its very favorable relationship between accuracy and required experimental work. It was developed using acetonitrile as the organic modifier and involves pKa modeling by means of equations that take into account the acidic functional group of the compound (carboxylic acid, protonated amine, etc.). In this work, the two-parameter predicting model is tested and validated using methanol as the organic modifier of the mobile phase and several compounds of higher pharmaceutical relevance and structural complexity as testing analytes. The results have been quite good overall, showing that the predicting model is applicable to a wide variety of acid-base compounds using mobile phases prepared with acetonitrile or methanol. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Body fluid analysis: clinical utility and applicability of published studies to guide interpretation of today's laboratory testing in serous fluids.

    PubMed

    Block, Darci R; Algeciras-Schimnich, Alicia

    2013-01-01

    Requests for testing various analytes in serous fluids (e.g., pleural, peritoneal, pericardial effusions) are submitted daily to clinical laboratories. Testing of these fluids deviates from assay manufacturers' specifications, as most laboratory assays are optimized for testing blood or urine specimens. These requests add a burden to clinical laboratories, which need to validate assay performance characteristics in these fluids to exclude matrix interferences (given the different composition of body fluids) while maintaining regulatory compliance. Body fluid testing for a number of analytes has been reported in the literature; however, understanding the clinical utility of these analytes is critical because laboratories must address the analytic and clinical validation requirements, while educating clinicians on proper test utilization. In this article, we review the published data to evaluate the clinical utility of testing for numerous analytes in body fluid specimens. We also highlight the pre-analytic and analytic variables that need to be considered when reviewing published studies in body fluid testing. Finally, we provide guidance on how published studies might (or might not) guide interpretation of test results in today's clinical laboratories.

  8. An all-purpose metric for the exterior of any kind of rotating neutron star

    NASA Astrophysics Data System (ADS)

    Pappas, George; Apostolatos, Theocharis A.

    2013-03-01

    We have tested the appropriateness of the two-soliton analytic metric to describe the exterior of all types of neutron stars, no matter what their equation of state or rotation rate is. The particular analytic solution of the vacuum Einstein equations proved quite adjustable to mimic the metric functions of all numerically constructed neutron-star models that we used as a testbed. The neutron-star models covered a wide range of stiffness, with regard to the equation of state of their interior, and all rotation rates up to the maximum possible rotation rate allowed for each such star. Apart from the metric functions themselves, we have compared the radius of the innermost stable circular orbit R_ISCO, the orbital frequency Ω ≡ dφ/dt of circular geodesics, and their epicyclic frequencies Ω_ρ, Ω_z, as well as the change of the energy of circular orbits per logarithmic change of orbital frequency, ΔẼ. All these quantities, calculated by means of the two-soliton analytic metric, fitted with good accuracy the corresponding numerical ones, as in previous analogous comparisons (although previous attempts were restricted to neutron-star models with either high or low rotation rates). We believe that this particular analytic solution could be considered a faithful analytic representation of the gravitational field of any rotating neutron star, with such accuracy that one could explore the interior structure of a neutron star by using this space-time to interpret observations of astrophysical processes that take place around it.

  9. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, in which we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, where f × σ_R is a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic on the equivalence test for the means, both the Type I and Type II error rates may inflate. To resolve this issue, we develop an exact test-based method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
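A two-one-sided-tests version of the equivalence test with the plug-in margin f × S_R can be sketched as follows. This uses normal critical values for brevity (the paper's exact-based and constrained-Wald constructions differ), and the data and the choice f = 1.5 are hypothetical.

```python
import math
from statistics import NormalDist, mean, stdev

def tost_equivalence(test_lots, ref_lots, f=1.5, alpha=0.05):
    """Two one-sided tests for |mu_T - mu_R| < f * sigma_R, with the margin
    estimated by f * S_R (the plug-in practice whose error-rate inflation
    the paper addresses). Normal critical values stand in for t."""
    d = mean(test_lots) - mean(ref_lots)
    s_r = stdev(ref_lots)
    margin = f * s_r
    se = math.sqrt(stdev(test_lots) ** 2 / len(test_lots) + s_r ** 2 / len(ref_lots))
    z = NormalDist().inv_cdf(1 - alpha)
    # claim equivalence only if both one-sided tests reject
    return (d + margin) / se > z and (margin - d) / se > z

ref = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9]
test = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0]
print(tost_equivalence(test, ref))  # prints True
```

Because S_R is itself random, plugging it into the margin while using standard critical values is exactly what can inflate the error rates discussed in the abstract.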

  10. 42 CFR 493.17 - Test categorization.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...

  11. 42 CFR 493.17 - Test categorization.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...

  12. A Functional Analytic Approach to Group Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, Luc

    2009-01-01

    This article provides a particular view on the use of Functional Analytical Psychotherapy (FAP) in a group therapy format. This view is based on the author's experiences as a supervisor of Functional Analytical Psychotherapy Groups, including groups for women with depression and groups for chronic pain patients. The contexts in which this approach…

  13. Prevalidation in pharmaceutical analysis. Part I. Fundamentals and critical discussion.

    PubMed

    Grdinić, Vladimir; Vuković, Jadranka

    2004-05-28

    A complete prevalidation, as a basic prevalidation strategy for quality control and standardization of analytical procedures, was inaugurated. A fast and simple prevalidation methodology based on mathematical/statistical evaluation of a reduced number of experiments (N ≤ 24) was elaborated, and guidelines as well as algorithms were given in detail. This strategy has been produced for pharmaceutical applications and dedicated to the preliminary evaluation of analytical methods where a linear calibration model, which very often occurs in practice, could be the most appropriate to fit experimental data. The requirements presented in this paper should therefore help the analyst to design and perform the minimum number of prevalidation experiments needed to obtain all the required information to evaluate and demonstrate the reliability of the analytical procedure. The complete prevalidation process included characterization of analytical groups, checking of two limiting groups, testing of data homogeneity, establishment of analytical functions, recognition of outliers, evaluation of limiting values, and extraction of prevalidation parameters. Moreover, a system of diagnosis for each particular prevalidation step was suggested. As an illustrative example demonstrating the feasibility of the prevalidation methodology, a Vis-spectrophotometric procedure for the determination of tannins with Folin-Ciocalteu's phenol reagent was selected from among a great number of analytical procedures. The favourable metrological characteristics of this analytical procedure, as prevalidation figures of merit, confirmed prevalidation as a valuable concept in the preliminary evaluation of the quality of analytical procedures.
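The "establishment of analytical functions" step for a linear calibration model can be sketched with ordinary least squares; the concentration/absorbance values below are hypothetical, not the tannin data of the example.

```python
def fit_line(x, y):
    """Ordinary least-squares calibration line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical readings for five standards (well under the N <= 24 budget)
conc = [2.0, 4.0, 6.0, 8.0, 10.0]          # mg/L
absb = [0.101, 0.198, 0.302, 0.399, 0.501]  # absorbance
a, b = fit_line(conc, absb)
print(f"intercept={a:.4f} slope={b:.5f}")
```

In a prevalidation setting the residuals of such a fit feed the subsequent steps: outlier recognition, homogeneity testing, and the evaluation of limiting values.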

  14. "Analytical" vector-functions I

    NASA Astrophysics Data System (ADS)

    Todorov, Vladimir Todorov

    2017-12-01

    In this note we give a new (or different) approach to the investigation of analytical vector functions. More precisely, a notion of a power x^n, n ∈ ℕ⁺, of a vector x ∈ ℝ³ is introduced, which allows one to define an "analytical" function f : ℝ³ → ℝ³. Let furthermore f(ξ) = ∑_{n=0}^∞ a_n ξ^n be an analytical function of the real variable ξ. Here we replace the power ξ^n of the number ξ with the power of a vector x ∈ ℝ³ to obtain a vector "power series" f(x) = ∑_{n=0}^∞ a_n x^n. We investigate some properties of such vector series as well as some applications of this idea. Note that an "analytical" vector function does not depend on any basis, which may be useful in some problems in physics.

  15. An analytic modeling and system identification study of rotor/fuselage dynamics at hover

    NASA Technical Reports Server (NTRS)

    Hong, Steven W.; Curtiss, H. C., Jr.

    1993-01-01

    A combination of analytic modeling and system identification methods has been used to develop an improved dynamic model describing the response of articulated rotor helicopters to control inputs. A high-order linearized model of coupled rotor/body dynamics, including flap and lag degrees of freedom and inflow dynamics with literal coefficients, is compared to flight test data from single rotor helicopters in the near-hover trim condition. The identification problem was formulated using the maximum likelihood function in the time domain. The dynamic model with literal coefficients was used to generate the model states, and the model was parametrized in terms of physical constants of the aircraft rather than the stability derivatives, resulting in a significant reduction in the number of quantities to be identified. The likelihood function was optimized using the genetic algorithm approach. This method proved highly effective in producing an estimated model from flight test data which included coupled fuselage/rotor dynamics. Using this approach it has been shown that blade flexibility is a significant contributing factor to the discrepancies between theory and experiment shown in previous studies. Addition of flexible modes, properly incorporating the constraint due to the lag dampers, results in excellent agreement between flight test and theory, especially in the high frequency range.
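
    The identification scheme described above, a time-domain likelihood parametrized by physical constants and optimized with a genetic algorithm, can be sketched on a toy problem. Everything below (the single second-order mode, the parameter values, the GA settings) is hypothetical and only illustrates the shape of the method, not the actual rotor/fuselage model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the identification problem: recover two "physical
# constants" (natural frequency wn, damping ratio zeta) of a single
# second-order mode from a noisy impulse response.
t = np.linspace(0.0, 5.0, 200)

def response(wn, zeta):
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.exp(-zeta * wn * t) * np.sin(wd * t)

data = response(4.0, 0.15) + rng.normal(0.0, 0.02, t.size)

def log_likelihood(p):
    wn, zeta = p
    if not (0.1 < wn < 20.0 and 0.01 < zeta < 0.9):
        return -np.inf                      # outside physical bounds
    r = data - response(wn, zeta)
    return -0.5 * np.sum(r**2) / 0.02**2    # Gaussian errors, known sigma

# Minimal genetic algorithm: tournament selection, blend crossover,
# Gaussian mutation, and elitism (the best individual always survives).
pop = np.column_stack([rng.uniform(0.1, 20.0, 60), rng.uniform(0.01, 0.9, 60)])
for gen in range(80):
    fit = np.array([log_likelihood(p) for p in pop])
    elite = pop[np.argmax(fit)].copy()
    i, j = rng.integers(0, 60, (2, 60))
    parents = pop[np.where(fit[i] > fit[j], i, j)]     # tournament selection
    partners = parents[rng.permutation(60)]
    w = rng.uniform(0.0, 1.0, (60, 1))
    pop = w * parents + (1.0 - w) * partners           # blend crossover
    pop += rng.normal(0.0, [0.1, 0.01], (60, 2)) * (gen < 60)  # mutation
    pop[0] = elite
best = pop[np.argmax([log_likelihood(p) for p in pop])]
print(f"identified wn ~ {best[0]:.2f}, zeta ~ {best[1]:.2f}")
```

    The appeal of the approach, as in the study, is that the search works directly on a handful of physically meaningful constants rather than a long list of stability derivatives.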

  17. An analytical and experimental investigation of sandwich composites subjected to low-velocity impact

    NASA Astrophysics Data System (ADS)

    Anderson, Todd Alan

    1999-12-01

    This study involves an experimental and analytical investigation of low-velocity impact phenomenon in sandwich composite structures. The analytical solution of a three-dimensional finite-geometry multi-layer specially orthotropic panel subjected to static and transient transverse loading cases is presented. The governing equations of the static and dynamic formulations are derived from Reissner's functional and solved by enforcing the continuity of traction and displacement components between adjacent layers. For the dynamic loading case, the governing equations are solved by applying Fourier or Laplace transformation in time. Additionally, the static solution is extended to solve the contact problem between the sandwich laminate and a rigid sphere. An iterative method is employed to determine the sphere's unknown contact area and pressure distribution. A failure criterion is then applied to the sandwich laminate's stress and strain field to predict impact damage. The analytical accuracy of the present study is verified through comparisons with finite element models, other analyses, and through experimentation. Low-velocity impact tests were conducted to characterize the type and extent of the damage observed in a variety of sandwich configurations with graphite/epoxy face sheets and foam or honeycomb cores. Correlation of the residual indentation and cross-sectional views of the impacted specimens provides a criterion for the extent of damage. Quasi-static indentation tests are also performed and show excellent agreement when compared with the analytical predictions. Finally, piezoelectric polyvinylidene fluoride (PVF2) film sensors are found to be effective in detecting low-velocity impact.

  18. Space station structures and dynamics test program

    NASA Technical Reports Server (NTRS)

    Moore, Carleton J.; Townsend, John S.; Ivey, Edward W.

    1987-01-01

    The design, construction, and operation of a low-Earth orbit space station poses unique challenges for development and implementation of new technology. The technology arises from the special requirement that the station be built and constructed to function in a weightless environment, where static loads are minimal and secondary to system dynamics and control problems. One specific challenge confronting NASA is the development of a dynamics test program for: (1) defining space station design requirements, and (2) identifying the characterizing phenomena affecting the station's design and development. A general definition of the space station dynamic test program, as proposed by MSFC, forms the subject of this report. The test proposal is a comprehensive structural dynamics program to be undertaken in support of the space station. The test program will help to define the key issues and/or problems inherent to large space structure analysis, design, and testing. Development of a parametric data base and verification of the math models and analysis tools necessary for engineering support of the station's design, construction, and operation provide the impetus for the dynamics test program. The philosophy is to integrate dynamics into the design phase through extensive ground testing and analytical ground simulations of generic systems, prototype elements, and subassemblies. On-orbit testing of the station will also be used to define its capability.

  19. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  20. Development of a Refined Space Vehicle Rollout Forcing Function

    NASA Technical Reports Server (NTRS)

    James, George; Tucker, Jon-Michael; Valle, Gerard; Grady, Robert; Schliesing, John; Fahling, James; Emory, Benjamin; Armand, Sasan

    2016-01-01

    For several decades, American manned spaceflight vehicles and the associated launch platforms have been transported from final assembly to the launch pad via a pre-launch phase called rollout. The rollout environment is rich with forced harmonics and higher-order effects, which can be used for extracting structural dynamics information. To enable this utilization, processing tools are needed to move from measured and analytical data to dynamic metrics such as transfer functions, mode shapes, modal frequencies, and damping. This paper covers the range of systems and tests that are available to estimate rollout forcing functions for the Space Launch System (SLS). The specific information covered in this paper includes: the different definitions of rollout forcing functions; the operational and developmental data sets that are available; the suite of analytical processes that are currently in place or in development; and the plans and future work underway to solve two immediate problems related to rollout forcing functions. Problem 1 involves estimating enforced accelerations to drive finite element models for developing design requirements for the SLS class of launch vehicles. Problem 2 involves processing rollout measured data in near real time to understand structural dynamics properties of a specific vehicle and the class to which it belongs.

  1. Doubling immunochemistry laboratory testing efficiency with the cobas e 801 module while maintaining consistency in analytical performance.

    PubMed

    Findeisen, P; Zahn, I; Fiedler, G M; Leichtle, A B; Wang, S; Soria, G; Johnson, P; Henzell, J; Hegel, J K; Bendavid, C; Collet, N; McGovern, M; Klopprogge, K

    2018-06-04

    The new immunochemistry cobas e 801 module (Roche Diagnostics) was developed to meet increasing demands on routine laboratories to further improve testing efficiency, while maintaining high quality and reliable data. During a non-interventional multicenter evaluation study, the overall performance, functionality and reliability of the new module was investigated under routine-like conditions. It was tested as a dedicated immunochemistry system at four sites and as a consolidator combined with clinical chemistry at three sites. We report on testing efficiency and analytical performance of the new module. Evaluation of sample workloads with site-specific routine request patterns demonstrated increased speed and almost doubled throughput (maximal 300 tests per h), thus revealing that one cobas e 801 module can replace two cobas e 602 modules while saving up to 44% floor space. Result stability was demonstrated by QC analysis per assay throughout the study. Precision testing over 21 days yielded excellent results within and between labs, and method comparison against routine results from the cobas e 602 module showed high consistency for all assays under study. In a practicability assessment related to performance and handling, 99% of graded features met (44%) or even exceeded (55%) laboratory expectations, with enhanced reagent management and loading during operation being highlighted. By nearly doubling immunochemistry testing efficiency on the same footprint as a cobas e 602 module, the new module has a great potential to further consolidate and enhance laboratory testing while maintaining high quality analytical performance with Roche platforms. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  2. An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance.

    PubMed

    Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A

    2016-01-01

    Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
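
    The penetrance construction described above can be illustrated with a small Monte Carlo sketch. All numbers here (the trait correlation, the genotype mean shifts, the top-25% threshold) are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated quantitative traits whose common mean shifts with the
# risk-locus genotype; "affected" means both traits fall in the top 25%
# of the standardized population distribution.  Numbers are invented.
rho = 0.4
cov = [[1.0, rho], [rho, 1.0]]
z75 = 0.67449            # Phi^-1(0.75): top-25% threshold for a N(0,1) trait

def penetrance(mean_shift, n=200_000):
    """P(both traits exceed the threshold | genotype), by Monte Carlo."""
    draws = rng.multivariate_normal([mean_shift, mean_shift], cov, size=n)
    return np.mean((draws[:, 0] > z75) & (draws[:, 1] > z75))

for geno, shift in [("AA", 0.0), ("Aa", 0.3), ("aa", 0.6)]:
    print(f"{geno}: penetrance ~ {penetrance(shift):.3f}")
```

    The genotype-specific penetrances computed this way are exactly the inputs the analytic power/MSSN formulas for the genotype test and the LTT consume.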

  3. Horizon-absorbed energy flux in circularized, nonspinning black-hole binaries, and its effective-one-body representation

    NASA Astrophysics Data System (ADS)

    Nagar, Alessandro; Akcay, Sarp

    2012-02-01

    We propose, within the effective-one-body approach, a new, resummed analytical representation of the gravitational-wave energy flux absorbed by a system of two circularized (nonspinning) black holes. This expression is well-behaved in the strong-field, fast-motion regime, notably up to the effective-one-body-defined last unstable orbit. Building conceptually upon the procedure adopted to resum the multipolar asymptotic energy flux, we introduce a multiplicative decomposition of the multipolar absorbed flux made of three factors: (i) the leading-order contribution, (ii) an “effective source” and (iii) a new residual amplitude correction (ρ̃^H_{ℓm})^{2ℓ}. In the test-mass limit, we use a frequency-domain perturbative approach to accurately compute numerically the horizon-absorbed fluxes along a sequence of stable and unstable circular orbits, and we extract from them the functions ρ̃^H_{ℓm}. These quantities are then fitted via rational functions. The resulting analytically represented test-mass knowledge is then suitably hybridized with lower-order analytical information that is valid for any mass ratio. This yields a resummed representation of the absorbed flux for a generic, circularized, nonspinning black-hole binary. Our result adds new information to the state-of-the-art calculation of the absorbed flux at fractional 5 post-Newtonian order [S. Taylor and E. Poisson, Phys. Rev. D 78, 084016 (2008)], which is recovered in the weak-field limit approximation by construction.
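
    The final step mentioned above, fitting the extracted ρ̃ functions with rational functions, can be sketched generically. The data and the rational form below are invented stand-ins (the paper's actual fits are to numerically computed horizon fluxes); the point is the standard linearized least-squares trick for rational fitting.

```python
import numpy as np

# Synthetic stand-in data on an interval (think of the argument as an
# inverse-radius-like variable); the target happens to be exactly rational,
# so the fit should recover it essentially to machine precision.
x = np.linspace(0.05, 0.3, 40)
y = (1.0 + 2.0 * x) / (1.0 - 1.5 * x)

# Fit y ~ (p0 + p1 x + p2 x^2) / (1 + q1 x) by linearizing:
# y (1 + q1 x) = p0 + p1 x + p2 x^2, i.e. solve for [p0, p1, p2, q1] in
# y = p0 + p1 x + p2 x^2 - q1 (x y).
A = np.column_stack([np.ones_like(x), x, x**2, -x * y])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
p0, p1, p2, q1 = coef
fit = (p0 + p1 * x + p2 * x**2) / (1.0 + q1 * x)
print("max abs fit error:", np.max(np.abs(fit - y)))
```

    For noisy data one would typically iterate this linearization or use a nonlinear solver, but the structure of the fit is the same.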

  4. Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values

    NASA Technical Reports Server (NTRS)

    Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.

    1983-01-01

    The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical test results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.
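
    The report defines its own correlation statistic; as a generic illustration of comparing analytical mode shapes with test mode shapes, one widely used measure (not necessarily the one in this report) is the Modal Assurance Criterion (MAC).

```python
import numpy as np

def mac(phi_a, phi_t):
    """Modal Assurance Criterion between two real mode-shape vectors."""
    return (phi_a @ phi_t) ** 2 / ((phi_a @ phi_a) * (phi_t @ phi_t))

# Two invented 3-DOF mode shapes: analytical modes as columns, and "test"
# modes that are small perturbations of them (all values are illustrative).
phi_analysis = np.array([[1.0, 1.0],
                         [2.0, -1.0],
                         [3.0, 0.5]])
phi_test = phi_analysis + 0.05 * np.array([[1.0, -1.0],
                                           [0.0, 2.0],
                                           [-1.0, 0.0]])

table = np.array([[mac(phi_analysis[:, i], phi_test[:, j])
                   for j in range(2)] for i in range(2)])
print(np.round(table, 3))   # near-diagonal table => modes pair off cleanly
```

    A strongly diagonal MAC table is the usual first screen before any finer statistical comparison of paired modes.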

  5. Flat-plate solar array project. Volume 6: Engineering sciences and reliability

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.; Smokler, M. I.

    1986-01-01

    The Flat-Plate Solar Array (FSA) Project activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large scale terrestrial photovoltaic systems applications are reported. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet the requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) performance of extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. A summary of the approach and technical outcome of these activities is provided along with a complete bibliography of the published documentation covering the detailed accomplishments and technologies developed.

  6. Morse oscillator propagator in the high temperature limit II: Quantum dynamics and spectroscopy

    NASA Astrophysics Data System (ADS)

    Toutounji, Mohamad

    2018-04-01

    This paper is a continuation of Paper I (Toutounji, 2017), whose motivation was to test the applicability of the Morse oscillator propagator whose analytical form was derived by Duru (1983). The Morse oscillator propagator was reported (Duru, 1983) in a triple-integral form of a functional of the modified Bessel function of the first kind, which considerably limits its applicability. For this reason, I was prompted to find a regime under which the Morse oscillator propagator may be simplified and hence expressed in closed form. This was accomplished in Paper I. Because the Morse oscillator is of central importance and widely used in modelling vibrations, the applicability of its propagator is extended here to quantum dynamics and spectroscopy, as reported in this paper using the off-diagonal propagator of the Morse oscillator, whose analytical form is derived.

  7. 42 CFR 493.859 - Standard; ABO group and D (Rho) typing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...

  8. 42 CFR 493.859 - Standard; ABO group and D (Rho) typing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...

  9. 42 CFR 493.859 - Standard; ABO group and D (Rho) typing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...

  10. Aspirating Seal Development: Analytical Modeling and Seal Test Rig

    NASA Technical Reports Server (NTRS)

    Bagepalli, Bharat

    1996-01-01

    This effort is to develop large diameter (22 - 36 inch) aspirating seals for application in aircraft engines. Stein Seal Co. will fabricate the 36-inch seal(s) for testing. GE's task is to establish a thorough understanding of the operation of aspirating seals through analytical modeling and full-scale testing. The primary objectives of this project are: to develop analytical models of the aspirating seal system; to upgrade, using GE's funds, GE's 50-inch seal test rig for testing the aspirating seal (back-to-back with a corresponding brush seal); to test the aspirating seal(s) for seal closure, tracking and maneuver transients (tilt) at operating pressures and temperatures; and to validate the analytical model. The objective of the analytical model development is to evaluate the transient and steady-state dynamic performance characteristics of the seal designed by Stein. The transient dynamic model uses a multi-body system approach: the stator, seal face and the rotor are treated as individual bodies with relative degrees of freedom. Initially, the thirty-six springs are represented as a single spring that holds the aspirating face open. Stops (contact elements) are provided between the stator and the seal (to react the preload in the fully-open position) and between the rotor face and the seal face (to detect rub). The secondary seal is considered as part of the stator. The film's load, damping and stiffness characteristics as functions of pressure and clearance are evaluated using a separate (NASA) code, GFACE. Initially, laminar flow theory is used. Special two-dimensional interpolation routines are written to establish exact film load and damping values at each integration time step. Additionally, other user routines are written to read in actual pressure, rpm, stator-growth and rotor-growth data and, later, to transfer these as appropriate loads/motions in the system-dynamic model.
The transient dynamic model evaluates the various motions, clearances and forces as the seals are subjected to different aircraft maneuvers: Windmilling restart; start-ground idle; ground idle-takeoff; takeoff-burst chop, etc. Results of this model show that the seal closes appropriately and does not ram into the rotor for all of the conditions analyzed. The rig upgrade design for testing Aspirating Seals has been completed. Long lead-time items (forgings, etc.) have been ordered.
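
    The two-dimensional interpolation step mentioned above (film load and damping tabulated versus pressure and clearance, looked up at each integration time step) can be sketched as a plain bilinear table lookup. The grid and the film-load surface below are made-up placeholders, not GFACE output.

```python
import numpy as np

# Hypothetical (pressure, clearance) grid and a toy film-load surface.
pressures = np.linspace(50.0, 300.0, 6)        # psi, invented
clearances = np.linspace(0.5, 3.0, 6)          # mils, invented
P, C = np.meshgrid(pressures, clearances, indexing="ij")
load_table = 1e3 * P / C**2                    # toy film-load surface

def bilinear(p, c):
    """Bilinear interpolation of the tabulated film load at (p, c)."""
    i = np.clip(np.searchsorted(pressures, p) - 1, 0, len(pressures) - 2)
    j = np.clip(np.searchsorted(clearances, c) - 1, 0, len(clearances) - 2)
    tp = (p - pressures[i]) / (pressures[i + 1] - pressures[i])
    tc = (c - clearances[j]) / (clearances[j + 1] - clearances[j])
    f = load_table
    return ((1 - tp) * (1 - tc) * f[i, j] + tp * (1 - tc) * f[i + 1, j]
            + (1 - tp) * tc * f[i, j + 1] + tp * tc * f[i + 1, j + 1])

print(bilinear(125.0, 1.2))   # film load between grid points
```

    In the simulation described above, a lookup like this would be evaluated at every integration step to feed the film force into the multi-body equations of motion.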

  11. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
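
    The analytic Fourier-transform step is easy to verify in one dimension: a normalized position-space Gaussian basis function transforms to another Gaussian in momentum space, and the normalization check the authors mention follows from Parseval's theorem. This sketch uses a single 1D primitive (the actual code handles full 3D contracted Gaussian basis sets).

```python
import numpy as np

# Transform pair for a normalized 1D Gaussian primitive (exponent a):
#   g(x)   = (2a/pi)^(1/4) exp(-a x^2)
#   phi(p) = (2 pi a)^(-1/4) exp(-p^2 / (4a))
a = 0.7
x = np.linspace(-12.0, 12.0, 4001)          # shared position/momentum grid
dx = x[1] - x[0]

g = (2 * a / np.pi) ** 0.25 * np.exp(-a * x**2)             # position space
phi = (2 * np.pi * a) ** (-0.25) * np.exp(-x**2 / (4 * a))  # momentum space

# Normalization must survive the transform (Parseval) -- the same kind of
# normalization check used to validate the computed momentum densities.
norm_x = np.sum(g**2) * dx
norm_p = np.sum(phi**2) * dx

# Second moments give the textbook minimum-uncertainty product 1/4.
x2 = np.sum(x**2 * g**2) * dx      # <x^2> = 1/(4a)
p2 = np.sum(x**2 * phi**2) * dx    # <p^2> = a
print(norm_x, norm_p, x2 * p2)
```

    Moments of p computed this way are the 1D analogues of the momentum-density moments the package evaluates on its 3D grid.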

  12. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    PubMed Central

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2015-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294
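
    The residualized-change analysis described above reduces, in its simplest form, to regressing the later score on the earlier score plus the predictor of interest. The data below are simulated with made-up effect sizes purely to show the mechanics, not to reproduce the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated longitudinal data; all effect sizes are invented for this sketch
# (0.6 autoregressive stability, 0.25 parenting effect).
n = 1_000
ef36 = rng.normal(0.0, 1.0, n)            # executive function at 36 months
parenting = rng.normal(0.0, 1.0, n)       # parenting quality composite
ef60 = 0.6 * ef36 + 0.25 * parenting + rng.normal(0.0, 0.7, n)

# Residualized change: regress the 60-month score on the 36-month score plus
# parenting, so the parenting coefficient indexes change net of baseline.
X = np.column_stack([np.ones(n), ef36, parenting])
beta, *_ = np.linalg.lstsq(X, ef60, rcond=None)
print(f"baseline stability = {beta[1]:.2f}, parenting -> change = {beta[2]:.2f}")
```

    The latent change score models in the article generalize this idea by modeling measurement error and cross-lagged (bidirectional) paths explicitly.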

  13. Preoperative vestibular assessment protocol of cochlear implant surgery: an analytical descriptive study.

    PubMed

    Bittar, Roseli Saraiva Moreira; Sato, Eduardo Setsuo; Ribeiro, Douglas Jósimo Silva; Tsuji, Robinson Koji

    Cochlear implants are undeniably an effective method for the recovery of hearing function in patients with hearing loss. To describe the preoperative vestibular assessment protocol in subjects who will be submitted to cochlear implants. Our institutional protocol provides the vestibular diagnosis through six simple tests: the Romberg and Fukuda tests, assessment for spontaneous nystagmus, the Head Impulse Test, evaluation for head-shaking nystagmus, and the caloric test. Twenty-one patients were evaluated, with a mean age of 42.75±14.38 years. Only 28% of the sample had all normal test results. The presence of asymmetric vestibular information was documented through the caloric test in 32% of the sample, and spontaneous nystagmus was an important clue for the diagnosis. Bilateral vestibular areflexia was present in four subjects, unilateral areflexia in three and bilateral hyporeflexia in two. The Head Impulse Test was a significant indicator for the diagnosis of areflexia in the tested ear (p=0.0001). The sensitized Romberg test using a foam pad was able to diagnose severe vestibular function impairment (p=0.003). The six clinical tests were able to identify the presence or absence of vestibular function and function asymmetry between the ears of the same individual. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  14. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are separability of the velocity components with respect to the spatial variables and neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed in additional LES by varying the three relevant lengths: the canopy height (h), the canopy length, and the adjustment length (Lc). Even though the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable in the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.
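
    For contrast with the two-dimensional adjustment-region model, the one-dimensional in-canopy profile of the Massman type referred to above is commonly written as an exponential attenuation of the canopy-top wind. The parameter values below are invented for illustration.

```python
import numpy as np

# Exponential in-canopy wind profile (hypothetical values, not the paper's):
#   u(z) = u_h * exp(beta * (z/h - 1)),  0 <= z <= h
h = 20.0      # canopy height [m], invented
u_h = 3.0     # wind speed at canopy top [m/s], invented
beta = 2.5    # attenuation coefficient set by drag and PAD, invented

z = np.linspace(0.0, h, 5)
u = u_h * np.exp(beta * (z / h - 1.0))
for zi, ui in zip(z, u):
    print(f"z = {zi:5.1f} m  u = {ui:.2f} m/s")
```

    A profile of this form matches the canopy-top wind exactly and decays monotonically toward the ground, which is why it serves as the interior solution that the adjustment-region model is blended with.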

  16. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    PubMed

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. A very common procedure, the oral glucose tolerance test, was chosen to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability, and help laboratory personnel in decision making. The aim of this research was to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire with a scoring system to examine the pre-pre-analytical errors. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. The appropriateness-of-test-result indicator (QI-1) showed the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely, on a yearly basis, to identify errors, take corrective action, and facilitate the gradual introduction of such evaluation into routine practice.

  17. Modeling Sound Propagation Through Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.
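    The banded structure noted above is what makes the combined spectral/finite-difference approach efficient. As an illustrative sketch (not the paper's solver), the following pure-Python Thomas algorithm solves a tridiagonal system, the simplest banded case, in O(n) time; the example problem, a 1-D Poisson equation with a known exact solution, is a hypothetical stand-in for the radial discretization:

```python
import math

def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n): sub/sup have length n-1, diag/rhs length n."""
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Stand-in radial problem: u'' = -pi^2 sin(pi r) on (0, 1), u(0) = u(1) = 0,
# discretized with second-order central differences; exact solution u = sin(pi r).
n = 99
h = 1.0 / (n + 1)
nodes = [(i + 1) * h for i in range(n)]
diag = [-2.0 / h**2] * n
off = [1.0 / h**2] * (n - 1)
rhs = [-math.pi**2 * math.sin(math.pi * r) for r in nodes]
u = thomas_solve(off, diag, off, rhs)
err = max(abs(ui - math.sin(math.pi * r)) for ui, r in zip(u, nodes))
```

    A general sparse solver, as used in the paper, handles wider bands the same way; the point is that the cost grows linearly with the number of unknowns rather than cubically.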

  18. 40 CFR 1066.101 - Overview.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...

  19. 40 CFR 1066.101 - Overview.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...

  20. Analytical solutions for efficient interpretation of single-well push-pull tracer tests

    EPA Science Inventory

    Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations descr...

  1. Edge softening of the Shuttle TPS strain isolation pad. [Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Ransone, P. O.; Rummler, D. R.

    1982-01-01

Tensile tests and an analytical investigation were performed to characterize the edge softening behavior of the strain isolation pad (SIP) between the Orbiter skin and thermal protection system. The tensile tests were carried out with varying sizes of disk-shaped specimens bonded between aluminum disks. The specimens' strength and stiffness were determined as a function of specimen size, and an analytical model of the microstructural stress-strain characteristics was developed. Strength and stiffness were found to decrease near the free edges because through-the-thickness fibers located there were not anchored. No size dependence at maximum load was observed in specimens between 0.75 and 4.0 in. thick. Coupling between in-plane and out-of-plane deformation was detected. The model gave accurate predictions of the tensile behavior of the SIP as a function of distance to a free edge.

  2. Stability and control of the Gossamer human powered aircraft by analysis and flight test

    NASA Technical Reports Server (NTRS)

    Jex, H. R.; Mitchell, D. G.

    1982-01-01

The slow flight speed, very light wing loading, and neutral stability of the Gossamer Condor and the Gossamer Albatross emphasized apparent-mass aerodynamic effects and unusual modes of motion response. These are analyzed, approximated, and discussed, and the resulting transfer functions and dynamic properties are summarized and compared. To verify these analytical models, flight tests were conducted with an electrically powered Gossamer Albatross II. Sensors were installed and their outputs were telemetered to recorders on the ground. Frequency sweeps of the various controls were made and the data were reduced to frequency-domain measures. Results are given for the response of: pitch rate, airspeed, and normal acceleration from canard-elevator deflection; roll rate and yaw rate from canard-rudder tilt; and roll rate and yaw rate from wing warp. The reliable data are compared with the analytical predictions.

  3. Analytic Methods Used in Quality Control in a Compounding Pharmacy.

    PubMed

    Allen, Loyd V

    2017-01-01

Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy.

  4. An exactly solvable coarse-grained model for species diversity

    NASA Astrophysics Data System (ADS)

    Suweis, Samir; Rinaldo, Andrea; Maritan, Amos

    2012-07-01

    We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.

  5. Blade Tip Rubbing Stress Prediction

    NASA Technical Reports Server (NTRS)

    Davis, Gary A.; Clough, Ray C.

    1991-01-01

An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. The model used a linearized approach after a parametric study found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, the normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to specify some of these parameters exactly, values which bracket their likely ranges were entered into the model. The form of the forcing function was another variable impossible to specify precisely, but a half-sine wave with a period equal to the duration of the rub was taken as a realistic assumption. The analytical model predicted resonances between harmonics of the forcing function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. To confirm the analytical predictions, a full-scale, full-speed test was conducted on the space shuttle main engine high pressure fuel turbopump Whirligig tester at speeds between 28,000 and 33,000 RPM.
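    The harmonic content of such a half-sine rub pulse is easy to examine numerically. The sketch below (an illustration, not the original analysis; the pulse duration and revolution period are made-up dimensionless values) computes complex Fourier coefficients of the pulse by trapezoidal quadrature and checks the mean value against the closed form 2T/(πP):

```python
import cmath
import math

def half_sine_pulse(t, duration):
    """Half-sine rub pulse: sin(pi*t/T) during the rub, zero for the rest of the revolution."""
    return math.sin(math.pi * t / duration) if 0.0 <= t <= duration else 0.0

def fourier_coeff(f, period, k, n=20000):
    """k-th complex Fourier coefficient of f over one period (trapezoidal rule)."""
    h = period / n
    total = 0.5 * (f(0.0) + f(period))  # endpoint weights; the exponential is 1 there
    total += sum(f(i * h) * cmath.exp(-2j * math.pi * k * i * h / period)
                 for i in range(1, n))
    return total * h / period

T, P = 0.1, 1.0  # hypothetical rub duration and revolution period
c0 = fourier_coeff(lambda t: half_sine_pulse(t, T), P, 0)
c3 = fourier_coeff(lambda t: half_sine_pulse(t, T), P, 3)
```

    Comparing the magnitudes |c_k| against the blade's natural frequencies is the kind of resonance screening the abstract describes.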

  6. A Cameron-Storvick Theorem for Analytic Feynman Integrals on Product Abstract Wiener Space and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jae Gil, E-mail: jgchoi@dankook.ac.kr; Chang, Seung Jun, E-mail: sejchang@dankook.ac.kr

In this paper we derive a Cameron-Storvick theorem for the analytic Feynman integral of functionals on the product abstract Wiener space B². We then apply our result to obtain an evaluation formula for the analytic Feynman integral of unbounded functionals on B². We also present meaningful examples involving functionals which arise naturally in quantum mechanics.

  7. Functionalized magnetic nanoparticle analyte sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Yantasee, Wassana; Warner, Marvin G; Warner, Cynthia L

    2014-03-25

A method and system for simply and efficiently determining quantities of a preselected material in a particular solution. At least one superparamagnetic nanoparticle, having a specified functionalized organic material connected thereto, is placed into a sample solution, where preselected analytes attach to the functionalized organic groups. The superparamagnetic nanoparticles are then collected at a collection site and analyzed for the presence of a particular analyte.

  8. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  9. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  10. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  11. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  12. 10 CFR 26.168 - Blind performance testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...

  13. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
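    The difference between analytic and perturbation weighting functions can be illustrated on a toy absorption-only radiance model (a hypothetical stand-in; SASKTRAN-HR's full multiple-scattering treatment is far more involved). For Beer–Lambert extinction, the derivative of radiance with respect to one cell's number density has a closed form, which a finite-difference perturbation should reproduce at one extra radiance evaluation per cell:

```python
import math

def radiance(dens, xsec, ds, i0=1.0):
    """Toy Beer-Lambert radiance through homogeneous cells of path length ds."""
    tau = sum(x * n * ds for x, n in zip(xsec, dens))
    return i0 * math.exp(-tau)

def weighting_analytic(dens, xsec, ds, j):
    """Analytic weighting function: dI/dn_j = -sigma_j * ds * I."""
    return -xsec[j] * ds * radiance(dens, xsec, ds)

def weighting_perturb(dens, xsec, ds, j, eps=1e-6):
    """Numerical perturbation: bump one cell's density and re-evaluate the radiance."""
    bumped = list(dens)
    bumped[j] += eps
    return (radiance(bumped, xsec, ds) - radiance(dens, xsec, ds)) / eps

dens = [1.0, 0.8, 0.5]   # made-up number densities per cell
xsec = [0.12, 0.05, 0.30]  # made-up cross sections
wa = weighting_analytic(dens, xsec, 1.0, j=2)
wp = weighting_perturb(dens, xsec, 1.0, j=2)
```

    In a two-dimensional retrieval grid the perturbation approach repeats the full radiative transfer calculation once per cell, which is exactly the cost the analytic method avoids.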

  14. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower, yet no estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error with the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
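    The actual NRR computation involves octave-band levels and is more involved than can be shown here; as a toy illustration of the propagation-of-errors idea, consider a simplified "mean minus two standard deviations" rating computed from normally distributed attenuations. Propagation of errors gives an analytic standard error that can be checked against Monte Carlo simulation, mirroring the comparison in the abstract:

```python
import math
import random
import statistics

def rating_se_analytic(sigma, n):
    """Propagation of errors for R = mean - 2*s with n normal samples:
    Var(mean) = sigma^2/n, Var(s) ~ sigma^2/(2(n-1)), mean and s independent."""
    return math.sqrt(sigma**2 / n + 4.0 * sigma**2 / (2.0 * (n - 1)))

def rating_se_monte_carlo(mu, sigma, n, trials=20000, seed=42):
    """Standard deviation of the rating across simulated subject groups."""
    rng = random.Random(seed)
    ratings = []
    for _ in range(trials):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        ratings.append(statistics.mean(x) - 2.0 * statistics.stdev(x))
    return statistics.stdev(ratings)

# Made-up subject statistics: 20 subjects, 25 dB mean attenuation, 4 dB spread
se_a = rating_se_analytic(sigma=4.0, n=20)
se_mc = rating_se_monte_carlo(mu=25.0, sigma=4.0, n=20)
```

    The two estimates should agree to within a few percent for normal data, which is the linear relationship the paper reports for its full expression.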

  15. Analysis of user equilibrium for staggered shifts in a single-entry traffic corridor with no late arrivals

    NASA Astrophysics Data System (ADS)

    Li, Chuan-Yao; Huang, Hai-Jun; Tang, Tie-Qiao

    2017-05-01

In this paper, we investigate the effects of staggered shifts on the user equilibrium (UE) state in a single-entry traffic corridor with no late arrivals, from both analytical and numerical perspectives. The LWR (Lighthill-Whitham-Richards) model and the Greenshields velocity-density function are used to describe the dynamic properties of traffic flow. Propositions for the properties of flow patterns in UE, and quasi-analytic solutions for three possible situations in UE, are deduced. Numerical tests are carried out to verify the analytical results: the three-dimensional evolution diagram of traffic flow illustrates that shock and rarefaction waves exist in UE, and the space-time diagram indicates that the UE solutions satisfy the propagation properties of traffic flow. In addition, the cost curves show that the UE solutions satisfy the UE trip-timing condition.
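    The Greenshields relation and the shock condition used with the LWR model can be sketched concretely. The snippet below (an illustration with made-up free-flow speed and jam density, not the paper's calibrated values) evaluates the flux q = ρv and the Rankine–Hugoniot shock speed between two density states:

```python
def greenshields_speed(rho, v_free=30.0, rho_jam=0.2):
    """Greenshields: speed falls linearly from v_free at rho = 0 to 0 at jam density."""
    return v_free * (1.0 - rho / rho_jam)

def flux(rho, v_free=30.0, rho_jam=0.2):
    """LWR flux q = rho * v(rho); maximal at rho = rho_jam / 2."""
    return rho * greenshields_speed(rho, v_free, rho_jam)

def shock_speed(rho_left, rho_right, v_free=30.0, rho_jam=0.2):
    """Rankine-Hugoniot jump condition for the LWR conservation law."""
    return (flux(rho_right, v_free, rho_jam)
            - flux(rho_left, v_free, rho_jam)) / (rho_right - rho_left)
```

    For the Greenshields flux the jump condition simplifies to v_free * (1 - (rho_left + rho_right) / rho_jam), which is a convenient cross-check on the numbers.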

  16. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

We examine the one-dimensional direct current method in an anisotropic earth formation. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the values of the model parameters. For this reason, the results should be checked with a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
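    Particle swarm optimization itself is compact enough to sketch. The following minimal PSO (a generic textbook variant with inertia and cognitive/social terms, not the authors' exact implementation, and a shifted-sphere misfit as a hypothetical stand-in for their resistivity forward model) minimizes over box bounds:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical misfit with minimum at (1, -2, 0.5), standing in for
# (horizontal resistivity, vertical resistivity, thickness) estimation
target = [1.0, -2.0, 0.5]
misfit = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best, best_val = pso(misfit, bounds=[(-5.0, 5.0)] * 3)
```

    In the inversion setting the objective would be the data misfit between observed and forward-modeled apparent resistivities, and the statistical checks described above would be applied to repeated runs.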

  17. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces their electrical-to-mechanical effectiveness, producing anti-resonance errors.

  18. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  19. Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.

    PubMed

    Sun, Dong; Zhao, Daomu

    2005-08-01

    By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.

  20. On the Gibbs phenomenon 1: Recovering exponential accuracy from the Fourier partial sum of a non-periodic analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve

    1992-01-01

It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated series converges only slowly, and not uniformly near the boundaries; this is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients nevertheless contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
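    The phenomenon is easy to reproduce numerically. The sketch below (an illustration of the problem statement, not the paper's recovery procedure) sums the Fourier series of f(x) = x on [-π, π): the maximum-norm error stays O(1) near the boundary jump no matter how many terms are kept, while the pointwise interior error shrinks only slowly:

```python
import math

def partial_sum(x, n_terms):
    """Truncated Fourier series of f(x) = x on [-pi, pi):
    x ~ sum_k 2*(-1)^(k+1)*sin(k*x)/k."""
    return sum(2.0 * (-1) ** (k + 1) * math.sin(k * x) / k
               for k in range(1, n_terms + 1))

def max_norm_error(n_terms, n_pts=2001):
    """Maximum error over an interior grid of [-pi, pi]."""
    xs = [-math.pi + 2.0 * math.pi * i / (n_pts - 1) for i in range(1, n_pts - 1)]
    return max(abs(partial_sum(x, n_terms) - x) for x in xs)

err_50 = max_norm_error(50)     # stays O(1): Gibbs oscillations near +/- pi
err_200 = max_norm_error(200)   # still O(1) with four times the terms
interior_err = abs(partial_sum(1.0, 400) - 1.0)  # slow pointwise convergence
```

    Recovering exponential accuracy from these same coefficients, as the paper does, requires re-expanding the partial-sum information in a different basis rather than adding more Fourier terms.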

  1. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  2. 40 CFR 600.108-08 - Analytical gases.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...

  3. 40 CFR 600.108-08 - Analytical gases.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...

  4. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Flight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers, designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady-state and transient behaviors of the MLHP during the various validation tests. The MLHP demonstrated excellent performance during the experimental tests, and the analytical model predictions agreed very well with the experimental data. All success criteria at the various TRLs were met; hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of this technology development effort.

  5. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
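    The benefit of an analytically known kernel transform can be illustrated on a one-dimensional toy case (a hypothetical stand-in; the paper's cylindrical kernel and its Bessel-series transform are not reproduced here). A Gaussian kernel has the closed-form Fourier transform e^(-π f²), so the numerically sampled transform, with its grid storage and quadrature cost, can be checked against a direct function evaluation:

```python
import cmath
import math

def numeric_ft(func, freq, t_max=8.0, n=4000):
    """Brute-force Riemann-sum approximation of the continuous Fourier transform
    F(f) = integral func(t) * exp(-2*pi*i*f*t) dt over [-t_max, t_max]."""
    dt = 2.0 * t_max / n
    total = 0j
    for i in range(n):
        t = -t_max + i * dt
        total += func(t) * cmath.exp(-2j * math.pi * freq * t)
    return total * dt

kernel = lambda t: math.exp(-math.pi * t * t)       # toy kernel
analytic_ft = lambda f: math.exp(-math.pi * f * f)  # known closed form

num = numeric_ft(kernel, 0.5)   # O(n) work plus sampled-kernel storage
ana = analytic_ft(0.5)          # one function evaluation, no storage
```

    The analytic route needs neither the sampled kernel array nor the transform step, which is the source of the time and memory savings the abstract reports.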

  6. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.

  7. DROMO formulation for planar motions: solution to the Tsien problem

    NASA Astrophysics Data System (ADS)

    Urrutxua, Hodei; Morante, David; Sanjurjo-Rivo, Manuel; Peláez, Jesús

    2015-06-01

    The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo-Stiefel methods.

  8. Creating Synthetic Coronal Observational Data From MHD Models: The Forward Technique

    NASA Technical Reports Server (NTRS)

    Rachmeler, Laurel A.; Gibson, Sarah E.; Dove, James; Kucera, Therese Ann

    2010-01-01

We present a generalized forward code for creating simulated coronal observables off the limb from numerical and analytical MHD models. This generalized forward model is capable of creating emission maps in various wavelengths for instruments such as SXT, EIT, EIS, and coronagraphs, as well as spectropolarimetric images and line profiles. The inputs to our code can be analytic models (of which four come with the code) or 2.5D and 3D numerical datacubes. We present some examples of the observable data created with our code as well as its functional capabilities. This code is currently available for beta-testing (contact authors), with the ultimate goal of release as a SolarSoft package.

  9. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  10. Analytical excited state forces for the time-dependent density-functional tight-binding method.

    PubMed

    Heringer, D; Niehaus, T A; Wanko, M; Frauenheim, Th

    2007-12-01

An analytical formulation for the geometrical derivatives of excitation energies within the time-dependent density-functional tight-binding (TD-DFTB) method is presented. The derivation is based on the auxiliary functional approach proposed in [Furche and Ahlrichs, J Chem Phys 2002, 117, 7433]. To validate the quality of the potential energy surfaces provided by the method, adiabatic excitation energies, excited state geometries, and harmonic vibrational frequencies were calculated for a test set of molecules in excited states of different symmetry and multiplicity. According to the results, the TD-DFTB scheme surpasses the performance of configuration interaction singles and the random phase approximation but has a lower quality than ab initio time-dependent density-functional theory. As a consequence of the special form of the approximations made in TD-DFTB, the scaling exponent of the method can be reduced to three, similar to the ground state. The low scaling prefactor and the satisfactory accuracy of the method make TD-DFTB especially suitable for molecular dynamics simulations of dozens of atoms as well as for the computation of luminescence spectra of systems containing hundreds of atoms.

  11. 42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...

  12. 42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...

  13. 42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...

  14. 42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...

  15. 42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...

  16. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells.

    PubMed

    Marvel, Jeremy A; Norcross, Rick

    2017-04-01

    We provide an overview of and guidance for the Speed and Separation Monitoring (SSM) methodology as presented in the International Organization for Standardization's technical specification 15066 on collaborative robot safety. Such functionality is provided by external, intelligent observer systems integrated into a robotic workcell. The SSM minimum protective distance function equation is discussed in detail, with consideration for the input values, implementation specifications, and performance expectations. We provide analyses and test results of the current equation, discuss considerations for implementing SSM in human-occupied environments, and provide directions for technological advancements toward standardization.
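The minimum protective distance discussed above can be sketched from the structure of the ISO/TS 15066 SSM equation: the protective separation is the sum of the distance the human covers while the robot reacts and stops, the distance the robot covers during its reaction time, the robot's stopping distance, an intrusion allowance, and the position uncertainties. The function name and the sample values below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the ISO/TS 15066 speed-and-separation-monitoring (SSM)
# minimum protective distance. Variable names and sample values below are
# illustrative assumptions, not values from the paper.

def protective_distance(v_h, v_r, t_r, t_s, s_s, c, z_d, z_r):
    """Minimum protective separation distance S_p at time t0.

    v_h : human approach speed (m/s)
    v_r : robot speed toward the human (m/s)
    t_r : robot reaction time (s)
    t_s : robot stopping time (s)
    s_s : robot stopping distance (m)
    c   : intrusion distance allowance (m)
    z_d, z_r : position uncertainties of the human sensing and the robot (m)
    """
    s_h = v_h * (t_r + t_s)   # distance the human covers while the robot reacts and stops
    s_r = v_r * t_r           # distance the robot covers during its reaction time
    return s_h + s_r + s_s + c + z_d + z_r

# Example with assumed values: walking human (1.6 m/s), slow robot (0.5 m/s)
sp = protective_distance(v_h=1.6, v_r=0.5, t_r=0.1, t_s=0.3, s_s=0.2,
                         c=0.1, z_d=0.05, z_r=0.02)
print(round(sp, 3))  # 0.64 + 0.05 + 0.2 + 0.1 + 0.05 + 0.02 = 1.06
```

An observer system would evaluate this bound continuously and command a protective stop whenever the measured human-robot separation falls below it.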

  17. Combined distribution functions: A powerful tool to identify cation coordination geometries in liquid systems

    NASA Astrophysics Data System (ADS)

    Sessa, Francesco; D'Angelo, Paola; Migliorati, Valentina

    2018-01-01

    In this work we have developed an analytical procedure to identify metal ion coordination geometries in liquid media based on the calculation of Combined Distribution Functions (CDFs) starting from Molecular Dynamics (MD) simulations. CDFs provide a fingerprint which can be easily and unambiguously assigned to a reference polyhedron. The CDF analysis has been tested on five systems and has proven to reliably identify the correct geometries of several ion coordination complexes. This tool is simple and general and can be efficiently applied to different MD simulations of liquid systems.
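As an illustration of the kind of analysis described above, a combined distribution function can be viewed as a joint histogram over two geometric descriptors (e.g., a metal-ligand distance and a coordination angle) accumulated frame by frame from an MD trajectory. The descriptors, bin edges, and synthetic "trajectory" below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a combined distribution function (CDF): a joint histogram
# over two geometric descriptors collected from MD frames. Descriptors and
# bin ranges are illustrative assumptions.
import random

def cdf_histogram(samples, r_edges, theta_edges):
    """Bin (distance, angle) pairs into a 2D counts array."""
    nr, nt = len(r_edges) - 1, len(theta_edges) - 1
    counts = [[0] * nt for _ in range(nr)]
    for r, theta in samples:
        for i in range(nr):
            if r_edges[i] <= r < r_edges[i + 1]:
                for j in range(nt):
                    if theta_edges[j] <= theta < theta_edges[j + 1]:
                        counts[i][j] += 1
                        break
                break
    return counts

random.seed(0)
# Fake "trajectory": distances near 2.1 angstrom, angles near 90 degrees
samples = [(random.gauss(2.1, 0.05), random.gauss(90.0, 5.0)) for _ in range(1000)]
counts = cdf_histogram(samples,
                       r_edges=[1.8, 2.0, 2.2, 2.4],
                       theta_edges=[60.0, 80.0, 100.0, 120.0])
```

A well-defined coordination polyhedron shows up as a single tight peak in such a 2D map, which is what makes the assignment to a reference geometry unambiguous.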

  18. Spectral response analysis of PVDF capacitive sensors

    NASA Astrophysics Data System (ADS)

    Reyes-Ramírez, B.; García-Segundo, C.; García-Valenzuela, A.

    2013-06-01

    We investigate the spectral response to ultrasound waves in water of low-noise capacitive sensors based on PVDF polymer piezoelectric films. First, we analyze theoretically the mechanical-to-electrical transduction as a function of the frequency of ultrasonic signals and derive an analytic expression for the sensor's transfer function. Then we present experimental results for the frequency response in water of a home-made PVDF sensor to test signals from 1 to 20 MHz, induced by a commercial hydrophone powered by a signal generator, and compare them with our theoretical model.

  19. Implementing Speed and Separation Monitoring in Collaborative Robot Workcells

    PubMed Central

    Marvel, Jeremy A.; Norcross, Rick

    2016-01-01

    We provide an overview of and guidance for the Speed and Separation Monitoring (SSM) methodology as presented in the International Organization for Standardization's technical specification 15066 on collaborative robot safety. Such functionality is provided by external, intelligent observer systems integrated into a robotic workcell. The SSM minimum protective distance function equation is discussed in detail, with consideration for the input values, implementation specifications, and performance expectations. We provide analyses and test results of the current equation, discuss considerations for implementing SSM in human-occupied environments, and provide directions for technological advancements toward standardization. PMID:27885312

  20. CEval: All-in-one software for data processing and statistical evaluations in affinity capillary electrophoresis.

    PubMed

    Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina

    2016-05-06

    We introduce CEval software (downloadable for free at echmet.natur.cuni.cz), developed for quicker and easier electropherogram evaluation and further data processing in (affinity) capillary electrophoresis. The software allows for automatic peak detection and evaluation of common peak parameters, such as migration time, area, and width. Additionally, it includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, from which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K′, are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: i) express the 95% confidence intervals for the μA, μAS, and K′ least-squares estimates, and ii) perform hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting the complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD.
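The 1:1 complexation model underlying the affinity CE analysis described above relates the effective mobility of an analyte to the selector concentration c as mu_eff(c) = (mu_A + mu_AS*K*c) / (1 + K*c), where mu_A, mu_AS, and K are the quantities CEval estimates by nonlinear regression. The values and the crude grid search below are illustrative stand-ins, not CEval's fitting engine.

```python
# Hedged sketch of the 1:1 analyte-selector complexation model used in
# affinity CE. The parameter values are made up; the grid search is a
# stand-in for a proper nonlinear least-squares fit.

def mu_eff(c, mu_a, mu_as, K):
    """Effective mobility at selector concentration c for a 1:1 complex."""
    return (mu_a + mu_as * K * c) / (1.0 + K * c)

def fit_K(concs, mobilities, mu_a, mu_as, k_grid):
    """Recover K by brute-force least squares over a grid of candidates."""
    best_k, best_sse = None, float("inf")
    for k in k_grid:
        sse = sum((mu_eff(c, mu_a, mu_as, k) - m) ** 2
                  for c, m in zip(concs, mobilities))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

mu_a, mu_as, K_true = 20.0, 5.0, 150.0          # assumed units: 1e-9 m^2/(V s), 1/M
concs = [0.0, 0.001, 0.002, 0.005, 0.01, 0.02]  # selector concentration (M)
data = [mu_eff(c, mu_a, mu_as, K_true) for c in concs]
K_fit = fit_K(concs, data, mu_a, mu_as, [1.0 * i for i in range(1, 501)])
print(K_fit)  # 150.0
```

With noiseless synthetic data the grid search recovers K exactly; with real peaks, the HVL fit supplies the mobilities and a nonlinear regression replaces the grid.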

  1. A free energy-based surface tension force model for simulation of multiphase flows by level-set method

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.

    2017-09-01

    In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level-set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method together with the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level-set method. On the one hand, compared to the conventional continuum surface force (CSF) model in the level-set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers and severe topological interfacial changes.
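For context on the CSF model that the FESF model is compared against: CSF spreads the surface tension over a band of width 2*eps around the interface using a regularized delta function of the signed distance. The cosine kernel below is one common choice (an assumption here, not necessarily the exact kernel used in the paper); its integral over the band is 1, so the total surface tension force is preserved.

```python
# A common cosine-regularized delta function used in CSF-type level-set
# formulations (illustrative choice, not necessarily the paper's kernel).
import math

def delta_eps(phi, eps):
    """Regularized delta of the signed distance phi, band half-width eps."""
    if abs(phi) >= eps:
        return 0.0
    return (1.0 + math.cos(math.pi * phi / eps)) / (2.0 * eps)

# Check that the kernel integrates to 1 over the band (trapezoidal rule;
# the endpoints are zero, so the rectangle sum below equals the trapezoid sum)
eps, n = 1.5, 2000
h = 2 * eps / n
total = sum(delta_eps(-eps + i * h, eps) for i in range(n + 1)) * h
print(round(total, 6))  # 1.0
```

It is exactly this smearing over a finite band that the FESF model avoids by evaluating an analytical expression in the distance function directly.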

  2. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L =1

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-01

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L =1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.

  3. Energy and energy gradient matrix elements with N-particle explicitly correlated complex Gaussian basis functions with L=1.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2008-03-21

    In this work we consider explicitly correlated complex Gaussian basis functions for expanding the wave function of an N-particle system with the L=1 total orbital angular momentum. We derive analytical expressions for various matrix elements with these basis functions including the overlap, kinetic energy, and potential energy (Coulomb interaction) matrix elements, as well as matrix elements of other quantities. The derivatives of the overlap, kinetic, and potential energy integrals with respect to the Gaussian exponential parameters are also derived and used to calculate the energy gradient. All the derivations are performed using the formalism of the matrix differential calculus that facilitates a way of expressing the integrals in an elegant matrix form, which is convenient for the theoretical analysis and the computer implementation. The new method is tested in calculations of two systems: the lowest P state of the beryllium atom and the bound P state of the positronium molecule (with the negative parity). Both calculations yielded new, lowest-to-date, variational upper bounds, while the number of basis functions used was significantly smaller than in previous studies. It was possible to accomplish this due to the use of the analytic energy gradient in the minimization of the variational energy.
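The abstract above derives analytic matrix elements for Gaussian basis functions. A minimal one-dimensional illustration of why this is possible: the overlap of two s-type Gaussians exp(-a*x^2) and exp(-b*x^2) has the closed form sqrt(pi/(a+b)). The check against numerical quadrature below is an illustration only, not the N-particle L = 1 machinery of the paper.

```python
# Analytic vs numerical 1-D Gaussian overlap integral (illustration only).
import math

def overlap_analytic(a, b):
    """Closed-form overlap: integral of exp(-(a+b) x^2) over the real line."""
    return math.sqrt(math.pi / (a + b))

def overlap_numeric(a, b, half_width=20.0, n=200000):
    """Midpoint-rule quadrature of exp(-(a+b) x^2) on [-half_width, half_width]."""
    h = 2 * half_width / n
    return sum(math.exp(-(a + b) * (-half_width + (i + 0.5) * h) ** 2)
               for i in range(n)) * h

a, b = 0.7, 1.3
print(abs(overlap_analytic(a, b) - overlap_numeric(a, b)) < 1e-9)  # True
```

Closed forms like this, generalized to correlated N-particle Gaussians, are what make the analytic energy gradient of the paper tractable.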

  4. The relationship between intelligence and cognitive function in schizophrenic

    NASA Astrophysics Data System (ADS)

    Catherine; Amin, M. M.; Effendy, E.

    2018-03-01

    Schizophrenia is the most common psychotic disorder. In this study, cognitive function was evaluated with a standardized test and intelligence with an IQ test; for schizophrenic patients, intelligence is usually reported to be lower than average. This analytical study commenced in January and ended in March 2014. The inclusion criteria were schizophrenic in-patients of Prof. dr. M. Ildrem Mental Hospital, aged 15 to 55 years, with at most a secondary high school education; the exclusion criteria were other psychotic disorders, head injuries or other neurological disorders, and endocrine disorders. The total sample was 100 subjects. The results show a direct and very strong correlation (r = 0.876, p = 0.001) between intelligence and cognitive function in schizophrenics. Further research using other rating scales and examinations is necessary to measure the relationship between intelligence and cognitive function and the other factors that may affect the results.

  5. The Challenge of Understanding Process in Clinical Behavior Analysis: The Case of Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Follette, William C.; Bonow, Jordan T.

    2009-01-01

    Whether explicitly acknowledged or not, behavior-analytic principles are at the heart of most, if not all, empirically supported therapies. However, the change process in psychotherapy is only now being rigorously studied. Functional analytic psychotherapy (FAP; Kohlenberg & Tsai, 1991; Tsai et al., 2009) explicitly identifies behavioral-change…

  6. Executive Function and Reading Comprehension: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Follmer, D. Jake

    2018-01-01

    This article presents a meta-analytic review of the relation between executive function and reading comprehension. Results (N = 6,673) supported a moderate positive association between executive function and reading comprehension (r = 0.36). Moderator analyses suggested that correlations between executive function and reading comprehension did not…
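A pooled correlation such as the r = 0.36 reported above is typically obtained by converting each study's r to Fisher's z, averaging the z-values with inverse-variance weights (w = n - 3 for correlations), and back-transforming. The study correlations and sample sizes below are made up for illustration.

```python
# Hedged sketch of Fisher-z pooling for a meta-analytic mean correlation.
# The (r, n) pairs are hypothetical, not data from the review above.
import math

def pool_correlations(studies):
    """studies: list of (r, n) pairs; returns the weighted mean correlation."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

studies = [(0.30, 120), (0.42, 80), (0.36, 200)]  # hypothetical (r, n)
r_bar = pool_correlations(studies)
print(round(r_bar, 3))
```

The z-transform stabilizes the variance of r, which is why the weights depend only on sample size.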

  7. [Quality Management and Quality Specifications of Laboratory Tests in Clinical Studies--Challenges in Pre-Analytical Processes in Clinical Laboratories].

    PubMed

    Ishibashi, Midori

    2015-01-01

    Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and of drug efficacy. Providing high-quality laboratory tests is mandatory, and for adequate quality assurance, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Beyond these, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems arising from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.

  8. A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison

    NASA Technical Reports Server (NTRS)

    Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.

    2017-01-01

    This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment predictions and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.

  9. Modal testing of a rotating wind turbine

    NASA Astrophysics Data System (ADS)

    Carne, T. G.; Nord, A. R.

    1982-11-01

    A testing technique was developed to measure the modes of vibration of a rotating vertical-axis wind turbine. This technique was applied to the Sandia Two-Meter Turbine, where the changes in individual modal frequencies as a function of the rotational speed were tracked from 0 rpm (parked) to 600 rpm. During rotational testing, the structural response was measured using a combination of strain gages and accelerometers, passing the signals through slip rings. Excitation of the turbine structure was provided by a scheme which suddenly released a pretensioned cable, thus plucking the turbine as it was rotating at a set speed. In addition to calculating the real modes of the parked turbine, the modes of the rotating turbine were also determined at several rotational speeds. The modes of the rotating system proved to be complex due to centrifugal and Coriolis effects. The modal data for the parked turbine were used to update a finite-element model. Also, the measured modal parameters for the rotating turbine were compared to the analytical results, thus verifying the analytical procedures used to incorporate the effects of the rotating coordinate system.

  10. BETA (Bitter Electromagnet Testing Apparatus) Design and Testing

    NASA Astrophysics Data System (ADS)

    Bates, Evan; Birmingham, William; Rivera, William; Romero-Talamas, Carlos

    2016-10-01

    BETA is a 1 T water-cooled Bitter-type magnet system that has been designed and constructed at the Dusty Plasma Laboratory of the University of Maryland, Baltimore County to serve as a prototype of a scaled 10 T version. Currently the system is undergoing magnetic, thermal and mechanical testing to ensure safe operating conditions and to validate analytical design optimizations. These magnets will function as experimental tools for future dusty-plasma and collaborative experiments. An overview of the design methods used for building a custom-made Bitter magnet under user-defined experimental constraints is reviewed. The three main design methods consist of minimizing the following: ohmic power, peak conductor temperatures, and stresses induced by Lorentz forces. We will also discuss the design of BETA, which includes the magnet core, pressure vessel, cooling system, power storage bank, high-power switching system, diagnostics with safety cutoff feedback, and data acquisition (DAQ)/magnet control Matlab code. Furthermore, we present experimental data from diagnostics to validate our preliminary analytical design methodologies and finite element analysis calculations. BETA will contribute to the knowledge necessary to finalize the 10 T magnet design.

  11. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-07

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even with moderate uncertainty (30%) in the variance function, weighted regression still outperforms unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
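The recommendation above can be sketched as follows: weight a linear calibration by the inverse of the predicted variance from a power model s(y) = a*y^b instead of assuming constant variance. The closed-form weighted least squares below, and the calibration data and (a, b) values, are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of weighted least squares with power-model variance weights.
# Calibration data and power-model parameters are made up for illustration.

def wls_line(x, y, w):
    """Weighted least-squares slope and intercept for y = m*x + c."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    m = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return m, ybar - m * xbar

# Hypothetical calibration near signal = 2*conc + 1 with heteroskedastic noise
conc = [1.0, 2.0, 5.0, 10.0, 20.0]
signal = [3.1, 4.9, 11.2, 20.7, 41.5]
a_var, b_var = 0.02, 1.0                         # assumed power-model parameters
weights = [1.0 / (a_var * s ** b_var) ** 2 for s in signal]  # w = 1 / s(y)^2
m, c = wls_line(conc, signal, weights)
print(round(m, 2), round(c, 2))
```

With b = 1 the weights fall off as the square of the signal, so the low-concentration points (where the relative precision is best) dominate the fit, which is exactly what improves detection limits.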

  12. Towards tests of quark-hadron duality with functional analysis and spectral function data

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Caprini, Irinel

    2017-04-01

    The presence of terms that violate quark-hadron duality in the expansion of QCD Green's functions is a generally accepted fact. Recently, a new approach was proposed for the study of duality violations (DVs), which exploits the existence of a rigorous lower bound on the functional distance, measured in a certain norm, between a "true" correlator and its approximant calculated theoretically along a contour in the complex energy plane. In the present paper, we pursue the investigation of functional-analysis-based tests towards their application to real spectral function data. We derive a closed analytic expression for the minimal functional distance based on the general weighted L2 norm and discuss its relation with the distance measured in the L∞ norm. Using fake data sets obtained from a realistic toy model in which we allow for covariances inspired from the publicly available ALEPH spectral functions, we obtain, by Monte Carlo simulations, the statistical distribution of the strength parameter that measures the magnitude of the DV term added to the usual operator product expansion. The results show that, if the region with large errors near the end point of the spectrum in τ decays is excluded, the functional-analysis-based tests using either L2 or L∞ norms are able to detect, in a statistically significant way, the presence of DVs in realistic spectral function pseudodata.

  13. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., O. D. L. Strack, Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41(2), 1005, 2003). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons: first, the existence of the elementary solutions is required, and second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure generalizes this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach is applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
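A minimal flavor of the superposition idea described above: the complex potential of a point sink (a well) is Omega(z) = Q/(2*pi) * log(z - z_w), the discharge potential is its real part, and superposed wells remain harmonic away from the singularities. The well strengths, positions, and the finite-difference check below are illustrative, not the paper's new construction procedure.

```python
# Superposition of point-sink complex potentials, the classical building
# block of the analytic element method (illustrative sketch).
import cmath

WELLS = [(1.0, 0.0 + 0.0j), (0.5, 2.0 + 1.0j)]  # assumed (discharge Q, location z_w)

def potential(z):
    """Discharge potential: real part of the superposed complex potentials."""
    return sum(q / (2 * cmath.pi) * cmath.log(z - zw) for q, zw in WELLS).real

def laplacian(f, z, h=1e-3):
    """5-point finite-difference Laplacian of f at the complex point z."""
    return (f(z + h) + f(z - h) + f(z + 1j * h) + f(z - 1j * h) - 4 * f(z)) / h**2

# Away from the wells the superposed potential is harmonic (Laplacian ~ 0)
print(abs(laplacian(potential, 0.7 + 0.9j)) < 1e-5)  # True
```

The paper's contribution is precisely to go beyond such line-integrated elementary solutions: starting from any harmonic function with the desired jump behavior and generalizing it, via Wirtinger calculus, to satisfy equations other than Laplace's.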

  14. GENERAL: The Analytic Solution of Schrödinger Equation with Potential Function Superposed by Six Terms with Positive-power and Inverse-power Potentials

    NASA Astrophysics Data System (ADS)

    Hu, Xian-Quan; Luo, Guang; Cui, Li-Peng; Li, Fang-Yu; Niu, Lian-Bin

    2009-03-01

    The analytic solution of the radial Schrödinger equation is studied using a tight-coupling condition among several positive-power and inverse-power potential functions. Precise analytic solutions, and the conditions that determine whether an analytic solution exists, are found for the potential V(r) = α₁r⁸ + α₂r³ + α₃r² + β₃r⁻¹ + β₂r⁻³ + β₁r⁻⁴. Generally speaking, the Schrödinger equation with a superposition of several potentials admits only approximate, not analytic, solutions. Here, however, the conditions for the existence of an analytic solution are found, and the analytic solution and its energy level structure are obtained for the potential above. Following the requirement that the wave function of a quantum system be single-valued, finite, and continuous, the authors first solve for the asymptotic behavior as r → ∞ and r → 0; second, they match the asymptotic solutions with series solutions in the neighborhood of the irregular singularities; then, by comparing power-series coefficients, they deduce a series of analytic solutions of the stationary-state wave function and the corresponding energy level structure through tight coupling among the coefficients of the potential functions; finally, they discuss the solutions and draw conclusions.

  15. Test and Analysis Capabilities of the Space Environment Effects Team at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Finckenor, M. M.; Edwards, D. L.; Vaughn, J. A.; Schneider, T. A.; Hovater, M. A.; Hoppe, D. T.

    2002-01-01

    Marshall Space Flight Center has developed world-class space environmental effects testing facilities to simulate the space environment. The combined environmental effects test system exposes temperature-controlled samples to simultaneous protons, high- and low-energy electrons, vacuum ultraviolet (VUV) radiation, and near-ultraviolet (NUV) radiation. Separate chambers for studying the effects of NUV and VUV at elevated temperatures are also available. The Atomic Oxygen Beam Facility exposes samples to atomic oxygen of 5 eV energy to simulate low-Earth orbit (LEO). The LEO space plasma simulators are used to study current collection to biased spacecraft surfaces, arcing from insulators and electrical conductivity of materials. Plasma propulsion techniques are analyzed using the Marshall magnetic mirror system. The micro light gas gun simulates micrometeoroid and space debris impacts. Candidate materials and hardware for spacecraft can be evaluated for durability in the space environment with a variety of analytical techniques. Mass, solar absorptance, infrared emittance, transmission, reflectance, bidirectional reflectance distribution function, and surface morphology characterization can be performed. The data from the space environmental effects testing facilities, combined with analytical results from flight experiments, enable the Environmental Effects Group to determine optimum materials for use on spacecraft.

  16. Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-Fat; Pak, Chan-Gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  17. Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  18. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.

  19. Major advances in testing of dairy products: milk component and dairy product attribute testing.

    PubMed

    Barbano, D M; Lynch, J M

    2006-04-01

    Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. There is therefore a strong incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance, combined with significant improvements in both the chemical and instrumental methods, have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value-added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography-olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.

  20. Cognitive-analytical therapy for a patient with functional neurological symptom disorder-conversion disorder (psychogenic myopia): A case study.

    PubMed

    Nasiri, Hamid; Ebrahimi, Amrollah; Zahed, Arash; Arab, Mostafa; Samouei, Rahele

    2015-05-01

    Functional neurological symptom disorder commonly presents with symptoms and defects of sensory and motor functions. Therefore, it is often mistaken for a medical condition. It is well known that functional neurological symptom disorder is more often caused by psychological factors. There are three main approaches, namely analytical, cognitive, and biological, to managing conversion disorder. Any of these approaches can be applied through short-term treatment programs. In this case study, a 12-year-old boy diagnosed with functional neurological symptom disorder (psychogenic myopia) was put under cognitive-analytical treatment. The outcome of this treatment modality proved successful.

  1. Recent Studies in Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Garcia, Rafael Ferro

    2008-01-01

    Functional Analytic Psychotherapy (FAP), based on the principles of radical behaviorism, emphasizes the impact of contingencies that occur during therapeutic sessions, the therapist-client interaction context, functional equivalence between environments, natural reinforcement, and shaping by the therapist. This paper reviews recent studies of FAP…

  2. Dynamic Modeling and Testing of MSRR-1 for Use in Microgravity Environments Analysis

    NASA Technical Reports Server (NTRS)

    Gattis, Christy; LaVerde, Bruce; Howell, Mike; Phelps, Lisa H. (Technical Monitor)

    2001-01-01

    Delicate microgravity science is unlikely to succeed on the International Space Station if vibratory and transient disturbers corrupt the environment. An analytical approach to compute the on-orbit acceleration environment at science experiment locations within a standard payload rack resulting from these disturbers is presented. This approach has been grounded by correlation and comparison to test verified transfer functions. The method combines the results of finite element and statistical energy analysis using tested damping and modal characteristics to provide a reasonable approximation of the total root-mean-square (RMS) acceleration spectra at the interface to microgravity science experiment hardware.
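    The total RMS acceleration quoted above is obtained by integrating an acceleration spectral density over frequency. The sketch below illustrates only that final step, with a toy spectrum standing in for the combined FEA/SEA prediction; all numbers are assumptions, not values from the study.

```python
import numpy as np

# Hypothetical one-sided acceleration PSD G(f) in (m/s^2)^2/Hz, playing
# the role of a combined finite-element / statistical-energy-analysis
# response prediction at an experiment interface.
f = np.linspace(1.0, 300.0, 600)                  # frequency grid, Hz
psd = 1e-6 * np.exp(-((f - 60.0) / 40.0) ** 2)    # toy narrow-band spectrum

# Total RMS acceleration = square root of the area under the PSD.
rms = float(np.sqrt(np.trapz(psd, f)))
print(f"RMS acceleration = {rms:.2e} m/s^2")
```

    In practice the spectra from the low-frequency (finite element) and high-frequency (SEA) analyses are combined before this integration, and the RMS is compared against the microgravity environment requirement at each experiment location.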

  3. The NASA JSC Hypervelocity Impact Test Facility (HIT-F)

    NASA Technical Reports Server (NTRS)

    Crews, Jeanne L.; Christiansen, Eric L.

    1992-01-01

    The NASA Johnson Space Center Hypervelocity Impact Test Facility was created in 1980 to study the hypervelocity impact characteristics of composite materials. The facility consists of the Hypervelocity Impact Research Laboratory (HIRL) and the Hypervelocity Analysis Laboratory (HAL). The HIRL supports three different-size light-gas gun ranges, which provide the capability of launching particle sizes from 100-micron spheres to 12.7-mm cylinders. The HAL performs three functions: (1) the analysis of data collected from shots in the HIRL, (2) numerical and analytical modeling to predict impact response beyond test conditions, and (3) risk and damage assessments for spacecraft exposed to the meteoroid and orbital debris environments.

  4. Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)

    2002-01-01

    Ground-based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold-operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.

  5. Functional Interfaces Constructed by Controlled/Living Radical Polymerization for Analytical Chemistry.

    PubMed

    Wang, Huai-Song; Song, Min; Hang, Tai-Jun

    2016-02-10

    The high-value applications of functional polymers in analytical science generally require well-defined interfaces, including precisely synthesized molecular architectures and compositions. Controlled/living radical polymerization (CRP) has been developed as a versatile and powerful tool for the preparation of polymers with narrow molecular weight distributions and predetermined molecular weights. Among CRP systems, atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) polymerization are widely used to develop new materials for analytical science, such as surface-modified core-shell particles, monoliths, molecularly imprinted polymer (MIP) micro- or nanospheres, fluorescent nanoparticles, and multifunctional materials. In this review, we summarize the emerging functional interfaces constructed by RAFT and ATRP for applications in analytical science. Various polymers with precisely controlled architectures, including homopolymers, block copolymers, molecularly imprinted copolymers, and grafted copolymers, were synthesized by CRP methods for molecular separation, retention, or sensing. We expect that CRP methods will become the most popular technique for preparing functional polymers that can be broadly applied in analytical chemistry.

  6. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.

    1997-01-01

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.

  7. Method and apparatus for processing a test sample to concentrate an analyte in the sample from a solvent in the sample

    DOEpatents

    Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.

    1997-10-14

    A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.

  8. Measuring myokines with cardiovascular functions: pre-analytical variables affecting the analytical output.

    PubMed

    Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe

    2017-08-01

    In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or even discordant. This is due, at least in part, to the whole set of circumstances related to the preparation of the patient prior to blood sampling, the blood sampling procedure, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often not considered; however, in routine diagnostics, about 70% of errors occur in this phase. Moreover, errors made during the pre-analytical phase are carried over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, standardized procedures for sample collection and correct procedures for sample storage are essential. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.

  9. Assuring the Quality of Test Results in the Field of Nuclear Techniques and Ionizing Radiation. The Practical Implementation of Section 5.9 of the EN ISO/IEC 17025 Standard

    NASA Astrophysics Data System (ADS)

    Cucu, Daniela; Woods, Mike

    2008-08-01

    The paper aims to present a practical approach for testing laboratories to ensure the quality of their test results. It is based on the experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international proficiency tests (PTs) and interlaboratory comparisons (ILCs), and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical and post-analytical measures shall be applied in a systematic manner. They shall include both quality control and quality assurance measures. When designing the quality assurance programme, a laboratory should consider pre-analytical activities (like personnel training, selection and validation of test methods, and qualifying equipment), analytical activities ranging from sampling and sample preparation to instrumental analysis, and post-analytical activities (like decoding, calculation, use of statistical tests or packages, and management of results). Designed on different levels (analyst, quality manager and technical manager), including a variety of measures, the programme shall ensure the validity and accuracy of test results, the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation and, last but not least, show the comparability of test results. Laboratory management should establish performance targets and review QC/QA results against them periodically, implementing appropriate measures in case of non-compliance.

  10. Scaling relations for a needle-like electron beam plasma from the self-similar behavior in beam propagation

    NASA Astrophysics Data System (ADS)

    Bai, Xiaoyan; Chen, Chen; Li, Hong; Liu, Wandong; Chen, Wei

    2017-10-01

    Scaling relations of the main parameters of a needle-like electron beam plasma (EBP) to the initial beam energy, beam current, and discharge pressure are presented. The relations characterize the main features of the plasma in the three-parameter space and can greatly simplify plasma design with electron beams. First, starting from the self-similar behavior of electron beam propagation, the energy and charge depositions in beam propagation were expressed analytically as functions of the three parameters. Second, according to the complete coupled theoretical model of an EBP and appropriate assumptions, independent equations controlling the density and space charges were derived, and analytical expressions for the density and charges as functions of the energy and charge depositions were obtained. Finally, by combining the expressions derived in the above two steps, scaling relations of the density and potential to the three parameters were constructed. Numerical simulations were then used to test part of the scaling relations.

  11. The net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR.

    PubMed

    van de Geijn, J; Fraass, B A

    1984-01-01

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
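    The "corrected for inverse square law" step can be sketched as follows. This is a generic illustration of removing the inverse-square fall-off from a fractional depth dose, assuming normalization at a reference (dose-maximum) depth; the beam values are hypothetical, not data from the paper.

```python
def net_fractional_depth_dose(fdd, depth, ssd, d_ref):
    """Remove the inverse-square-law fall-off from a fractional depth
    dose so that only attenuation and scatter effects remain.
    All lengths in cm; normalization at reference depth d_ref."""
    return fdd * ((ssd + depth) / (ssd + d_ref)) ** 2

# Hypothetical megavoltage-beam point: FDD = 0.67 at 10 cm depth,
# SSD = 100 cm, reference (dose-maximum) depth 1.5 cm.
nfd = net_fractional_depth_dose(0.67, 10.0, 100.0, 1.5)
print(round(nfd, 3))  # → 0.787
```

    Because the NFD isolates the depth dependence due to the medium alone, a single parametric fit to it can then regenerate FDD, TAR, TMR, and TPR tables, which is the unification the paper describes.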

  12. Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van de Geijn, J.; Fraass, B.A.

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.

  13. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference for the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
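    The cost-minimisation idea in the normal-distribution setting can be sketched numerically. Everything below (distribution parameters, prevalence, cost ratio) is a hypothetical illustration, not the paper's estimator, which additionally accounts for sampling uncertainty.

```python
import math

def normal_cdf(x, mu, sd):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def expected_cost(t, mu0, sd0, mu1, sd1, prev, c_fp, c_fn):
    """Expected decision cost at threshold t: false positives among
    non-diseased subjects (mean mu0) plus false negatives among
    diseased subjects (mean mu1)."""
    fp = (1.0 - prev) * (1.0 - normal_cdf(t, mu0, sd0))
    fn = prev * normal_cdf(t, mu1, sd1)
    return c_fp * fp + c_fn * fn

# Hypothetical marker: non-diseased ~ N(0,1), diseased ~ N(2,1),
# 20% prevalence, false negatives three times as costly.
grid = [-2.0 + 0.003 * k for k in range(2001)]
t_opt = min(grid, key=lambda t: expected_cost(t, 0.0, 1.0, 2.0, 1.0, 0.2, 1.0, 3.0))
print(round(t_opt, 2))  # → 1.14
```

    With equal variances the same optimum follows analytically from a likelihood-ratio condition; the grid search is used here only to keep the sketch distribution-agnostic.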

  14. Quantum calculus of classical vortex images, integrable models and quantum states

    NASA Astrophysics Data System (ADS)

    Pashaev, Oktay K.

    2016-10-01

    From the two-circle theorem described in terms of q-periodic functions, in the limit q→1 we derive the strip theorem and the stream function for the N-vortex problem. For a regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.

  15. Study of lubricant circulation in HVAC systems. Volume 1: Description of technical effort and results; Final technical report, March 1995--April 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biancardi, F.R.; Michels, H.H.; Sienel, T.H.

    1996-10-01

    The purpose of this program was to conduct experimental and analytical efforts to determine lubricant circulation characteristics of new HFC/POE pairs and HFC/mineral oil pairs in a representative central residential HVAC system and to compare their behavior with the traditional HCFC-22/mineral oil (refrigerant/lubricant) pair. A dynamic test facility was designed and built to conduct the experimental efforts. This facility provided a unique capability to visually and physically measure oil circulation rates, on-line, in operating systems. A unique on-line ultraviolet-based measurement device was used to obtain detailed data on the rate and level of lubricant oil circulated within the operating heat pump system. The experimental and analytical data developed during the program are presented as a function of vapor velocity, refrigerant/lubricant viscosity, system features and equipment. Both visual observations and instrumentation were used to understand "worst case" oil circulation situations. This report is presented in two volumes. Volume 1 contains a complete description of the program scope, objective, test results summary, conclusions, description of test facility and recommendations for future effort. Volume 2 contains all of the program test data essentially as taken from the laboratory dynamic test facility during the sequence of runs.

  16. Novel immunoassay formats for integrated microfluidic circuits: diffusion immunoassays (DIA)

    NASA Astrophysics Data System (ADS)

    Weigl, Bernhard H.; Hatch, Anson; Kamholz, Andrew E.; Yager, Paul

    2000-03-01

    Novel designs of integrated fluidic microchips allow separations, chemical reactions, and calibration-free analytical measurements to be performed directly in very small quantities of complex samples such as whole blood and contaminated environmental samples. This technology lends itself to applications such as clinical diagnostics, including tumor marker screening, and environmental sensing in remote locations. Lab-on-a-Chip based systems offer many advantages over traditional analytical devices: They consume extremely low volumes of both samples and reagents. Each chip is inexpensive and small. The sampling-to-result time is extremely short. They perform all analytical functions, including sampling, sample pretreatment, separation, dilution, and mixing steps, chemical reactions, and detection in an integrated microfluidic circuit. Lab-on-a-Chip systems enable the design of small, portable, rugged, low-cost, easy to use, yet extremely versatile and capable diagnostic instruments. In addition, fluids flowing in microchannels exhibit unique characteristics ('microfluidics'), which allow the design of analytical devices and assay formats that would not function on a macroscale. Existing Lab-on-a-chip technologies work very well for highly predictable and homogeneous samples common in genetic testing and drug discovery processes. One of the biggest challenges for current Labs-on-a-chip, however, is to perform analysis in the presence of the complexity and heterogeneity of actual samples such as whole blood or contaminated environmental samples. Micronics has developed a variety of Lab-on-a-Chip assays that can overcome those shortcomings. We will now present various types of novel Lab-on-a-Chip-based immunoassays, including the so-called Diffusion Immunoassays (DIA) that are based on the competitive laminar diffusion of analyte molecules and tracer molecules into a region of the chip containing antibodies that target the analyte molecules.
Advantages of this technique are a reduction in reagents, higher sensitivity, minimal preparation of complex samples such as blood, real-time calibration, and extremely rapid analysis.
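    The separation exploited by a diffusion immunoassay rests on the different diffusion lengths of small analytes and large antibody-bound tracers across the laminar interface. A back-of-the-envelope sketch, with order-of-magnitude diffusion coefficients assumed for illustration:

```python
import math

def diffusion_length_um(d_cm2_s, t_s):
    """Characteristic 1-D diffusion length L ~ sqrt(2*D*t),
    converted from cm to micrometres."""
    return math.sqrt(2.0 * d_cm2_s * t_s) * 1e4

# Assumed order-of-magnitude diffusion coefficients (cm^2/s):
small_analyte = diffusion_length_um(5e-6, 10.0)  # small molecule, ~5e-6
antibody = diffusion_length_um(4e-7, 10.0)       # IgG-sized species, ~4e-7
print(round(small_analyte), round(antibody))     # → 100 28
```

    Over the same residence time the small analyte penetrates several-fold farther than the antibody, which is what lets the interdiffusion zone report analyte concentration in a channel only tens to hundreds of micrometres wide.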

  17. Dynamic response of gold nanoparticle chemiresistors to organic analytes in aqueous solution.

    PubMed

    Müller, Karl-Heinz; Chow, Edith; Wieczorek, Lech; Raguse, Burkhard; Cooper, James S; Hubble, Lee J

    2011-10-28

    We investigate the response dynamics of 1-hexanethiol-functionalized gold nanoparticle chemiresistors exposed to the analyte octane in aqueous solution. The dynamic response is studied as a function of the analyte-water flow velocity, the thickness of the gold nanoparticle film and the analyte concentration. A theoretical model for analyte limited mass-transport is used to model the analyte diffusion into the film, the partitioning of the analyte into the 1-hexanethiol capping layers and the subsequent swelling of the film. The degree of swelling is then used to calculate the increase of the electron tunnel resistance between adjacent nanoparticles which determines the resistance change of the film. In particular, the effect of the nonlinear relationship between resistance and swelling on the dynamic response is investigated at high analyte concentration. Good agreement between experiment and the theoretical model is achieved. This journal is © the Owner Societies 2011
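    The nonlinear resistance-swelling relationship mentioned above can be sketched with a toy tunneling model: the interparticle resistance grows exponentially with the gap between capping layers, so small swellings give a near-linear response while larger swellings do not. The decay constant below is an illustrative assumption, not a fitted value from the paper.

```python
import math

def film_resistance(gap_increase_nm, beta_per_nm, r0=1.0):
    """Interparticle tunnel resistance grows exponentially with the
    increase of the gap between nanoparticle capping layers."""
    return r0 * math.exp(beta_per_nm * gap_increase_nm)

def relative_response(swelling_nm, beta_per_nm):
    """Relative resistance change dR/R0 for a given film swelling."""
    return film_resistance(swelling_nm, beta_per_nm) - 1.0

BETA = 10.0  # assumed tunneling decay constant, 1/nm (illustrative)
for s in (0.001, 0.01, 0.05):
    print(s, relative_response(s, BETA))
```

    For swelling much smaller than 1/beta the response is approximately beta times the swelling; at higher analyte concentrations (larger swelling) the exponential dominates, which is the nonlinearity studied in the paper.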

  18. Analytic solution of field distribution and demagnetization function of ideal hollow cylindrical field source

    NASA Astrophysics Data System (ADS)

    Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min

    2017-09-01

    The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
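    For context, the simplest member of this family has a classical closed form: an ideal dipolar (k = 2) Halbach cylinder produces a uniform interior flux density B = Br·ln(Ro/Ri). The sketch below evaluates only this textbook limit; the paper's multipole and interior-field expressions generalize it. The remanence value is an assumed NdFeB-like figure.

```python
import math

def halbach_dipole_field(br_tesla, r_inner_m, r_outer_m):
    """Uniform interior flux density of an ideal dipolar Halbach
    cylinder: B = Br * ln(Ro/Ri)."""
    return br_tesla * math.log(r_outer_m / r_inner_m)

# Assumed NdFeB-like remanence Br = 1.4 T, bore radius 2 cm, outer radius 6 cm.
print(round(halbach_dipole_field(1.4, 0.02, 0.06), 3))  # → 1.538
```

    Note that the field grows only logarithmically with the radius ratio, which is why demagnetization limits (the subject of the paper's demagnetization function) matter in practice when designers push for high bore fields.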

  19. Liquid Drop Model for Charged Spherical Metal Clusters

    NASA Astrophysics Data System (ADS)

    Seidl, M.; Brack, M.

    1996-02-01

    The average ground-state energy of a charged spherical metal cluster with N atoms and z excess valence electrons, i.e., with net charge Q = -ez and radius R = r_s N^(1/3), is presented in the liquid drop model (LDM) expansion E(N, z) = a_v N + a_s N^(2/3) + a_c N^(1/3) + a_0(z) + a_(-1)(z) N^(-1/3) + O(N^(-2/3)). We derive analytical expressions for the leading LDM coefficients a_v, a_s, a_c and, in particular, for the charge dependence of the further LDM coefficients a_0 and a_(-1), using the jellium model and density functional theory in the local density approximation. We obtain for the ionization energy I(R) = W + α(e^2/R) + O(R^(-2)), with the bulk work function W = [Φ(+∞) - Φ(0)] - e_b, given first by Mahan and Schaich in terms of the electrostatic potential Φ and the bulk energy per electron e_b, and a new analytical expression for the dimensionless coefficient α. We demonstrate that within classical theory α = 1/2 but, in agreement with experimental information, α tends to ~0.4 if quantum-mechanical contributions are included. In order to test and confirm our analytical expressions, we discuss the numerical results of semiclassical density variational calculations in the extended Thomas-Fermi model.
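    The size dependence of the ionization energy can be made concrete by evaluating the leading terms I(R) = W + α·e²/R with the classical (α = 1/2) and quantum-corrected (α ≈ 0.4) coefficients. The material numbers below are sodium-like assumptions for illustration only, not values taken from the paper.

```python
E2_EV_ANGSTROM = 14.40  # e^2/(4*pi*eps0) in eV·Angstrom (standard constant)
W_BULK = 2.75           # assumed bulk work function, eV (sodium-like)
R_S = 2.08              # assumed Wigner-Seitz radius, Angstrom (sodium-like)

def ionization_energy(n_atoms, alpha):
    """Leading-order liquid-drop ionization energy I = W + alpha*e^2/R
    for a neutral cluster of radius R = r_s * N^(1/3)."""
    radius = R_S * n_atoms ** (1.0 / 3.0)
    return W_BULK + alpha * E2_EV_ANGSTROM / radius

for n in (8, 100, 1000):
    classical = ionization_energy(n, 0.5)  # alpha = 1/2, classical limit
    quantum = ionization_energy(n, 0.4)    # alpha ~ 0.4 with quantum terms
    print(n, round(classical, 2), round(quantum, 2))
```

    Both curves approach the bulk work function as N grows; the α ≈ 0.4 curve lies systematically below the classical one, which is the experimentally observed trend the abstract refers to.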

  20. Determination of functional iron deficiency status in haemodialysis patients in central South Africa.

    PubMed

    Haupt, L; Weyers, R

    2016-08-01

    Functional iron deficiency (FID) is characterized by adequate body iron stores with an inadequate rate of iron delivery for erythropoiesis. In chronic kidney failure (CKD), iron availability is best assessed using the percentage of hypochromic red cells (%Hypo). The aim of our study was to determine the FID status of haemodialysis patients in central South Africa using the %Hypo analyte, and to evaluate the ability of the currently used biochemical tests, transferrin saturation (TSat) and serum ferritin, to diagnose FID. For this study, 49 patients on haemodialysis were recruited. Haemoglobin (Hb), mean cell volume (MCV) and %Hypo were measured on the Advia 2120i. Biochemical analytes (serum ferritin, TSat) and C-reactive protein (CRP) levels were also recorded. Of the 49 participants, 21 (42.9%) were diagnosed with FID (%Hypo >6%). A large number of patients (91.8%) were anaemic. The TSat demonstrated poor sensitivity and specificity for diagnosing FID compared with %Hypo. The use of %Hypo (rather than TSat) to guide intravenous iron use spared 16 patients its potentially harmful effects. Using %Hypo as a single analyte to diagnose FID will lead to more appropriate use of limited resources and a reduction in treatment-related complications. © 2016 John Wiley & Sons Ltd.
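    The comparison of TSat against the %Hypo reference reduces to a 2x2 sensitivity/specificity calculation. The sketch below shows the computation only; the cell counts are hypothetical (chosen merely to be consistent with 21 FID-positive and 28 FID-negative patients), since the paper's actual counts are not reproduced in this abstract.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity of an index test against a
    reference standard (here: %Hypo > 6% defining FID)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical TSat-vs-%Hypo counts for 49 patients (21 FID, 28 non-FID).
sens, spec = sens_spec(tp=10, fn=11, fp=9, tn=19)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

    Values well below 1 on both axes, as reported qualitatively in the abstract, are what motivate using %Hypo directly rather than TSat to guide intravenous iron.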

  1. The shape parameter and its modification for defining coastal profiles

    NASA Astrophysics Data System (ADS)

    Türker, Umut; Kabdaşli, M. Sedat

    2009-03-01

    The shape parameter is important for the theoretical description of sandy coastal profiles. This parameter has previously been defined as a function of the sediment-settling velocity. However, the settling velocity cannot be characterized over a wide range of sediment grains, which in turn limits the calculation of the shape parameter over a wide range. This paper provides a simpler and faster analytical equation to describe the shape parameter. The validity of the equation has been tested and compared with the previously estimated values given in both graphical and tabular forms. The results of this study indicate that the analytical solution of the shape parameter improves the usability of the profile description relative to graphical solutions, predicting better results both in the surf zone and offshore.
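    For orientation, the shape parameter A enters the equilibrium profile as h = A·x^(2/3), and a widely quoted empirical power fit ties A to the settling velocity w. The fit used below (A ≈ 0.067·w^0.44, w in cm/s, A in m^(1/3)) is an assumption quoted from the general literature, not the modified equation proposed in this paper.

```python
def shape_parameter(w_cms):
    """Assumed empirical fit A ~ 0.067 * w**0.44 (w in cm/s, A in m^(1/3));
    illustrative only."""
    return 0.067 * w_cms ** 0.44

def profile_depth(x_m, w_cms):
    """Equilibrium profile depth h = A * x**(2/3) at distance x offshore."""
    return shape_parameter(w_cms) * x_m ** (2.0 / 3.0)

# Depth 100 m offshore for medium sand settling at about 3 cm/s:
print(round(profile_depth(100.0, 3.0), 2))  # → 2.34
```

    Coarser sand (larger w) gives a larger A and hence a steeper profile, which is the physical trend any replacement formula for A must reproduce.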

  2. Ultrathin gas permeable oxide membranes for chemical sensing: Nanoporous Ta 2O 5 test study

    DOE PAGES

    Imbault, Alexander; Wang, Yue; Kruse, Peter; ...

    2015-09-25

    Conductometric gas sensors made of gas permeable metal oxide ultrathin membranes can combine the functions of a selective filter, preconcentrator, and sensing element and thus can be particularly promising for the active sampling of diluted analytes. Here we report a case study of the electron transport and gas sensing properties of such a membrane made of nanoporous Ta 2O 5. These membranes demonstrated a noticeable chemical sensitivity toward ammonia, ethanol, and acetone at high temperatures above 400 °C. Furthermore, different from traditional thin films, such gas permeable, ultrathin gas sensing elements can be made suspended, enabling advanced architectures of ultrasensitive analytical systems operating at high temperatures and in harsh environments.

  3. 75 FR 5722 - Procedures for Transportation Workplace Drug and Alcohol Testing Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    ... drugs in a DOT drug test. You must not test ``DOT specimens'' for any other drugs. (a) Marijuana... Initial test: analyte Marijuana metabolites, concentration 50 ng/mL; confirmatory test: analyte THCA,\\1\\ concentration 15 ng/mL...

  4. A density functional theory study of the correlation between analyte basicity, ZnPc adsorption strength, and sensor response.

    PubMed

    Tran, N L; Bohrer, F I; Trogler, W C; Kummel, A C

    2009-05-28

    Density functional theory (DFT) simulations were used to determine the binding strength of 12 electron-donating analytes to the zinc metal center of a zinc phthalocyanine molecule (ZnPc monomer). The analyte binding strengths were compared to the analytes' enthalpies of complex formation with boron trifluoride (BF3), which is a direct measure of their electron-donating ability or Lewis basicity. With the exception of the most basic analyte investigated, the ZnPc binding energies were found to correlate linearly with analyte basicities. Based on natural population analysis calculations, analyte complexation to the Zn metal of the ZnPc monomer resulted in limited charge transfer from the analyte to the ZnPc molecule, which increased with analyte-ZnPc binding energy. The experimental analyte sensitivities from chemiresistor ZnPc sensor data were proportional to an exponential of the binding energies from DFT calculations, consistent with sensitivity being proportional to analyte coverage and binding strength. The good correlation observed suggests that DFT is a reliable method for the prediction of chemiresistor metallophthalocyanine binding strengths and response sensitivities.

  5. 42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...

  6. 42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...

  7. 42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...

  8. Fourier Transform Infrared Absorption Spectroscopy for Quantitative Analysis of Gas Mixtures at Low Temperatures for Homeland Security Applications.

    PubMed

    Meier, D C; Benkstein, K D; Hurst, W S; Chu, P M

    2017-05-01

    Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, -5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals.

  9. Fourier Transform Infrared Absorption Spectroscopy for Quantitative Analysis of Gas Mixtures at Low Temperatures for Homeland Security Applications

    PubMed Central

    Meier, D.C.; Benkstein, K.D.; Hurst, W.S.; Chu, P.M.

    2016-01-01

    Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, −5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals. PMID:28090126

  10. Platelet Function Analyzed by Light Transmission Aggregometry.

    PubMed

    Hvas, Anne-Mette; Favaloro, Emmanuel J

    2017-01-01

    Analysis of platelet function is widely used for diagnostic work-up in patients with increased bleeding tendency. During the last decades, platelet function testing has also been introduced for evaluation of antiplatelet therapy, but this is still recommended for research purposes only. Platelet function can also be assessed for hyper-aggregability, but this is less often evaluated. Light transmission aggregometry (LTA) was introduced in the early 1960s and has since been considered the gold standard. This optical detection system is based on changes in turbidity measured as a change in light transmission, which is proportional to the extent of platelet aggregation induced by addition of an agonist. LTA is a flexible method, as different agonists can be used in varying concentrations, but performance of the test requires large blood volumes and experienced laboratory technicians, as well as specialized personnel to interpret results. In the present chapter, a protocol for LTA is described, including all steps from pre-analytical preparation to interpretation of results.
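    In common practice, the turbidity-to-aggregation relation described in this record is normalized between a platelet-rich plasma (PRP) baseline (≈0% aggregation) and a platelet-poor plasma (PPP) reference (≈100%). A minimal sketch of that conventional normalization; the helper name and baseline convention are illustrative assumptions, not taken from this record:

```python
def percent_aggregation(t_sample, t_prp, t_ppp):
    """Map a light-transmission reading onto a 0-100% aggregation scale.

    Hypothetical helper: the PRP (unaggregated) transmission defines 0%
    and the PPP (optically clear) transmission defines 100%.
    """
    return 100.0 * (t_sample - t_prp) / (t_ppp - t_prp)
```

    A reading halfway between the two baselines thus reports 50% aggregation.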

  11. BIOTIN INTERFERENCE WITH ROUTINE CLINICAL IMMUNOASSAYS: UNDERSTAND THE CAUSES AND MITIGATE THE RISKS.

    PubMed

    Samarasinghe, Shanika; Meah, Farah; Singh, Vinita; Basit, Arshi; Emanuele, Nicholas; Emanuele, Mary Ann; Mazhari, Alaleh; Holmes, Earle W

    2017-08-01

    The objectives of this report are to review the mechanisms of biotin interference with streptavidin/biotin-based immunoassays, identify automated immunoassay systems vulnerable to biotin interference, describe how to estimate and minimize the risk of biotin interference in vulnerable assays, and review the literature pertaining to biotin interference in endocrine function tests. The data in the manufacturer's "Instructions for Use" for each of the methods utilized by seven immunoassay systems were evaluated. We also conducted a systematic search of PubMed/MEDLINE for articles containing terms associated with biotin interference. Available original reports and case series were reviewed. Abstracts from recent scientific meetings were also identified and reviewed. The recent, marked increase in the use of over-the-counter, high-dose biotin supplements has been accompanied by a steady increase in the number of reports of analytical interference by exogenous biotin in the immunoassays used to evaluate endocrine function. Since immunoassay methods of similar design are also used for the diagnosis and management of anemia, malignancies, autoimmune and infectious diseases, cardiac damage, etc., biotin-related analytical interference is a problem that touches every area of internal medicine. It is important for healthcare personnel to become more aware of immunoassay methods that are vulnerable to biotin interference and to consider biotin supplements as potential sources of falsely increased or decreased test results, especially in cases where a lab result does not correlate with the clinical scenario. Abbreviations: FDA = U.S. Food & Drug Administration; FT3 = free tri-iodothyronine; FT4 = free thyroxine; IFUs = instructions for use; LH = luteinizing hormone; PTH = parathyroid hormone; SA/B = streptavidin/biotin; TFT = thyroid function test; TSH = thyroid-stimulating hormone.

  12. Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Graß, Tobias

    2012-03-01

    We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations. Program summary: Program title: Strongdeco. Catalogue identifier: AELA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5475. No. of bytes in distributed program, including test data, etc.: 31 071. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any computer on which Mathematica can be installed. Operating system: Linux, Windows, Mac. Classification: 2.9. Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.
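    The polynomial-expansion idea behind this decomposition can be shown in miniature. The sketch below is a hypothetical two-boson Python illustration (the distributed program is a Mathematica notebook handling general states): it expands the bosonic Laughlin-like wave function ψ = (z1 − z2)^2 into monomials and groups symmetric partners into unnormalized Fock amplitudes.

```python
from collections import defaultdict
from math import comb

def expand_two_boson_laughlin(q=2):
    """Expand (z1 - z2)**q into monomials z1**a * z2**b via the binomial
    theorem, then group symmetric partners (a, b) and (b, a) into
    unnormalized two-boson Fock amplitudes keyed by sorted orbital indices."""
    mono = defaultdict(int)
    for k in range(q + 1):
        mono[(q - k, k)] += comb(q, k) * (-1) ** k
    fock = defaultdict(int)
    for (a, b), c in mono.items():
        fock[tuple(sorted((a, b)))] += c  # collect symmetric partners
    return dict(fock)
```

    For q = 2 this yields amplitude 2 on the state with one boson in orbital 0 and one in orbital 2, and amplitude −2 on the state with both bosons in orbital 1.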

  13. Using Functional Analytic Therapy to Train Therapists in Acceptance and Commitment Therapy, a Conceptual and Practical Framework

    ERIC Educational Resources Information Center

    Schoendorff, Benjamin; Steinwachs, Joanne

    2012-01-01

    How can therapists be effectively trained in clinical functional contextualism? In this conceptual article we propose a new way of training therapists in Acceptance and Commitment Therapy skills using tools from Functional Analytic Psychotherapy in a training context functionally similar to the therapeutic relationship. FAP has been successfully…

  14. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.

    PubMed

    Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.

  15. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory

    PubMed Central

    Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721

  16. Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-380

    NASA Technical Reports Server (NTRS)

    Kramer, W. H.; Smith, J. J.

    1978-01-01

    The performance testing and analytical teardown of CF6-6D engine serial number 451-380 which was recently removed from a DC-10 aircraft is summarized. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Aspects discussed include the analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both specific fuel consumption and exhaust gas temperature.

  17. Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-479

    NASA Technical Reports Server (NTRS)

    Kramer, W. H.; Smith, J. J.

    1978-01-01

    The performance testing and analytical teardown of a CF6-6D engine is summarized. This engine had completed its initial installation on a DC-10 aircraft. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Included in this report is a detailed analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both SFC (specific fuel consumption) and EGT (exhaust gas temperature).

  18. S-2 stage 1/25 scale model base region thermal environment test. Volume 1: Test results, comparison with theory and flight data

    NASA Technical Reports Server (NTRS)

    Sadunas, J. A.; French, E. P.; Sexton, H.

    1973-01-01

    A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effect of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented. Tabulation of the test data is provided.

  19. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Estimated analyte precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)
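    The precision comparison described above pools within-laboratory variances and compares laboratory pairs with F-ratios. A minimal sketch with illustrative function names (the study's full ANOVA and Duncan's multiple range test are not reproduced here):

```python
def pooled_variance(groups):
    """Pool within-group variances, weighting by degrees of freedom."""
    ss = 0.0   # total within-group sum of squares
    df = 0     # total within-group degrees of freedom
    for g in groups:
        mean = sum(g) / len(g)
        ss += sum((x - mean) ** 2 for x in g)
        df += len(g) - 1
    return ss / df

def f_ratio(var_a, var_b):
    """F statistic for comparing two variances (larger on top, so F >= 1)."""
    return max(var_a, var_b) / min(var_a, var_b)
```

    The resulting F-ratio would then be compared against a critical value from the F distribution at the chosen significance level.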

  20. betaFIT: A computer program to fit pointwise potentials to selected analytic functions

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Pashov, Asen

    2017-01-01

    This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
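    As a small illustration of fitting pointwise potential values to one of these analytic forms, the sketch below evaluates the EMO shape V(r) = De·(1 − e^(−β(r−re)))² with a constant β, the lowest-order special case; betaFIT itself fits a polynomial exponent-coefficient function β(r) by proper least squares, so the brute-force grid search here is purely illustrative.

```python
import math

def emo(r, De, re, beta):
    """Expanded Morse Oscillator potential with a constant exponent
    coefficient beta (lowest-order special case of the EMO form)."""
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2

def fit_beta(points, De, re, candidates):
    """Pick the candidate beta minimizing the sum of squared residuals
    against (r, V) data points -- a brute-force stand-in for a real fit."""
    def sse(b):
        return sum((emo(r, De, re, b) - v) ** 2 for r, v in points)
    return min(candidates, key=sse)
```

    With noiseless synthetic data the grid search recovers the generating β exactly, which is the sanity check one would run before fitting real ab initio points.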

  1. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results of each test are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.

  2. Identification of overlapping communities and their hierarchy by locally calculating community-changing resolution levels

    NASA Astrophysics Data System (ADS)

    Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen

    2011-01-01

    We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
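    The local-fitness mechanism can be sketched with the LFM-style fitness f(C) = k_in / (k_in + k_out)^α and plain greedy growth. This illustrates only the general idea of expanding a natural community from a seed, not the authors' analytic calculation of community-changing resolution levels:

```python
def fitness(community, adj, alpha=1.0):
    """LFM-style fitness: internal degree over total degree raised to alpha.
    Each internal edge is counted twice (once per endpoint)."""
    k_in = k_out = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                k_in += 1
            else:
                k_out += 1
    return k_in / (k_in + k_out) ** alpha

def grow(seed, adj, alpha=1.0):
    """Greedily add the neighboring node that most increases the fitness,
    stopping when no single addition improves it."""
    comm = {seed}
    while True:
        frontier = {v for u in comm for v in adj[u]} - comm
        best, best_f = None, fitness(comm, adj, alpha)
        for v in frontier:
            f = fitness(comm | {v}, adj, alpha)
            if f > best_f:
                best, best_f = v, f
        if best is None:
            return comm
        comm.add(best)
```

    On a toy graph of two triangles joined by a single edge, growth from a seed in either triangle stops at that triangle, since absorbing the bridge node lowers the fitness.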

  3. Superfund CLP National Functional Guidelines for Data Review

    EPA Pesticide Factsheets

    A collection of all the national functional guidelines for data review written and maintained by EPA OSWER OSRTI's Analytical Services Branch (ASB). Used for review of analytical data generated using CLP SOWs.

  4. Analytical mass formula and nuclear surface properties in the ETF approximation. Part I: symmetric nuclei

    NASA Astrophysics Data System (ADS)

    Aymard, François; Gulminelli, Francesca; Margueron, Jérôme

    2016-08-01

    The problem of determination of nuclear surface energy is addressed within the framework of the extended Thomas Fermi (ETF) approximation using Skyrme functionals. We propose an analytical model for the density profiles with variationally determined diffuseness parameters. In this first paper, we consider the case of symmetric nuclei. In this situation, the ETF functional can be exactly integrated, leading to an analytical formula expressing the surface energy as a function of the couplings of the energy functional. The importance of non-local terms is stressed and it is shown that they cannot be deduced simply from the local part of the functional, as it was suggested in previous works.

  5. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
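    The core PSO update is compact enough to sketch. Below is a minimal, self-contained version using commonly chosen inertia/cognitive/social coefficients (0.7, 1.5, 1.5 are assumptions here, not values from the paper), demonstrated on a toy quadratic objective rather than a full semi-analytic model evaluation:

```python
import random

def pso(f, dim, n=20, iters=200, seed=0):
    """Minimal particle swarm minimizer: each velocity update mixes
    inertia, attraction to the particle's own best, and attraction
    to the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

    In a real SAM calibration each call to `f` would run the model and score it against the observational constraints, which is why reducing the number of evaluations matters so much.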

  6. Novel approach for dam break flow modeling using computational intelligence

    NASA Astrophysics Data System (ADS)

    Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar

    2018-04-01

    A new methodology based on the computational intelligence (CI) system is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models, including the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of the radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical, and Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
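    For reference, the classic closed-form benchmark for the ideal 1D dam break on a dry, frictionless bed is Ritter's solution; scenarios with a finite downstream depth, as considered in the paper, require the more general wet-bed (Stoker) solution, so the simpler dry-bed case is sketched here only to make the analytic target concrete:

```python
import math

def ritter(x, t, h0, g=9.81):
    """Ritter's analytic dam-break solution on a dry, frictionless bed:
    dam at x = 0, initial depth h0 upstream, released at t = 0.
    Returns (depth, velocity) at position x and time t > 0."""
    c0 = math.sqrt(g * h0)              # upstream wave celerity
    if x <= -c0 * t:
        return h0, 0.0                  # undisturbed reservoir
    if x >= 2.0 * c0 * t:
        return 0.0, 0.0                 # dry bed ahead of the front
    h = (2.0 * c0 - x / t) ** 2 / (9.0 * g)   # parabolic free surface
    u = (2.0 / 3.0) * (x / t + c0)
    return h, u
```

    A well-known check is that the depth at the dam site stays fixed at 4h0/9 for all t > 0.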

  7. The statistical theory of the fracture of fragile bodies. Part 2: The integral equation method

    NASA Technical Reports Server (NTRS)

    Kittl, P.

    1984-01-01

    It is demonstrated how, with the aid of a bending test, the Weibull fracture risk function can be determined - without postulating its analytical form - by solving an integral equation. The respective solutions for rectangular and circular section beams are given. In the first case the function is expressed as an algorithm and in the second, in the form of series. Taking into account that the cumulative fracture probability appearing in the solution to the integral equation must be continuous and monotonically increasing, any case of fabrication or selection of samples can be treated.
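    In the simplest special case (two-parameter Weibull statistics under uniform uniaxial stress), the fracture risk integrates to the familiar closed form below. This is shown for orientation only, since the point of the paper is precisely to recover the risk function without postulating such an analytic form:

```python
import math

def fracture_probability(sigma, sigma0, m):
    """Two-parameter Weibull cumulative fracture probability
    P_f = 1 - exp(-(sigma/sigma0)**m), with Weibull modulus m and
    characteristic strength sigma0 (textbook special case)."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))
```

    At sigma = sigma0 the failure probability is 1 − 1/e ≈ 63.2%, independent of the modulus m; P_f is continuous and monotonically increasing in sigma, as the integral-equation solution requires.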

  8. Literature search of publications concerning the prediction of dynamic inlet flow distortion and related topics

    NASA Technical Reports Server (NTRS)

    Schweikhhard, W. G.; Chen, Y. S.

    1983-01-01

    Publications prior to March 1981 were surveyed to determine inlet flow dynamic distortion prediction methods and to catalog experimental and analytical information concerning inlet flow dynamics at the engine-inlet interface of conventional aircraft (excluding V/STOL). The sixty-five publications found are briefly summarized and tabulated according to topic, and are cross-referenced according to content and nature of the investigation (e.g., predictive, experimental, analytical, and types of tests). Three appendices include lists of references, authors, and the organizations and agencies conducting the studies. Selected materials from the reports - summaries, introductions, and conclusions - are also included. Few reports were found covering methods for predicting the probable maximum distortion. The three predictive methods found are those of Melick, Jacox, and Motycka. The latter two require extensive high response pressure measurements at the compressor face, while the Melick technique can function with as few as one or two measurements.

  9. Development of paper-based microfluidic analytical device for iron assay using photomask printed with 3D printer for fabrication of hydrophilic and hydrophobic zones on paper by photolithography.

    PubMed

    Asano, Hitoshi; Shiraishi, Yukihide

    2015-07-09

This paper describes a paper-based microfluidic analytical device for iron assay that uses a photomask printed with a 3D printer to fabricate hydrophilic and hydrophobic zones on the paper by photolithography. Photomasks with various patterns for paper-based microfluidic analytical devices can be printed with a 3D printer easily, rapidly and inexpensively. A chromatography paper was impregnated with an octadecyltrichlorosilane n-hexane solution and thereby hydrophobized. After the hydrophobic zone of the paper was exposed to UV light through the photomask, the hydrophilic zone was generated. The smallest functional hydrophilic channel and hydrophobic barrier were ca. 500 μm and ca. 100 μm in width, respectively. The fabrication method offers high stability, resolution and precision for the hydrophilic channels and hydrophobic barriers. This test paper was applied to the analysis of iron in water samples using colorimetry with phenanthroline. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) for intraoperative neurochemical monitoring.

    PubMed

    Kimble, Christopher J; Johnson, David M; Winter, Bruce A; Whitlock, Sidney V; Kressin, Kenneth R; Horne, April E; Robinson, Justin C; Bledsoe, Jonathan M; Tye, Susannah J; Chang, Su-Youne; Agnesi, Filippo; Griessenauer, Christoph J; Covey, Daniel; Shon, Young-Min; Bennet, Kevin E; Garris, Paul A; Lee, Kendall H

    2009-01-01

    The Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) measures extracellular neurotransmitter concentration in vivo and displays the data graphically in nearly real time. WINCS implements two electroanalytical methods, fast-scan cyclic voltammetry (FSCV) and fixed-potential amperometry (FPA), to measure neurotransmitter concentrations at an electrochemical sensor, typically a carbon-fiber microelectrode. WINCS comprises a battery-powered patient module and a custom software application (WINCSware) running on a nearby personal computer. The patient module impresses upon the electrochemical sensor either a constant potential (for FPA) or a time-varying waveform (for FSCV). A transimpedance amplifier converts the resulting current to a signal that is digitized and transmitted to the base station via a Bluetooth radio link. WINCSware controls the operational parameters for FPA or FSCV, and records the transmitted data stream. Filtered data is displayed in various formats, including a background-subtracted plot of sequential FSCV scans - a representation that enables users to distinguish the signatures of various analytes with considerable specificity. Dopamine, glutamate, adenosine and serotonin were selected as analytes for test trials. Proof-of-principle tests included in vitro flow-injection measurements and in vivo measurements in rat and pig. Further testing demonstrated basic functionality in a 3-Tesla MRI unit. WINCS was designed in compliance with consensus standards for medical electrical device safety, and it is anticipated that its capability for real-time intraoperative monitoring of neurotransmitter release at an implanted sensor will prove useful for advancing functional neurosurgery.

  11. A Semianalytical Model for Pumping Tests in Finite Heterogeneous Confined Aquifers With Arbitrarily Shaped Boundary

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Dai, Cheng; Xue, Liang

    2018-04-01

This study presents a Laplace-transform-based boundary element method to model groundwater flow in a heterogeneous confined finite aquifer with arbitrarily shaped boundaries. The boundary condition can be of Dirichlet, Neumann or Robin type. The derived solution is analytical within the domain, since it is obtained through the Green's function method; however, numerical approximation is required on the boundaries, which essentially renders it a semi-analytical solution. The proposed method provides a general framework for deriving solutions for zoned heterogeneous confined aquifers with arbitrarily shaped boundaries. The requirement of the boundary element method presented here is that a Green's function must exist for the specific PDE. In this study, the linear equations for two-zone and three-zone confined aquifers with arbitrarily shaped boundaries are established in Laplace space, and the solution can be obtained using any linear solver. The Stehfest inversion algorithm can then be used to transform the result back into the time domain to obtain the transient solution. The presented solution is validated for the two-zone cases by reducing the arbitrarily shaped boundaries to circular ones and comparing it with the solution in Lin et al. (2016, https://doi.org/10.1016/j.jhydrol.2016.07.028). The effect of boundary shape and well location on dimensionless drawdown in two-zone aquifers is investigated. Finally, the drawdown distribution in three-zone aquifers with arbitrarily shaped boundaries is analyzed for constant-rate tests (CRT), as is the flow rate distribution for constant-head tests (CHT).
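The Stehfest inversion step mentioned above is a standard numerical Laplace inversion and can be sketched in a few lines of Python. The number of terms N and the test transform below are illustrative choices, not values from the study.

```python
import math

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at time t > 0."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
assert abs(approx - math.exp(-1.0)) < 1e-4
```

The algorithm works well for the smooth, non-oscillatory drawdown curves typical of pumping-test solutions, which is why it is a common choice in this setting.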

  12. Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) for Intraoperative Neurochemical Monitoring

    PubMed Central

    Kimble, Christopher J.; Johnson, David M.; Winter, Bruce A.; Whitlock, Sidney V.; Kressin, Kenneth R.; Horne, April E.; Robinson, Justin C.; Bledsoe, Jonathan M.; Tye, Susannah J.; Chang, Su-Youne; Agnesi, Filippo; Griessenauer, Christoph J.; Covey, Daniel; Shon, Young-Min; Bennet, Kevin E.; Garris, Paul A.; Lee, Kendall H.

    2010-01-01

    The Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) measures extracellular neurotransmitter concentration in vivo and displays the data graphically in nearly real time. WINCS implements two electroanalytical methods, fast-scan cyclic voltammetry (FSCV) and fixed-potential amperometry (FPA), to measure neurotransmitter concentrations at an electrochemical sensor, typically a carbon-fiber microelectrode. WINCS comprises a battery-powered patient module and a custom software application (WINCSware) running on a nearby personal computer. The patient module impresses upon the electrochemical sensor either a constant potential (for FPA) or a time-varying waveform (for FSCV). A transimpedance amplifier converts the resulting current to a signal that is digitized and transmitted to the base station via a Bluetooth® radio link. WINCSware controls the operational parameters for FPA or FSCV, and records the transmitted data stream. Filtered data is displayed in various formats, including a background-subtracted plot of sequential FSCV scans—a representation that enables users to distinguish the signatures of various analytes with considerable specificity. Dopamine, glutamate, adenosine and serotonin were selected as analytes for test trials. Proof-of-principle tests included in vitro flow-injection measurements and in vivo measurements in rat and pig. Further testing demonstrated basic functionality in a 3-Tesla MRI unit. WINCS was designed in compliance with consensus standards for medical electrical device safety, and it is anticipated that its capability for real-time intraoperative monitoring of neurotransmitter release at an implanted sensor will prove useful for advancing functional neurosurgery. PMID:19963865

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

This report was prepared at the request of the Lawrence Livermore Laboratory (LLL) to provide background information for analyzing soil-structure interaction by the frequency-independent impedance function approach. LLL is conducting such analyses as part of its seismic review of selected operating plants under the Systematic Evaluation Program for the US Nuclear Regulatory Commission. The analytical background and basic assumptions of the impedance function theory are briefly reviewed, and the role of radiation damping in soil-structure interaction analysis is discussed. The validity of modeling soil-structure interaction by using frequency-independent functions is evaluated based on data from several field tests. Finally, the recommended procedures for performing soil-structure interaction analyses are discussed with emphasis on the modal superposition method.

  14. Bayesian Community Detection in the Space of Group-Level Functional Differences

    PubMed Central

    Venkataraman, Archana; Yang, Daniel Y.-J.; Pelphrey, Kevin A.; Duncan, James S.

    2017-01-01

    We propose a unified Bayesian framework to detect both hyper- and hypo-active communities within whole-brain fMRI data. Specifically, our model identifies dense subgraphs that exhibit population-level differences in functional synchrony between a control and clinical group. We derive a variational EM algorithm to solve for the latent posterior distributions and parameter estimates, which subsequently inform us about the afflicted network topology. We demonstrate that our method provides valuable insights into the neural mechanisms underlying social dysfunction in autism, as verified by the Neurosynth meta-analytic database. In contrast, both univariate testing and community detection via recursive edge elimination fail to identify stable functional communities associated with the disorder. PMID:26955022

  15. Bayesian Community Detection in the Space of Group-Level Functional Differences.

    PubMed

    Venkataraman, Archana; Yang, Daniel Y-J; Pelphrey, Kevin A; Duncan, James S

    2016-08-01

    We propose a unified Bayesian framework to detect both hyper- and hypo-active communities within whole-brain fMRI data. Specifically, our model identifies dense subgraphs that exhibit population-level differences in functional synchrony between a control and clinical group. We derive a variational EM algorithm to solve for the latent posterior distributions and parameter estimates, which subsequently inform us about the afflicted network topology. We demonstrate that our method provides valuable insights into the neural mechanisms underlying social dysfunction in autism, as verified by the Neurosynth meta-analytic database. In contrast, both univariate testing and community detection via recursive edge elimination fail to identify stable functional communities associated with the disorder.

  16. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. The method commonly employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY, which makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. The technique has demonstrated the capability to use small or large data sets with uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
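The conventional fixed-exponent step that the report improves upon can be sketched as follows: with the relaxation times assumed, the remaining Prony constants follow from linear least squares. This stdlib Python sketch fits synthetic relaxation data; all numerical values are invented, and the report's actual contribution (freeing the exponents via optimization) is not implemented here.

```python
import math

def prony_design_matrix(times, taus):
    # Columns: constant term G_inf, then exp(-t/tau_i) for each fixed tau_i
    return [[1.0] + [math.exp(-t / tau) for tau in taus] for t in times]

def solve_normal_equations(A, y):
    """Least squares via normal equations A^T A x = A^T y,
    solved by Gaussian elimination with partial pivoting."""
    n = len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, n):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, n):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (Aty[r] - sum(AtA[r][c] * x[c] for c in range(r + 1, n))) / AtA[r][r]
    return x

# Synthetic relaxation data from a known 2-term Prony series
taus = [0.1, 1.0]                  # assumed (fixed) relaxation times
true = [2.0, 5.0, 3.0]             # [G_inf, g_1, g_2], invented
times = [0.01 * i for i in range(1, 301)]
data = [true[0] + true[1] * math.exp(-t / taus[0])
        + true[2] * math.exp(-t / taus[1]) for t in times]
fit = solve_normal_equations(prony_design_matrix(times, taus), data)
assert all(abs(a - b) < 1e-6 for a, b in zip(fit, true))
```

With noisy real data and poorly guessed relaxation times, this linear step degrades quickly, which motivates optimizing the exponential constants as well.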

  17. Does the cost function matter in Bayes decision rule?

    PubMed

Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
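The gap between the two cost functions can be seen in a toy example: with a hypothetical posterior over three candidate strings, the 0-1 (MAP) rule and the expected-edit-distance rule pick different outputs. The candidate strings and probabilities below are invented for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming (one rolling row)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

# Toy posterior over candidate strings (hypothetical values)
posterior = {"aa": 0.4, "bb": 0.3, "ab": 0.3}

# 0-1 cost: pick the maximum a posteriori string
map_decision = max(posterior, key=posterior.get)

# Metric cost: pick the string minimizing expected edit distance
def expected_cost(w):
    return sum(p * edit_distance(w, v) for v, p in posterior.items())

bayes_risk_decision = min(posterior, key=expected_cost)

assert map_decision == "aa"          # highest posterior mass
assert bayes_risk_decision == "ab"   # lower expected symbol-level cost
```

Here "ab" sits "between" the two high-probability strings, so it wins under the metric cost even though it is not the most probable string - exactly the kind of divergence whose conditions the paper analyzes.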

  18. Managing laboratory test ordering through test frequency filtering.

    PubMed

    Janssens, Pim M W; Wasser, Gerd

    2013-06-01

Modern computer systems allow limits to be set on the periods allowed for repetitive testing. We investigated a computerised system for managing potentially overly frequent laboratory testing and calculated the financial savings obtained. In consultation with hospital physicians, tests were selected for which 'spare periods' (periods during which repeat tests are barred) might be set to control repetitive testing. The tests were selected and spare periods determined based on known analyte variations in health and disease, the variety of tissues or cells giving rise to the analytes, the clinical conditions and rates of change determining analyte levels, the frequency with which doctors need information about the analytes, and the logistical needs of the clinic. The operation and acceptance of the system was explored with 23 analytes. Frequency filtering was subsequently introduced for 44 tests, each with their own spare periods. The proportion of tests barred was 0.56%, the most frequent of these being for total cholesterol, uric acid and HDL-cholesterol. The financial savings were 0.33% of the costs of all testing, with HbA1c, HDL-cholesterol and vitamin B12 yielding the largest savings. Following the introduction of the system the number of barred tests ultimately decreased, suggesting accommodation by the test requestors. Managing laboratory testing through computerised limits to prevent overly frequent testing is feasible. The savings were relatively low, but sustaining the system takes little effort, giving little reason not to apply it. The findings will serve as a basis for improving the system and may guide others in introducing similar systems.
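The barring logic such a system applies can be sketched as follows. The spare periods below are hypothetical; the study's actual values were set per analyte in consultation with clinicians.

```python
from datetime import date, timedelta

# Hypothetical spare periods in days (repeat orders barred within them)
SPARE_PERIODS = {"HbA1c": 30, "total cholesterol": 90, "vitamin B12": 180}

def is_barred(test, last_ordered, today):
    """Return True if a repeat order falls inside the test's spare period."""
    spare = SPARE_PERIODS.get(test)
    if spare is None or last_ordered is None:
        return False                       # no filter configured, or first order
    return today < last_ordered + timedelta(days=spare)

assert is_barred("HbA1c", date(2013, 1, 1), date(2013, 1, 20))      # within 30 days
assert not is_barred("HbA1c", date(2013, 1, 1), date(2013, 3, 1))   # period elapsed
assert not is_barred("sodium", date(2013, 1, 1), date(2013, 1, 2))  # unfiltered test
```

In practice such a check would run at order entry, so the requesting physician sees immediately that a recent result already exists.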

  19. Depth-resolved monitoring of analytes diffusion in ocular tissues

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.

    2007-02-01

Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied, and their permeability coefficients were calculated. Drug diffusion monitoring was performed as a function of time and as a function of depth. The results obtained suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.
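One simple way a depth-resolved permeability coefficient might be estimated from such OCT data is to record, for each monitored depth, the time at which the diffusing analyte front reaches it, and average the depth/time ratios. This is an illustrative sketch only; the exact estimator and all values below are assumptions, not taken from the record.

```python
def mean_permeability(depths_cm, times_s):
    """Average of z_i / t_i over monitored depths (hypothetical estimator)."""
    rates = [z / t for z, t in zip(depths_cm, times_s)]
    return sum(rates) / len(rates)

depths = [0.010, 0.020, 0.030]   # cm, hypothetical monitored depths
times = [100.0, 210.0, 320.0]    # s, hypothetical analyte arrival times
p = mean_permeability(depths, times)
assert 9e-5 < p < 1.1e-4         # ~1e-4 cm/s, a plausible order of magnitude
```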

  20. The Effect of Contingent Reinforcement on Target Variables in Outpatient Psychotherapy for Depression: A Successful and Unsuccessful Case Using Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Kanter, Jonathan W.; Landes, Sara J.; Busch, Andrew M.; Rusch, Laura C.; Brown, Keri R.; Baruch, David E.; Holman, Gareth I.

    2006-01-01

    The current study investigated a behavior-analytic treatment, functional analytic psychotherapy (FAP), for outpatient depression utilizing two single-subject A/A+B designs. The baseline condition was cognitive behavioral therapy. Results demonstrated treatment success in 1 client after the addition of FAP and treatment failure in the 2nd. This…

  1. Post-analytical Issues in Hemostasis and Thrombosis Testing.

    PubMed

    Favaloro, Emmanuel J; Lippi, Giuseppe

    2017-01-01

Analytical concerns within hemostasis and thrombosis testing are continuously decreasing. This is essentially attributable to modern instrumentation, improvements in test performance and reliability, and the application of appropriate internal quality control and external quality assurance measures. Pre-analytical issues are also being dealt with by some newer instrumentation, which is able to detect hemolysis, icterus and lipemia, and, in some cases, other issues related to sample collection such as tube under-filling. Post-analytical issues are generally related to appropriate reporting and interpretation of test results, and these are the focus of the current overview, which provides a brief description of these events, as well as guidance for their prevention or minimization. In particular, we propose several strategies for improved post-analytical reporting of hemostasis assays and advise that this may provide the final opportunity to prevent serious clinical errors in diagnosis.

  2. Gel-based immunotest for simultaneous detection of 2,4,6-trichlorophenol and ochratoxin A in red wine.

    PubMed

    Beloglazova, N V; Goryacheva, I Yu; Rusanova, T Yu; Yurasov, N A; Galve, R; Marco, M-P; De Saeger, S

    2010-07-05

A new rapid method was developed that allows simultaneous one-step detection of two analytes of different nature, 2,4,6-trichlorophenol (TCP) and ochratoxin A (OTA), in red wine. It was based on a column test with three separate immunolayers: two test layers and one control layer. Each layer consisted of sepharose gel with immobilized anti-OTA (OTA test layer), anti-TCP (TCP test layer) or anti-HRP (control layer) antibodies. Analytes bind to the antibodies in the corresponding test layer while the sample flows through the column. A mixture of OTA-HRP and TCP-HRP conjugates in appropriate dilutions was then applied, followed by a chromogenic substrate. Colour development in a test layer occurred when the corresponding analyte was absent from the sample. HRP conjugates bound to the anti-HRP antibody in the control layer independently of the presence or absence of the analytes, so a blue colour developed in the control layer. Cut-off values for both analytes were 2 microg L(-1). The described method was applied to the simultaneous detection of TCP and OTA in wine samples. To screen for the analytes in red wine samples, clean-up columns were used for sample pre-treatment in combination with the test column. Results were confirmed by chromatographic methods. Copyright 2010 Elsevier B.V. All rights reserved.

  3. Urea functionalized surface-bonded sol-gel coating for on-line hyphenation of capillary microextraction with high-performance liquid chromatography.

    PubMed

    Jillani, Shehzada Muhammad Sajid; Alhooshani, Khalid

    2018-03-30

A sol-gel urea-functionalized [bis(hydroxyethyl)amine]-terminated polydimethylsiloxane coating was developed for capillary microextraction-high performance liquid chromatographic analysis of aqueous samples. A fused silica capillary is coated on the inside with a surface-bonded coating material created through an in-situ sol-gel reaction. The urea-functionalized coating was immobilized on the inner surface of the capillary by the condensation reaction of the silanol groups of the capillary with the sol solution. The coating material was characterized using X-ray photoelectron spectroscopy, thermogravimetric analysis, field emission scanning electron microscopy, and energy dispersive X-ray spectrometry. For online capillary microextraction-high performance liquid chromatography, the urea-functionalized capillary was installed in the HPLC manual injection port. The analytes of interest were pre-concentrated in the coated sampling loop, desorbed by the mobile phase, chromatographically separated on a C-18 column, and analyzed by UV detector. Sol-gel coated capillaries were used for online extraction and high-performance liquid chromatographic analysis of phenols, ketones, aldehydes, and polyaromatic hydrocarbons. This newly developed coating showed excellent extraction for a variety of analytes ranging from highly polar to non-polar in nature. The analysis using the sol-gel coating showed excellent overall sensitivity in terms of low detection limits (S/N = 3) for the analytes (0.10-14.29 ng mL(-1)) with acceptable reproducibility of less than 12.0% RSD (n = 3). Moreover, the capillary-to-capillary reproducibility of the analysis was tested by changing capillaries of the same size, which gave an excellent RSD of less than 10.0% (n = 3). Copyright © 2018 Elsevier B.V. All rights reserved.
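Detection limits quoted at S/N = 3 conventionally correspond to three standard deviations of the blank response divided by the calibration slope. A minimal sketch of that calculation follows; all numbers are hypothetical, not values from the study.

```python
import statistics

def detection_limit(blank_signals, slope):
    """LOD under the S/N = 3 convention: 3 x s.d. of blank / calibration slope."""
    return 3 * statistics.stdev(blank_signals) / slope

blanks = [0.0012, 0.0010, 0.0011, 0.0013, 0.0009]  # hypothetical blank responses
slope = 0.0045                                     # response per ng/mL, hypothetical
lod = detection_limit(blanks, slope)
assert 0.05 < lod < 0.2   # ng/mL, same order as the lowest reported LOD
```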

  4. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    PubMed

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each assay's dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with the original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
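The concordance figures reported above amount to a simple percent-agreement calculation. The sketch below uses invented binary calls on ten samples; the study's 434-sample data are of course not reproduced here.

```python
def percent_agreement(reference_calls, new_method_calls):
    """Percentage of samples where the new analytical method reproduces
    the original classifier call."""
    assert len(reference_calls) == len(new_method_calls)
    matches = sum(a == b for a, b in zip(reference_calls, new_method_calls))
    return 100.0 * matches / len(reference_calls)

# Hypothetical binary risk calls for 10 samples (1 = elevated risk)
ref = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
new = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
assert percent_agreement(ref, new) == 90.0
```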

  5. The singular values of the imbedding operators of some classes of analytic functions of several variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parfenov, O.G.

    1994-12-25

We discuss three results. The first exhibits the order of decrease of the s-values as a function of the CR-dimension of the compact set on which we approximate the class of analytic functions under study. The second is an asymptotic formula for the case when the domain of analyticity and the compact set are Reinhardt domains. The third is the computation of the s-values of a special operator that is of interest for approximation theory on one-dimensional manifolds.

  6. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry (CDS). Neither of the two proposed analytical methods requires any chemical separation step. In the first step, several wavelet families were tested to find an optimal CWT for processing the overlapping signals of the analyzed compounds. We observed that the coiflet (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the CDS approach was also applied to the same simultaneous quantitative resolution problem. Calibration functions were obtained by measuring the transform amplitudes at zero-crossing points for both the CWT and CDS methods. The utility of the two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and both were applied to real samples of the veterinary powder formulation. The experimental results obtained with the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
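The zero-crossing calibration idea common to both methods can be demonstrated numerically: at a wavelength where the derivative of the interfering band vanishes, the mixture's derivative amplitude depends only on the analyte concentration. The sketch below uses hypothetical Gaussian absorption bands, not the actual spectra of chlortetracycline or benzocaine.

```python
import math

def gaussian_band(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def d_spectrum(spectrum_fn, x, h=1e-4):
    """Central-difference first derivative of an absorbance spectrum."""
    return (spectrum_fn(x + h) - spectrum_fn(x - h)) / (2 * h)

# Two overlapping bands: analyte A and interferent B (hypothetical parameters)
mu_a, mu_b, sigma = 340.0, 360.0, 15.0

def mixture(c_a, c_b):
    return lambda x: (c_a * gaussian_band(x, mu_a, sigma)
                      + c_b * gaussian_band(x, mu_b, sigma))

# The first derivative of band B vanishes at its peak mu_b (a zero-crossing
# point), so the mixture's derivative there depends only on c_a.
amp_1 = d_spectrum(mixture(1.0, 0.5), mu_b)
amp_2 = d_spectrum(mixture(1.0, 2.0), mu_b)   # more interferent, same analyte
amp_3 = d_spectrum(mixture(2.0, 0.5), mu_b)   # doubled analyte
assert abs(amp_1 - amp_2) < 1e-9              # insensitive to interferent
assert abs(amp_3 - 2 * amp_1) < 1e-9          # linear calibration in c_a
```

The same principle applies whether the amplitude is read from a derivative spectrum or from a CWT of the spectrum; the wavelet family and dilation parameter control where such crossing points occur.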

  7. Numerical and analytic models of spontaneous frequency sweeping for energetic particle-driven Alfven eigenmodes

    NASA Astrophysics Data System (ADS)

    Wang, Ge; Berk, H. L.

    2011-10-01

The frequency chirping signal arising from a spontaneously excited toroidal Alfven eigenmode (TAE) driven by energetic particles is studied with both numerical and analytic models. The time-dependent numerical model is based on the 1D Vlasov equation. We use a sophisticated tracking method to lock onto the resonant structure, so that the chirping frequency is nearly constant in the calculation frame. The accuracy of the adiabatic approximation is tested during the simulation, which justifies the appropriateness of our analytic model. The analytic model uses the adiabatic approximation, which allows us to solve the wave evolution equation in frequency space. The resonant interactions between energetic particles and the TAE then yield predictions for the chirping rate, wave frequency, and amplitude vs. time. Here, an adiabatic invariant J is defined on the separatrix of a chirping mode to determine the region of confinement of the wave-trapped distribution function. We examine the asymptotic behavior of the chirping signal over its long-time evolution and find agreement in essential features with the results of the simulation. Work supported by Department of Energy contract DE-FC02-08ER54988.

  8. Helios: Understanding Solar Evolution Through Text Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randazzese, Lucien

This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or "breakthroughs," in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.

  9. Aeroelastic loads and stability investigation of a full-scale hingeless rotor

    NASA Technical Reports Server (NTRS)

    Peterson, Randall L.; Johnson, Wayne

    1991-01-01

An analytical investigation was conducted to study the influence of various parameters on predicting the aeroelastic loads and stability of a full-scale hingeless rotor in hover and forward flight. The CAMRAD/JA (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics, Johnson Aeronautics) analysis code is used to obtain the analytical predictions. Data are presented for rotor blade bending and torsional moments, as well as inplane damping, for rotor operation in hover at a constant rotor rotational speed of 425 rpm and thrust coefficients between 0.0 and 0.12. Experimental data are presented from a wind tunnel test. Validation of the rotor system structural model against experimental rotor blade loads data shows excellent correlation with analytical results. Using this analysis, the influence of different aerodynamic inflow models, the number of generalized blade and body degrees of freedom, and the control-system stiffness on predicted stability levels is shown. Forward flight predictions of the BO-105 rotor system for 1-g thrust conditions at advance ratios of 0.0 to 0.35 are presented. The influence of different aerodynamic inflow models, dynamic inflow models, and shaft angle variations on predicted stability levels is shown as a function of advance ratio.

  10. Applying the design-build-test paradigm in microbiome engineering.

    PubMed

    Pham, Hoang Long; Ho, Chun Loong; Wong, Adison; Lee, Yung Seng; Chang, Matthew Wook

    2017-12-01

    The recently discovered roles of human microbiome in health and diseases have inspired research efforts across many disciplines to engineer microbiome for health benefits. In this review, we highlight recent progress in human microbiome research and how modifications to the microbiome could result in implications to human health. Furthermore, we discuss the application of a 'design-build-test' framework to expedite microbiome engineering efforts by reviewing current literature on three key aspects: design principles to engineer the human microbiome, methods to engineer microbiome with desired functions, and analytical techniques to examine complex microbiome samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Space Shuttle Plume and Plume Impingement Study

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Penny, M. M.

    1977-01-01

    The extent of the influence of the propulsion system exhaust plumes on the vehicle performance and control characteristics is a complex function of vehicle geometry, propulsion system geometry, engine operating conditions, and vehicle flight trajectory. Analytical support of the plume technology test program was directed at three problem areas: (1) definition of the full-scale exhaust plume characteristics; (2) application of appropriate similarity parameters; and (3) analysis of wind tunnel test data. Verification of the two-phase plume and plume impingement models was directed toward the definition of the full-scale exhaust plume characteristics and the separation motor impingement problem.

  12. Nodal Green’s Function Method Singular Source Term and Burnable Poison Treatment in Hexagonal Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A.A. Bingham; R.M. Ferrer; A.M. Ougouag

    2009-09-01

    An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green’s function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects the non-physical discontinuity terms that arise in the transverse leakage when the transverse integration procedure is applied to hexagonal geometry, and it cannot account for the effects of burnable poisons across nodal boundaries. The code developed for this work accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green’s function solution to the resulting equation to derive a semi-analytical solution.

  13. In-orbit evaluation of the control system/structural mode interactions of the OSO-8 spacecraft

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1979-01-01

    The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests were conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. The paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests was used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments, and have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system.

  14. A continuum of executive function deficits in early subcortical vascular cognitive impairment: A systematic review and meta-analysis.

    PubMed

    Sudo, Felipe Kenji; Amado, Patricia; Alves, Gilberto Sousa; Laks, Jerson; Engelhardt, Eliasz

    2017-01-01

    Subcortical Vascular Cognitive Impairment (SVCI) is a clinical continuum of vascular-related cognitive impairment, including Vascular Mild Cognitive Impairment (VaMCI) and Vascular Dementia. Deficits in Executive Function (EF) are hallmarks of the disorder, but the best methods to assess this function have yet to be determined. The insidious and almost predictable course of SVCI and the multidimensional concept of EF suggest that a temporal dissociation of impairments in EF domains exists early in the disorder. This study aims to review and analyze data from the literature about performance of VaMCI patients on the most used EF tests through a meta-analytic approach. Medline, Web of Knowledge and PsycINFO were searched, using the terms: "vascular mild cognitive impairment" OR "vascular cognitive impairment no dementia" OR "vascular mild neurocognitive disorder" AND "dysexecutive" OR "executive function". Meta-analyses were conducted for each of the selected tests, using random-effect models. Systematic review showed major discrepancies among the results of the studies included. Meta-analyses evidenced poorer performance on the Trail-Making Test part B and the Stroop color test by VaMCI patients compared to controls. A continuum of EF impairments has been proposed in SVCI. Early deficits appear to occur in cognitive flexibility and inhibitory control.
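
    The random-effects pooling used in meta-analyses such as this one is commonly the DerSimonian-Laird estimator; a minimal sketch follows (the function name, inputs, and example numbers are illustrative, not taken from the study):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes.

    effects:   per-study effect estimates (e.g., standardized mean
               differences on an EF test such as Trail-Making part B)
    variances: corresponding within-study variances
    Returns (pooled_effect, pooled_se, tau_sq).
    """
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau_sq = max(0.0, (q - df) / c)                   # between-study variance
    w_star = [1.0 / (v + tau_sq) for v in variances]  # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau_sq

# illustrative numbers only, not data from the reviewed studies
pooled, se, tau_sq = random_effects_pool(
    [0.41, 0.78, 0.55, 0.62], [0.04, 0.06, 0.05, 0.03])
```

    When heterogeneity across studies is small (Q below its degrees of freedom), tau_sq collapses to zero and the estimate reduces to the fixed-effect pooled value.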

  15. Analytic complexity of functions of two variables

    NASA Astrophysics Data System (ADS)

    Beloshapka, V. K.

    2007-09-01

    The definition of analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or “planar”) 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.

  16. Double Wigner distribution function of a first-order optical system with a hard-edge aperture.

    PubMed

    Pan, Weiqing

    2008-01-01

    The effect of an apertured optical system on the Wigner distribution can be expressed as a superposition integral of the input Wigner distribution function and the double Wigner distribution function of the apertured optical system. By expanding the hard aperture function into a finite sum of complex Gaussian functions, the double Wigner distribution functions of a first-order optical system with a hard aperture outside and inside it are derived. As an example of application, the analytical expressions of the Wigner distribution for a Gaussian beam passing through a spatial filtering optical system with an internal hard aperture are obtained. The analytical results are also compared with the numerical integral results, showing that the analytical results are accurate and computationally advantageous.

  17. On the Application of Euler Deconvolution to the Analytic Signal

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Pasteka, R.

    2005-05-01

    In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be treated as an unknown to be solved for together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that using an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivative of this function cannot be computed correctly with standard potential field analysis tools. In this note we theoretically and empirically check what kinds of errors are caused in ED by this incorrect assumption about the harmonicity of the analytic signal. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it by finite differences, using upward continuation. 2. The errors in a vertical derivative computed as if the analytic signal were harmonic mainly affect the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3. Consistent estimates of depth and structural index are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index when the analytic signal is treated as a harmonic function.
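
    The finite-difference vertical derivative via upward continuation described in conclusion 1 can be sketched in one dimension as follows (the grid, continuation height, and harmonic test field are our own illustrative assumptions, not the paper's data):

```python
import numpy as np

def vertical_derivative_fd(f, dx, h):
    """Finite-difference vertical derivative of a field sampled on a
    horizontal line, via FFT-based upward continuation by height h.

    Upward continuation multiplies each Fourier component by exp(-|k| h);
    the z-derivative (z positive up) is then approximated by
    (f_upward - f) / h, with no harmonicity assumption on f itself.
    """
    n = len(f)
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(n, d=dx))  # angular wavenumbers
    f_up = np.fft.ifft(np.fft.fft(f) * np.exp(-k * h)).real
    return (f_up - f) / h

# sanity check on a harmonic field: f(x, 0) = cos(k0 x), d f/dz = -k0 cos(k0 x)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
k0 = 5.0
approx = vertical_derivative_fd(np.cos(k0 * x), dx=x[1] - x[0], h=1e-4)
exact = -k0 * np.cos(k0 * x)
```

    For a harmonic field the result agrees with the analytic derivative to first order in h; the point of the paper is that the same finite-difference recipe remains usable when the input (e.g., the analytic signal) is not harmonic.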

  18. Annual banned-substance review: Analytical approaches in human sports drug testing.

    PubMed

    Thevis, Mario; Kuuranne, Tiia; Geyer, Hans

    2018-01-01

    Several high-profile revelations concerning anti-doping rule violations over the past 12 months have outlined the importance of tackling prevailing challenges and reducing the limitations of the current anti-doping system. At this time, the necessity to enhance, expand, and improve analytical test methods in response to the substances outlined in the World Anti-Doping Agency's (WADA) Prohibited List represents an increasingly crucial task for modern sports drug-testing programs. The ability to improve analytical testing methods often relies on the expedient application of novel information regarding superior target analytes for sports drug-testing assays, drug elimination profiles, alternative test matrices, together with recent advances in instrumental developments. This annual banned-substance review evaluates literature published between October 2016 and September 2017 offering an in-depth evaluation of developments in these arenas and their potential application to substances reported in WADA's 2017 Prohibited List. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Analytic family of post-merger template waveforms

    NASA Astrophysics Data System (ADS)

    Del Pozzo, Walter; Nagar, Alessandro

    2017-06-01

    Building on the analytical description of the post-merger (ringdown) waveform of coalescing, nonprecessing, spinning binary black holes introduced by Damour and Nagar [Phys. Rev. D 90, 024054 (2014), 10.1103/PhysRevD.90.024054], we propose an analytic, closed-form, time-domain representation of the ℓ = m = 2 gravitational radiation mode emitted after merger. This expression is given as a function of the component masses and dimensionless spins (m1,2, χ1,2) of the two inspiraling objects, as well as of the mass MBH and (complex) frequency σ1 of the fundamental quasinormal mode of the remnant black hole. Our proposed template is obtained by fitting the post-merger waveform part of several publicly available numerical relativity simulations from the Simulating eXtreme Spacetimes (SXS) catalog and then suitably interpolating over (symmetric) mass ratio and spins. We show that this analytic expression accurately reproduces (~0.01 rad) the phasing of the post-merger data of other data sets not used in its construction. This is notably the case of the spin-aligned run SXS:BBH:0305, whose intrinsic parameters are consistent with the 90% credible intervals reported in the parameter-estimation follow-up of GW150914 by B.P. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016), 10.1103/PhysRevLett.116.241102]. Using SXS waveforms as "experimental" data, we further show that our template could be used on the actual GW150914 data to perform a new measurement of the complex frequency of the fundamental quasinormal mode so as to exploit the complete (high signal-to-noise-ratio) post-merger waveform. We assess the usefulness of our proposed template by analyzing, in a realistic setting, SXS full inspiral-merger-ringdown waveforms and constructing posterior probability distribution functions for the central frequency and damping time of the first overtone of the fundamental quasinormal mode as well as for the physical parameters of the systems. We also briefly explore the possibility opened by our waveform model to test the second law of black hole dynamics. Our model will help improve current tests of general relativity, in particular the general-relativistic no-hair theorem, and allow for novel tests, such as that of the area theorem.

  20. A novel octadecylsilane functionalized graphene oxide/silica composite stationary phase for high performance liquid chromatography.

    PubMed

    Liang, Xiaojing; Wang, Shuai; Liu, Shujuan; Liu, Xia; Jiang, Shengxiang

    2012-08-01

    An octadecylsilane functionalized graphene oxide/silica stationary phase was fabricated by assembling graphene oxide onto silica particles through an amide bond and subsequently immobilizing octadecylsilane. The chromatographic properties of the stationary phase were investigated by reversed-phase chromatography with alkylbenzenes, polycyclic aromatic hydrocarbons, amines, and phenolic compounds as the analytes. All the compounds achieved good separation on the column. Comparison between a commercial C18 column and the new stationary phase indicated that the π-electron system of graphene oxide allows π-π interaction between the analytes and the octadecylsilane functionalized graphene oxide/silica stationary phase in addition to hydrophobic interaction, whereas only hydrophobic interaction is present between the analytes and the commercial C18 column. This suggests that some analytes can be better separated on the octadecylsilane functionalized graphene oxide/silica column. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Aptamer-based microfluidic beads array sensor for simultaneous detection of multiple analytes employing multienzyme-linked nanoparticle amplification and quantum dots labels.

    PubMed

    Zhang, He; Hu, Xinjiang; Fu, Xin

    2014-07-15

    This study reports the development of an aptamer-mediated microfluidic beads-based sensor for the detection and quantification of multiple analytes using multienzyme-linked nanoparticle amplification and quantum dot labels. Adenosine and cocaine were selected as the model analytes to validate the assay design, which is based on strand displacement induced by target-aptamer complex formation. Microbeads functionalized with the aptamers and modified electron-rich proteins were arrayed within a microfluidic channel and were connected with horseradish peroxidase (HRP)- and capture-DNA-probe-derivatized gold nanoparticles (AuNPs) via hybridization. The conformational transition of the aptamer induced by target-aptamer complex formation displaces the functionalized AuNPs and decreases the fluorescence signal of the microbeads. In this approach, increased binding of HRP on each nanosphere and the enhanced mass transport inherent to microfluidics are combined to enhance detection sensitivity. Based on this dual signal amplification strategy, the developed aptamer-based microfluidic bead array sensor could detect as little as 0.1 pM adenosine and 0.5 pM cocaine, a 500-fold improvement in the detection limit for adenosine compared with the off-chip test. The results proved that the microfluidic-based method is a rapid and efficient system for aptamer-based target assays (adenosine, 0.1 pM; cocaine, 0.5 pM), requiring only minimal (microliter) reagent use. This work demonstrates the successful application of an aptamer-based microfluidic beads array sensor for the detection of important molecules in biomedical fields. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Analytical performance of a bronchial genomic classifier.

    PubMed

    Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean

    2016-02-26

    The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow-up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015; BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e., potentially interfering substances) as tested on blood and genomic DNA; and assay performance, including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer-positive and cancer-negative samples mixed with either blood (up to 10% input mass) or genomic DNA (up to 10% input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a >6-unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.

  3. The Challenge of Developing a Universal Case Conceptualization for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Bonow, Jordan T.; Maragakis, Alexandros; Follette, William C.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) targets a client's interpersonal behavior for change with the goal of improving his or her quality of life. One question guiding FAP case conceptualization is, "What interpersonal behavioral repertoires will allow a specific client to function optimally?" Previous FAP writings have suggested that a therapist…

  4. Intimacy Is a Transdiagnostic Problem for Cognitive Behavior Therapy: Functional Analytical Psychotherapy Is a Solution

    ERIC Educational Resources Information Center

    Wetterneck, Chad T.; Hart, John M.

    2012-01-01

    Problems with intimacy and interpersonal issues are exhibited across most psychiatric disorders. However, most of the targets in Cognitive Behavioral Therapy are primarily intrapersonal in nature, with few directly involved in interpersonal functioning and effective intimacy. Functional Analytic Psychotherapy (FAP) provides a behavioral basis for…

  5. First Steps in FAP: Experiences of Beginning Functional Analytic Psychotherapy Therapist with an Obsessive-Compulsive Personality Disorder Client

    ERIC Educational Resources Information Center

    Manduchi, Katia; Schoendorff, Benjamin

    2012-01-01

    Practicing Functional Analytic Psychotherapy (FAP) for the first time can seem daunting to therapists. Establishing a deep and intense therapeutic relationship, identifying FAP's therapeutic targets of clinically relevant behaviors, and using contingent reinforcement to help clients emit more functional behavior in the therapeutic relationship all…

  6. Analytic and numeric Green's functions for a two-dimensional electron gas in an orthogonal magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cresti, Alessandro; Grosso, Giuseppe; Parravicini, Giuseppe Pastori

    2006-05-15

    We have derived closed analytic expressions for the Green's function of an electron in a two-dimensional electron gas threaded by a uniform perpendicular magnetic field, also in the presence of a uniform electric field and of a parabolic spatial confinement. A workable and powerful numerical procedure for the calculation of the Green's functions for a large, infinitely extended quantum wire is considered, exploiting a lattice model for the wire, the tight-binding representation for the corresponding matrix Green's function, and the Peierls phase factor in the Hamiltonian hopping matrix element to account for the magnetic field. The numerical evaluation of the Green's function has been performed by means of the decimation-renormalization method, and compares quite satisfactorily with the analytic results worked out in this paper. As an example of the versatility of the numerical and analytic tools presented here, the peculiar semilocal character of the magnetic Green's function is studied in detail because of its basic importance in determining magneto-transport properties in mesoscopic systems.
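
    The Peierls-phase construction described above can be sketched for a small lattice by direct matrix inversion (lattice size, gauge choice, and parameter values below are illustrative; the paper's decimation-renormalization method for large wires is not reproduced):

```python
import numpy as np

def lattice_green_function(nx, ny, flux_per_plaquette, energy, eta=1e-3):
    """Retarded Green's function G = (E + i*eta - H)^(-1) for a small
    2-D tight-binding lattice in a perpendicular magnetic field.

    The field enters only through Peierls phases on the hopping matrix
    elements (Landau gauge here): the x-bond hopping at row y acquires
    the factor exp(2j*pi*phi*y), where phi is the flux per plaquette
    in units of the flux quantum.
    """
    t = 1.0                                   # hopping amplitude
    n = nx * ny
    H = np.zeros((n, n), dtype=complex)
    idx = lambda x, y: x * ny + y             # site (x, y) -> matrix index
    for x in range(nx):
        for y in range(ny):
            if x + 1 < nx:                    # x-bond carries Peierls phase
                phase = np.exp(2j * np.pi * flux_per_plaquette * y)
                H[idx(x, y), idx(x + 1, y)] = -t * phase
                H[idx(x + 1, y), idx(x, y)] = -t * np.conj(phase)
            if y + 1 < ny:                    # y-bond: no phase in this gauge
                H[idx(x, y), idx(x, y + 1)] = -t
                H[idx(x, y + 1), idx(x, y)] = -t
    return np.linalg.inv((energy + 1j * eta) * np.eye(n) - H)

G = lattice_green_function(6, 6, flux_per_plaquette=0.1, energy=0.5)
```

    Direct inversion scales as the cube of the number of sites, which is exactly why the paper resorts to decimation-renormalization for large wires; the sketch is only meant to show where the magnetic field enters the Hamiltonian.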

  7. On-orbit evaluation of the control system/structural mode interactions on OSO-8

    NASA Technical Reports Server (NTRS)

    Slafer, L. I.

    1980-01-01

    The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests were conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. This paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests was used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments. The test results have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system, and also verified the approach taken to vehicle and servo ground testing.

  8. An analytical solution for two-dimensional vacuum preloading combined with electro-osmosis consolidation using EKG electrodes

    PubMed Central

    Qiu, Chenchen; Li, Yande

    2017-01-01

    China is a country with vast territory, but economic development and population growth have reduced the usable land resources in recent years. Therefore, reclamation by pumping and filling is carried out in the eastern coastal regions of China in order to meet the needs of urbanization. However, large areas of reclaimed land need rapid drainage consolidation treatment. Building on past research into improving the treatment efficiency of soft clay using vacuum preloading combined with electro-osmosis, a two-dimensional drainage plane model was proposed according to the Terzaghi and Esrig consolidation theory. An analytical solution based on a two-dimensional plane model had not previously been derived, and existing analytical solutions cannot provide a thorough theoretical analysis of practical engineering or give relevant guidance. Considering the smearing effect and the rectangular arrangement pattern, an analytical solution is derived to describe the behavior of pore water and the consolidation process when using EKG (electro-kinetic geosynthetics) materials, whose functions include drainage, electrical conduction and corrosion resistance. Comparison with test results is carried out to verify the analytical solution. It is found that the measured value is larger than the applied vacuum degree because of the stacking effect of vacuum preloading and electro-osmosis. The trends of the mean measured and mean analytical values are comparable. Therefore, the consolidation model can accurately assess the change in pore-water pressure and the consolidation process during vacuum preloading combined with electro-osmosis. PMID:28771496

  9. Analytical and experimental investigation of a 1/8-scale dynamic model of the shuttle orbiter. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.

    1974-01-01

    The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.

  10. Intagliated phosphor screen image tube project

    NASA Technical Reports Server (NTRS)

    Hertzel, R. J.

    1982-01-01

    The production and evaluation of a magnetic focus image tube for astronomical photography that has an intagliated phosphor screen is described. The modulation transfer function of such a tube was measured by electronic means and by film tests, and the results compared with tubes of more conventional construction. The physical properties of the image tube and film combination, the analytical model of the optical interface, and the salient features of the intagliated screen tube are described. The results of electronic MTF tests of the intagliated image tube and of the densitometry of the tube and film test samples are presented. It is concluded that the intagliated screen is a help, but that the thickness of the photographic film is also important.

  11. The equivalent thermal properties of a single fracture

    NASA Astrophysics Data System (ADS)

    Sangaré, D.; Thovert, J.-F.; Adler, P. M.

    2008-10-01

    The normal resistance and the tangential conductivity of a single fracture with Gaussian or self-affine surfaces are systematically studied as functions of the nature of the materials in contact and of the geometrical parameters. Analytical formulas are provided in the lubrication limit for fractures with sinusoidal apertures; these formulas are used to substantiate empirical formulas for resistance and conductivity. Other approximations based on the combination of series and parallel formulas are tested.
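
    The series/parallel combination formulas mentioned at the end can be illustrated for a sinusoidal aperture (a minimal sketch under our own assumptions; the paper's empirical formulas are not reproduced):

```python
import numpy as np

def series_parallel_bounds(a0=1.0, eps=0.5, n=4096):
    """Series/parallel (Wiener-type) bounds on the tangential conductance
    of a gap with sinusoidal aperture a(x) = a0 * (1 + eps * sin x).

    Aperture variation ALONG the flow direction puts slices in series
    (harmonic mean of a, a lower bound); variation ACROSS the flow
    direction puts them in parallel (arithmetic mean, an upper bound).
    """
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    a = a0 * (1.0 + eps * np.sin(x))
    series = 1.0 / np.mean(1.0 / a)   # harmonic mean of the aperture
    parallel = np.mean(a)             # arithmetic mean of the aperture
    return series, parallel

series, parallel = series_parallel_bounds()
```

    For a(x) = 1 + 0.5 sin x the harmonic mean is sqrt(1 - 0.25) ≈ 0.866 while the arithmetic mean is 1.0, so the two combinations bracket the true effective conductance from below and above.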

  12. SURVIAC Bulletin. Issue 1

    DTIC Science & Technology

    2013-01-01

    settings that cover the range of environmental conditions in which the rations are expected to function. These vitally important state-of-the-art ... Warfighter Nutrition Modeling, continued from page 1. (Health Affairs) and the Joint Culinary Center of ... Food Pilot Plant for production and testing of food to facilitate state-of-the-art ration development. The Analytical Microbiology and Food

  13. Nonlinear viscoelastic characterization of structural adhesives

    NASA Technical Reports Server (NTRS)

    Rochefort, M. A.; Brinson, H. F.

    1983-01-01

    Measurements of the nonlinear viscoelastic behavior of two adhesives, FM-73 and FM-300, are presented and discussed. Analytical methods to quantify the measurements are given and fitted into the framework of an accelerated testing and analysis procedure. The single-integral model used is shown to function well and is analogous to a time-temperature-stress superposition procedure (TTSSP). Advantages and disadvantages of the creep power-law method used in this study are given.

  14. Expert Assessment of Conditions for Accredited Quality Management System Functioning in Testing Laboratories

    NASA Astrophysics Data System (ADS)

    Mytych, Joanna; Ligarski, Mariusz J.

    2018-03-01

    The quality management systems compliant with ISO 9001:2009 have been thoroughly researched and described in detail in the world literature. The accredited management systems used in testing laboratories and compliant with ISO/IEC 17025:2005 have been described mainly in terms of system design and implementation, and have also been investigated from the analytical point of view. Unfortunately, few studies have addressed how management systems function in accredited testing laboratories. The aim of the present study was to assess the functioning of management systems in accredited testing laboratories in Poland. On 8 October 2015, there were 1,213 accredited testing laboratories in Poland, covering various scientific areas and substances/objects. A growing number of such laboratories face various problems and have differing long-term experience with the implementation, maintenance and improvement of management systems. The article describes the results of an expert assessment (survey) carried out to examine the conditions for the functioning of a management system in an accredited laboratory. It also characterizes the accredited research laboratories in Poland. The authors discuss the selection of the external and internal conditions that may affect an accredited management system and explain how the experts assessing the selected conditions were chosen. The survey results are also presented.

  15. External quality assessment of medical laboratories in Croatia: preliminary evaluation of post-analytical laboratory testing.

    PubMed

    Krleza, Jasna Lenicek; Dorotic, Adrijana; Grzunov, Ana

    2017-02-15

    Proper standardization of laboratory testing requires assessment of performance after the tests are performed, known as the post-analytical phase. A nationwide external quality assessment (EQA) scheme implemented in Croatia in 2014 includes a questionnaire on post-analytical practices, and the present study examined laboratory responses in order to identify current post-analytical phase practices and areas for improvement. In four EQA exercises between September 2014 and December 2015, 145-174 medical laboratories across Croatia were surveyed using the Module 11 questionnaire on the post-analytical phase of testing. Based on their responses, the laboratories were evaluated on four quality indicators: turnaround time (TAT), critical values, interpretative comments and procedures in the event of abnormal results. Results are presented as absolute numbers and percentages. Just over half of the laboratories (56.3%) monitored TAT. Laboratories varied substantially in how they dealt with critical values. Most laboratories (65-97%) issued interpretative comments with test results. One third of the medical laboratories (30.6-33.3%) issued abnormal test results without confirming them by additional testing. Our results suggest that the nationwide post-analytical EQA scheme launched in 2014 in Croatia has yet to be implemented in full. To close the gaps between existing recommendations and laboratory practice, laboratory professionals should focus on ensuring that TAT is monitored and that lists of critical values are established within laboratories. Professional bodies/institutions should focus on clarifying and harmonizing the rules for adding interpretative comments to laboratory test results and for dealing with abnormal test results, so as to standardize practice.

  16. Lanthanide-functionalized silver nanoparticles for detection of an anthrax biomarker and test paper fabrication

    NASA Astrophysics Data System (ADS)

    Tan, Hongliang; Li, Qian; Ma, Chanjiao; Song, Yonghai; Xu, Fugang; Chen, Shouhui; Wang, Li

    2014-01-01

    It is highly desirable to develop a simple and sensitive analytical method for detection of the anthrax biomarker dipicolinic acid (DPA) because of its dangerous nature. In this work, we developed a fluorescent sensor for DPA detection based on terbium ion-functionalized silver nanoparticles with an average size of 6.7 nm (AgNPs-Tb3+). The fluorescent intensity of the Tb-DPA complex on the surface of the AgNPs was two times higher than that of the Tb-DPA complex alone in solution, due to the metal-enhanced fluorescence (MEF) effect of the AgNPs. The proposed fluorescent sensor exhibits excellent selectivity and high sensitivity for DPA. Importantly, a test paper for DPA detection was fabricated for the first time by integrating AgNPs-Tb3+ onto a nitrocellulose membrane. Owing to the MEF effect of the AgNPs, the lowest concentration of DPA detectable by the naked eye with the AgNPs-Tb3+ test paper is 10 times lower than with test paper carrying Tb3+ alone. We believe that the presented strategy may open up new avenues for the development of portable and robust sensing platforms based on functional hybrid materials.

  17. 42 CFR 493.801 - Condition: Enrollment and testing of samples.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...

  18. 42 CFR 493.801 - Condition: Enrollment and testing of samples.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...

  19. 42 CFR 493.801 - Condition: Enrollment and testing of samples.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...

  20. 42 CFR 493.801 - Condition: Enrollment and testing of samples.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...

  1. 42 CFR 493.801 - Condition: Enrollment and testing of samples.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...

  2. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

    Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
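A central-moments check of the kind described above is easy to reproduce; the sketch below uses synthetic heavy-tailed "expression" values and SciPy's standard moment and normality routines, not the paper's data or thresholds.

```python
# Sketch: checking per-gene normality with central moments (skewness and
# excess kurtosis) plus an omnibus normality test. Data are synthetic and
# heavy-tailed; nothing here reproduces the paper's five cancer genomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expr = rng.standard_t(df=3, size=5000)  # heavy-tailed, non-Gaussian "gene"

skew = stats.skew(expr)
kurt = stats.kurtosis(expr)          # excess kurtosis; 0 for a Gaussian
stat, p = stats.normaltest(expr)     # D'Agostino-Pearson omnibus test

print(f"skewness={skew:.2f}, excess kurtosis={kurt:.2f}, p={p:.2e}")
if p < 0.05:
    print("reject normality -> prefer nonparametric gene-calling")
```

A significant result here is exactly the situation the paper warns about: parametric gene-calling on such a distribution inherits the skewness and kurtosis of the data.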

  3. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.

  4. Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.

    PubMed

    Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok

    2015-01-01

    Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. We describe the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January 2010 to December 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (2,830/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90). Errors are embedded throughout our total testing process, especially in the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.
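The per-phase rates above follow directly from the raw counts; a minimal recomputation (rounding may therefore differ slightly from the abstract's reported figures):

```python
# Sketch: per-phase error rates and the overall rate from raw counts,
# mirroring the 3-year audit above. Counts are taken from the abstract;
# the denominator is the number of audited test requests.
errors = {"pre_analytical": 2210, "analytical": 108, "post_analytical": 512}
n_requests = 58_950

rates = {phase: 100 * n / n_requests for phase, n in errors.items()}
overall = 100 * sum(errors.values()) / n_requests

for phase, r in rates.items():
    print(f"{phase}: {r:.1f}%")
print(f"overall: {overall:.1f}%")
```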

  5. Analytical time-domain Green’s functions for power-law media

    PubMed Central

    Kelly, James F.; McGough, Robert J.; Meerschaert, Mark M.

    2008-01-01

    Frequency-dependent loss and dispersion are typically modeled with a power-law attenuation coefficient, where the power-law exponent ranges from 0 to 2. To facilitate analytical solution, a fractional partial differential equation is derived that exactly describes power-law attenuation, and the Szabo wave equation [“Time domain wave-equations for lossy media obeying a frequency power-law,” J. Acoust. Soc. Am. 96, 491–500 (1994)] is an approximation to this equation. This paper derives analytical time-domain Green’s functions in power-law media for exponents in this range. To construct solutions, stable law probability distributions are utilized. For exponents equal to 0, 1∕3, 1∕2, 2∕3, 3∕2, and 2, the Green’s function is expressed in terms of Dirac delta, exponential, Airy, hypergeometric, and Gaussian functions. For exponents strictly less than 1, the Green’s functions are expressed as Fox functions and are causal. For exponents greater than or equal to 1, the Green’s functions are expressed as Fox and Wright functions and are noncausal. However, numerical computations demonstrate that for observation points only one wavelength from the radiating source, the Green’s function is effectively causal for power-law exponents greater than or equal to 1. The analytical time-domain Green’s function is numerically verified against the material impulse response function, and the results demonstrate excellent agreement. PMID:19045774
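The stable-law building block used above can be exercised directly. A minimal sketch, assuming SciPy's `levy_stable`: for exponent 2 the stable density reduces to the Gaussian case quoted in the abstract (in SciPy's parameterization, alpha = 2 is a normal law with scale sqrt(2)).

```python
# Sketch: verify that the alpha = 2 stable law is Gaussian, the exponent-2
# special case of the Green's-function construction above. Assumes SciPy's
# levy_stable; this is a sanity check, not the paper's derivation.
import numpy as np
from scipy.stats import levy_stable, norm

x = 0.7
p_stable = levy_stable.pdf(x, alpha=2.0, beta=0.0)
p_gauss = norm.pdf(x, scale=np.sqrt(2.0))
print(p_stable, p_gauss)  # the two densities should agree
```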

  6. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly on a functional of the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. Both kinds of algorithm are tested on different random variables whose distributions are known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterization of the genetic neutrality of Tunisian Berber populations.
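Bandwidth choice is the whole game in kernel density estimation; the sketch below contrasts Silverman's rule with a deliberately narrow fixed bandwidth as stand-ins (the paper's analytical plug-in approximation itself is not reproduced here).

```python
# Sketch: kernel density estimation with two bandwidth choices, using
# Silverman's rule and a fixed bandwidth as illustrative stand-ins for
# the plug-in bandwidth discussed above.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
data = rng.normal(size=2000)

kde_silverman = gaussian_kde(data, bw_method="silverman")
kde_narrow = gaussian_kde(data, bw_method=0.1)  # undersmoothed comparison

x = 0.0
est = kde_silverman(x)[0]
true = norm.pdf(x)
print(est, true)  # estimate should be close to the true density
```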

  7. Vapor-liquid equilibrium and equation of state of two-dimensional fluids from a discrete perturbation theory

    NASA Astrophysics Data System (ADS)

    Trejos, Víctor M.; Santos, Andrés; Gámez, Francisco

    2018-05-01

    The interest in the description of the properties of fluids of restricted dimensionality is growing for theoretical and practical reasons. In this work, we have firstly developed an analytical expression for the Helmholtz free energy of the two-dimensional square-well fluid in the Barker-Henderson framework. This equation of state is based on an approximate analytical radial distribution function for d-dimensional hard-sphere fluids (1 ≤ d ≤ 3) and is validated against existing and new simulation results. The so-obtained equation of state is implemented in a discrete perturbation theory able to account for general potential shapes. The prototypical Lennard-Jones and Yukawa fluids are tested in its two-dimensional version against available and new simulation data with semiquantitative agreement.

  8. Compression failure mechanisms of uni-ply composite plates with a circular cutout

    NASA Technical Reports Server (NTRS)

    Khamseh, A. R.; Waas, A. M.

    1992-01-01

    The effect of circular-hole size on the failure mode of uniply graphite-epoxy composite plates is investigated experimentally and analytically for uniaxial compressive loading. The test specimens are sandwiched between polyetherimide plastic for nondestructive evaluations of the uniply failure mechanisms associated with a range of hole sizes. Finite-element modeling based on classical lamination theory is conducted for the corresponding materials and geometries to reproduce the experimental results analytically. The type of compressive failure is found to be a function of hole size, with fiber buckling/kinking at the hole being the dominant failure mechanism for hole diam/plate width ratios exceeding 0.062. The results of the finite-element analysis supported the experimental data for these failure mechanisms and for those corresponding to smaller hole sizes.

  9. Analytical approximations to the Hotelling trace for digital x-ray detectors

    NASA Astrophysics Data System (ADS)

    Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.

    2001-06-01

    The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal and the background.
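The exact finite-detector calculation mentioned above amounts to evaluating s^T K^{-1} s for the known signal s and data covariance K. A minimal sketch with a synthetic stationary covariance (not the paper's detector model):

```python
# Sketch: Hotelling observer SNR^2 = s^T K^-1 s via direct linear solve,
# the "exact calculation" against which the analytical approximation is
# compared above. Covariance and signal are synthetic examples.
import numpy as np

n = 64
idx = np.arange(n)
# stationary covariance: exponential correlation between detector elements
K = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
s = np.exp(-((idx - n / 2) ** 2) / 18.0)  # known Gaussian-blob signal

snr2 = s @ np.linalg.solve(K, s)  # Hotelling trace (SNR^2)
print(f"Hotelling SNR^2 = {snr2:.2f}")
```

Solving the linear system rather than forming K^{-1} explicitly is the standard numerically stable choice.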

  10. Morse oscillator propagator in the high temperature limit I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toutounji, Mohamad, E-mail: Mtoutounji@uaeu.ac.ae

    2017-02-15

    In an earlier work by the author, the time evolution of the Morse oscillator was studied analytically and exactly at low temperatures, and optical correlation functions were calculated using Morse oscillator coherent states. Here, the Morse oscillator propagator in the high temperature limit is derived and a closed form of its corresponding canonical partition function is obtained. Both diagonal and off-diagonal forms of the Morse oscillator propagator are derived in the high temperature limit. Partition functions of diatomic molecules are calculated. - Highlights: • Derives the quantum propagator of the Morse oscillator in the high temperature limit. • Uses the resulting diagonal propagator to derive a closed form of the Morse oscillator partition function. • Provides a more sophisticated formula of the quantum propagator to test the accuracy of the results herein.

  11. Noninvasive studies of human visual cortex using neuromagnetic techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aine, C.J.; George, J.S.; Supek, S.

    1990-01-01

    The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs.

  12. 40 CFR 1066.101 - Overview.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and... specifications for fuels, engine fluids, and analytical gases; these specifications apply for testing under this...

  13. MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION

    EPA Science Inventory

    Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...

  14. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

    Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). 
Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
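The proportional Z test used above compares error frequencies between two labs; a minimal self-contained sketch (counts are illustrative, not taken from the study):

```python
# Sketch: two-proportion Z test with pooled standard error, the kind of
# test used above to compare error frequencies between hospitals.
# The counts below are hypothetical.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2 (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 10% hemolyzed samples in lab A vs ~5.5% in lab B, 550 samples each
z = two_proportion_z(55, 550, 30, 550)
print(f"z = {z:.2f}")  # |z| > 1.96 -> difference significant at the 5% level
```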

  15. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    PubMed

    Najat, Dereen

    2017-01-01

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). 
Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.

  16. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).
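The operating-characteristic logic above can be sketched under the same Normal model; the acceptance limit, single-test standard deviation, and concentrations below are hypothetical, and the concentration-dependent variance regression is simplified to a constant.

```python
# Sketch: operating-characteristic behavior under a Normal model, showing
# how increasing sample size n shrinks both buyer's and seller's risk.
# Limit, SD, and concentrations are illustrative, not the study's values.
import math

def p_accept(true_conc, limit, sd_single, n):
    """Probability a lot at true_conc passes when the mean of n independent
    test results is compared with an acceptance limit (Normal model)."""
    se = sd_single / math.sqrt(n)
    zval = (limit - true_conc) / se
    return 0.5 * (1 + math.erf(zval / math.sqrt(2)))

limit, sd = 1.0, 0.6  # hypothetical acceptance limit and single-test SD
for n in (1, 4, 16):
    buyer_risk = p_accept(1.5, limit, sd, n)       # accepting a bad lot
    seller_risk = 1 - p_accept(0.5, limit, sd, n)  # rejecting a good lot
    print(f"n={n:2d}: buyer risk={buyer_risk:.3f}, seller risk={seller_risk:.3f}")
```

Both risks fall monotonically as n grows, which is exactly the effect the OC curves in the study demonstrate.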

  17. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.

  18. On the multiple zeros of a real analytic function with applications to the averaging theory of differential equations

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Llibre, Jaume; Maza, Susanna

    2018-06-01

    In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether or not the zeros belong to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory and branching theory, and also viewed in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
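For readers unfamiliar with the averaging setup, the classical first-order statement can be sketched as follows (standard notation, ours rather than the paper's):

```latex
% Classical first-order averaging (standard form; notation ours).
% For an analytic T-periodic family
x' = \varepsilon F(t, x, \lambda) + \varepsilon^{2} R(t, x, \lambda, \varepsilon),
% the first averaged function is
f_{1}(z, \lambda) = \int_{0}^{T} F(t, z, \lambda)\, dt ,
% and each simple zero z_{0} of f_{1} gives rise to a T-periodic solution
% tending to z_{0} as \varepsilon \to 0. The paper's contribution concerns
% the branching at multiple zeros of the first non-vanishing averaged function.
```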

  19. Research study on stabilization and control: Modern sampled-data control theory. Continuous and discrete describing function analysis of the LST system. [with emphasis on the control moment gyroscope control loop

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.

    1974-01-01

    The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) control loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with, analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
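A describing function of the kind computed for the CMG friction loop can be illustrated with the textbook case of ideal Coulomb (relay) friction, whose describing function is N(A) = 4F/(πA); the sketch below checks that closed form against a numerical fundamental-harmonic computation. This is a generic example, not the report's gimbal model.

```python
# Sketch: describing function of an ideal Coulomb (relay) friction torque
# of magnitude F. The fundamental-harmonic gain for input A*sin(wt) is
# N(A) = 4F / (pi * A); we verify it by midpoint-rule quadrature.
import math

def coulomb_df_numeric(F, A, steps=100_000):
    """Fundamental-harmonic gain of y = F*sign(A*sin(wt)) via quadrature."""
    b1 = 0.0
    for k in range(steps):
        t = 2 * math.pi * (k + 0.5) / steps
        y = F if math.sin(t) > 0 else -F
        b1 += y * math.sin(t)
    b1 *= (2 * math.pi / steps) / math.pi  # (1/pi) * integral over one period
    return b1 / A

F, A = 2.0, 0.5
print(coulomb_df_numeric(F, A), 4 * F / (math.pi * A))  # should agree
```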

  20. The Role of Shaping the Client's Interpretations in Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Abreu, Paulo Roberto; Hubner, Maria Martha Costa; Lucchese, Fernanda

    2012-01-01

    Clinical behavior analysis often targets the shaping of clients' functional interpretations of, or rules about, their own behavior. These are referred to as clinically relevant behavior 3 (CRB3) in functional analytic psychotherapy (FAP). We suggest that CRB3s should be seen as contingency-specifying stimuli (CSS), due to their ability to change…

  1. Some subclasses of multivalent functions involving a certain linear operator

    NASA Astrophysics Data System (ADS)

    Srivastava, H. M.; Patel, J.

    2005-10-01

    The authors investigate various inclusion and other properties of several subclasses of the class of normalized p-valent analytic functions in the open unit disk, which are defined here by means of a certain linear operator. Problems involving generalized neighborhoods of analytic functions in the class are investigated. Finally, some applications of fractional calculus operators are considered.

  2. Analytical expressions for the log-amplitude correlation function of a plane wave through anisotropic atmospheric refractive turbulence.

    PubMed

    Gudimetla, V S Rao; Holmes, Richard B; Smith, Carey; Needham, Gregory

    2012-05-01

    The effect of anisotropic Kolmogorov turbulence on the log-amplitude correlation function for plane-wave fields is investigated using analysis, numerical integration, and simulation. A new analytical expression for the log-amplitude correlation function is derived for anisotropic Kolmogorov turbulence. The analytic results, based on the Rytov approximation, agree well with a more general wave-optics simulation based on the Fresnel approximation as well as with numerical evaluations, for low and moderate strengths of turbulence. The new expression reduces correctly to previously published analytic expressions for isotropic turbulence. The final results indicate that, as asymmetry becomes greater, the Rytov variance deviates from that given by the standard formula. This deviation becomes greater with stronger turbulence, up to moderate turbulence strengths. The anisotropic effects on the log-amplitude correlation function are dominant when the separation of the points is within the Fresnel length. In the direction of stronger turbulence, there is an enhanced dip in the correlation function at a separation close to the Fresnel length. The dip is diminished in the weak-turbulence axis, suggesting that energy redistribution via focusing and defocusing is dominated by the strong-turbulence axis. The new analytical expression is useful when anisotropy is observed in relevant experiments. © 2012 Optical Society of America

  3. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
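    For the commonly used lognormal size distribution, best-fit parameter estimates can be obtained directly from the moments of the log-diameters. A minimal sketch on synthetic data (the geometric mean and GSD values are illustrative assumptions):

```python
import math
import random

def fit_lognormal(diameters):
    """Moment estimates of a lognormal size distribution:
    returns (geometric mean, geometric standard deviation)."""
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))

random.seed(0)
# Synthetic aerosol diameters (um) drawn from a known lognormal
sample = [random.lognormvariate(math.log(0.5), 0.4) for _ in range(10000)]
gm, gsd = fit_lognormal(sample)
print(gm, gsd)  # close to 0.5 and exp(0.4) ~ 1.49
```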

  4. 42 CFR 493.1289 - Standard: Analytic systems quality assessment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...

  5. Nascent RNA kinetics: Transient and steady state behavior of models of transcription

    NASA Astrophysics Data System (ADS)

    Choubey, Sandeep

    2018-02-01

    Regulation of transcription is a vital process in cells, but mechanistic details of this regulation still remain elusive. The dominant approach to unravel the dynamics of transcriptional regulation is to first develop mathematical models of transcription and then experimentally test the predictions these models make for the distribution of mRNA and protein molecules at the individual cell level. However, these measurements are affected by a multitude of downstream processes, making them difficult to interpret. Recent experimental advancements allow for counting the nascent mRNA number of a gene as a function of time at the single-cell level. These measurements closely reflect the dynamics of transcription. In this paper, we consider a general mechanism of transcription with stochastic initiation and deterministic elongation and probe its impact on the temporal behavior of nascent RNA levels. Using techniques from queueing theory, we derive exact analytical expressions for the mean and variance of the nascent RNA distribution as functions of time. We apply these analytical results to obtain the mean and variance of the nascent RNA distribution for specific models of transcription. These models of initiation exhibit qualitatively distinct transient behaviors for both the mean and variance, which further allows us to discriminate between them. Stochastic simulations confirm these results. Overall, the analytical results presented here provide the necessary tools to connect mechanisms of transcription initiation to single-cell measurements of nascent RNA.
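    The queueing-theory connection can be illustrated for the simplest special case: constitutive (Poisson) initiation with a fixed elongation time, which is an M/D/∞ queue whose steady-state nascent RNA number is Poisson with mean k·T. A simulation sketch (the rates are illustrative, not from the paper):

```python
import random

def nascent_rna_count(k_init, dwell, t, rng):
    """Sample the nascent RNA number at time t for constitutive (Poisson)
    initiation at rate k_init and a deterministic elongation time `dwell`
    (an M/D/infinity queue): a transcript initiated at time s is still on
    the gene at t iff t - dwell < s <= t."""
    count, s = 0, 0.0
    while True:
        s += rng.expovariate(k_init)  # waiting time to the next initiation
        if s > t:
            return count
        if s > t - dwell:
            count += 1

rng = random.Random(1)
k, T, t_obs = 2.0, 5.0, 20.0  # initiation rate, elongation time, observation time
mean = sum(nascent_rna_count(k, T, t_obs, rng) for _ in range(5000)) / 5000
print(mean)  # at steady state (t_obs >= T) queueing theory predicts k*T = 10
```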

  6. Analytic tests and their relation to jet fuel thermal stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heneghan, S.P.; Kauffman, R.E.

    1995-05-01

    The evaluation of jet fuel thermal stability (TS) by simple analytic procedures has long been a goal of fuels chemists. The reason is obvious: if the analytic chemist can determine which types of material cause a test to respond, refiners will know which materials to remove to improve stability. Complicating this quest is the lack of an acceptable quantitative TS test with which to compare any analytic procedures. To circumvent this problem, we recently compiled the results of TS tests for 12 fuels using six separate test procedures. The results, covering a range of flow and temperature conditions, show that TS is not as dependent on test conditions as previously thought. Also, comparing the results from these tests with several analytic procedures shows that either a measure of the number of phenols or the total sulfur present in jet fuels is strongly indicative of the TS. The phenols have been measured using a cyclic voltammetry technique and the polar material by gas chromatography (atomic emission detection) following a solid phase extraction on silica gel. The polar material has been identified as mainly phenols (by mass spectrometry identification). Measures of the total acid number or peroxide concentration have little correlation with TS.
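    Comparing an analytic measure against TS results is, at its simplest, a correlation computation. A sketch with entirely made-up phenol and deposit numbers (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

phenols = [5, 12, 18, 25, 40]           # hypothetical phenol measure per fuel
deposits = [0.8, 1.5, 2.1, 3.0, 4.9]    # hypothetical deposit rating (lower = more stable)
r = pearson_r(phenols, deposits)
print(r)  # close to 1 for this fake data: phenol content tracks instability
```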

  7. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
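    The analytic derivation rests on the standard IRT identity SE(θ) = 1/√I(θ), with test information summed over item informations. A sketch for 2PL items (the item parameters are illustrative, not from the study):

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P),
    where P is the logistic probability of a correct response."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def analytic_se(theta, items):
    """Analytic standard error of the ability estimate: 1 / sqrt(test info)."""
    return 1.0 / math.sqrt(sum(info_2pl(theta, a, b) for a, b in items))

# Hypothetical (discrimination a, difficulty b) pairs for one MST module
items = [(1.2, -1.0), (1.0, 0.0), (1.5, 0.5), (0.8, 1.0)]
print(analytic_se(0.0, items))  # SE at theta = 0 without running a simulation
```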

  8. Development of airframe design technology for crashworthiness.

    NASA Technical Reports Server (NTRS)

    Kruszewski, E. T.; Thomson, R. G.

    1973-01-01

    This paper describes the NASA portion of a joint FAA-NASA General Aviation Crashworthiness Program leading to the development of improved crashworthiness design technology. The objectives of the program are to develop analytical technology for predicting crashworthiness of structures, provide design improvements, and perform full-scale crash tests. The analytical techniques which are being developed both in-house and under contract are described, and typical results from these analytical programs are shown. In addition, the full-scale testing facility and test program are discussed.

  9. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance.

    PubMed

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L

    2017-02-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.

  10. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance

    PubMed Central

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force/moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task. PMID:27785549

  11. Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience

    PubMed Central

    Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK

    2015-01-01

    Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate over the years (P = 0.90). Conclusion: Analytical errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569

  12. Novel two-way artificial boundary condition for 2D vertical water wave propagation modelled with Radial-Basis-Function Collocation Method

    NASA Astrophysics Data System (ADS)

    Mueller, A.

    2018-04-01

    A new transparent artificial boundary condition for two-dimensional vertical (2DV) free surface water wave propagation, modelled using the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution, is derived. The two-way artificial boundary condition (2wABC) works as a pure incidence, pure radiation, or combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against analytical solutions for wave propagation over a horizontal sea bottom, for standing and partially standing waves, and for the interference of waves with different periods.
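    The core of RBF collocation (solve a small dense system for kernel weights, then evaluate the interpolant anywhere) can be sketched in one dimension with a Gaussian kernel; the shape parameter and test function here are illustrative choices, not those of the paper:

```python
import math

def gauss_solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(centers, values, shape):
    """Gaussian-RBF collocation: weights w solve Phi w = values,
    with Phi_ij = exp(-(shape*(x_i - x_j))^2)."""
    phi = lambda r: math.exp(-(shape * r) ** 2)
    A = [[phi(xi - xj) for xj in centers] for xi in centers]
    w = gauss_solve(A, values)
    return lambda x: sum(wi * phi(x - xi) for wi, xi in zip(w, centers))

xs = [i / 10 for i in range(11)]  # 11 collocation points on [0, 1]
f = rbf_interpolant(xs, [math.sin(2 * math.pi * x) for x in xs], shape=5.0)
print(abs(f(0.55) - math.sin(2 * math.pi * 0.55)))  # small off-node error
```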

  13. An Elastic Plastic Contact Model with Strain Hardening for the LAMMPS Granular Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhr, Bryan; Brake, Matthew Robert; Lechman, Jeremy B.

    2015-03-01

    The following details the implementation of an analytical elastic plastic contact model with strain hardening for normal impacts into the LAMMPS granular package. The model assumes that, upon impact, the collision has a period of elastic loading followed by a period of mixed elastic plastic loading, with contributions to each mechanism estimated by a hyperbolic secant weight function. This function is implemented in the LAMMPS source code as the pair style gran/ep/history. Preliminary tests, simulating the pouring of pure nickel spheres, showed the elastic/plastic model took 1.66x as long as similar runs using gran/hertz/history.
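    The blending described here can be sketched as follows; the functional forms and parameter names are hypothetical stand-ins, not the actual gran/ep/history model, chosen only to show how a hyperbolic-secant weight hands load from an elastic to a plastic term:

```python
import math

def contact_force(delta, delta_y, k_el, k_pl, width):
    """Hypothetical normal-force sketch: below the yield overlap delta_y the
    contact is purely elastic (Hertzian-like); beyond it, a hyperbolic-secant
    weight w smoothly shifts load from the elastic term to a linear plastic
    term. All forms and parameters here are illustrative assumptions."""
    if delta <= delta_y:
        return k_el * delta ** 1.5
    w = 1.0 / math.cosh((delta - delta_y) / width)  # sech weight, 1 at yield
    return w * k_el * delta ** 1.5 + (1.0 - w) * k_pl * (delta - delta_y)

# Below yield the response is pure Hertz; beyond yield it transitions smoothly
print(contact_force(0.5, 1.0, 1.0, 2.0, 0.5), contact_force(2.0, 1.0, 1.0, 2.0, 0.5))
```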

  14. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Izacard, Olivier

    2016-08-01

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.
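    The super-thermal tails of a Kappa distribution, and its Maxwellian limit at large κ, can be checked numerically. A sketch using one common unnormalized 1D convention (the exact normalization and exponent convention vary across the literature):

```python
import math

def kappa_shape(v, theta, kappa):
    """Unnormalized 1D kappa distribution (one common convention):
    (1 + v^2/(kappa*theta^2))^-(kappa+1); tends to exp(-v^2/theta^2)
    as kappa -> infinity, recovering the Maxwellian shape."""
    return (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwellian_shape(v, theta):
    return math.exp(-v * v / (theta * theta))

# Super-thermal tail: at v = 4*theta the kappa = 3 shape carries far more
# weight than the Maxwellian; at very large kappa the two shapes agree.
print(kappa_shape(4.0, 1.0, 3.0) / maxwellian_shape(4.0, 1.0))
print(kappa_shape(1.0, 1.0, 1e6) / maxwellian_shape(1.0, 1.0))  # ~1
```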

  15. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier, E-mail: izacard@llnl.gov

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, need to be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with an MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of an MDF. The latter demystifies Maxwell's demon by statistically describing non-isolated systems.

  16. 42 CFR 493.941 - Hematology (including routine hematology and coagulation).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...

  17. 42 CFR 493.941 - Hematology (including routine hematology and coagulation).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...

  18. 42 CFR 493.941 - Hematology (including routine hematology and coagulation).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...

  19. 42 CFR 493.941 - Hematology (including routine hematology and coagulation).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...

  20. 42 CFR 493.941 - Hematology (including routine hematology and coagulation).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...

  1. Erythrocyte Sedimentation Rate (ESR)

    MedlinePlus

    ... 3 screens]. Available from: https://labtestsonline.org/understanding/analytes/esr/tab/test/ Lab Tests Online [Internet]. Washington ... 2 screens]. Available from: https://labtestsonline.org/understanding/analytes/esr/tab/sample/ National Heart, Lung, and Blood ...

  2. Measuring impact rebound with photography.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sumali, Hartono

    2010-05-01

    To study the rebound of a sphere colliding against a flat wall, a test setup was developed where the sphere is suspended with strings as a pendulum, elevated, and gravity-released to impact the wall. The motion of the sphere was recorded with a high-speed camera and traced with an image-processing program. From the speed of the sphere before and after each collision, the coefficient of restitution was computed, and shown to be a function of impact speed as predicted analytically.
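    The coefficient of restitution itself is just the ratio of rebound speed to impact speed. A sketch with hypothetical traced speeds (not the measured data):

```python
def coefficient_of_restitution(speed_before, speed_after):
    """e = rebound speed / impact speed, from speeds traced off
    video frames just before and just after the collision."""
    return speed_after / speed_before

# Hypothetical traced speeds (m/s) for three drops of increasing height
impacts = [(0.5, 0.46), (1.0, 0.88), (2.0, 1.64)]
es = [coefficient_of_restitution(vin, vout) for vin, vout in impacts]
print(es)  # e decreases with impact speed, the trend the analysis predicts
```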

  3. Segmented strings and the McMillan map

    DOE PAGES

    Gubser, Steven S.; Parikh, Sarthak; Witaszczyk, Przemek

    2016-07-25

    We present new exact solutions describing motions of closed segmented strings in AdS 3 in terms of elliptic functions. The existence of analytic expressions is due to the integrability of the classical equations of motion, which in our examples reduce to instances of the McMillan map. Here, we also obtain a discrete evolution rule for the motion in AdS 3 of arbitrary bound states of fundamental strings and D1-branes in the test approximation.
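    The McMillan map mentioned here is x_{n+1} + x_{n-1} = 2μ x_n / (1 + x_n²); its integrability shows up as an exactly conserved biquadratic invariant, which is easy to verify numerically:

```python
def mcmillan_step(x_prev, x_cur, mu):
    """One step of the McMillan map: x_{n+1} = -x_{n-1} + 2*mu*x_n/(1 + x_n^2)."""
    return -x_prev + 2.0 * mu * x_cur / (1.0 + x_cur * x_cur)

def invariant(x, y, mu):
    """Biquadratic invariant K = x^2*y^2 + x^2 + y^2 - 2*mu*x*y, conserved
    exactly along orbits of the map (the hallmark of its integrability)."""
    return x * x * y * y + x * x + y * y - 2.0 * mu * x * y

mu, x0, x1 = 0.8, 0.1, 0.4
k0 = invariant(x0, x1, mu)
for _ in range(100):
    x0, x1 = x1, mcmillan_step(x0, x1, mu)
print(abs(invariant(x0, x1, mu) - k0))  # conserved up to round-off
```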

  4. Crude Oil Remote Sensing, Characterization and Cleaning with ContinuousWave and Pulsed Lasers

    DTIC Science & Technology

    2015-01-23

    explained by strong pressure spikes during cavitation in liquid jets. These experiments were not directly tested for the pipe cleaning, but their results...analytical functions (like circular, elliptical and similar shapes). In our case of cylindrical symmetry of the oil film shape is defined by two...the high-pressure (50 – 100 atm) oil and water jets (with cavitation in narrow tubes) revealed a new potential for a more efficient cleaning of

  5. Analytical functions for beta and gamma absorbed fractions of iodine-131 in spherical and ellipsoidal volumes.

    PubMed

    Mowlavi, Ali Asghar; Fornasier, Maria Rossa; Mirzaei, Mohammd; Bregant, Paola; de Denaro, Mario

    2014-10-01

    The beta and gamma absorbed fractions in organs and tissues are key factors of radionuclide internal dosimetry based on the Medical Internal Radiation Dose (MIRD) approach. The aim of this study is to find suitable analytical functions for the beta and gamma absorbed fractions in spherical and ellipsoidal volumes with a uniform distribution of the iodine-131 radionuclide. The MCNPX code has been used to calculate the energy absorption from beta and gamma rays of iodine-131 uniformly distributed inside different ellipsoids and spheres, and then the absorbed fractions have been evaluated. We have found the fit parameters of a suitable analytical function for the beta absorbed fraction, depending on a generalized radius for the ellipsoid based on the radius of a sphere, and a linear fit function for the gamma absorbed fraction. The analytical functions obtained by fitting the Monte Carlo data can be used to obtain the absorbed fractions of iodine-131 beta and gamma rays for any volume of the thyroid lobe. Moreover, our results for the spheres are in good agreement with the results of MIRD and other scientific literature.
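    The linear fit reported for the gamma absorbed fraction is an ordinary least-squares problem. A sketch on made-up (radius, absorbed fraction) pairs, not the paper's Monte Carlo values:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (generalized radius in cm, gamma absorbed fraction) pairs,
# illustrative only -- not the paper's Monte Carlo data.
radii = [0.5, 1.0, 1.5, 2.0, 2.5]
af = [0.021, 0.040, 0.061, 0.079, 0.101]
a, b = linear_fit(radii, af)
print(a, b)  # intercept near 0, slope ~0.04 per cm for this fake data
```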

  6. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  7. Functional neuroimaging correlates of thinking flexibility and knowledge structure in memory: Exploring the relationships between clinical reasoning and diagnostic thinking.

    PubMed

    Durning, Steven J; Costanzo, Michelle E; Beckman, Thomas J; Artino, Anthony R; Roy, Michael J; van der Vleuten, Cees; Holmboe, Eric S; Lipner, Rebecca S; Schuwirth, Lambert

    2016-06-01

    Diagnostic reasoning involves the thinking steps up to and including arrival at a diagnosis. Dual process theory posits that a physician's thinking is based on both non-analytic (fast, subconscious) thinking and analytic thinking that is slower, more conscious, effortful, and characterized by comparing and contrasting alternatives. Expertise in clinical reasoning may relate to the two dimensions measured by the diagnostic thinking inventory (DTI): memory structure and flexibility in thinking. We explored the functional magnetic resonance imaging (fMRI) correlates of these two aspects of the DTI. Participants answered and reflected upon multiple-choice questions (MCQs) during fMRI, and completed the DTI shortly after the scan. The brain processes associated with the two dimensions of the DTI were correlated with the fMRI phases: flexibility in thinking during analytical clinical reasoning, memory structure during non-analytical clinical reasoning, and the total DTI during both non-analytical and analytical reasoning in experienced physicians. Each DTI component was associated with distinct functional neuroanatomic activation patterns, particularly in the prefrontal cortex. Our findings support conceptual models of diagnostic thinking and indicate mechanisms through which cognitive demands may induce functional adaptation within the prefrontal cortex. This provides additional objective validity evidence for the use of the DTI in medical education and practice settings.

  8. Enabling quaternion derivatives: the generalized HR calculus

    PubMed Central

    Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.

    2015-01-01

    Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555
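    The non-commutativity that breaks the naive product and chain rules is visible already in the Hamilton product of the basis quaternions:

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k: quaternion products do not commute,
# which is the algebraic obstruction the GHR calculus is built to handle
```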

  9. Enabling quaternion derivatives: the generalized HR calculus.

    PubMed

    Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C; Mandic, Danilo P

    2015-08-01

    Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis.

  10. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-02

    Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.

  11. Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.

    PubMed

    Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L

    2013-01-01

    Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.
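The core idea shown to the stakeholders can be reduced to expected-value calculations over a decision tree. The sketch below is a hedged toy example, not any model from the study; all probabilities and costs are invented:

```python
# Illustrative decision-analytic sketch: expected cost of covering vs.
# not covering a hypothetical genetic test.
def expected_value(branches):
    """branches: list of (probability, value) pairs summing to p = 1."""
    return sum(p * v for p, v in branches)

# Strategy A: cover the test (test cost plus downstream costs).
cover = 500 + expected_value([(0.10, 2_000),   # positive -> early treatment
                              (0.90, 200)])    # negative -> routine care
# Strategy B: no coverage (costlier late presentation for true positives).
no_cover = expected_value([(0.10, 8_000), (0.90, 200)])
```

In this toy case coverage dominates; a real model would add health outcomes, discounting, and sensitivity analyses over the uncertain inputs.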

  12. Space shuttle low cost/risk avionics study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    All work breakdown structure elements containing any avionics related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipments to MIL quality standards, basing redundancy on cost effective analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.

  13. Enhanced detectability of fluorinated derivatives of N,N-dialkylamino alcohols and precursors of nitrogen mustards by gas chromatography coupled to Fourier transform infrared spectroscopy analysis for verification of chemical weapons convention.

    PubMed

    Garg, Prabhat; Purohit, Ajay; Tak, Vijay K; Dubey, D K

    2009-11-06

    N,N-Dialkylamino alcohols and N-methyldiethanolamine, N-ethyldiethanolamine and triethanolamine are precursors of VX-type nerve agents and of three different nitrogen mustards, respectively. Their detection and identification are of paramount importance for verification analysis under the chemical weapons convention. GC-FTIR is used as a complementary technique to GC-MS analysis for the identification of these analytes. One constraint of GC-FTIR, its low sensitivity, was overcome by converting the analytes to their fluorinated derivatives. Owing to their high absorptivity in the IR region, these derivatives facilitated detection by GC-FTIR analysis. Derivatizing reagents bearing trimethylsilyl, trifluoroacyl and heptafluorobutyryl groups on an imidazole moiety were screened, and the derivatives formed were analyzed quantitatively by GC-FTIR. Of the reagents studied, heptafluorobutyrylimidazole (HFBI) produced the greatest increase in sensitivity on GC-FTIR detection: sensitivity enhancements of 60-125-fold were observed for the analytes after HFBI derivatization. Absorbances due to the various functional groups responsible for the enhanced sensitivity were compared by determining their corresponding relative molar extinction coefficients ( [Formula: see text] ), considering a uniform optical path length. The RSDs for intraday repeatability and interday reproducibility for the various derivatives were 0.2-1.1% and 0.3-1.8%, respectively. Limits of detection (LOD) down to 10-15 ng were achieved, and the applicability of the method was tested with unknown samples obtained in international proficiency tests.
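The extinction-coefficient comparison rests on the Beer-Lambert law, A = εlc, so at a uniform path length the ratio of coefficients reduces to a ratio of absorbances at equal concentration. A hedged sketch with invented absorbance values (not data from the paper):

```python
# Illustrative Beer-Lambert sketch: relative molar extinction
# coefficients at a fixed optical path length.
def molar_extinction(absorbance, path_cm, conc_mol_per_L):
    # epsilon = A / (l * c), units L mol^-1 cm^-1
    return absorbance / (path_cm * conc_mol_per_L)

eps_tms = molar_extinction(0.12, 1.0, 1e-4)   # hypothetical TMS derivative
eps_hfb = molar_extinction(0.90, 1.0, 1e-4)   # hypothetical HFB derivative
relative_enhancement = eps_hfb / eps_tms      # fold gain in this toy case
```

The same ratio logic underlies the reported 60-125-fold enhancements for the HFBI derivatives.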

  14. On the Use of Machine Learning Techniques for the Mechanical Characterization of Soft Biological Tissues.

    PubMed

    Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier

    2018-06-23

    Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of complex numerical algorithms based on machine learning techniques, such as support vector machines for regression, bagged decision trees and artificial neural networks, is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs and the corresponding material parameters of the Gasser-Ogden-Holzapfel strain energy function (SEF) as outputs. The training and test errors show that the training process is highly effective in finding correlations between inputs and outputs; moreover, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show excellent agreement between the predicted material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, and thus the material model of these tissues was predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
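The inverse-identification idea (curves in, parameters out) can be sketched in miniature. This hedged example uses a trivial nearest-neighbour "model" and a one-parameter neo-Hookean law instead of the Gasser-Ogden-Holzapfel SEF and the regressors used in the paper:

```python
# Illustrative sketch: learn a map from analytical uniaxial stress curves
# to a material parameter mu via nearest-neighbour matching.
def neo_hookean(mu, stretches):
    # incompressible neo-Hookean uniaxial Cauchy stress: mu*(l^2 - 1/l)
    return [mu * (l**2 - 1.0 / l) for l in stretches]

stretches = [1.0, 1.1, 1.2, 1.3]
training = {mu: neo_hookean(mu, stretches) for mu in (5.0, 10.0, 15.0, 20.0)}

def identify(curve):
    # pick the training parameter whose curve is closest in least squares
    return min(training, key=lambda mu: sum((a - b) ** 2
               for a, b in zip(training[mu], curve)))

mu_hat = identify(neo_hookean(10.2, stretches))  # "unseen observation"
```

A real pipeline would train SVR, bagged trees, or a neural network on many such analytical curves and interpolate between parameters rather than snapping to the nearest one.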

  15. Analytical Model for Thermal Elastoplastic Stresses of Functionally Graded Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, P. C.; Chen, G.; Liu, L. S.

    2008-02-15

    A modified analytical model is presented for the thermal elastoplastic stresses of functionally graded materials subjected to thermal loading. The model follows the analytical scheme presented by Y. L. Shen and S. Suresh [6]. In the present model, the functionally graded materials are considered as multilayered materials, each layer consisting of metal and ceramic with a different volume fraction. The ceramic layer and the FGM interlayers are treated as elastic brittle materials, and the metal layer as an elastic-perfectly plastic ductile material. Closed-form solutions for the different characteristic temperatures under thermal loading are presented as functions of the structure geometries and the thermomechanical properties of the materials. A main advance of the present model is that the possibility of the initiation and spread of plasticity from the two sides of the ductile layers is taken into account. Comparing the analytical results with results from finite element analysis, the thermal stresses and deformations from the present model are in good agreement with the numerical ones.

  16. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  17. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  18. 42 CFR 493.803 - Condition: Successful participation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...

  19. 42 CFR 493.803 - Condition: Successful participation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...

  20. 42 CFR 493.803 - Condition: Successful participation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...

  1. 42 CFR 493.803 - Condition: Successful participation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...

  2. Robust control algorithms for Mars aerobraking

    NASA Technical Reports Server (NTRS)

    Shipley, Buford W., Jr.; Ward, Donald T.

    1992-01-01

    Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions differ. Square wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust of all the control algorithms examined.
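The essence of Liapunov tracking is choosing a control law that makes a Liapunov function of the tracking error decrease monotonically. The hedged sketch below is a generic scalar version of that idea, not the paper's exit-phase algorithm: with V = e²/2 and control giving de/dt = -k·e, dV/dt = -k·e² ≤ 0, so the error decays:

```python
# Illustrative Liapunov tracking sketch: error dynamics de/dt = -k*e
# integrated with forward Euler; V = 0.5*e**2 decreases at every step.
def track(e0, k=2.0, dt=0.01, steps=500):
    e = e0
    for _ in range(steps):
        e += dt * (-k * e)   # control law drives the error toward zero
    return e

residual = abs(track(1.0))   # tracking error after 5 s of simulation
```

The aerobraking controllers apply the same principle to the error between the vehicle state and a reference trajectory, which is what makes them robust to the density perturbations tested.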

  3. Potential energy distribution function and its application to the problem of evaporation

    NASA Astrophysics Data System (ADS)

    Gerasimov, D. N.; Yurin, E. I.

    2017-10-01

    The distribution function of potential energy in a strongly correlated system can be calculated analytically. In an equilibrium system (for instance, in the bulk of the liquid) this distribution function depends only on the temperature and the mean potential energy, which can be found through the specific heat of vaporization. At the surface of the liquid the distribution function differs significantly, but its shape still satisfies an analytical correlation. The distribution function of potential energy near the evaporation surface can be used in place of the work function of an atom of the liquid.

  4. Ab initio simulation of diffractometer instrumental function for high-resolution X-ray diffraction

    PubMed Central

    Mikhalychev, Alexander; Benediktovitch, Andrei; Ulyanenkova, Tatjana; Ulyanenkov, Alex

    2015-01-01

    Modeling of the X-ray diffractometer instrumental function for a given optics configuration is important both for planning experiments and for the analysis of measured data. A fast and universal method for instrumental function simulation, suitable for fully automated computer realization and describing both coplanar and noncoplanar measurement geometries for any combination of X-ray optical elements, is proposed. The method can be identified as semi-analytical backward ray tracing and is based on the calculation of a detected signal as an integral of X-ray intensities for all the rays reaching the detector. The high speed of calculation is provided by the expressions for analytical integration over the spatial coordinates that describe the detection point. Consideration of the three-dimensional propagation of rays without restriction to the diffraction plane provides the applicability of the method for noncoplanar geometry and the accuracy for characterization of the signal from a two-dimensional detector. The correctness of the simulation algorithm is checked in the following two ways: by verifying the consistency of the calculated data with the patterns expected for certain simple limiting cases and by comparing measured reciprocal-space maps with the corresponding maps simulated by the proposed method for the same diffractometer configurations. Both kinds of tests demonstrate the agreement of the simulated instrumental function shape with the measured data. PMID:26089760

  5. Transonic buffet behavior of Northrop F-5A aircraft

    NASA Technical Reports Server (NTRS)

    Hwang, C.; Pi, W. S.

    1974-01-01

    Flight tests were performed on an F-5A aircraft to investigate the dynamic buffet pressure distribution on the wing surfaces and the responses during a series of transonic maneuvers called wind-up turns. The conditions under which the tests were conducted are defined. The fluctuating buffet pressure data on the right wing of the aircraft were acquired by miniaturized semiconductor-type pressure transducers flush mounted on the wing. Processing of the fluctuating pressures and responses included the generation of the auto- and cross-power spectra, and of the spatial correlation functions. An analytical correlation procedure was introduced to compute the aircraft response spectra based on the measured buffet pressures.
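Auto-power spectra like those generated from the buffet pressure records can be sketched with a direct DFT. This is a hedged, illustrative implementation (real processing would use windowed, averaged estimates and cross-spectra between transducer pairs); the 64-sample sine stands in for a pressure record:

```python
# Illustrative sketch: auto-power spectrum of a sampled signal via a
# direct DFT (O(n^2); fine for a toy record).
import cmath
import math

def auto_power(x):
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        X = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        spec.append(abs(X) ** 2 / n)   # power at frequency bin k
    return spec

# A sine at 5 cycles per record concentrates its power in bin 5.
x = [math.sin(2 * math.pi * 5 * j / 64) for j in range(64)]
spec = auto_power(x)
peak_bin = max(range(len(spec)), key=spec.__getitem__)
```

Cross-power spectra follow the same pattern with X from one channel multiplied by the conjugate DFT of another, which is how spatial correlation over the wing surface is quantified.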

  6. Flight Validation of Mars Mission Technologies

    NASA Technical Reports Server (NTRS)

    Eberspeaker, P. J.

    2000-01-01

    Effective exploration and characterization of Mars will require the deployment of numerous surface probes, tethered balloon stations and free-flying balloon systems as well as larger landers and orbiting satellite systems. Since launch opportunities exist approximately every two years, it is extremely critical that each and every mission maximize its potential for success. This will require significant testing of each system in an environment that simulates the actual operational environment as closely as possible. Analytical techniques and laboratory testing go a long way toward mitigating the inherent risks associated with space exploration; however, they fall short of accurately simulating the unpredictable operational environment in which these systems must function.

  7. Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders

    NASA Technical Reports Server (NTRS)

    Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)

    2002-01-01

    A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.

  8. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than in the direct numerical K-L approach. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, for the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
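The direct numerical route that the paper improves on is an eigen-decomposition of a covariance matrix; each eigenpair is one K-L mode. A hedged miniature of that baseline (power iteration on a 2×2 toy covariance, nothing like big-data scale) shows the computation involved:

```python
# Illustrative sketch of the direct numerical K-L baseline: extract the
# leading eigenpair of a small covariance matrix by power iteration.
def leading_eigenpair(C, iters=200):
    n = len(C)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)     # dominant eigenvalue estimate
        v = [c / lam for c in w]         # renormalized eigenvector estimate
    return lam, v

C = [[2.0, 1.0], [1.0, 2.0]]             # toy covariance matrix
lam, mode = leading_eigenpair(C)         # dominant K-L eigenvalue is 3
```

For large wave records this matrix grows with the data length, which is exactly the cost and accuracy burden the PSWF-based semi-analytical formulation avoids.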

  9. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.

  10. Acute effects of whole-body vibration on the motor function of patients with stroke: a randomized clinical trial.

    PubMed

    Silva, Adriana Teresa; Dias, Miqueline Pivoto Faria; Calixto, Ruanito; Carone, Antonio Luis; Martinez, Beatriz Bertolaccini; Silva, Andreia Maria; Honorato, Donizeti Cesar

    2014-04-01

    The aim of this study was to investigate the acute effects of whole-body vibration on the motor function of patients with stroke. The present investigation was a randomized clinical trial studying 43 individuals with hemiparesis after stroke, with 33 subjects allocated to the intervention group and 10 subjects allocated to the control group. The intervention group was subjected to one session of vibration therapy (frequency of 50 Hz and amplitude of 2 mm) comprising four 1-min series with 1-min rest intervals between series in three body positions: bipedal stances with the knees flexed to 30 degrees and 90 degrees and a unipedal stance on the paretic limb. The analytical tests were as follows: simultaneous electromyography of the affected and unaffected tibialis anterior and rectus femoris muscles bilaterally in voluntary isometric contraction; the Six-Minute Walk Test; the Stair-Climb Test; and the Timed Get-Up-and-Go Test. The data were analyzed by independent and paired t tests and by analysis of covariance. There was no evidence of a group-by-time interaction effect for the variables affected-side rectus femoris, unaffected-side rectus femoris, affected-side tibialis anterior, unaffected-side tibialis anterior, or the Stair-Climb Test (P > 0.05). There was evidence of a group effect for the Six-Minute Walk Test and the Timed Get-Up-and-Go Test (P < 0.05). Whole-body vibration contributed little to improving the functional levels of stroke patients.

  11. On the Gibbs phenomenon 3: Recovering exponential accuracy in a sub-interval from a spectral partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1993-01-01

    The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
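The phenomenon being overcome is easy to exhibit numerically. The hedged sketch below (a standard textbook illustration, not the paper's reconstruction method) evaluates the Fourier partial sum of a square wave: accuracy is good deep inside the smooth sub-interval but degrades badly near the jump, which is precisely the pointwise accuracy the paper's construction recovers:

```python
# Illustrative Gibbs-phenomenon sketch: partial Fourier sum of sign(x)
# on (-pi, pi), S_N(x) = (4/pi) * sum_{k=0}^{N-1} sin((2k+1)x)/(2k+1).
import math

def partial_sum(x, N):
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                               for k in range(N))

# Error of the N=200 partial sum against the exact value f(x) = 1:
err_interior = abs(partial_sum(math.pi / 2, 200) - 1.0)  # smooth region
err_near_jump = abs(partial_sum(0.01, 200) - 1.0)        # near x = 0 jump
```

The interior error shrinks as N grows, but the near-jump overshoot (about 9% of the jump) does not; re-expanding the partial-sum information in a suitable basis, as in this paper series, restores exponential accuracy even at the discontinuity.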

  12. Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere

    NASA Technical Reports Server (NTRS)

    Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.

    1975-01-01

    The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.

  13. Validation protocol of analytical procedures for quantification of drugs in polymeric systems for parenteral administration: dexamethasone phosphate disodium microparticles.

    PubMed

    Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel

    2013-12-15

    In this work a protocol to validate analytical procedures for the quantification of drug substances formulated in polymeric systems is developed, covering both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test). This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated, in the range from 10 to 50 μg mL(-1) for the assay:content test procedure and from 0.25 to 10 μg mL(-1) for the assay:dissolution test procedure. The robustness of the analytical method used to extract drug from the microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, although the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
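The linearity leg of such a protocol reduces to a least-squares calibration line over the working range and a check that the correlation coefficient is close to 1. A hedged sketch with invented peak areas over the assay:content range (not data from the paper):

```python
# Illustrative linearity check: least-squares fit of peak area vs.
# concentration, returning slope, intercept, and correlation r.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

conc = [10, 20, 30, 40, 50]            # assay:content range, ug/mL
area = [102, 198, 305, 401, 498]       # hypothetical peak areas
slope, intercept, r = linear_fit(conc, area)
```

Proportionality (the issue flagged for the assay:dissolution method) would additionally require the intercept to be statistically indistinguishable from zero, which a fit like this makes easy to test.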

  14. Capillary device refilling. [liquid rocket propellant tank tests

    NASA Technical Reports Server (NTRS)

    Blatt, M. H.; Merino, F.; Symons, E. P.

    1980-01-01

    An analytical and experimental study was conducted dealing with refilling start baskets (capillary devices) with settled fluid. A computer program was written to include dynamic pressure, screen wicking, multiple-screen barriers, standpipe screens, variable vehicle mass for computing vehicle acceleration, and calculation of tank outflow rate and vapor pullthrough height. An experimental apparatus was fabricated and tested to provide data for correlation with the analytical model; the test program was conducted in normal gravity using a scale-model capillary device and ethanol as the test fluid. The test data correlated with the analytical model; the model is a versatile and apparently accurate tool for predicting start basket refilling under actual mission conditions.

  15. Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home

    EPA Pesticide Factsheets

    The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.

  16. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    2018-03-01

    Presented in this document is a small portion of the tests that exist in the Sierra / SolidMechanics (Sierra / SM) verification test suite. Most of these tests are run nightly with the Sierra / SM code suite, and the results of the test are checked versus the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and comparison of the Sierra / SM code results to the analytic solution is provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra / SM Example Problems Manual. Note, many other verification tests exist in the Sierra / SM test suite, but have not yet been included in this manual.
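The nightly check described above amounts to comparing a computed result against a closed-form analytic solution within a tolerance. A hedged, generic harness (not Sierra/SM's actual test driver; the cantilever numbers are invented for illustration):

```python
# Illustrative verification check: accept a code result if it matches
# the analytic solution to a relative tolerance.
def verify(computed, analytic, rel_tol=1e-3):
    return abs(computed - analytic) <= rel_tol * abs(analytic)

# Analytic tip deflection of an end-loaded cantilever: P*L**3 / (3*E*I).
P, L, E, I = 100.0, 2.0, 200e9, 1e-6
tip_analytic = P * L**3 / (3 * E * I)
passed = verify(1.334e-3, tip_analytic)   # hypothetical solver result
```

Running such checks nightly, plus mesh-convergence sweeps that confirm the error shrinks at the expected rate under refinement, is what keeps a capability in "verified" status.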

  17. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel Jose

    Presented in this document is a small portion of the tests that exist in the Sierra / SolidMechanics (Sierra / SM) verification test suite. Most of these tests are run nightly with the Sierra / SM code suite, and the results of the test are checked versus the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and comparison of the Sierra / SM code results to the analytic solution is provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra / SM Example Problems Manual. Note, many other verification tests exist in the Sierra / SM test suite, but have not yet been included in this manual.

  18. Design review of fluid film bearing testers

    NASA Technical Reports Server (NTRS)

    Scharrer, Joseph K.

    1993-01-01

    The designs of three existing testers (Hybrid Bearing Tester, OTV Bearing Tester, and Long Life Bearing Tester) owned by NASA were reviewed for their capability to serve as a multi-purpose cryogenic fluid film bearing tester. The primary tester function is the validation of analytical predictions for fluid film bearing steady state and dynamic performance. Evaluation criteria were established for test bearing configurations, test fluids, instrumentation, and test objectives. Each tester was evaluated with respect to these criteria. A determination was made of design improvements which would allow the testers to meet the stated criteria. The cost and time required to make the design changes were estimated. A recommendation based on the results of this study was made to proceed with the Hybrid Bearing Tester.

  19. A Novel Wireless Wearable Volatile Organic Compound (VOC) Monitoring Device with Disposable Sensors.

    PubMed

    Deng, Yue; Chen, Cheng; Xian, Xiaojun; Tsow, Francis; Verma, Gaurav; McConnell, Rob; Fruin, Scott; Tao, Nongjian; Forzani, Erica S

    2016-12-03

    A novel portable wireless volatile organic compound (VOC) monitoring device with disposable sensors is presented. The device is miniaturized, light, easy-to-use, and cost-effective. Different field tests have been carried out to identify the operational, analytical, and functional performance of the device and its sensors. The device was compared to a commercial photo-ionization detector, gas chromatography-mass spectrometry, and carbon monoxide detector. In addition, environmental operational conditions, such as barometric change, temperature change and wind conditions were also tested to evaluate the device performance. The multiple comparisons and tests indicate that the proposed VOC device is adequate to characterize personal exposure in many real-world scenarios and is applicable for personal daily use.

  20. A Novel Wireless Wearable Volatile Organic Compound (VOC) Monitoring Device with Disposable Sensors

    PubMed Central

    Deng, Yue; Chen, Cheng; Xian, Xiaojun; Tsow, Francis; Verma, Gaurav; McConnell, Rob; Fruin, Scott; Tao, Nongjian; Forzani, Erica S.

    2016-01-01

    A novel portable wireless volatile organic compound (VOC) monitoring device with disposable sensors is presented. The device is miniaturized, light, easy-to-use, and cost-effective. Different field tests have been carried out to identify the operational, analytical, and functional performance of the device and its sensors. The device was compared to a commercial photo-ionization detector, gas chromatography-mass spectrometry, and carbon monoxide detector. In addition, environmental operational conditions, such as barometric change, temperature change and wind conditions were also tested to evaluate the device performance. The multiple comparisons and tests indicate that the proposed VOC device is adequate to characterize personal exposure in many real-world scenarios and is applicable for personal daily use. PMID:27918484

  1. Whole-brain functional connectivity during acquisition of novel grammar: Distinct functional networks depend on language learning abilities.

    PubMed

    Kepinska, Olga; de Rover, Mischa; Caspers, Johanneke; Schiller, Niels O

    2017-03-01

In an effort to advance the understanding of brain function and organisation accompanying second language learning, we investigate the neural substrates of novel grammar learning in a group of healthy adults, consisting of participants with high and average language analytical abilities (LAA). By means of an Independent Components Analysis, a data-driven approach to functional connectivity of the brain, the fMRI data collected during a grammar-learning task were decomposed into maps representing separate cognitive processes. These included the default mode, task-positive, working memory, visual, cerebellar and emotional networks. We further tested for differences within the components, representing individual differences between the High and Average LAA learners. We found high analytical abilities to be coupled with stronger contributions to the task-positive network from areas adjacent to bilateral Broca's region, and with stronger connectivity within the working memory and emotional networks. Average LAA participants displayed stronger engagement within the task-positive network from areas adjacent to the right-hemisphere homologue of Broca's region, typical of lower-level processing (visual word recognition), and increased connectivity within the default mode network. The significance of each of the identified networks for the grammar learning process is presented alongside a discussion of established markers of inter-individual learner differences. We conclude that, in terms of functional connectivity, the engagement of the brain's networks during grammar acquisition is coupled with one's language learning abilities. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Color matters--material ejection and ion yields in UV-MALDI mass spectrometry as a function of laser wavelength and laser fluence.

    PubMed

    Soltwisch, Jens; Jaskolla, Thorsten W; Dreisewerd, Klaus

    2013-10-01

The success of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) as a widely employed analytical tool in the biomolecular sciences builds strongly on an effective laser-material interaction that results in a soft co-desorption and ionization of matrix and embedded biomolecules. To obtain a maximized ion yield for the analyte(s) of interest, in general both wavelength and fluence need to be tuned to match the specific optical absorption profile of the matrix used. However, commonly only lasers with fixed emission wavelengths of either 337 or 355 nm are used for MALDI-MS. Here, we employed a wavelength-tunable dye laser and recorded both the neutral material ejection and the MS ion data in a wide wavelength and fluence range between 280 and 377.5 nm. α-Cyano-4-hydroxycinnamic acid (HCCA), 4-chloro-α-cyanocinnamic acid (ClCCA), α-cyano-2,4-difluorocinnamic acid (DiFCCA), and 2,5-dihydroxybenzoic acid (DHB) were investigated as matrices, and several peptides as analytes. Recording of the material ejection was achieved by adopting a photoacoustic approach. Relative ion yields were derived by dividing the ion signals by the photoacoustic signals. In this way, distinct wavelength/fluence regions can be identified for which maximum ion yields were obtained. For the tested matrices, optimal results were achieved for wavelengths corresponding to areas of high optical absorption of the respective matrix and at fluences about a factor of 2-3 above the matrix- and wavelength-dependent ion detection threshold fluences. The material ejection as probed by the photoacoustic method is fitted excellently by the quasithermal model, while a sigmoidal function allows for an empirical description of the ion signal-fluence relationship.

  3. Chemically-functionalized microcantilevers for detection of chemical, biological and explosive material

    DOEpatents

    Pinnaduwage, Lal A [Knoxville, TN; Thundat, Thomas G [Knoxville, TN; Brown, Gilbert M [Knoxville, TN; Hawk, John Eric [Olive Branch, MS; Boiadjiev, Vassil I [Knoxville, TN

    2007-04-24

A chemically functionalized cantilever system has a cantilever coated on one side with a reagent or biological species that binds to an analyte. The system is of particular value when the analyte is a toxic chemical, a biological warfare agent, or an explosive.

  4. openECA Detailed Design Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

This document describes the functional and non-functional requirements for: (1) the openECA platform and (2) the included analytic systems that will validate the operational readiness and performance of the openECA platform and provide out-of-box value to those who implement the openECA platform with an initial collection of analytics.

  5. Functional Analytic Psychotherapy and Supervision

    ERIC Educational Resources Information Center

    Callaghan, Glenn M.

    2006-01-01

The interpersonal behavior therapy, Functional Analytic Psychotherapy (FAP), has been empirically investigated and described in the literature for a little over a decade. Still, little has been written about the process of supervision in FAP. While there are many aspects of FAP supervision shared by other contemporary behavior therapies and…

  6. Space Station CMIF extended duration metabolic control test

    NASA Technical Reports Server (NTRS)

    Schunk, Richard G.; Bagdigian, Robert M.; Carrasquillo, Robyn L.; Ogle, Kathryn Y.; Wieland, Paul O.

    1989-01-01

The Space Station Extended Duration Metabolic Control Test (EMCT) was conducted at the MSFC Core Module Integration Facility. The primary objective of the EMCT was to gather performance data from a partially-closed regenerative Environmental Control and Life Support (ECLS) system functioning under steady-state conditions. Included is a description of the EMCT configuration, a summary of events, a discussion of anomalies that occurred during the test, and detailed results and analysis from individual measurements of water and gas samples taken during the test. A comparison of the physical, chemical, and microbiological methods used in the post-test laboratory analyses of the water samples is included. Also described is the preprototype ECLS hardware used in the test, providing an overall process description and theory of operation for each hardware item. Analytical results pertaining to a system-level mass balance and selected system power estimates are also included.

  7. The partial coherence modulation transfer function in testing lithography lens

    NASA Astrophysics Data System (ADS)

    Huang, Jiun-Woei

    2018-03-01

Because lithography demands high performance in projecting a semiconductor mask onto the wafer, the lens must be nearly free of spherical and coma aberration. In situ optical testing must therefore be established to diagnose and verify lens performance and to suggest further improvements before the lens is built and integrated with the light source. The modulation transfer function at the critical dimension (CD) is the main performance parameter for evaluating the smallest line width the platform can fabricate when producing integrated circuits. Although the modulation transfer function (MTF) is widely used to evaluate optical systems, in lithography the contrast of each line pair is measured in one or two dimensions while the lens stands alone on a test bench integrated with a coherent or near-coherent light source, at dimensions near the diffraction limit; the MTF is then determined not only by the lens but also by the illumination of the platform. In this study, the partial coherence modulation transfer function (PCMTF) is proposed for testing a lithography lens by measuring the MTF of the in situ lens at high spatial frequencies, blended with partially coherent and incoherent illumination. PCMTF can serve as one measurement for evaluating an imperfect lithography lens and guiding further improvement of its performance.
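The modulation measurement behind an MTF value can be illustrated with a short sketch (function names are illustrative and a sinusoidal line-pair profile is assumed, not the paper's actual procedure): the Michelson contrast of the imaged line pairs is divided by the contrast of the object pattern.

```python
import math

def michelson_contrast(intensity):
    """Modulation (Michelson contrast) of a measured line-pair profile."""
    i_max, i_min = max(intensity), min(intensity)
    return (i_max - i_min) / (i_max + i_min)

def mtf(image_profile, object_contrast=1.0):
    """MTF at one spatial frequency: image modulation over object modulation."""
    return michelson_contrast(image_profile) / object_contrast

# Synthetic line-pair profile whose modulation the optics reduced to 0.6
profile = [1.0 + 0.6 * math.sin(2 * math.pi * x / 16) for x in range(64)]
print(round(mtf(profile), 3))  # → 0.6
```

In a partially coherent setup, the measured modulation folds in the illumination as well as the lens, which is exactly why the abstract distinguishes PCMTF from the lens-only MTF.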

  8. Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; /SLAC; Drummond, James M.

    2012-02-15

We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral Ω^(2), also plays a key role in a new representation of the remainder function R_6^(2) in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) × (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) × (parity even) part. The second non-polylogarithmic function, the loop integral Ω̃^(2), characterizes this sector. Both Ω^(2) and Ω̃^(2) can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.

  9. Development of a Test to Evaluate Students' Analytical Thinking Based on Fact versus Opinion Differentiation

    ERIC Educational Resources Information Center

    Thaneerananon, Taveep; Triampo, Wannapong; Nokkaew, Artorn

    2016-01-01

    Nowadays, one of the biggest challenges of education in Thailand is the development and promotion of the students' thinking skills. The main purposes of this research were to develop an analytical thinking test for 6th grade students and evaluate the students' analytical thinking. The sample was composed of 3,567 6th grade students in 2014…

  10. The Role of Functional Foods, Nutraceuticals, and Food Supplements in Intestinal Health

    PubMed Central

    Cencic, Avrelija; Chingwaru, Walter

    2010-01-01

New eating habits and current trends in production and consumption have health, environmental and social impacts. The European Union is fighting diseases characteristic of a modern age, such as obesity, osteoporosis, cancer, diabetes, allergies and dental problems. Developed countries are also faced with problems relating to aging populations, high-energy foods, and unbalanced diets. The potential of nutraceuticals/functional foods/food supplements in mitigating health problems, especially in the gastrointestinal (GI) tract, is discussed. Certain members of the gut microflora (e.g., probiotic/protective strains) play a role in host health due to their involvement in nutritional, immunologic and physiological functions. The potential mechanisms by which nutraceuticals/functional foods/food supplements may alter a host's health are also highlighted in this paper. The establishment of novel functional cell models of the GI tract and analytical tools that allow tests in controlled experiments are highly desired for gut research. PMID:22254045

  11. Is age kinder to the initially more able?: Yes, and no

    PubMed Central

    Gow, Alan J.; Johnson, Wendy; Mishra, Gita; Richards, Marcus; Kuh, Diana; Deary, Ian J.

    2012-01-01

Although a number of analyses have addressed whether initial cognitive ability level is associated with age-related cognitive decline, results have been inconsistent. Latent growth curve modeling was applied to two aging cohorts, extending previous analyses with a further wave of data collection or applying a more appropriate analytical methodology than previously used. In the Lothian Birth Cohort 1921, cognitive ability at age 11 was not associated with cognitive change from age 79 to 87, either in general cognitive ability or in tests of reasoning, memory and executive function. However, data from the MRC National Survey of Health and Development suggested that higher cognitive ability at age 15 predicted less decline between ages 43 and 53 years in a latent cognitive factor from tests of verbal memory and search speed, and in search speed when considered separately. The results are discussed in terms of the differences between the cohorts and the interpretability of the analytical approach. Suggestions are made about when initial ability might be cognitively protective, and about the study requirements needed to bring about a clearer resolution. PMID:23690652

  12. Distinct neural substrates of visuospatial and verbal-analytic reasoning as assessed by Raven's Advanced Progressive Matrices.

    PubMed

    Chen, Zhencai; De Beuckelaer, Alain; Wang, Xu; Liu, Jia

    2017-11-24

Recent studies revealed spontaneous neural activity to be associated with fluid intelligence (gF), which is commonly assessed by Raven's Advanced Progressive Matrices and embeds two types of reasoning: visuospatial and verbal-analytic reasoning. With resting-state fMRI data, using global brain connectivity (GBC) analysis, which averages the functional connectivity of a voxel in relation to all other voxels in the brain, distinct neural correlates of these two reasoning types were found. For visuospatial reasoning, negative correlations were observed in both the primary visual cortex (PVC) and the precuneus, and positive correlations were observed in the temporal lobe. For verbal-analytic reasoning, negative correlations were observed in the right inferior frontal gyrus (rIFG), dorsal anterior cingulate cortex and temporoparietal junction, and positive correlations were observed in the angular gyrus. Furthermore, an interaction between GBC value and type of reasoning was found in the PVC, rIFG and the temporal lobe. These findings suggest that visuospatial reasoning benefits more from elaborate perception of stimulus features, whereas verbal-analytic reasoning benefits more from feature integration and hypothesis testing. In sum, for different types of reasoning in gF, the present study offers the first empirical evidence of separate neural substrates in the resting brain.
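The GBC measure described above, the average functional connectivity of one voxel with all other voxels, can be sketched in a few lines. The toy time series and function names are illustrative, not the authors' actual pipeline:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def global_brain_connectivity(timeseries):
    """GBC of each voxel: mean correlation with every other voxel."""
    n = len(timeseries)
    gbc = []
    for i in range(n):
        r = [pearson(timeseries[i], timeseries[j]) for j in range(n) if j != i]
        gbc.append(sum(r) / len(r))
    return gbc

# Three toy voxel time series; the first two are identical up to scale.
ts = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
print([round(g, 2) for g in global_brain_connectivity(ts)])  # → [0.0, 0.0, -1.0]
```

Real analyses operate on tens of thousands of voxels, so the correlation matrix is computed vectorized rather than pairwise as here.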

  13. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    PubMed

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ∼1000 times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log10 to 6 log10 for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  14. An analytical study of physical models with inherited temporal and spatial memory

    NASA Astrophysics Data System (ADS)

    Jaradat, Imad; Alquran, Marwan; Al-Khaled, Kamel

    2018-04-01

Du et al. (Sci. Rep. 3, 3431 (2013)) demonstrated that the fractional derivative order can be physically interpreted as a memory index by fitting test data of memory phenomena. The aim of this work is to study analytically the joint effect of the memory index on the time and space coordinates simultaneously. For this purpose, we introduce a novel bivariate fractional power series expansion accompanied by two fractional derivative orders α, β ∈ (0,1]. Further, some convergence criteria concerning our expansion are presented, and an analog of the well-known bivariate Taylor's formula in the sense of mixed fractional derivatives is obtained. Finally, to show the functionality and efficiency of this expansion, we employ the corresponding Taylor's series method to obtain closed-form solutions of various physical models with inherited time and space memory.

  15. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
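The likelihood ratio testing of nested models mentioned above can be illustrated with a minimal sketch. The log-likelihood values are hypothetical, and the single-degree-of-freedom case is assumed, where the chi-square survival function reduces to erfc(√(x/2)):

```python
import math

def lrt_pvalue_1df(loglik_full, loglik_nested):
    """Likelihood-ratio test of nested models differing by one parameter.
    The statistic 2*(ll_full - ll_nested) is asymptotically chi-square with
    1 degree of freedom, whose survival function is erfc(sqrt(x/2))."""
    stat = 2.0 * (loglik_full - loglik_nested)
    pvalue = math.erfc(math.sqrt(stat / 2.0))
    return stat, pvalue

# Toy log-likelihoods: the richer model improves the fit by 3 log-units.
stat, p = lrt_pvalue_1df(-120.0, -123.0)
print(round(stat, 2), round(p, 4))
```

For models differing by k parameters, the chi-square distribution with k degrees of freedom would be used instead.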

  16. RATE OF APPROXIMATION OF PIECEWISE-ANALYTIC FUNCTIONS BY RATIONAL FRACTIONS IN THE L_p-METRICS, 0 < p\\leq\\infty

    NASA Astrophysics Data System (ADS)

    Vjačeslavov, N. S.

    1980-02-01

    In this paper estimates are found for L_pR_n(f) - the least deviation in the L_p-metric, 0 < p\\leq\\infty, of a piecewise analytic function f from the rational functions of degree at most n. It is shown that these estimates are sharp in a well-defined sense.Bibliography: 12 titles.

  17. Wavelets and the Poincaré half-plane

    NASA Astrophysics Data System (ADS)

    Klauder, J. R.; Streater, R. F.

    1994-01-01

    A square-integrable signal of positive energy is transformed into an analytic function in the upper half-plane, on which SL(2,R) acts. It is shown that this analytic function is determined by its scalar products with the discrete family of functions obtained by acting with SL(2,Z) on a cyclic vector, provided that the spin of the representation is less than 3.

  18. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  19. Modeling of soil water retention from saturation to oven dryness

    USGS Publications Warehouse

    Rossi, Cinzia; Nimmo, John R.

    1994-01-01

    Most analytical formulas used to model moisture retention in unsaturated porous media have been developed for the wet range and are unsuitable for applications in which low water contents are important. We have developed two models that fit the entire range from saturation to oven dryness in a practical and physically realistic way with smooth, continuous functions that have few parameters. Both models incorporate a power law and a logarithmic dependence of water content on suction, differing in how these two components are combined. In one model, functions are added together (model “sum”); in the other they are joined smoothly together at a discrete point (model “junction”). Both models also incorporate recent developments that assure a continuous derivative and force the function to reach zero water content at a finite value of suction that corresponds to oven dryness. The models have been tested with seven sets of water retention data that each cover nearly the entire range. The three-parameter sum model fits all data well and is useful for extrapolation into the dry range when data for it are unavailable. The two-parameter junction model fits most data sets almost as well as the sum model and has the advantage of being analytically integrable for convenient use with capillary-bundle models to obtain the unsaturated hydraulic conductivity.
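The structure of a "sum"-type model, a power-law term plus a logarithmic term forced to zero water content at the oven-dryness suction, can be sketched as follows. This is an illustrative functional form with made-up parameters, not the exact published parameterization:

```python
import math

def retention_sum_model(psi, theta_s, psi_i, lam, c, psi_d):
    """Illustrative 'sum'-type retention curve (not the exact published form):
    a wet-range power law plus a dry-range logarithmic term, with water
    content pinned to zero at the oven-dryness suction psi_d."""
    if psi <= psi_i:
        return theta_s                      # near saturation
    if psi >= psi_d:
        return 0.0                          # oven dryness
    power = theta_s * (psi_i / psi) ** lam  # power-law dependence on suction
    logterm = c * math.log(psi_d / psi)     # logarithmic dependence on suction
    return min(theta_s, power + logterm)

# Water content falls monotonically from saturation toward oven dryness.
suctions = [1.0, 10.0, 100.0, 1e4, 1e6]
theta = [retention_sum_model(p, 0.45, 2.0, 0.3, 0.01, 1e6) for p in suctions]
print([round(t, 3) for t in theta])
```

The "junction" model would instead switch between the two terms at a discrete suction chosen so that the curve and its derivative remain continuous.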

  20. The pathway not taken: understanding 'omics data in the perinatal context.

    PubMed

    Edlow, Andrea G; Slonim, Donna K; Wick, Heather C; Hui, Lisa; Bianchi, Diana W

    2015-07-01

    'Omics analysis of large datasets has an increasingly important role in perinatal research, but understanding gene expression analyses in the fetal context remains a challenge. We compared the interpretation provided by a widely used systems biology resource (ingenuity pathway analysis [IPA]) with that from gene set enrichment analysis (GSEA) with functional annotation curated specifically for the fetus (Developmental FunctionaL Annotation at Tufts [DFLAT]). Using amniotic fluid supernatant transcriptome datasets previously produced by our group, we analyzed 3 different developmental perturbations: aneuploidy (Trisomy 21 [T21]), hemodynamic (twin-twin transfusion syndrome [TTTS]), and metabolic (maternal obesity) vs sex- and gestational age-matched control subjects. Differentially expressed probe sets were identified with the use of paired t-tests with the Benjamini-Hochberg correction for multiple testing (P < .05). Functional analyses were performed with IPA and GSEA/DFLAT. Outputs were compared for biologic relevance to the fetus. Compared with control subjects, there were 414 significantly dysregulated probe sets in T21 fetuses, 2226 in TTTS recipient twins, and 470 in fetuses of obese women. Each analytic output was unique but complementary. For T21, both IPA and GSEA/DFLAT identified dysregulation of brain, cardiovascular, and integumentary system development. For TTTS, both analytic tools identified dysregulation of cell growth/proliferation, immune and inflammatory signaling, brain, and cardiovascular development. For maternal obesity, both tools identified dysregulation of immune and inflammatory signaling, brain and musculoskeletal development, and cell death. GSEA/DFLAT identified substantially more dysregulated biologic functions in fetuses of obese women (1203 vs 151). For all 3 datasets, GSEA/DFLAT provided more comprehensive information about brain development. IPA consistently provided more detailed annotation about cell death. 
IPA produced many dysregulated terms that pertained to cancer (14 in T21, 109 in TTTS, 26 in maternal obesity); GSEA/DFLAT did not. Interpretation of the fetal amniotic fluid supernatant transcriptome depends on the analytic program, which suggests that >1 resource should be used. Within IPA, physiologic cellular proliferation in the fetus produced many "false positive" annotations that pertained to cancer, which reflects its bias toward adult diseases. This study supports the use of gene annotation resources with a developmental focus, such as DFLAT, for 'omics studies in perinatal medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
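The Benjamini-Hochberg correction used above to identify differentially expressed probe sets (P < .05) can be sketched as a minimal step-up procedure (the p-values are illustrative, not the study's data):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a True/False list
    marking which hypotheses are rejected at FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    # ... then reject exactly the k smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
```

Unlike a per-test cutoff, the step-up rule controls the expected fraction of false discoveries among all rejected hypotheses, which is why it is standard for transcriptome-scale testing.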

  1. Modeling of classical swirl injector dynamics

    NASA Astrophysics Data System (ADS)

    Ismailov, Maksud M.

    The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even the experimental observation is limited up to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so called artificial viscosity factor. This causes an uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using the linear small perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components, which are the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies that are coincident with those resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by analytical methods and BEM. 
It should be noted, however, that Bazarov's theory does not predict the resonant peaks. Overall, this methodology provides a clearer understanding of the injector dynamics than Bazarov's. Even though the exact value of the response cannot be obtained at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from which the theoretical description of reflection/refraction, resonance, and their mutual interaction may be refined to higher order to obtain a more precise value.

  2. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, the number of tests per unit time, and the analytical time and speed per test.

  3. A microfluidic device integrating dual CMOS polysilicon nanowire sensors for on-chip whole blood processing and simultaneous detection of multiple analytes.

    PubMed

    Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu

    2016-08-02

    The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. 
Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.

  4. Thinking outside the box: effects of modes larger than the survey on matter power spectrum covariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putter, Roland de; Wagner, Christian; Verde, Licia

    2012-04-01

    Accurate power spectrum (or correlation function) covariance matrices are a crucial requirement for cosmological parameter estimation from large scale structure surveys. In order to minimize reliance on computationally expensive mock catalogs, it is important to have a solid analytic understanding of the different components that make up a covariance matrix. Considering the matter power spectrum covariance matrix, it has recently been found that there is a potentially dominant effect on mildly non-linear scales due to power in modes of size equal to and larger than the survey volume. This beat coupling effect has been derived analytically in perturbation theory and, while it has been tested with simulations, some questions remain unanswered. Moreover, there is an additional effect of these large modes, which has so far not been included in analytic studies, namely the effect on the estimated average density which enters the power spectrum estimate. In this article, we work out analytic, perturbation theory based expressions including both the beat coupling and this local average effect and we show that while, when isolated, beat coupling indeed causes large excess covariance in agreement with the literature, in a realistic scenario this is compensated almost entirely by the local average effect, leaving only ∼ 10% of the excess. We test our analytic expressions by comparison to a suite of large N-body simulations, using both full simulation boxes and subboxes thereof to study cases without beat coupling, with beat coupling and with both beat coupling and the local average effect. For the variances, we find excellent agreement with the analytic expressions for k < 0.2 h Mpc^-1 at z = 0.5, while the correlation coefficients agree to beyond k = 0.4 h Mpc^-1. As expected, the range of agreement increases towards higher redshift and decreases slightly towards z = 0. We finish by including the large-mode effects in a full covariance matrix description for arbitrary survey geometry and confirming its validity using simulations. This may be useful as a stepping stone towards building an actual galaxy (or other tracer's) power spectrum covariance matrix.
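    The compensation between beat coupling and the local average effect can be illustrated with a toy 1-D Gaussian field plus a quadratic mode coupling. Everything here (spectra, coupling constant, subbox counts) is illustrative, not the paper's perturbation-theory calculation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    N, nsub = 2**15, 32          # full-box grid points, number of subbox "surveys"
    M = N // nsub

    def gaussian_field(lo, hi, std):
        """Zero-mean Gaussian field with flat power confined to mode indices [lo, hi)."""
        a = np.zeros(N//2 + 1, dtype=complex)
        a[lo:hi] = rng.normal(size=hi - lo) + 1j*rng.normal(size=hi - lo)
        f = np.fft.irfft(a, n=N)
        return f * std / f.std()

    big = gaussian_field(1, nsub, 0.15)          # modes larger than one subbox
    small = gaussian_field(4*nsub, N//4, 0.10)   # sub-subbox ("survey-scale") modes

    # Quadratic mode coupling, mimicking second-order perturbation theory: the
    # small-scale amplitude is modulated by the super-survey background, which
    # is the beat-coupling effect.
    delta = big + small + big*small
    density = 1.0 + delta

    def subbox_power_scatter(local_norm):
        """Variance across subboxes of a small-scale power proxy (Parseval)."""
        powers = []
        for s in density.reshape(nsub, M):
            nbar = s.mean() if local_norm else density.mean()
            d = s/nbar - 1.0                 # local vs global mean normalization
            powers.append((d - d.mean()).var())
        return np.asarray(powers).var()

    v_global = subbox_power_scatter(local_norm=False)
    v_local = subbox_power_scatter(local_norm=True)
    # Dividing by the locally estimated mean cancels the beat-coupling
    # modulation, shrinking the scatter of the power estimates.
    print(v_global, v_local)
    ```

    In this toy model the cancellation is exact by construction; in the paper's realistic calculation roughly 10% of the excess covariance survives.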

  5. Interpersonal Mindfulness Informed by Functional Analytic Psychotherapy: Findings from a Pilot Randomized Trial

    ERIC Educational Resources Information Center

    Bowen, Sarah; Haworth, Kevin; Grow, Joel; Tsai, Mavis; Kohlenberg, Robert

    2012-01-01

    Functional Analytic Psychotherapy (FAP; Kohlenberg & Tsai, 1991) aims to improve interpersonal relationships through skills intended to increase closeness and connection. The current trial assessed a brief mindfulness-based intervention informed by FAP, in which an interpersonal element was added to a traditional intrapersonal mindfulness…

  6. Utility of the summation chromatographic peak integration function to avoid manual reintegrations in the analysis of targeted analytes

    USDA-ARS?s Scientific Manuscript database

    As sample preparation and analytical techniques have improved, data handling has become the main limitation in automated high-throughput analysis of targeted chemicals in many applications. Conventional chromatographic peak integration functions rely on complex software and settings, but untrustwor...

  7. Functional Analytic Psychotherapy for Interpersonal Process Groups: A Behavioral Application

    ERIC Educational Resources Information Center

    Hoekstra, Renee

    2008-01-01

    This paper is an adaptation of Kohlenberg and Tsai's work, Functional Analytical Psychotherapy (1991), or FAP, to group psychotherapy. This author applied a behavioral rationale for interpersonal process groups by illustrating key points with a hypothetical client. Suggestions are also provided for starting groups, identifying goals, educating…

  8. Functional Analytic Psychotherapy with Juveniles Who Have Committed Sexual Offenses

    ERIC Educational Resources Information Center

    Newring, Kirk A. B.; Wheeler, Jennifer G.

    2012-01-01

    We have previously discussed the application of Functional Analytic Psychotherapy (FAP) with adults who have committed sexual offense behaviors (Newring & Wheeler, 2010). The present entry borrows heavily from the foundation presented in that chapter, and extends this approach to working with adolescents, youth, and juveniles with sexual offense…

  9. Equifinality in Functional Analytic Psychotherapy: Different Strokes for Different Folks

    ERIC Educational Resources Information Center

    Darrow, Sabrina M.; Dalto, Georgia; Follette, William C.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is an interpersonal behavior therapy that relies on a therapist's ability to contingently respond to in-session client behavior. Valued behavior change in clients results from the therapist shaping more effective client interpersonal behaviors by providing effective social reinforcement when these behaviors…

  10. Promoting Efficacy Research on Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Maitland, Daniel W. M.; Gaynor, Scott T.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a form of therapy grounded in behavioral principles that utilizes therapist reactions to shape target behavior. Despite a growing literature base, there is a paucity of research to establish the efficacy of FAP. As a general approach to psychotherapy, and how the therapeutic relationship produces change,…

  11. Pre-analytical issues in the haemostasis laboratory: guidance for the clinical laboratories.

    PubMed

    Magnette, A; Chatelain, M; Chatelain, B; Ten Cate, H; Mullier, F

    2016-01-01

    Ensuring quality has become a daily requirement in laboratories. In haemostasis, even more than in other disciplines of biology, quality is determined by a pre-analytical step that encompasses all procedures, starting with the formulation of the medical question, and includes patient preparation, sample collection, handling, transportation, processing, and storage until the time of analysis. This step, based on a variety of manual activities, is the most vulnerable part of the total testing process: it is a major determinant of the reliability and validity of results in haemostasis and constitutes the most important source of erroneous or uninterpretable results. Pre-analytical errors may occur throughout the testing process and arise from unsuitable, inappropriate or wrongly handled procedures. Problems may arise during the collection of blood specimens, such as misidentification of the sample, use of inadequate devices or needles, incorrect order of draw, prolonged tourniquet application, unsuccessful attempts to locate the vein, incorrect use of additive tubes, collection of samples unsuitable in quality or quantity, or inappropriate mixing of a sample. Other factors can alter sample constituents after collection, during transportation, preparation and storage. Laboratory errors can often have serious adverse consequences. Lack of standardized procedures for sample collection accounts for most of the errors encountered within the total testing process. These errors can also have clinical consequences and a significant impact on patient care, especially those related to specialized tests, as these are often considered "diagnostic". Controlling pre-analytical variables is critical, since this has a direct influence on the quality of results and on their clinical reliability. Accurate standardization of the pre-analytical phase is of pivotal importance for achieving reliable coagulation test results and should reduce the effects of influencing factors. This review summarizes the most important recommendations regarding pre-analytical factors for coagulation testing and is intended as a tool to increase awareness of their importance.

  12. A multi-analyte biosensor for the simultaneous label-free detection of pathogens and biomarkers in point-of-need animal testing.

    PubMed

    Ewald, Melanie; Fechner, Peter; Gauglitz, Günter

    2015-05-01

    For the first time, a multi-analyte biosensor platform has been developed using the label-free 1-lambda-reflectometry technique. It is the first such platform that performs multi-analyte measurements without imaging techniques. It is designed to be portable and cost-effective and therefore allows for point-of-need testing or on-site field-testing, with possible applications in diagnostics. This work highlights the application possibilities of this platform in the field of animal testing, but is also relevant and transferable to human diagnostics. The performance of the platform has been evaluated using relevant reference systems, namely a biomarker (C-reactive protein) and serology (anti-Salmonella antibodies), as well as a panel of real samples (animal sera). Comparison of the working range and limit of detection shows no loss of performance in transferring the separate assays to the multi-analyte setup. Moreover, the new multi-analyte platform allows for discrimination between sera of animals infected with different Salmonella subtypes.
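    The working-range and limit-of-detection comparison mentioned above is typically based on a calibration curve. A minimal sketch with hypothetical signal values, using the common 3-sigma-of-blank criterion (not necessarily the authors' exact procedure):

    ```python
    import numpy as np

    # Hypothetical calibration data for one sensor channel: blank replicates
    # and a roughly linear response to analyte concentration.
    blank = np.array([0.21, 0.19, 0.22, 0.20, 0.18, 0.21])   # signal, a.u.
    conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])             # e.g. ug/mL
    signal = np.array([0.35, 0.52, 1.02, 1.85, 3.60])

    # Least-squares calibration line: signal = slope*conc + intercept.
    slope, intercept = np.polyfit(conc, signal, 1)

    # 3-sigma criterion: the limit of detection is the concentration whose
    # expected signal exceeds the blank mean by three blank standard deviations.
    lod = (blank.mean() + 3*blank.std(ddof=1) - intercept) / slope
    print(f"slope={slope:.3f} a.u. per ug/mL, LOD={lod:.2f} ug/mL")
    ```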

  13. Longitudinal dielectric function and dispersion relation of electrostatic waves in relativistic plasmas

    NASA Astrophysics Data System (ADS)

    Touil, B.; Bendib, A.; Bendib-Kalache, K.

    2017-02-01

    The longitudinal dielectric function is derived analytically from the relativistic Vlasov equation for arbitrary values of the relevant parameter z = mc²/T, where m is the rest electron mass, c is the speed of light, and T is the electron temperature in energy units. A new analytical approach based on the Legendre polynomial expansion and continued fractions was used. An analytical expression for the electron distribution function was derived. The real part of the dispersion relation and the damping rate of electron plasma waves are calculated both analytically and numerically over the whole range of the parameter z. The results obtained significantly improve on previous results reported in the literature. For practical purposes, explicit expressions for the real part of the dispersion relation and the damping rate in the range z > 30 and in the strongly relativistic regime are also proposed.
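    Any such relativistic dispersion relation must recover the classical Bohm-Gross result in the nonrelativistic limit z ≫ 1. A one-line sketch of that limiting form, in units of the plasma frequency with the wavenumber measured in inverse Debye lengths (a standard limit check, not the paper's relativistic expression):

    ```python
    import numpy as np

    # Bohm-Gross dispersion for electron plasma waves in the nonrelativistic
    # limit:  omega^2 = omega_p^2 * (1 + 3 k^2 lambda_D^2).
    def bohm_gross(k_lambda_d):
        """Real frequency in units of the plasma frequency omega_p."""
        return np.sqrt(1.0 + 3.0*np.asarray(k_lambda_d)**2)

    k_ld = np.array([0.1, 0.2, 0.3])
    print(bohm_gross(k_ld))   # approaches omega_p as k -> 0
    ```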

  14. Addressing the too big to fail problem with baryon physics and sterile neutrino dark matter

    NASA Astrophysics Data System (ADS)

    Lovell, Mark R.; Gonzalez-Perez, Violeta; Bose, Sownak; Boyarsky, Alexey; Cole, Shaun; Frenk, Carlos S.; Ruchayskiy, Oleg

    2017-07-01

    N-body dark matter simulations of structure formation in the Λ cold dark matter (ΛCDM) model predict a population of subhaloes within Galactic haloes that have higher central densities than inferred for the Milky Way satellites, a tension known as the 'too big to fail' problem. Proposed solutions include baryonic effects, a smaller mass for the Milky Way halo and warm dark matter (WDM). We test these possibilities using a semi-analytic model of galaxy formation to generate luminosity functions for Milky Way halo-analogue satellite populations, the results of which are then coupled to the Jiang & van den Bosch model of subhalo stripping to predict the subhalo Vmax functions for the 10 brightest satellites. We find that selecting the brightest satellites (as opposed to the most massive) and modelling the expulsion of gas by supernovae at early times increases the likelihood of generating the observed Milky Way satellite Vmax function. The preferred halo mass is 6 × 10^11 M⊙, which has a 14 per cent probability to host a Vmax function like that of the Milky Way satellites. We conclude that the Milky Way satellite Vmax function is compatible with a CDM cosmology, as previously found by Sawala et al. using hydrodynamic simulations. Sterile neutrino-WDM models achieve a higher degree of agreement with the observations, with a maximum 50 per cent chance of generating the observed Milky Way satellite Vmax function. However, more work is required to check that the semi-analytic stripping model is calibrated correctly for each sterile neutrino cosmology.

  15. Understanding the dynamics of superparamagnetic particles under the influence of high field gradient arrays

    NASA Astrophysics Data System (ADS)

    Barnsley, Lester C.; Carugo, Dario; Aron, Miles; Stride, Eleanor

    2017-03-01

    The aim of this study was to characterize the behaviour of superparamagnetic particles in magnetic drug targeting (MDT) schemes. A 3-dimensional mathematical model was developed, based on the analytical derivation of the trajectory of a magnetized particle suspended inside a fluid channel carrying laminar flow and in the vicinity of an external source of magnetic force. Semi-analytical expressions to quantify the proportion of captured particles, and their relative accumulation (concentration) as a function of distance along the wall of the channel were also derived. These were expressed in terms of a non-dimensional ratio of the relevant physical and physiological parameters corresponding to a given MDT protocol. The ability of the analytical model to assess magnetic targeting schemes was tested against numerical simulations of particle trajectories. The semi-analytical expressions were found to provide good first-order approximations for the performance of MDT systems in which the magnetic force is relatively constant over a large spatial range. The numerical model was then used to test the suitability of a range of different designs of permanent magnet assemblies for MDT. The results indicated that magnetic arrays that emit a strong magnetic force that varies rapidly over a confined spatial range are the most suitable for concentrating magnetic particles in a localized region. By comparison, commonly used magnet geometries such as button magnets and linear Halbach arrays result in distributions of accumulated particles that are less efficient for delivery. The trajectories predicted by the numerical model were verified experimentally by acoustically focusing magnetic microbeads flowing in a glass capillary channel, and optically tracking their path past a high field gradient Halbach array.
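    The trajectory model described above can be sketched with a toy 2-D integration: the particle moves with the local Poiseuille flow plus the Stokes terminal velocity from a constant magnetic force. All parameter values below are illustrative, not taken from the study:

    ```python
    import numpy as np

    # At low Reynolds number the particle velocity is the local flow velocity
    # plus the Stokes drift  F / (6 pi eta a)  from the magnetic force.
    eta = 1e-3        # fluid viscosity, Pa s          (illustrative)
    a = 1e-6          # particle radius, m             (illustrative)
    h = 100e-6        # channel half-height, m         (illustrative)
    u_max = 1e-3      # centreline flow speed, m/s     (illustrative)
    F_mag = 5e-12     # constant magnetic force toward y = -h, N

    v_mag = F_mag / (6*np.pi*eta*a)      # Stokes drift speed toward the wall

    def trajectory(y0, dt=1e-3, t_max=5.0):
        """Integrate (x, y) until the particle hits the wall or time runs out."""
        x, y = 0.0, y0
        xs = [x]
        for _ in range(int(t_max/dt)):
            u = u_max*(1 - (y/h)**2)     # laminar (Poiseuille) velocity profile
            x += u*dt
            y -= v_mag*dt
            xs.append(x)
            if y <= -h:                  # captured at the lower wall
                return np.array(xs), True
        return np.array(xs), False

    xs, captured = trajectory(y0=0.0)
    print(captured, xs[-1])
    ```

    The downstream distance at capture, `xs[-1]`, is the kind of quantity the paper's semi-analytical capture-fraction expressions summarize over the whole particle population.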

  16. Reducing liver function tests for statin monitoring: an observational comparison of two clinical commissioning groups.

    PubMed

    Homer, Kate; Robson, John; Solaiman, Susannah; Davis, Abigail; Khan, Saima Zubeda; McCoy, David; Mathur, Rohini; Hull, Sally; Boomla, Kambiz

    2017-03-01

    Current liver function testing for statin monitoring is largely unnecessary and costly. Statins do not cause liver disease. Both reduction in test frequency and use of a single alanine transaminase (ALT) rather than a full seven analyte liver function test (LFT) array would reduce cost and may benefit patients. To assess LFT testing in relation to statin use and evaluate an intervention to reduce full-array LFTs ordered by GPs for statin monitoring. Two-year cross-sectional time series in two east London clinical commissioning groups (CCGs) with 650 000 patients. One CCG received the intervention; the other did not. The intervention comprised local guidance on LFTs for statin monitoring and access to a single ALT rather than full LFT array. Of the total population, 17.6% were on statins, accounting for 43.2% of total LFTs. In the population without liver disease, liver function tests were 3.6 times higher for those on statins compared with those who were not. Following intervention there was a significant reduction in the full LFT array per 1000 people on statins, from 70.3 (95% confidence interval [CI] = 66.3 to 74.6) in the pre-intervention year, to 58.1 (95% CI = 55.5 to 60.7) in the post-intervention year (P < 0.001). In the final month, March 2016, the rate was 53.2, a 24.3% reduction on the pre-intervention rate. This simple and generalisable intervention, enabling ordering of a single ALT combined with information recommending prudent rather than periodic testing, reduced full LFT testing by 24.3% in people on statins. This is likely to have patient benefit at reduced cost. © British Journal of General Practice 2017.
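    The headline figure follows directly from the reported rates:

    ```python
    # Arithmetic behind the reported reduction: from 70.3 full-LFT arrays per
    # 1000 people on statins pre-intervention to 53.2 in the final month.
    pre, final = 70.3, 53.2
    reduction_pct = 100*(pre - final)/pre
    print(f"{reduction_pct:.1f}%")   # matches the reported 24.3%
    ```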

  17. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
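    For Gaussian measurement noise, maximizing the likelihood of measured input/output time histories reduces to least-squares prediction-error minimization. A minimal sketch on a hypothetical first-order discrete-time model (Linear SCIDNT itself handles general linear constant-coefficient systems; names and values here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical system to identify:  x[t+1] = a*x[t] + b*u[t] + noise.
    a_true, b_true = 0.9, 0.5
    T = 500
    u = rng.normal(size=T)                 # measured input time history
    x = np.zeros(T + 1)                    # measured output time history
    for t in range(T):
        x[t+1] = a_true*x[t] + b_true*u[t] + 0.01*rng.normal()

    # Stack one-step predictions into a linear regression; the least-squares
    # solution is the maximum-likelihood estimate under Gaussian noise.
    Phi = np.column_stack([x[:-1], u])     # regressors for each time step
    a_hat, b_hat = np.linalg.lstsq(Phi, x[1:], rcond=None)[0]
    print(a_hat, b_hat)
    ```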

  18. Wind tunnel investigation of a 14 foot vertical axis windmill

    NASA Technical Reports Server (NTRS)

    Muraca, R. J.; Guillotte, R. J.

    1976-01-01

    A full scale wind tunnel investigation was made to determine the performance characteristics of a 14 ft diameter vertical axis windmill. The parameters measured were wind velocity, shaft torque, shaft rotation rate, along with the drag and yawing moment. A velocity survey of the flow field downstream of the windmill was also made. The results of these tests along with some analytically predicted data are presented in the form of generalized data as a function of tip speed ratio.

  19. Catastrophe optics of sharp-edge diffraction.

    PubMed

    Borghi, Riccardo

    2016-07-01

    A classical problem of diffraction theory, namely plane wave diffraction by sharp-edge apertures, is here reformulated from the viewpoint of the fairly new subject of catastrophe optics. On using purely geometrical arguments, properly embedded into a wave optics context, uniform analytical estimates of the diffracted wavefield at points close to fold caustics are obtained, within paraxial approximation, in terms of the Airy function and its first derivative. Diffraction from parabolic apertures is proposed to test reliability and accuracy of our theoretical predictions.

  20. Numerical and analytical investigation of steel beam subjected to four-point bending

    NASA Astrophysics Data System (ADS)

    Farida, F. M.; Surahman, A.; Sofwan, A.

    2018-03-01

    One type of bending test is the four-point bending test, whose aim is to investigate the properties and behavior of materials in structural applications. This study combines numerical and analytical approaches, the results of which help to improve experimental work. The purpose of this study is to predict the behavior of a steel beam subjected to a four-point bending test, analyzing the flexural beam prior to experimental work. The main results of this research are the locations of the strain gauge and LVDT on the steel beam, based on numerical study, manual calculation, and analytical study; the analytical study uses the linear elasticity theory of solid objects. The strain gauge is located between the two concentrated loads, at the top and bottom of the beam, and the LVDT is likewise located between the two concentrated loads.
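    The placement rationale follows from the statics of four-point bending: between the two load points the bending moment, and hence the extreme-fiber strain, is constant. A short sketch with illustrative load and span values:

    ```python
    import numpy as np

    # Symmetric four-point bending: loads P applied at distance a from each
    # support, span L.  Each support reaction is P, so the bending moment rises
    # linearly to P*a and is constant between the load points.
    P, a, L = 10.0e3, 0.5, 2.0     # N, m, m  (illustrative values)

    def moment(x):
        """Bending moment M(x) along the span."""
        x = np.asarray(x, dtype=float)
        return np.where(x < a, P*x,
               np.where(x <= L - a, P*a, P*(L - x)))

    xs = np.linspace(a, L - a, 11)
    print(moment(xs))              # constant P*a between the load points
    ```

    Because the moment (and therefore the strain field) is uniform there, a strain gauge anywhere between the loads measures the same flexural strain, which is why that region is chosen.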

  1. On relation between analytic and univalent functions defined by close-to P class with the function belonging to S class

    NASA Astrophysics Data System (ADS)

    Yildiz, Ismet; Uyanik, Neslihan; Albayrak, Hilal; Ay, Hilal

    2017-09-01

    The Weierstrass zeta function is not elliptic, but it is of great use in developing the theory of elliptic functions. It is defined by the double series ζ(z) = 1/z + ∑′_{m,n} {1/(z − W_mn) + 1/W_mn + z/W_mn²}, where W_mn = 2mω₁ + 2nω₂ and m, n are integers, not simultaneously zero; the primed summation extends over all such pairs, the W_mn being the lattice points. Evidently the W_mn are simple poles of ζ(z), and hence the function is meromorphic on the lattice W = {mω₁ + nω₂ : (m, n) ≠ (0, 0), m, n ∈ ℤ, Im τ > 0}, with fundamental domain D* = {z : |z| > 1, |Re z| < 1/2}, z ∈ ℂ. Since ζ(z) is a uniformly convergent series of analytic functions, the series can be differentiated term by term. ζ(z) is an odd function, hence the coefficients of the terms z^{2k} are evidently zero for positive integers k. Let A be the class of functions f(z) which are analytic and normalized by f(0) = 0 and f′(0) = 1, and let S be the subclass of A consisting of functions f(z) which are univalent in the unit disk D. The P class is a class of univalent functions largely concerned with the family S of functions f analytic and univalent in D and satisfying the conditions f(0) = 0 and f′(0) = 1. One of the basic results of the theory is the growth theorem, which asserts in part that |f(z)| ≤ |z|/(1 − |z|)² for each f ∈ S. In particular, the functions f ∈ S are uniformly bounded on each compact subset of D. Thus the family S is locally bounded, and so by Montel's theorem it is a normal family. A relation is established between the class S and the Weierstrass zeta function, which is meromorphic, and the close-to-P class in the unit disk.
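    The lattice sum defining ζ(z) can be evaluated by symmetric truncation, and the truncated sum is exactly odd, which is the property invoked above to kill the even Taylor coefficients. A small sketch with illustrative half-periods:

    ```python
    # Truncated lattice sum for the Weierstrass zeta function,
    #   zeta(z) = 1/z + sum' [ 1/(z - w) + 1/w + z/w**2 ],  w = 2*m*w1 + 2*n*w2,
    # the prime meaning (m, n) != (0, 0).  N controls the truncation; because
    # the truncation range is symmetric, the partial sum is exactly odd in z.
    def weierstrass_zeta(z, w1, w2, N=40):
        total = 1/z
        for m in range(-N, N + 1):
            for n in range(-N, N + 1):
                if m == 0 and n == 0:
                    continue
                w = 2*m*w1 + 2*n*w2
                total += 1/(z - w) + 1/w + z/w**2
        return total

    w1, w2 = 1.0, 0.3 + 1.1j     # illustrative half-periods with Im(w2/w1) > 0
    z = 0.4 + 0.2j
    # Oddness: zeta(-z) = -zeta(z).
    print(weierstrass_zeta(z, w1, w2), weierstrass_zeta(-z, w1, w2))
    ```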

  2. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  3. Type-curve estimation of statistical heterogeneity

    NASA Astrophysics Data System (ADS)

    Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica

    2004-04-01

    The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. 
We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
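    Type-curve analysis of pumping tests in the locally uniform limit rests on the classical Theis well function W(u), with drawdown s = (Q/4πT)·W(u) and u = r²S/(4Tt). A series implementation of this standard formula (independent of the statistical extension explored here):

    ```python
    import math

    # Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^{n+1} u^n/(n*n!),
    # i.e. the exponential integral E1(u); the series converges well for modest u.
    def well_function(u, terms=30):
        total = -0.5772156649015329 - math.log(u) + u   # Euler-Mascheroni gamma
        term = u                                        # n = 1 term
        for n in range(2, terms):
            term *= -u * (n - 1) / (n * n)              # recurrence between terms
            total += term
        return total

    print(well_function(0.01))   # large W(u) at small u: early-time drawdown
    ```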

  4. On the first crossing distributions in fractional Brownian motion and the mass function of dark matter haloes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr

    We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier and we present an exact analytical solution. Additionally we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and we compare with those resulting from N-body simulations. We show that the results of analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits to those which result from N-body simulations. We also present comparisons with other models which exist in the literature and we discuss different ways of improving the agreement between analytical results and N-body simulations.
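    First crossing distributions from simulated paths, as described, can be sketched by drawing exact Gaussian fractional-Brownian-motion paths from the Cholesky factor of the fBm covariance; the grid, barrier and Hurst exponent below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Exact Gaussian simulation of fractional Brownian motion on a grid via the
    # Cholesky factor of its covariance Cov(t, s) = (t^2H + s^2H - |t-s|^2H)/2.
    H = 0.7              # Hurst exponent (H = 0.5 is ordinary Brownian motion)
    barrier = 1.0        # constant barrier
    t = np.linspace(0.01, 5.0, 200)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5*(tt**(2*H) + ss**(2*H) - np.abs(tt - ss)**(2*H))
    L = np.linalg.cholesky(cov + 1e-10*np.eye(t.size))   # jitter for stability

    def first_crossing_times(npaths):
        """First grid times at which simulated paths reach the barrier."""
        times = []
        for _ in range(npaths):
            path = L @ rng.normal(size=t.size)
            hit = np.nonzero(path >= barrier)[0]
            if hit.size:
                times.append(t[hit[0]])
        return np.asarray(times)

    tc = first_crossing_times(500)
    print(tc.size, "of 500 paths crossed; mean first-crossing time", tc.mean())
    ```

    A histogram of `tc` is the empirical first crossing distribution against which analytical and approximated solutions can be checked.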

  5. 40 CFR Table 5 to Subpart Uuuuu of... - Performance Testing Requirements

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to ASTM D6348-03, Sections A1 through A8 are mandatory; (2) For ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent (%) R must be determined for each target analyte (see Equation A5.5); (3) For the ASTM D6348-03 test data to be acceptable for a target analyte, %R must satisfy 70% ≤ %R ≤ 130%; and...

  6. 40 CFR Table 5 to Subpart Uuuuu of... - Performance Testing Requirements

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to ASTM D6348-03, Sections A1 through A8 are mandatory; (2) For ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent (%)R must be determined for each target analyte (see Equation A5.5); (3) For the ASTM D6348-03 test data to be acceptable for a target analyte, %R must be 70% ≤R ≤130%; and (4...

  7. 40 CFR Table 5 to Subpart Uuuuu of... - Performance Testing Requirements

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to ASTM D6348-03, Sections A1 through A8 are mandatory; (2) For ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent (%)R must be determined for each target analyte (see Equation A5.5); (3) For the ASTM D6348-03 test data to be acceptable for a target analyte, %R must be 70%≤ R ≤ 130%; and...

  8. Use of Strain Measurements from Acoustic Bench Tests of the Battleship Flowliner Test Articles To Link Analytical Model Results to In-Service Resonant Response

    NASA Technical Reports Server (NTRS)

    Frady, Greg; Smaolloey, Kurt; LaVerde, Bruce; Bishop, Jim

    2004-01-01

    The paper will discuss practical and analytical findings of a test program conducted to assist engineers in determining which analytical strain fields are most appropriate to describe the crack initiating and crack propagating stresses in thin walled cylindrical hardware that serves as part of the Space Shuttle Main Engine's fuel system. In service the hardware is excited by fluctuating dynamic pressures in a cryogenic fuel that arise from turbulent flow/pump cavitation. A bench test using a simplified system was conducted using acoustic energy in air to excite the test articles. Strain measurements were used to reveal response characteristics of two Flowliner test articles that are assembled as a pair when installed in the engine feed system.

  9. Analytical transmission cross-coefficients for pink beam X-ray microscopy based on compound refractive lenses.

    PubMed

    Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H

    2018-01-01

    Analytical expressions for the transmission cross-coefficients for X-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Polymer functionalized nanostructured porous silicon for selective water vapor sensing at room temperature

    NASA Astrophysics Data System (ADS)

    Dwivedi, Priyanka; Das, Samaresh; Dhanekar, Saakshi

    2017-04-01

    This paper highlights the surface treatment of porous silicon (PSi) for enhancing its sensitivity to water vapor at room temperature. A simple and low-cost technique was used for the fabrication and functionalization of PSi. Spin-coated polyvinyl alcohol (PVA) was used for functionalizing the PSi surface. Morphological and structural studies of the samples were conducted using SEM and XRD/Raman spectroscopy, respectively. Contact angle measurements were performed to assess the wettability of the surfaces. PSi and functionalized PSi samples were tested as sensors in the presence of different analytes, such as ethanol, acetone, isopropyl alcohol (IPA) and water vapor, in the range of 50-500 ppm. Electrical measurements were taken from parallel aluminium electrodes fabricated on the functionalized surface using a metal mask and thermal evaporation. In comparison to non-functionalized sensors, functionalized PSi sensors showed a selective and enhanced response to water vapor at room temperature. The results demonstrate efficient and selective water vapor detection at room temperature.

  11. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  12. 7 CFR 91.39 - Premium hourly fee rates for overtime and legal holiday service.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... overtime work. When analytical testing in a Science and Technology facility requires the services of... (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION Fees and Charges § 91.39 Premium hourly fee rates for overtime and legal holiday service. (a) When analytical testing in a Science...

  13. 7 CFR 91.39 - Premium hourly fee rates for overtime and legal holiday service.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... overtime work. When analytical testing in a Science and Technology facility requires the services of... (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION Fees and Charges § 91.39 Premium hourly fee rates for overtime and legal holiday service. (a) When analytical testing in a Science...

  14. 7 CFR 91.39 - Premium hourly fee rates for overtime and legal holiday service.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... overtime work. When analytical testing in a Science and Technology facility requires the services of... (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION Fees and Charges § 91.39 Premium hourly fee rates for overtime and legal holiday service. (a) When analytical testing in a Science...

  15. 7 CFR 91.39 - Premium hourly fee rates for overtime and legal holiday service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... overtime work. When analytical testing in a Science and Technology facility requires the services of... (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION Fees and Charges § 91.39 Premium hourly fee rates for overtime and legal holiday service. (a) When analytical testing in a Science...

  16. 40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...

  17. 40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...

  18. 40 CFR 1066.145 - Test fuel, engine fluids, analytical gases, and other calibration standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requirements of 40 CFR 1065.750. (e) Mass standards. Use mass standards that meet the requirements of 40 CFR... gases, and other calibration standards. 1066.145 Section 1066.145 Protection of Environment..., analytical gases, and other calibration standards. (a) Test fuel. Use test fuel as specified in the standard...

  19. Analytical control test plan and microbiological methods for the water recovery test

    NASA Technical Reports Server (NTRS)

    Traweek, M. S. (Editor); Tatara, J. D. (Editor)

    1994-01-01

    Qualitative and quantitative laboratory results are important to the decision-making process. In some cases, they may represent the only basis for deciding between two or more options or processes. It is therefore essential that laboratory samples are handled, and analytical operations performed, with a deliberate and conscientious level of care. Reporting erroneous results can lead to faulty interpretations and misinformed decisions. This document provides the analytical control specifications that will govern future test procedures related to all Water Recovery Test (WRT) Phase 3 activities to be conducted at the National Aeronautics and Space Administration/Marshall Space Flight Center (NASA/MSFC). It addresses the process that will be used to verify analytical data generated throughout the test period, identifies the responsibilities of key personnel and participating laboratories and the chains of communication to be followed, and ensures that approved methodology and procedures are used during WRT activities. This document does not outline specifics, but provides a minimum guideline by which sampling protocols, analysis methodologies, test site operations, and laboratory operations should be developed.

  20. Ductile-Phase-Toughened Tungsten for Plasma-Facing Materials

    NASA Astrophysics Data System (ADS)

    Cunningham, Kevin Hawkins

    A variety of processing approaches were employed to fabricate ductile-phase-toughened (DPT) tungsten (W) composites. Mechanical testing and analytical modeling were used to guide composite development. This work provides a basis for further development of W composites to be used in structural divertor components of future fusion reactors. W wire was tested in tension, showing significant ductility and strength. Coatings of copper (Cu) or tungsten carbide (WC) were applied to the W wire via electrodeposition and carburization, respectively. Composites were fabricated using spark plasma sintering (SPS) to consolidate W powders together with each type of coated W wire. DPT behavior, e.g. crack arrest and crack bridging, was not observed in three-point bend testing of the sintered composites. A laminate was fabricated by hot pressing W and Cu foils together with W wires, and subsequently tested in tension. This laminate was bonded via hot pressing to thick W plate as a reinforcing layer, and the composite was tested in three-point bending. Crack arrest was observed along with some fiber pullout, but significant transverse cracking in the W plate confounded further fracture toughness analysis. The fracture toughness of thin W plate was measured in three-point bending. W plates were brazed with Cu foils to form a laminate. Crack arrest and crack bridging were observed in three-point bend tests of the laminate, and fracture resistance curves were successfully calculated for this DPT composite. An analytical model of crack bridging was developed using the basis described by Chao in previous work by the group. The model uses the specimen geometry, matrix properties, and the stress-displacement function of a ductile reinforcement ("bridging law") to calculate the fracture resistance curve (R-curve) and load-displacement curve (P-D curve) for any test specimen geometry. 
The code was also used to estimate the bridging law of an arbitrary composite from R-curve data. Finally, a parametric study was performed to quantitatively determine the mechanical properties required of useful toughening reinforcements for a DPT W composite. The analytical model has broad applicability to any DPT material.
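The core energy argument behind ductile-phase toughening can be sketched numerically. This is a generic small-scale-bridging estimate, not the group's actual model: the toughening increment is taken as the reinforcement volume fraction times the area under the bridging law sigma(u), and all numbers and the helper name `bridging_toughening` are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

# Hedged sketch: generic small-scale-bridging estimate, not the paper's model.
# The energy contributed by ductile bridges is the area under the bridging law
# sigma(u), scaled by the reinforcement volume fraction V_f (values illustrative).

def bridging_toughening(sigma_of_u, u_max, v_f, n=1000):
    """Delta G (J/m^2) from a bridging law sigma(u) active up to u_max."""
    u = np.linspace(0.0, u_max, n)
    return v_f * trapezoid(sigma_of_u(u), u)

# Linear-softening law sigma = sigma0*(1 - u/u_max); its area is sigma0*u_max/2.
sigma0 = 1.0e9   # Pa, assumed flow stress of the ductile phase
u_max = 50e-6    # m, assumed displacement at bridge failure
dG = bridging_toughening(lambda u: sigma0 * (1.0 - u / u_max), u_max, v_f=0.2)
print(dG)
```

A stronger or more ductile bridge (larger sigma0 or u_max) raises the R-curve plateau in direct proportion, which is why the parametric study above targets the reinforcement's stress-displacement behavior.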

  1. On the Gibbs phenomenon 4: Recovering exponential accuracy in a sub-interval from a Gegenbauer partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients of an L¹ function f(x), based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 − x²)^(μ−1/2) for any constant μ ≥ 0, we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
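The ingredients of a Gegenbauer expansion can be sketched numerically. This is only an illustration of the expansion itself (with mu = 1 and an analytic test function, so the weight has no endpoint singularity), not the paper's reprojection procedure; normalizations are computed by quadrature rather than taken from closed-form Gamma-function expressions:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import eval_gegenbauer

# Illustrative sketch: expand f in C_k^mu with weight (1-x^2)^(mu-1/2) and
# evaluate the partial sum where f is analytic.  mu = 1 keeps the weight smooth.
mu, n_terms = 1.0, 20
x = np.linspace(-1.0, 1.0, 20001)
w = (1.0 - x**2) ** (mu - 0.5)
f = np.exp(x)                                # analytic test function

approx = np.zeros_like(x)
for k in range(n_terms):
    ck_x = eval_gegenbauer(k, mu, x)
    h_k = trapezoid(w * ck_x**2, x)          # normalization, by quadrature
    c_k = trapezoid(w * f * ck_x, x) / h_k   # expansion coefficient
    approx += c_k * ck_x

inner = np.abs(x) <= 0.9                     # a sub-interval of analyticity
print(np.max(np.abs(approx[inner] - f[inner])))
```

For an entire function such as exp(x) the coefficients decay super-exponentially, so even 20 terms reproduce f to quadrature accuracy on the sub-interval.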

  2. Boundary Electron and Beta Dosimetry-Quantification of the Effects of Dissimilar Media on Absorbed Dose

    NASA Astrophysics Data System (ADS)

    Nunes, Josane C.

    1991-02-01

    This work quantifies the changes effected in electron absorbed dose to a soft-tissue-equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent; that is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments, and iodine-131 is used to test thyroid function. More recent medical procedures under development which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer treatment modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever radioactive material is handled. Determination of absorbed doses in the examples of the preceding paragraph requires consideration of boundaries or interfaces. While the Monte Carlo method can be applied to boundary calculations, analytical dosimetry would be invaluable for routine work, such as in clinical situations or in other circumstances where doses need to be determined quickly. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated. 
Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with the experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate the experimental results to within 30%. This is encouraging, but it remains to be determined whether this method performs equally well for other source energies.

  3. Inequalities for majorizing analytic functions and their applications to rational trigonometric functions and polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olesov, A V

    2014-10-31

    New inequalities are established for analytic functions satisfying Meiman's majorization conditions. Estimates for values of and differential inequalities involving rational trigonometric functions with an integer majorant on an interval of length less than the period and with prescribed poles which are symmetrically positioned relative to the real axis, as well as differential inequalities for trigonometric polynomials in some classes, are given as applications. These results improve several theorems due to Meiman, Genchev, Smirnov and Rusak. Bibliography: 27 titles.

  4. Simple functionalization method for single conical pores with a polydopamine layer

    NASA Astrophysics Data System (ADS)

    Horiguchi, Yukichi; Goda, Tatsuro; Miyahara, Yuji

    2018-04-01

    Resistive pulse sensing (RPS) is an interesting analytical system in which micro- to nanosized pores are used to evaluate particles or small analytes. Recently, molecular immobilization techniques to improve the performance of RPS have been reported. The problem in functionalization for RPS is that molecular immobilization by chemical reaction is restricted by the pore material type. Herein, a simple functionalization is performed using mussel-inspired polydopamine as an intermediate layer to connect the pore material with functional molecules.

  5. Visual and Analytic Strategies in Geometry

    ERIC Educational Resources Information Center

    Kospentaris, George; Vosniadou, Stella; Kazic, Smaragda; Thanou, Emilian

    2016-01-01

    We argue that there is an increasing reliance on analytic strategies compared to visuospatial strategies, which is related to geometry expertise and not to individual differences in cognitive style. A Visual/Analytic Strategy Test (VAST) was developed to investigate the use of visuospatial and analytic strategies in geometry in 30 mathematics…

  6. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE PAGES

    Izacard, Olivier

    2016-08-02

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF), and in some cases small deviations are described using perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, must be taken into account, especially for fusion reactor plasmas. Generally, because perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very CPU-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency of modeling numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostic discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms, removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects, even if it could be possible to discover one from a better understanding of some unsolved problems; here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal distribution function, and a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is demonstrated by new understanding of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. 
As the main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first-order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
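The super-thermal tail that distinguishes a Kappa distribution from a Maxwellian can be seen with a short numerical comparison. This sketch uses the standard isotropic Kappa form with assumed parameters (not values from the paper), and normalizes both shapes numerically so no hand-derived Gamma-function prefactors are needed:

```python
import numpy as np
from scipy.integrate import trapezoid

# Illustrative sketch: the isotropic Kappa distribution,
# f(v) ~ (1 + v^2/(kappa*theta^2))^-(kappa+1), tends to a Maxwellian as
# kappa -> infinity but keeps a power-law super-thermal tail at finite kappa.
theta, kappa = 1.0, 3.0
v = np.linspace(0.0, 12.0, 24001)

def normalize(shape):
    """Scale so that the speed integral of 4*pi*v^2*f(v) equals 1."""
    return shape / trapezoid(4.0 * np.pi * v**2 * shape, v)

f_max = normalize(np.exp(-((v / theta) ** 2)))
f_kap = normalize((1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0)))

i = np.searchsorted(v, 5.0)   # a super-thermal speed, v = 5*theta
print(f_kap[i] / f_max[i])    # Kappa tail dominates by orders of magnitude
```

It is exactly this excess of fast particles that drives the kinetic corrections to secondary electron emission and Langmuir probe currents discussed above.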

  7. Analytical Ultrasonics in Materials Research and Testing

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1986-01-01

    Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.

  8. Experimental and Analytical Investigation of the Coolant Flow Characteristics in Cooled Turbine Airfoils

    NASA Technical Reports Server (NTRS)

    Damerow, W. P.; Murtaugh, J. P.; Burggraf, F.

    1972-01-01

    The flow characteristics of turbine airfoil cooling system components were experimentally investigated. Flow models representative of leading edge impingement, impingement with crossflow (midchord cooling), pin fins, a feeder supply tube, and a composite model of a complete airfoil flow system were tested. Test conditions were set by varying the pressure level to cover the Mach number and Reynolds number range of interest in advanced turbine applications. Selected geometrical variations were studied on each component model to determine their effects. Results of these tests were correlated and compared with data available in the literature. Orifice flow was correlated in terms of discharge coefficients. For the leading edge model, the discharge coefficient was found to be a weak function of hole Mach number and orifice-to-impinged-wall spacing. In the impingement with crossflow tests, the discharge coefficient was found to be constant and thus independent of orifice Mach number, Reynolds number, crossflow rate, and impingement geometry. Crossflow channel pressure drop showed reasonable agreement with a simple one-dimensional momentum balance. Feeder tube orifice discharge coefficients correlated as a function of orifice Mach number and the ratio of the orifice-to-approach velocity heads. Pin fin data were correlated in terms of an equivalent friction factor, which was found to be a function of Reynolds number and pin spacing but independent of pin height in the range tested.
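A discharge coefficient is simply the measured mass flow divided by the ideal one for the same pressure drop. The sketch below uses the incompressible ideal flow A*sqrt(2*rho*dp) for simplicity (the tests in the abstract cover compressible conditions, where the ideal flow comes from isentropic relations instead); all numbers are hypothetical:

```python
import math

# Illustration only (values assumed): Cd = measured / ideal mass flow.
# The incompressible ideal flow is used here for simplicity.

def discharge_coefficient(m_dot_meas, area, rho, delta_p):
    """Cd from measured mass flow (kg/s), orifice area (m^2), density (kg/m^3),
    and pressure drop (Pa)."""
    m_dot_ideal = area * math.sqrt(2.0 * rho * delta_p)
    return m_dot_meas / m_dot_ideal

# Hypothetical impingement-hole data point:
cd = discharge_coefficient(m_dot_meas=1.2e-3, area=5.0e-6, rho=1.2, delta_p=40e3)
print(round(cd, 3))
```

Correlating Cd against hole Mach number, as in the leading edge tests, amounts to repeating this ratio across operating points and fitting the trend.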

  9. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and domain location, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. 
We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model with analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
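The claim that uniform random age sampling from a well-mixed store yields an exponential TTD can be checked with a toy particle model (all numbers illustrative, and this is a caricature of the ParFlow/SLIM setup, not a reproduction of it): a store of S parcels loses Q uniformly chosen parcels per step, which are replaced by age-zero water, and the ages of the outflow are recorded.

```python
import random

# Toy check: random age sampling from a well-mixed store gives an exponential
# (geometric, in discrete time) TTD with mean equal to the turnover time S/Q.
random.seed(42)
S, Q, steps, burn_in = 2000, 50, 2000, 500
ages = [0] * S
sampled = []

for t in range(steps):
    ages = [a + 1 for a in ages]          # every stored parcel ages one step
    out = random.sample(range(S), Q)      # uniform random selection for outflow
    if t >= burn_in:                      # record only after spin-up
        sampled.extend(ages[i] for i in out)
    for i in out:                         # outflow replaced by new water (age 0)
        ages[i] = 0

mean_age = sum(sampled) / len(sampled)
print(mean_age)   # close to the turnover time S/Q = 40 steps
```

Introducing preferential selection of older parcels, as the vadose zone does in the study above, would skew this distribution away from the exponential benchmark.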

  10. Gas-phase ion-molecule reactions for the identification of the sulfone functionality in protonated analytes in a linear quadrupole ion trap mass spectrometer.

    PubMed

    Tang, Weijuan; Sheng, Huaming; Kong, John Y; Yerabolu, Ravikiran; Zhu, Hanyu; Max, Joann; Zhang, Minli; Kenttämaa, Hilkka I

    2016-06-30

    The oxidation of sulfur atoms is an important biotransformation pathway for many sulfur-containing drugs. In order to rapidly identify the sulfone functionality in drug metabolites, a tandem mass spectrometric method based on ion-molecule reactions was developed. A phosphorus-containing reagent, trimethyl phosphite (TMP), was allowed to react with protonated analytes with various functionalities in a linear quadrupole ion trap mass spectrometer. The reaction products and reaction efficiencies were measured. Only protonated sulfone model compounds were found to react with TMP to form a characteristic [TMP adduct-MeOH] product ion. All other protonated compounds investigated, with functionalities such as sulfoxide, N-oxide, hydroxylamino, keto, carboxylic acid, and aliphatic and aromatic amino, only react with TMP via proton transfer and/or addition. The specificity of the reaction was further demonstrated by using a sulfoxide-containing anti-inflammatory drug, sulindac, as well as its metabolite sulindac sulfone. A method based on functional group-selective ion-molecule reactions in a linear quadrupole ion trap mass spectrometer has been demonstrated for the identification of the sulfone functionality in protonated analytes. A characteristic [TMP adduct-MeOH] product ion was only formed for the protonated sulfone analytes. The applicability of the TMP reagent in identifying sulfone functionalities in drug metabolites was also demonstrated. Copyright © 2016 John Wiley & Sons, Ltd.

  11. ac Modeling and impedance spectrum tests of the superconducting magnetic field coils for the Wendelstein 7-X fusion experiment.

    PubMed

    Ehmler, Hartmut; Köppen, Matthias

    2007-10-01

    The impedance spectrum test was employed for the detection of short circuits within Wendelstein 7-X (W7-X) superconducting magnetic field coils. This test is based on measuring the complex impedance over several decades of frequency. The results are compared to predictions of appropriate electrical equivalent circuits of coils in different production states or during cold tests. When the equivalent circuit is not too complicated, the impedance can be represented by an analytic function. A more detailed analysis is performed with a network simulation code. The overall agreement between measured and calculated or simulated spectra is good. Two types of short circuit which appeared are presented and analyzed. The detection limit of the method is discussed. It is concluded that combined high-voltage ac and low-voltage impedance spectrum tests are an ideal means to rule out short circuits in the W7-X coils.
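The analytic-impedance idea can be sketched with the simplest plausible equivalent circuit, an R-L series branch (winding inductance plus joint/lead resistance) shunted by a stray capacitance. The component values below are assumptions for illustration, not W7-X coil data:

```python
import numpy as np

# Generic sketch with assumed component values (not W7-X data): the impedance of
# an R-L branch in parallel with a stray capacitance C, over five decades.
R, L, C = 0.05, 2.0, 1.0e-7            # ohm, henry, farad (illustrative)
f = np.logspace(0, 5, 2001)            # 1 Hz to 100 kHz
w = 2.0 * np.pi * f

z_coil = R + 1j * w * L                # series R-L branch
z_cap = 1.0 / (1j * w * C)             # stray-capacitance branch
z = z_coil * z_cap / (z_coil + z_cap)  # parallel combination

f_res = f[np.argmax(np.abs(z))]        # parallel resonance dominates the spectrum
print(f_res, 1.0 / (2.0 * np.pi * np.sqrt(L * C)))
```

A short circuit between turns lowers the effective inductance and shifts this resonance upward, which is one way a measured spectrum reveals a fault against the healthy-coil prediction.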

  12. First-and Second-Order Displacement Transfer Functions for Structural Shape Calculations Using Analytically Predicted Surface Strains

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2012-01-01

    New first- and second-order displacement transfer functions have been developed for deformed shape calculations of nonuniform cross-sectional beam structures such as aircraft wings. The displacement transfer functions are expressed explicitly in terms of beam geometrical parameters and surface strains (uniaxial bending strains) obtained at equally spaced strain stations along the surface of the beam structure. By inputting the measured or analytically calculated surface strains into the displacement transfer functions, one can calculate local slopes, deflections, and cross-sectional twist angles of the nonuniform beam structure for mapping the overall structural deformed shapes for visual display. The accuracy of deformed shape calculations by the first- and second-order displacement transfer functions is determined by comparing these values to the analytically predicted values obtained from finite element analyses. This comparison shows that the new displacement transfer functions can quite accurately calculate the deformed shapes of tapered cantilever tubular beams with different taper angles. The accuracy of the present displacement transfer functions is also compared to that of the previously developed displacement transfer functions.
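The principle behind strain-to-shape conversion, curvature recovered from surface bending strain and integrated twice along the span, can be sketched for the simplest case where a closed form exists: a uniform cantilever with a tip load, whose tip deflection is P*L^3/(3*E*I). This is only an illustration of that principle with assumed values, not the Ko transfer functions themselves:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Hedged sketch: bending strain at the surface gives curvature kappa = eps/c
# (c = half depth); integrating curvature twice recovers slope and deflection.
P, Lb, E, I, c = 100.0, 2.0, 70e9, 1.0e-8, 0.01   # illustrative values (SI)
x = np.linspace(0.0, Lb, 201)                     # strain stations along the span
eps = P * (Lb - x) * c / (E * I)                  # surface strain from the moment
kappa = eps / c                                   # curvature at each station
slope = cumulative_trapezoid(kappa, x, initial=0.0)
defl = cumulative_trapezoid(slope, x, initial=0.0)

print(defl[-1], P * Lb**3 / (3.0 * E * I))        # numerical vs closed form
```

For a nonuniform beam the half depth c and stiffness vary station to station, which is exactly what the explicit transfer functions above fold into the integration.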

  13. Enzyme Biosensors for Biomedical Applications: Strategies for Safeguarding Analytical Performances in Biological Fluids

    PubMed Central

    Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea

    2016-01-01

    Enzyme-based chemical biosensors rely on biological recognition. In order to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. The design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article, therefore, investigates the principles of functioning of enzymatic biosensors, their analytical performance over time, and the strategies used to optimize their performance. Moreover, the composition of biological fluids and its effect on biosensing will be presented. PMID:27249001
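Amperometric enzyme biosensors are commonly calibrated with a Michaelis-Menten-shaped response, I = I_max * C / (K_M + C); the sketch below uses that standard form with illustrative parameters (not values from the review) to show why the usable linear range sits well below K_M:

```python
# Hedged sketch: Michaelis-Menten response with assumed parameters.
# Current is near-linear for C << K_M and saturates toward I_max for C >> K_M.

def biosensor_current(conc_mM, i_max_nA=250.0, k_m_mM=2.0):
    """Steady-state current (nA) for a substrate concentration (mM)."""
    return i_max_nA * conc_mM / (k_m_mM + conc_mM)

# Low-concentration (linear) vs high-concentration (saturated) response:
print(biosensor_current(0.1), biosensor_current(20.0))
```

Fouling and interferents shift the apparent I_max and baseline over time, which is why the review tracks analytical performance across the sensor's working life.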

  14. Modeling Choice Under Uncertainty in Military Systems Analysis

    DTIC Science & Technology

    1991-11-01

    operators rather than fuzzy operators. This is suggested for further research. 4.3 ANALYTIC HIERARCHICAL PROCESS (AHP): In AHP, objectives, functions and… [Table of contents fragments:] 4.1 IMPRECISELY SPECIFIED MULTIPLE ATTRIBUTE UTILITY THEORY; 4.2 FUZZY DECISION ANALYSIS; 4.3 ANALYTIC HIERARCHICAL PROCESS (AHP); 4.4 SUBJECTIVE TRANSFER FUNCTION APPROACH

  15. Translating the Theoretical into Practical: A Logical Framework of Functional Analytic Psychotherapy Interactions for Research, Training, and Clinical Purposes

    ERIC Educational Resources Information Center

    Weeks, Cristal E.; Kanter, Jonathan W.; Bonow, Jordan T.; Landes, Sara J.; Busch, Andrew M.

    2012-01-01

    Functional analytic psychotherapy (FAP) provides a behavioral analysis of the psychotherapy relationship that directly applies basic research findings to outpatient psychotherapy settings. Specifically, FAP suggests that a therapist's in vivo (i.e., in-session) contingent responding to targeted client behaviors, particularly positive reinforcement…

  16. Some classes of analytic functions involving Noor integral operator

    NASA Astrophysics Data System (ADS)

    Patel, J.; Cho, N. E.

    2005-12-01

    The object of the present paper is to investigate some inclusion properties of certain subclasses of analytic functions defined by using the Noor integral operator. The integral preserving properties in connection with the operator are also considered. Relevant connections of the results presented here with those obtained in earlier works are pointed out.

  17. Calculation of the second term of the exact Green's function of the diffusion equation for diffusion-controlled chemical reactions

    NASA Astrophysics Data System (ADS)

    Plante, Ianik

    2016-01-01

    The exact Green's function of the diffusion equation (GFDE) is often considered to be the gold standard for the simulation of partially diffusion-controlled reactions. As the GFDE with angular dependency is quite complex, the radial GFDE is more often used. Indeed, the exact GFDE is expressed as a Legendre expansion, the coefficients of which are given in terms of an integral comprising Bessel functions. This integral does not seem to have been evaluated analytically in existing literature. While the integral can be evaluated numerically, the Bessel functions make the integral oscillate and convergence is difficult to obtain. Therefore it would be of great interest to evaluate the integral analytically. The first term was evaluated previously, and was found to be equal to the radial GFDE. In this work, the second term of this expansion was evaluated. As this work has shown that the first two terms of the Legendre polynomial expansion can be calculated analytically, it raises the question of the possibility that an analytical solution exists for the other terms.

  18. Neural oscillatory mechanisms during novel grammar learning underlying language analytical abilities.

    PubMed

    Kepinska, Olga; Pereda, Ernesto; Caspers, Johanneke; Schiller, Niels O

    2017-12-01

    The goal of the present study was to investigate the initial phases of novel grammar learning on a neural level, concentrating on mechanisms responsible for individual variability between learners. Two groups of participants, one with high and one with average language analytical abilities, performed an Artificial Grammar Learning (AGL) task consisting of learning and test phases. During the task, EEG signals from 32 cap-mounted electrodes were recorded and epochs corresponding to the learning phases were analysed. We investigated spectral power modulations over time, and functional connectivity patterns by means of a bivariate, frequency-specific index of phase synchronization termed Phase Locking Value (PLV). Behavioural data showed learning effects in both groups, with a steeper learning curve and higher ultimate attainment for the highly skilled learners. Moreover, we established that cortical connectivity patterns and profiles of spectral power modulations over time differentiated L2 learners with various levels of language analytical abilities. Over the course of the task, the learning process seemed to be driven by whole-brain functional connectivity between neuronal assemblies achieved by means of communication in the beta band frequency. On a shorter time-scale, increasing proficiency on the AGL task appeared to be supported by stronger local synchronisation within the right hemisphere regions. Finally, we observed that the highly skilled learners might have exerted less mental effort, or reduced attention for the task at hand once the learning was achieved, as evidenced by the higher alpha band power. Copyright © 2017 Elsevier Inc. All rights reserved.
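The Phase Locking Value named above has a standard definition: the magnitude of the time-averaged unit phasor of the phase difference, PLV = |<exp(i*(phi1 - phi2))>|. The sketch below computes it on synthetic signals (not the study's EEG; sampling rate, frequency, and duration are assumptions), with instantaneous phases from the Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative sketch on synthetic data: PLV is near 1 for a constant phase
# lag and near 0 for unrelated phases.
fs = 500.0                                          # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)

def plv(sig1, sig2):
    """Phase Locking Value from instantaneous Hilbert phases."""
    phi1 = np.angle(hilbert(sig1))
    phi2 = np.angle(hilbert(sig2))
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

rng = np.random.default_rng(0)
locked_a = np.sin(2 * np.pi * 20 * t)               # 20 Hz, beta band
locked_b = np.sin(2 * np.pi * 20 * t + np.pi / 4)   # constant 45-degree lag
noise = rng.standard_normal(t.size)

print(plv(locked_a, locked_b), plv(locked_a, noise))
```

In the study, such pairwise PLVs across electrodes and frequency bands are what constitute the functional connectivity patterns that separated the two learner groups.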

  19. The Analysis of Adhesively Bonded Advanced Composite Joints Using Joint Finite Elements

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.

    2012-01-01

    The design and sizing of adhesively bonded joints has always been a major bottleneck in the design of composite vehicles. Dense finite element (FE) meshes are required to capture the full behavior of a joint numerically, but these dense meshes are impractical in vehicle-scale models, where a coarse mesh is more desirable for making quick assessments and comparisons of different joint geometries. Analytical models are often helpful in sizing, but difficulties arise in coupling these models with full-vehicle FE models. Therefore, a joint FE was created which can be used within structural FE models to make quick assessments of bonded composite joints. The shape functions of the joint FE were found by solving the governing equations for a structural model of a joint. By analytically determining the shape functions of the joint FE, the complex joint behavior can be captured with very few elements. This joint FE was modified and used to consider adhesives with functionally graded material properties to reduce the peel stress concentrations located near adherend discontinuities. Several practical concerns impede the actual use of such adhesives. These include increased manufacturing complications, alterations to the grading due to adhesive flow during manufacturing, and whether changing the loading conditions significantly impacts the effectiveness of the grading. An analytical study is conducted to address these three concerns. Furthermore, proof-of-concept testing is conducted to show the potential advantages of functionally graded adhesives. In this study, grading is achieved by strategically placing glass beads within the adhesive layer at different densities along the joint. Furthermore, the capability to model non-linear adhesive constitutive behavior with large rotations was developed, and progressive failure of the adhesive was modeled by re-meshing the joint as the adhesive fails. 
Results predicted using the joint FE were compared with experimental results for various joint configurations, including double cantilever beam and single lap joints.

  20. FETC/EPRI Biomass Cofiring Cooperative Agreement. Quarterly technical report, April 1-June 30, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, E.; Tillman, D.

    1997-12-01

    The FETC/EPRI Biomass Cofiring Program has accelerated the pace of cofiring development by increasing testing activities as well as the support activities for interpreting test results. Past tests conducted and analyzed include the Allen Fossil Plant and Seward Generating Station programs. On-going tests include the Colbert Fossil Plant precommercial test program, the Greenidge Station commercialization program, and the Blount St. Station switchgrass program. Tests in the formative stages included the NIPSCO cofiring test at Michigan City Generating Station. Analytical activities included modeling and related support functions required to analyze the cofiring test results and to place those results into context. Among these activities is the fuel availability study in the Pittsburgh, PA area. This study, conducted for Duquesne Light, supports their initial investigation into reburn technology using wood waste as a fuel. This Quarterly Report, covering the third quarter of the FETC/EPRI Biomass Cofiring Program, highlights the progress made on the 16 projects funded under this cooperative agreement.
