Science.gov

Sample records for specific calibration problems

  1. Specific Pronunciation Problems.

    ERIC Educational Resources Information Center

    Avery, Peter; And Others

    1987-01-01

    Reviews common pronunciation problems experienced by learners of English as a second language who are native speakers of Vietnamese, Cantonese, Spanish, Portuguese, Italian, Polish, Greek, and Punjabi. (CB)

  2. 40 CFR 89.306 - Dynamometer specifications and calibration weights.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calibration weights. 89.306 Section 89.306 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Emission Test Equipment Provisions § 89.306 Dynamometer specifications and calibration weights. (a...) Dynamometer calibration weights. A minimum of six calibration weights for each range used are required....

  3. 40 CFR 89.306 - Dynamometer specifications and calibration weights.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calibration weights. 89.306 Section 89.306 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... ENGINES Emission Test Equipment Provisions § 89.306 Dynamometer specifications and calibration weights. (a...) Dynamometer calibration weights. A minimum of six calibration weights for each range used are required....

  4. A review of some radiometric calibration problems and methods

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1984-01-01

    The in-flight radiometric calibration instrumentation and procedures of the Landsat Thematic Mapper and the high-resolution visible-range instruments of SPOT are illustrated with drawings and diagrams, characterized, and compared. Problems encountered in the laboratory calibration process, minimizing the temporal instability of the systems, identifying anomalies in the electronics in flight, and rechecking the calibration are examined, and it is pointed out that the stability of the calibration systems is less than that of the instruments themselves. The use of carefully measured ground-site data and atmospheric parameters in combination with radiative-transfer models for periodic calibration is recommended.

  5. Site-specific calibration of the Hanford personnel neutron dosimeter

    SciTech Connect

    Endres, A.W.; Brackenbush, L.W.; Baumgartner, W.V.; Rathbone, B.A.

    1994-10-01

    A new personnel dosimetry system, employing a standard Hanford thermoluminescent dosimeter (TLD) and a combination dosimeter with both CR-39 nuclear track and TLD-albedo elements, is being implemented at Hanford. Measurements were made in workplace environments in order to verify the accuracy of the system and establish site-specific factors to account for the differences in dosimeter response between the workplace and calibration laboratory. Neutron measurements were performed using sources at Hanford's Plutonium Finishing Plant under high-scatter conditions to calibrate the new neutron dosimeter design to site-specific neutron spectra. The dosimeter was also calibrated using bare and moderated ²⁵²Cf sources under low-scatter conditions available in the Hanford Calibration Laboratory. Dose equivalent rates in the workplace were calculated from spectrometer measurements using tissue equivalent proportional counter (TEPC) and multisphere spectrometers. The accuracy of the spectrometers was verified by measurements on neutron sources with calibrations directly traceable to the National Institute of Standards and Technology (NIST).

  6. Algebraic analysis of the phase-calibration problem in the self-calibration procedures

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Prieur, J.-L.

    2011-10-01

    This paper presents an analysis of the phase-calibration problem encountered in astronomy when mapping incoherent sources with aperture-synthesis devices. More precisely, this analysis concerns the phase-calibration operation involved in the self-calibration procedures of phase-closure imaging. The paper revisits and completes a previous analysis presented by Lannes in the Journal of the Optical Society of America A in 2005. It also benefits from some recent developments made for solving similar problems encountered in global navigation satellite systems. In radio-astronomy, the related optimization problems have hitherto been stated and solved at the phasor level. We present here an analysis conducted at the phase level, from which we derive a method for diagnosing and solving the difficulties of the phasor approach. In the most general case, the techniques to be implemented appeal to algebraic graph theory and algebraic number theory. The minima of the objective functionals to be minimized are identified by raising phase-closure integer ambiguities. We also show that in some configurations, to benefit from all the available information, closure phases of order greater than three are to be introduced. In summary, this study leads to a better understanding of the difficulties related to the very principle of phase-closure imaging. To circumvent these difficulties, we propose a strategy that is both simple and robust.

  7. Array shape self-calibration technique for direction finding problems

    NASA Astrophysics Data System (ADS)

    Ng, B. P.

    1992-12-01

    In this paper a self-calibration technique is proposed to handle the bearing estimation problem involving unknown, perturbed sensor locations. The calibration technique is applied to the MUSIC estimator to find the direction of arrival (DOA) of plane waves in white noise. The basic idea of the technique is to maximize the total output power from the MUSIC estimator in the directional or frequency regions of interest while imposing constraints on the length of the projection of the signal position vector (SPV) on the noise subspace. This technique exhibits more stable performance than other existing techniques in the sense that it converges to the required solution consistently. However, this is achieved at the expense of a heavy computational load. This is illustrated with numerical results obtained from the computer studies conducted.

  8. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  9. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  10. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  11. 40 CFR 91.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications. (1) The dynamometer test stand and other instruments for measurement of engine speed and torque... accuracy. (1) The dynamometer test stand and other instruments for measurement of engine torque and...

  12. Calibrating corneal material model parameters using only inflation data: an ill-posed problem.

    PubMed

    Kok, S; Botha, N; Inglis, H M

    2014-12-01

    Goldmann applanation tonometry (GAT) is a method used to estimate the intraocular pressure by measuring the indentation resistance of the cornea. A popular approach to investigate the sensitivity of GAT results to material and geometry variations is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem, the underlying material constitutive behaviour is inferred from the measured macroscopic response (chamber pressure versus apical displacement). In this study, a biomechanically motivated elastic fibre-reinforced corneal material model is chosen. The inverse problem of calibrating the corneal material model parameters using only experimental inflation data is demonstrated to be ill-posed, with small variations in the experimental data leading to large differences in the calibrated model parameters. This can result in different groups of researchers, calibrating their material model with the same inflation test data, drawing vastly different conclusions about the effect of material parameters on GAT results. It is further demonstrated that multiple loading scenarios, such as inflation as well as bending, would be required to reliably calibrate such a corneal material model. PMID:25112972

  13. Calibration

    NASA Astrophysics Data System (ADS)

    Kunze, Hans-Joachim

    Commercial spectrographic systems are usually supplied with some wavelength calibration, but it is essential that the experimenter perform his own calibration for reliable measurements. A number of sources emitting well-known emission lines are available, and the best values of their wavelengths may be taken from data banks accessible on the internet. Data have been critically evaluated for many decades by the National Institute of Standards and Technology (NIST) of the USA [13], see also p. 3. Special databases have been established by the astronomy and fusion communities (Appendix B).
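
    A minimal sketch of what such a wavelength calibration typically amounts to is given below (this example is not from the source text): the detector pixel positions of a handful of identified reference lines are fitted with a low-order polynomial dispersion relation, which then maps any pixel to a wavelength. All numbers are hypothetical placeholders.

      import numpy as np

      # Hypothetical pixel positions of identified reference emission lines
      pixels = np.array([112.4, 348.9, 602.1, 855.7, 1010.3])
      # Corresponding well-known line wavelengths in nm (placeholder values)
      wavelengths = np.array([435.83, 486.13, 546.07, 587.56, 614.31])

      # Fit a quadratic dispersion relation wavelength(pixel); the order depends on the optics
      coeffs = np.polyfit(pixels, wavelengths, deg=2)
      dispersion = np.poly1d(coeffs)

      # Residuals indicate the quality of the calibration
      print("residuals (nm):", wavelengths - dispersion(pixels))
      # Apply the calibration to an arbitrary pixel position
      print("pixel 500.0 ->", round(dispersion(500.0), 2), "nm")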

  14. Specific strategies: interventions for identified problem behaviors.

    PubMed

    Reed, S A

    1990-12-01

    Negativism, complaining, underachievement, game playing, passive-aggressive behavior, and workaholism constitute a repertoire of problem employee behaviors that impact on the productivity and morale of the work environment. Responding appropriately to the employee who presents with any of these behaviors is a formidable challenge to the nurse manager. Understanding the etiology of unmet needs, psychosocial dynamics (as discussed in Chapter 1) and variety of interventions can empower the nurse manager to achieve success in these difficult interactions. PMID:2081113

  15. Specific interoperability problems of security infrastructure services.

    PubMed

    Pharow, Peter; Blobel, Bernd

    2006-01-01

    Communication and co-operation in healthcare and welfare require a well-defined set of security services based on a standards-based interoperable security infrastructure and provided by a Trusted Third Party. Generally, the services describe status and relation of communicating principals, corresponding keys and attributes, and the access rights to both applications and data. Legal, social, behavioral and ethical requirements demand securely stored patient information and well-established access tools and tokens. Electronic signatures as means for securing integrity of messages and files, certified time stamps and time signatures are important for accessing and storing data in Electronic Health Record Systems. The key for all these services is a secure and reliable procedure for authentication (identification and verification). While mentioning technical problems (e.g. lifetime of the storage devices, migration of retrieval and presentation software), this paper aims at identifying harmonization and interoperability requirements of securing data items, files, messages, sets of archived items or documents, and life-long Electronic Health Records based on a secure certificate-based identification. It's commonly known that just relying on existing and emerging security standards does not necessarily guarantee interoperability of different security infrastructure approaches. So certificate separation can be a key to modern interoperable security infrastructure services. PMID:17095833

  16. Soil specific re-calibration of water content sensors for a field-scale sensor network

    NASA Astrophysics Data System (ADS)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting
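
    The re-scaling step described above can be pictured with the short sketch below, which is only a schematic reading of the procedure and not the authors' code: reference points extracted from the sensor record (wilting point, field capacity, saturation) are mapped piecewise-linearly onto the corresponding values estimated from soil physical properties. All numbers are assumed for illustration.

      import numpy as np

      # Reference points in the sensor's own volumetric water content (assumed values):
      # wilting point, field capacity, saturation
      sensor_refs = np.array([0.08, 0.24, 0.41])
      # The same reference points estimated from particle size and bulk density
      # via pedotransfer functions (assumed values)
      soil_refs = np.array([0.11, 0.30, 0.47])

      def rescale(sensor_vwc):
          """Piecewise-linear re-scaling of sensor readings onto the expected soil range."""
          return np.interp(sensor_vwc, sensor_refs, soil_refs)

      raw_series = np.array([0.10, 0.18, 0.27, 0.39])
      print(np.round(rescale(raw_series), 3))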

  17. Identification and classification of technical specification problems: Final report

    SciTech Connect

    Bizzak, D.J.; Stella, M.E.; Stukus, J.R.

    1987-12-01

    This report describes a methodology for a systematic review of nuclear plant technical specifications problems. Operating personnel conducted a line-by-line examination of the LaSalle Station technical specifications creating a computerized database of problems, categorized as to their cause, effect, and recommendations for resolving the problems. Some 102 technical specifications problems were identified. Results indicated that the predominant type of problem was inappropriate limiting conditions for operation. The ECCS and containment systems had the largest number of problem technical specifications. The most significant effect was extension of outage lengths. It was estimated that risk-based evaluations would help to justify desirable changes in some 40% of the problems. Both the methodology and the LaSalle database are detailed in the report. 9 refs., 2 figs., 8 tabs.

  18. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed on a GPU can be five times faster than its sequential implementation. PMID:25493625

  19. A linear semi-infinite programming strategy for constructing optimal wavelet transforms in multivariate calibration problems.

    PubMed

    Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino

    2003-01-01

    A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151

  20. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    SciTech Connect

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
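
    The variance-based ranking mentioned above can be illustrated with a generic Monte Carlo estimator of first-order and total-order Sobol indices. The sketch below is self-contained but uses a toy model and assumed uniform inputs; it is not the challenge-problem algorithm itself.

      import numpy as np

      rng = np.random.default_rng(0)

      def model(x):
          # Toy model standing in for the output quantity of interest (assumption)
          return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      d, n = 3, 100_000
      A = rng.uniform(0.0, 1.0, size=(n, d))
      B = rng.uniform(0.0, 1.0, size=(n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                            # resample only parameter i
          fABi = model(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var           # first-order index (Saltelli-type estimator)
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var     # total-order index (Jansen-type estimator)
          print(f"parameter {i}: S1 ~ {S1:.3f}, ST ~ {ST:.3f}")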

  1. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
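
    The two approaches compared in the paper can be written down in a few lines. The sketch below, with made-up standards and readings, shows the classical fit-then-invert route alongside reverse regression; it is an illustration of the idea rather than the paper's analysis.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical calibration experiment: known standards x and observed instrument readings y
      x_std = np.linspace(1.0, 10.0, 20)
      y_obs = 2.0 + 0.5 * x_std + rng.normal(0.0, 0.05, x_std.size)

      # Classical route: forward regression y = b0 + b1*x, then invert to recover x from a new reading
      b1, b0 = np.polyfit(x_std, y_obs, 1)
      def x_inverse(y_new):
          return (y_new - b0) / b1

      # Reverse regression: treat the standards as the response, x = c0 + c1*y
      c1, c0 = np.polyfit(y_obs, x_std, 1)
      def x_reverse(y_new):
          return c0 + c1 * y_new

      y_new = 4.6
      print("inverse prediction:", round(x_inverse(y_new), 3))
      print("reverse prediction:", round(x_reverse(y_new), 3))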

  2. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Specific licenses for the manufacture or initial transfer... manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium,...

  3. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Specific licenses for the manufacture or initial transfer... manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium,...

  4. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Specific licenses for the manufacture or initial transfer... manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium,...

  5. Specific Cognitive Predictors of Early Math Problem Solving

    ERIC Educational Resources Information Center

    Decker, Scott L.; Roberts, Alycia M.

    2015-01-01

    Development of early math skill depends on a prerequisite level of cognitive development. Identification of specific cognitive skills that are important for math development may not only inform instructional approaches but also inform assessment approaches to identifying children with specific learning problems in math. This study investigated the…

  6. The problem of calibration: A possible way to overcome the drawbacks of age models

    NASA Astrophysics Data System (ADS)

    Goswami, B.; Heitzig, J.; Rehfeld, K.; Marwan, N.; Kurths, J.

    2012-04-01

    Constructing a meaningful age model from a set of radiocarbon age-depth measurements made on a palaeoclimatic archive is the crucial backbone of all proxy-based research carried out thereafter. Significant progress in the development of Monte Carlo based interpolation techniques and Bayesian methods has been made recently, targeting the uncertainties of radiocarbon dating, which then reflect meaningfully as time domain errors in the proxy vs. time relationship. However, one primary limitation of these approaches is the debatable assumption of Gaussianity of the errors in calibrated ages as calibration often results in highly irregular and non-trivial probability distributions of the age for every measurement. Here, we present a method that circumvents this limitation by focussing on the construction of the proxy vs. time relationship rather than emphasising the estimation of an age-depth relation as the intermediary step. Our method is based on a simple analysis of the involved probabilistic uncertainties and the use of (preferably non-parametric) regression methods that give an estimate of the uncertainty of regression at every point as well. With the appropriate use of Bayes' Theorem we then provide a regression-based estimator for the proxy measurements and compute the respective distribution parameters (such as mean and variance) that quantify the uncertainties of the proxy in the time domain. We verify this method with the help of an artificial data set involving the accumulation history of a simulated core and noisy radiocarbon dating and proxy measurements made on it. To the best of our knowledge, this is the first method that manages to overcome the fundamental problem of irregular distributions induced by calibration of radiocarbon ages. We feel that this approach will enable us to look at the problem of dating uncertainties in a new light and open up newer possibilities for studying not only speleothem proxies but, more generally, from other palaeoclimatic

  7. Analytic ultracentrifuge calibration and determination of lipoprotein-specific refractive increments

    SciTech Connect

    Talwinder, S.K.; Adamson, G.L.; Glines, L.A.; Lindgren, F.T.; Laskaris, M.A.; Shore, V.G.

    1984-01-01

    Accurate quantification of the major classes and subfractions of human serum lipoproteins is an important analytical need in the characterization and evaluation of therapy of lipid and lipoprotein abnormalities. For calibrating the analytic ultracentrifuge (AnUC), the authors routinely use a Beckman calibration wedge cell with parallel scribed lines 1 cm apart. Such a cell gives a rectangular pattern in the schlieren diagram, which determines magnification and also provides an area corresponding to an invariant refractive increment. Complete calibration for AnUC analysis of lipoproteins also requires accurate determination of the specific refractive increments (SRI) of the major lipoprotein classes, namely low density lipoprotein (LDL) and high density lipoprotein (HDL). These are measured in the density in which they are analyzed, i.e., 1.061 g/ml for LDL and 1.200 g/ml for HDL. Five fresh serum samples were fractionated for total LDL and total HDL and their SRI determined. Total lipoprotein mass was determined using precise CHN elemental analysis and compositional analyses. The results yielded corrected SRIs of 0.00142 and 0.00135 Δn/g/100 ml for LDL and HDL. Thus, the current values of 0.00154 and 0.00149 Δn/g/100 ml underestimate LDL and HDL by 9% and 11%. Corrections of all previous LDL and HDL AnUC data can be made using appropriate factors of 1.087 and 1.106.

  8. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  9. Model calibration for ice sheets and glaciers dynamics: a general theory of inverse problems in glaciology

    NASA Astrophysics Data System (ADS)

    Giudici, M.; Baratelli, F.; Comunian, A.; Vassena, C.; Cattaneo, L.

    2014-10-01

    Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBCs) on ice velocity, stress and temperature; on the other hand the constitutive laws involve many physical parameters, some of which depend on the ice thermodynamical state. The proper forecast of the dynamics of ice sheets and glaciers requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws. As these quantities cannot be easily measured at the study scale in the field, they are often obtained through model calibration by solving an inverse problem (IP). The objective of this paper is to provide a thorough and rigorous conceptual framework for IPs in cryospheric studies and in particular: to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed; to define and characterise identifiability, a property related to the solution to the forward problem; to study well-posedness in a correct way, without confusing instability with ill-conditioning or with the properties of the method applied to compute a solution; to cast sensitivity analysis in a general framework and to differentiate between the computation of local sensitivity indicators with a one-at-a-time approach and first-order sensitivity indicators that consider the whole possible variability of the model parameters. The conceptual framework and the relevant properties are illustrated by means of a simple numerical example of isothermal ice flow, based on the shallow-ice approximation.

  10. Determination of site specific calibration functions for the estimation of soil moisture from measurements of cosmic-ray neutron intensity

    NASA Astrophysics Data System (ADS)

    Andreasen, M.; Looms, M. C.; Bogena, H. R.; Desilets, D.; Zreda, M. G.; Jensen, K. H.

    2015-12-01

    The recently-developed cosmic-ray neutron intensity method measures area-average soil moisture at an intermediate scale of hectometers. Calibration has proven difficult at that scale because of spatial variability of soil water and the presence of other pools of water, such as that in vegetation, also spatially and temporally variable. Soil moisture is determined using a standard calibration function that relates the neutron intensity to soil water, and that has been parameterized by fitting a curve to neutron intensities modelled at different soil moistures. Neutron transport was simulated using the MCNPX model in which a simple setup of bare ground and sandy homogeneous soil only composed of SiO2 was used. The standard procedure is that only one parameter of the calibration function should be fitted, which is determined from at least one independent soil moisture calibration. In this study, site-specific calibration functions are determined to obtain some insights on the effect of other pools of hydrogen than soil moisture. Insights will elucidate whether the calibration scheme for field sites with other major pools of hydrogen should be adapted. The calibration functions are obtained similarly to the standard calibration function, but site specific model-setups are used. We obtained calibration at field sites within HOBE - the Danish Hydrologic Observatory. The field sites represent three major land covers within the catchment; farmland, forest and heathland, and the model-setups are based on site-specific data for soil chemistry, soil organic carbon, litter layer and above- and below ground biomass. The three models provided three different calibration functions and, additionally, they were all different from the standard calibration function. The steepness of the curve and the dynamic range of neutron intensity modeled were found to be particularly dependent on the above-ground biomass and the thickness of the litter layer. Three-to-four independent soil

  11. Behavior problems in children with specific language impairment.

    PubMed

    Maggio, Verónica; Grañana, Nora E; Richaudeau, Alba; Torres, Silvio; Giannotti, Adrián; Suburo, Angela M

    2014-02-01

    We studied behavior in a group of children with specific language impairment in its 2 subtypes (expressive and mixed receptive/expressive). After exclusion of other psychiatric conditions, we evaluated 114 children of ages 2 to 7 years using language developmental tests and behavioral screening scales. Behavior problems appeared in 54% of the children. Withdrawn was the most frequently found syndrome in preschool children, whereas anxious/depressed and social problems were the most frequent in older children. The high frequency of behavioral syndromes in children with specific language impairment is remarkable and requires the awareness of primary attendants and specialists. Anxiety, depression, social isolation, and aggressive and rule-breaking behavior can obscure identification of the language impairment. Taking into account this relationship would improve the chances of a timely and appropriate intervention. PMID:24272522

  12. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture - neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-02-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modeling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results, for three sites with distinct climate and land use: a semi-arid site, a temperate grassland and a temperate forest. When calibrated with a year of data, COSMIC performed relatively well at all three sites, and the modified N0 method performed best at the two humid sites. It is advisable to collect soil moisture samples on more than a single day regardless of which parameterisation is used. In any case, sampling on more than ten days would, despite the strong increase in work effort, improve calibration results only slightly. COSMIC required the fewest sampling days at each site. At the semi-arid site, the N0mod method was calibrated better under average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. The outcomes of this study can be used by researchers as a CRNS calibration strategy guideline.
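
    For orientation, the widely used N0 calibration function that the modified N0 method builds on has a single free parameter N0 that can be fixed from one calibration day. The sketch below uses the standard shape coefficients reported by Desilets et al. (2010) and hypothetical count and soil moisture values; it is not the site-specific procedure of this study.

      # Standard shape coefficients of the N0 calibration function (Desilets et al., 2010)
      A0, A1, A2 = 0.0808, 0.372, 0.115

      def theta_from_neutrons(N, N0, bulk_density=1.4):
          """Volumetric soil moisture from corrected neutron counts N (assumed bulk density)."""
          return (A0 / (N / N0 - A1) - A2) * bulk_density

      def calibrate_N0(N_cal, theta_cal, bulk_density=1.4):
          """Solve for the single free parameter N0 from one calibration-day (N, theta) pair."""
          theta_g = theta_cal / bulk_density            # gravimetric water content
          return N_cal / (A1 + A0 / (theta_g + A2))

      # Hypothetical calibration-day values: corrected count rate and field-average soil moisture
      N0 = calibrate_N0(N_cal=2150.0, theta_cal=0.25)
      print("calibrated N0:", round(N0, 1))
      print("theta at N = 2400:", round(theta_from_neutrons(2400.0, N0), 3))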

  13. Applying transport-distance specific SOC distribution to calibrate soil erosion model WaTEM

    NASA Astrophysics Data System (ADS)

    Hu, Yaxian; Heckrath, Goswin J.; Kuhn, Nikolaus J.

    2016-04-01

    Slope-scale soil erosion, transport and deposition fundamentally determine the spatial redistribution of eroded sediments in terrestrial and aquatic systems, which further affects the burial and decomposition of eroded SOC. However, comparisons of SOC contents between the upper eroding slope and the lower depositional site cannot fully reflect the movement of eroded SOC in transit along hillslopes. The actual transport distance of eroded SOC is determined by its settling velocity. So far, the settling velocity distribution of eroded SOC has mostly been calculated from the mineral-particle-specific SOC distribution. Yet soil is mostly eroded in the form of aggregates, and the movement of aggregates differs significantly from that of individual mineral particles. This calls for an SOC erodibility parameter based on the actual transport-distance distribution of eroded fractions to better calibrate soil erosion models. A previous field investigation on a freshly seeded cropland in Denmark has shown immediate deposition of fast-settling soil fractions and the associated SOC at footslopes, followed by a fining trend at the slope tail. To further quantify the long-term effects of topography on the erosional redistribution of eroded SOC, the actual transport-distance-specific SOC distribution observed in the field was applied to the soil erosion model WaTEM (based on the USLE). After integration with a local DEM, our calibrated model succeeded in locating the hotspots of enrichment/depletion of eroded SOC at different topographic positions, corresponding much more closely to the real-world field observations. By extrapolating to repeated erosion events, our projected results on the spatial distribution of eroded SOC are also adequately consistent with the SOC properties in the consecutive sample profiles along the slope.

  14. Model calibration for ice sheets and glaciers dynamics: a general theory of inverse problems in glaciology

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura

    2014-05-01

    Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBC) on ice velocity, stress and temperature; on the other hand the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamical state. The proper forecast of the dynamics of ice sheets and glaciers (forward problem, FP) requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws and which cannot be easily measured at the study scale in the field. Therefore these quantities can be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics, and several applications have also been proposed in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework in cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care. First of all, it is necessary to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed

  15. Radiometric calibration of IR Fourier transform spectrometers - Solution to a problem with the High-Resolution Interferometer Sounder

    NASA Technical Reports Server (NTRS)

    Revercomb, Henry E.; Smith, William L.; Buijs, H.; Howell, Hugh B.; Laporte, D. D.

    1988-01-01

    A calibrated Fourier transform spectrometer, known as the High-Resolution Interferometer Sounder (HIS), has been flown on the NASA U-2 research aircraft to measure the infrared emission spectrum of the earth. The primary use - atmospheric temperature and humidity sounding - requires high radiometric precision and accuracy (of the order of 0.1 and 1 C, respectively). To meet these requirements, the HIS instrument performs in-flight radiometric calibration, using observations of hot and cold blackbody reference sources as the basis for two-point calibrations at each wavenumber. Initially, laboratory tests revealed a calibration problem with brightness temperature errors as large as 15 C between 600 and 900 cm⁻¹. The symptom of the problem, which occurred in one of the three spectral bands of HIS, was a source-dependent phase response. Minor changes to the calibration equations completely eliminated the anomalous errors. The new analysis properly accounts for the situation in which the phase response for radiance from the instrument itself differs from that for radiance from an external source. The mechanism responsible for the dual phase response of the HIS instrument is identified as emission from the interferometer beam splitter.
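
    The two-point blackbody calibration at the heart of this approach, including the complex-ratio form described by Revercomb et al. (1988), can be sketched as follows for a single wavenumber bin. All numbers are hypothetical placeholders; this illustrates the principle, not the HIS processing code.

      import numpy as np

      def two_point_calibration(C_scene, C_hot, C_cold, B_hot, B_cold):
          """Calibrated scene radiance from complex spectra: taking the complex ratio before
          extracting the real part removes the instrument's own phase and offset contribution."""
          ratio = (C_scene - C_cold) / (C_hot - C_cold)
          return np.real(ratio) * (B_hot - B_cold) + B_cold

      # Hypothetical complex spectra at one wavenumber and blackbody radiances (placeholder units)
      C_hot, C_cold, C_scene = 5.0 + 0.8j, 1.0 + 0.2j, 3.2 + 0.5j
      B_hot, B_cold = 120.0, 40.0
      print(round(two_point_calibration(C_scene, C_hot, C_cold, B_hot, B_cold), 2))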

  16. Analytic Solution to the Problem of Aircraft Electric Field Mill Calibration

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    It is by no means a simple task to retrieve storm electric fields from an aircraft instrumented with electric field mill sensors. The presence of the aircraft distorts the ambient field in a complicated way. Before retrievals of the storm field can be made, the field mill measurement system must be "calibrated". In other words, a relationship between impressed (i.e., ambient) electric field and mill output must be established. If this relationship can be determined, it is mathematically inverted so that ambient field can be inferred from the mill outputs. Previous studies have primarily focused on linear theories where the relationship between ambient field and mill output is described by a "calibration matrix" M. Each element of the matrix describes how a particular component of the ambient field is enhanced by the aircraft. For example, the product M_ix E_x is the contribution of the E_x field to the i-th mill output. Similarly, net aircraft charge (described by a "charge field component" E_q) contributes an amount M_iq E_q to the output of the i-th sensor. The central difficulty in obtaining M stems from the fact that the impressed field (E_x, E_y, E_z, E_q) is not known but is instead estimated. Typically, the aircraft is flown through a series of roll and pitch maneuvers in fair weather, and the values of the fair weather field and aircraft charge are estimated at each point along the aircraft trajectory. These initial estimates are often highly inadequate, but several investigators have improved the estimates by implementing various (ad hoc) iterative methods. Unfortunately, none of the iterative methods guarantee absolute convergence to correct values (i.e., absolute convergence to correct values has not been rigorously proven). In this work, the mathematical problem is solved directly by analytic means. For m mills installed on an arbitrary aircraft, it is shown that it is possible to solve for a single 2m
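
    To make the linear model concrete: if the impressed field components were known at every sample, each mill's row of M would follow from ordinary least squares, as in the synthetic sketch below. This is only an illustration of the structure output = M E; the paper's contribution is an analytic solution for the realistic case in which the impressed field is itself only estimated.

      import numpy as np

      rng = np.random.default_rng(2)

      m, n = 5, 400                                    # number of mills and of trajectory samples (assumed)
      M_true = rng.normal(size=(m, 4))                 # "unknown" calibration matrix, used only to simulate data
      E = rng.normal(size=(4, n))                      # impressed field components (E_x, E_y, E_z, E_q)
      outputs = M_true @ E + 0.01 * rng.normal(size=(m, n))   # mill outputs with measurement noise

      # With E assumed known, solve outputs ~ M @ E for M by least squares
      M_est, *_ = np.linalg.lstsq(E.T, outputs.T, rcond=None)
      M_est = M_est.T
      print("max error in recovered M:", float(np.abs(M_est - M_true).max()))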

  17. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture-neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-07-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results while investigating actual neutron intensity measurements, for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while the N0mod method performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into average errors specific to each site. At the semi-arid site, these errors were below the

  18. Calibration of Energy-Specific TDDFT for Modeling K-edge XAS Spectra of Light Elements.

    PubMed

    Lestrange, Patrick J; Nguyen, Phu D; Li, Xiaosong

    2015-07-14

    X-ray absorption spectroscopy (XAS) has become a powerful technique in chemical physics, because of advances in synchrotron technology that have greatly improved its temporal and spectroscopic resolution. Our recent work on energy-specific time-dependent density functional theory (ES-TDDFT) allows for the direct calculation of excitation energies in any region of the absorption spectrum, from UV-vis to X-ray. However, the ability of different density functional theories to model X-ray absorption spectra (XAS) of light elements has not yet been verified for ES-TDDFT. This work is a calibration of the ability of existing DFT kernels and basis sets to reproduce experimental K-edge excitation energies. Results were compared against 30 different transitions from gas-phase experiments. We focus on six commonly used density functionals (BHandHLYP, B3LYP, PBE1PBE, BP86, HSE06, LC-ωPBE) and various triple-ζ basis sets. The effects of core and diffuse functions are also investigated. PMID:26575736

  19. Patient-specific stopping power calibration for proton therapy planning based on single-detector proton radiography

    NASA Astrophysics Data System (ADS)

    Doolan, P. J.; Testa, M.; Sharp, G.; Bentefour, E. H.; Royle, G.; Lu, H.-M.

    2015-03-01

    A simple robust optimizer has been developed that can produce patient-specific calibration curves to convert x-ray computed tomography (CT) numbers to relative stopping powers (HU-RSPs) for proton therapy treatment planning. The difference between a digitally reconstructed radiograph water-equivalent path length (DRRWEPL) map through the x-ray CT dataset and a proton radiograph (set as the ground truth) is minimized by optimizing the HU-RSP calibration curve. The function of the optimizer is validated with synthetic datasets that contain no noise, and its robustness is shown against CT noise. Application of the procedure is then demonstrated on a plastic and a real tissue phantom, with proton radiographs produced using a single detector. The mean errors between the DRRWEPL map and the proton radiograph using generic/optimized calibration curves were 1.8/0.4% for a plastic phantom and -2.1/-0.2% for a real tissue phantom. It was then demonstrated that these optimized calibration curves offer a better prediction of the water equivalent path length at a therapeutic depth. We believe these promising results suggest that a single proton radiograph could be used to generate a patient-specific calibration curve as part of the current proton treatment planning workflow.
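
    A much-simplified sketch of this kind of optimization is given below: the HU-to-RSP curve is parameterized by its values at a few HU knots, and the knot values are adjusted so that the water-equivalent path length computed through a synthetic CT matches a "measured" radiograph. The geometry, knot positions, and noise are all assumptions; the paper's optimizer and data handling are more involved.

      import numpy as np
      from scipy.optimize import least_squares

      voxel = 0.1                                            # cm travelled per voxel along the beam (assumed)
      hu_knots = np.array([-1000.0, 0.0, 1000.0, 2000.0])    # HU positions of the curve knots (assumed)

      def wepl_map(ct_hu, rsp_at_knots):
          """Water-equivalent path length of each ray (one column per ray) for a given HU->RSP curve."""
          rsp = np.interp(ct_hu, hu_knots, rsp_at_knots)
          return rsp.sum(axis=0) * voxel

      rng = np.random.default_rng(3)
      ct = rng.uniform(-800.0, 1500.0, size=(200, 50))       # synthetic CT: rows = depth, columns = rays
      true_curve = np.array([0.0, 1.0, 1.5, 1.9])
      measured = wepl_map(ct, true_curve) + rng.normal(0.0, 0.05, 50)   # stand-in proton radiograph

      # Adjust the knot RSP values so the DRR-WEPL matches the measured radiograph
      fit = least_squares(lambda p: wepl_map(ct, p) - measured,
                          x0=np.array([0.0, 1.0, 1.4, 1.8]))
      print("optimized knot RSPs:", np.round(fit.x, 3))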

  20. Interacting domain-specific languages with biological problem solving environments

    NASA Astrophysics Data System (ADS)

    Cickovski, Trevor M.

    Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still enforce some communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface offers, while removing the requirement of general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO models biological morphogenesis using a cell-centered stochastic cellular automaton and translates into C++ modules for the object-oriented PSE COMPUCELL3D, and MDLab provides a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its role within the larger PSE and its expressibility in terms of representable phenomena, and discuss observations from users of the languages. Moreover, I use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.

  1. Is Simpler Better? A Visualization-based Exploration of How Parametric Screening Influences Problem Difficulty and Equifinality in Multiobjective Calibration

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Urban, R. L.; Wagener, T.; van Werkhoven, K. L.

    2009-12-01

    This study uses interactive visualization to investigate the common assumption that parametric screening using sensitivity analysis simplifies hydrologic calibration. Put simply, do we make calibration easier by eliminating model parameters from the optimization problem? Traditional approaches for parametric screening focus on model evaluation metrics that seek to minimize statistical error. We demonstrate in this study that additional hydrology-relevant metrics (e.g., water balance) are essential to properly screening parameters and producing search problems that do not degenerate into random walks (a severe case of equifinality). This work highlights that we should move beyond a focus on optimality in a traditional error sense and instead focus on enhancing our evaluative metrics and formulations to include hydrology-relevant information. Building on the prior work by van Werkhoven et al. 2009, this study utilizes parameter screening results based on Sobol sensitivity analysis to reduce the size of hydrologic calibration problems for the Sacramento Soil Moisture Accounting model (SAC SMA). Our study was conducted across four hydroclimatically diverse watersheds, and we distinguish various sets of parametric screenings, including a full parameter search, as well as parameter screenings based on 5%, 10%, 20%, and 30% Sobol sensitivity levels. For each Sobol sensitivity level there are two subcases: (1) parameters are screened based on statistical metrics alone, and (2) parameters are screened based on statistical and hydrological metrics. The reduced parameter sets were searched using a multiobjective evolutionary algorithm to determine the tradeoff surfaces of optimal parameter settings. Our results contribute detailed interactive visualizations of the 4-objective tradeoff surfaces for all of the parametric screening cases evaluated. For almost all of the problem formulations that result from parametric screening, the combined use of statistical and hydrological

  2. On the precision of absolute sensitivity calibration and specifics of spectroscopic quantities interpretation in tokamaks.

    PubMed

    Naydenkova, D I; Weinzettl, V; Stockel, J; Matějíček, J

    2014-12-01

    Typical situations, which can be met during the process of absolute calibration, are shown in the case of a visible light observation system for the COMPASS tokamak. Technical issues and experimental limitations of absolute measurements connected with tokamak operation are discussed. PMID:25607972

  3. Specific calibration and uncertainty evaluation for flood propagation models by using distributed information

    NASA Astrophysics Data System (ADS)

    Camici, Stefania; Tito Aronica, Giuseppe; Tarpanelli, Angelica; Moramarco, Tommaso

    2013-04-01

    Hydraulic models are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, and evaluation of flood control measures. Nowadays many models of differing complexity, in terms of mathematical foundation and spatial dimensions, are available, and most of them are comparatively easy to operate thanks to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons. First, there is a lack of the data needed for model calibration; flood events are very rarely monitored because of the disturbances they inflict and the lack of appropriate measuring equipment. The second reason is related to the choice of suitable performance measures for calibrating and evaluating model predictions in a credible and consistent way (and for reducing the uncertainty). This study considers a well-documented flood event of November 2012 in the Paglia river basin (Central Italy). For this area, a detailed description of the main channel morphology, obtained from accurate topographical surveys and from a DEM with a spatial resolution of 2 m, together with several points within the floodplain areas at which the maximum water level was measured, was available for the post-event analysis. On the basis of this information, a two-dimensional inertial finite element hydraulic model was set up and calibrated using different performance measures. The Manning roughness coefficients obtained from the different calibrations were then used for the delineation of inundation maps, including their uncertainty. The water levels at three hydrometric stations and the flooded area extent, derived from video recordings made the day after the flood event, were used for validation of the model.
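
    Two performance measures commonly used for this kind of calibration, and compatible with the data described above, are a water-level RMSE at surveyed points and a flood-extent fit index between observed and simulated inundation rasters. The sketch below uses made-up numbers and is not tied to the study's actual measures.

      import numpy as np

      def level_rmse(simulated, observed):
          """RMSE between simulated and surveyed maximum water levels (m)."""
          simulated, observed = np.asarray(simulated), np.asarray(observed)
          return float(np.sqrt(np.mean((simulated - observed) ** 2)))

      def extent_fit(sim_mask, obs_mask):
          """Extent fit F = |A_sim AND A_obs| / |A_sim OR A_obs| on boolean inundation rasters."""
          sim_mask, obs_mask = np.asarray(sim_mask, bool), np.asarray(obs_mask, bool)
          return float(np.logical_and(sim_mask, obs_mask).sum() / np.logical_or(sim_mask, obs_mask).sum())

      # Hypothetical post-event data: surveyed high-water marks and a video-derived extent raster
      print(level_rmse([2.31, 1.87, 3.02], [2.25, 1.95, 2.90]))
      print(extent_fit([[1, 1, 0], [0, 1, 0]], [[1, 1, 1], [0, 0, 0]]))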

  4. A stochastic analysis of the calibration problem for Augmented Reality systems with see-through head-mounted displays

    NASA Astrophysics Data System (ADS)

    Leebmann, Johannes

    This paper presents a closed stochastic solution for the calibration of see-through head-mounted displays (STHMD) for Augmented Reality. An Augmented Reality system (ARS) is based on several components that are affected by stochastic and random errors. One important component is the tracking system. The flock of birds (FOB) tracking system was tested for consistency in position and orientation outputs by establishing constraints that the system was required to meet. The tests for position and orientation were separated to derive uncorrelated quality measures. The tests are self-controlling and do not require any other measuring device. In addition, the image coordinate accuracy also had to be determined to complete the stochastic description of the calibration problem. Based on this stochastic model, different mathematical models were tested to determine whether or not they fit the stochastic model. An overview of different calibration approaches for optical see-through displays is given and a quantitative comparison of the different models is made based on the derived accuracy information.

  5. Specific Reading Comprehension Disability: Major Problem, Myth, or Misnomer?

    ERIC Educational Resources Information Center

    Spencer, Mercedes; Quinn, Jamie M.; Wagner, Richard K.

    2014-01-01

    The goal of the present study was to test three competing hypotheses about the nature of comprehension problems of students who are poor in reading comprehension. Participants in the study were first, second, and third graders, totaling nine cohorts and over 425,000 participants in all. The pattern of results was consistent across all cohorts:…

  6. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration.

    PubMed

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A; Allen, Justine J; Demirci, Utkan; Hanlon, Roger T

    2014-02-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
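
    As a rough illustration of the scene-specific colour calibration idea described above (the paper's actual pipeline is not reproduced here), the sketch below fits a 3x3 linear transform by least squares that maps linear raw camera responses of calibration patches to known reference values; all patch values are hypothetical.

```python
import numpy as np

# Linear (demosaiced, white-balanced raw) camera responses for calibration patches,
# and the corresponding known reference values. Hypothetical numbers for illustration.
camera_rgb = np.array([[0.21, 0.05, 0.04],
                       [0.10, 0.30, 0.08],
                       [0.05, 0.07, 0.35],
                       [0.40, 0.42, 0.38],
                       [0.15, 0.12, 0.10],
                       [0.30, 0.25, 0.09]])
reference_rgb = np.array([[0.45, 0.08, 0.07],
                          [0.12, 0.55, 0.10],
                          [0.06, 0.09, 0.60],
                          [0.80, 0.82, 0.78],
                          [0.25, 0.20, 0.17],
                          [0.55, 0.45, 0.12]])

# Least-squares fit of a 3x3 matrix M so that camera_rgb @ M approximates reference_rgb.
M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)

calibrated = camera_rgb @ M
print("mean absolute error per channel:", np.abs(calibrated - reference_rgb).mean(axis=0))
```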

  7. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films can provide absolute two-dimensional dose distributions and is preferred for IMRT quality assurance. A single therapy verification film, moreover, provides a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation and read with a VIDAR film scanner, and the optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were verified using a dynamic IMRT technique and evaluated against the TPS-calculated dose distribution using the gamma index method. Results A sensitometric curve was generated using a single film exposed in nine field regions for quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans based on the calibration curve were verified using the gamma index method and found to be within the acceptance criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and produces fast daily film calibration for highly
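
    The sketch below illustrates the sensitometric-curve step in a generic way: a smooth curve is fitted to hypothetical optical-density/dose pairs from the nine film regions and then used to convert a measured optical density back to dose. The numbers and the choice of a cubic polynomial are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

# Hypothetical single-film calibration data: delivered dose (cGy) in each of the
# nine field regions and the net optical density read from the scanned film.
dose_cgy = np.array([10, 40, 80, 120, 160, 200, 250, 300, 362])
optical_density = np.array([0.12, 0.35, 0.62, 0.85, 1.05, 1.22, 1.41, 1.57, 1.74])

# Fit a third-order polynomial sensitometric curve dose = f(OD).
coeffs = np.polyfit(optical_density, dose_cgy, deg=3)

def od_to_dose(od):
    """Convert a measured optical density to dose via the fitted calibration curve."""
    return np.polyval(coeffs, od)

print(f"OD 0.95 -> {od_to_dose(0.95):.1f} cGy")
```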

  8. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  9. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course.

    PubMed

    Sohn, Hansem; Lee, Sang-Hun

    2013-01-01

    Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing. PMID:23076112
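
    A minimal sketch of the kind of Bayesian observer described above is given below, assuming Gaussian prior and likelihood for illustration: the posterior estimate of an interval is pulled toward the prior mean, and narrowing the prior (as the model suggests happens over training sessions) both strengthens that pull and reduces variability. All numbers are hypothetical.

```python
import numpy as np

def bayes_interval_estimate(measured_ms, prior_mean_ms, prior_sd_ms, likelihood_sd_ms):
    """Posterior mean/SD of a time interval for a Gaussian prior and Gaussian likelihood.
    The subjective estimate is pulled from the noisy measurement toward the prior mean."""
    w = prior_sd_ms**2 / (prior_sd_ms**2 + likelihood_sd_ms**2)   # weight on the measurement
    post_mean = w * measured_ms + (1 - w) * prior_mean_ms
    post_sd = np.sqrt((prior_sd_ms**2 * likelihood_sd_ms**2) /
                      (prior_sd_ms**2 + likelihood_sd_ms**2))
    return post_mean, post_sd

# A narrower prior (e.g., after training) biases estimates more strongly toward its mean
# and reduces their variability.
for prior_sd in (200.0, 100.0, 50.0):
    m, s = bayes_interval_estimate(measured_ms=800.0, prior_mean_ms=600.0,
                                   prior_sd_ms=prior_sd, likelihood_sd_ms=120.0)
    print(f"prior SD {prior_sd:5.1f} ms -> posterior mean {m:6.1f} ms, SD {s:5.1f} ms")
```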

  10. The Domain Generality--Specificity of Epistemological Beliefs: A Theoretical Problem, a Methodological Problem or Both?

    ERIC Educational Resources Information Center

    Limon, Margarita

    2006-01-01

    Research on epistemological beliefs has clearly increased in the last decade. Even though the construct is clearer and relevant data are being collected, there are important theoretical and methodological issues that need further clarification. One of them is the debate about the domain generality-specificity of epistemological beliefs. I argue…

  11. A Calibration Protocol for Population-Specific Accelerometer Cut-Points in Children

    PubMed Central

    Mackintosh, Kelly A.; Fairclough, Stuart J.; Stratton, Gareth; Ridgers, Nicola D.

    2012-01-01

    Purpose To test a field-based protocol using intermittent activities representative of children's physical activity behaviours, to generate behaviourally valid, population-specific accelerometer cut-points for sedentary behaviour, moderate, and vigorous physical activity. Methods Twenty-eight children (46% boys) aged 10–11 years wore a hip-mounted uniaxial GT1M ActiGraph and engaged in 6 activities representative of children's play. A validated direct observation protocol was used as the criterion measure of physical activity. Receiver Operating Characteristics (ROC) curve analyses were conducted with four semi-structured activities to determine the accelerometer cut-points. To examine classification differences, cut-points were cross-validated with free-play and DVD viewing activities. Results Cut-points of ≤372, >2160 and >4806 counts•min−1 representing sedentary, moderate and vigorous intensity thresholds, respectively, provided the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of the activity). Cross-validation data demonstrated that these values yielded the best overall kappa scores (0.97; 0.71; 0.62), and a high classification agreement (98.6%; 89.0%; 87.2%), respectively. Specificity values of 96–97% showed that the developed cut-points accurately detected physical activity, and sensitivity values (89–99%) indicated that minutes of activity were seldom incorrectly classified as inactivity. Conclusion The development of an inexpensive and replicable field-based protocol to generate behaviourally valid and population-specific accelerometer cut-points may improve the classification of physical activity levels in children, which could enhance subsequent intervention and observational studies. PMID:22590635
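
    The following sketch shows, under simplifying assumptions, how a cut-point of the kind reported above can be derived from epoch-level accelerometer counts and a direct-observation criterion: every candidate threshold is scored by sensitivity and specificity, and the one maximizing Youden's J is kept. The data and the use of Youden's J (rather than the exact ROC procedure of the study) are illustrative.

```python
import numpy as np

def best_cut_point(counts, is_active):
    """Pick the counts/min threshold that maximizes Youden's J = sensitivity + specificity - 1,
    given directly observed activity labels (the criterion measure)."""
    counts = np.asarray(counts, dtype=float)
    is_active = np.asarray(is_active, dtype=bool)
    best_thr, best_j = None, -np.inf
    for thr in np.unique(counts):
        predicted_active = counts > thr
        sens = np.mean(predicted_active[is_active])      # true positives / actual positives
        spec = np.mean(~predicted_active[~is_active])    # true negatives / actual negatives
        j = sens + spec - 1
        if j > best_j:
            best_thr, best_j = thr, j
    return best_thr, best_j

# Hypothetical epoch data: accelerometer counts/min and direct-observation labels
# (True = at or above the target intensity).
counts = [150, 420, 600, 1800, 2500, 3100, 900, 4000, 5200, 300, 2300, 700]
labels = [False, False, False, False, True, True, False, True, True, False, True, False]
thr, j = best_cut_point(counts, labels)
print(f"optimal cut-point > {thr:.0f} counts/min (Youden J = {j:.2f})")
```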

  12. Balancing Particle Diversity in Markov Chain Monte Carlo Methods for Dual Calibration-Data Assimilation Problems in Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Hernandez, F.; Liang, X.

    2014-12-01

    Given the inherent uncertainty in almost all of the variables involved, recent research is re-addressing the problem of calibrating hydrologic models from a stochastic perspective: the focus is shifting from finding a single parameter configuration that minimizes the model error, to approximating the maximum likelihood multivariate probability distribution of the parameters. To this end, Markov chain Monte Carlo (MCMC) formulations are widely used, where the distribution is defined as a smoothed ensemble of particles or members, each of which represents a feasible parameterization. However, the updating of these ensembles needs to strike a careful balance so that the particles adequately resemble the real distribution without either clustering or drifting excessively. In this study, we explore the implementation of two techniques that attempt to improve the quality of the resulting ensembles, both for the approximation of the model parameters and of the unknown states, in a dual calibration-data assimilation framework. The first feature of our proposed algorithm, in an effort to keep members from clustering on areas of high likelihood in light of the observations, is the introduction of diversity-inducing operators after each resampling. This approach has been successfully used before, and here we aim at testing additional operators which are also borrowed from the Evolutionary Computation literature. The second feature is a novel arrangement of the particles into two alternative data structures. The first one is a non-sorted Pareto population which favors 1) particles with high likelihood, and 2) particles that introduce a certain level of heterogeneity. The second structure is a partitioned array, in which each partition requires its members to have increasing levels of divergence from the particles in the areas of larger likelihood. Our newly proposed algorithm will be evaluated and compared to traditional MCMC methods in terms of convergence speed, and the
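
    As a highly simplified illustration of the ensemble-updating issue discussed above (not the authors' algorithm), the sketch below runs an iterated importance-resampling loop on a toy rainfall-runoff regression and applies a Gaussian jitter after each resampling as a diversity-inducing operator, so the particle ensemble does not collapse onto a few high-likelihood members. The model, data, and tuning constants are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, obs, sigma=0.5):
    """Toy model: predicted runoff is theta[0]*rain + theta[1]; Gaussian errors."""
    rain = np.linspace(0, 10, obs.size)
    pred = theta[0] * rain + theta[1]
    return -0.5 * np.sum((pred - obs) ** 2) / sigma**2

# Synthetic observations from 'true' parameters (0.8, 1.5) plus noise.
rain = np.linspace(0, 10, 20)
obs = 0.8 * rain + 1.5 + rng.normal(0, 0.5, rain.size)

# Ensemble of candidate parameterizations (particles).
particles = rng.uniform([0.0, 0.0], [2.0, 5.0], size=(200, 2))

for iteration in range(30):
    logw = np.array([log_likelihood(p, obs) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample proportionally to likelihood...
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]
    # ...then apply a diversity-inducing operator (Gaussian jitter) so the
    # ensemble keeps exploring instead of clustering on a few particles.
    particles += rng.normal(0, 0.02, particles.shape)

print("posterior mean estimate:", particles.mean(axis=0))
```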

  13. A flexible multi-model framework for catchment-specific calibration, and application to diverse European catchments

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Fenicia, Fabrizio; Savenije, Hubert H. G.

    2010-05-01

    If one accepts that a single model structure cannot accommodate the wide spectrum of catchment dynamics encountered in practice, the need for flexible hydrological models becomes evident. Here, we present SUPERFLEX, a hydrological modelling system that represents the catchment as a network of conceptual elements, including nonlinear reservoirs and routing components, with connectivity, constitutive relations and parameterizations specified by the hydrologist using a priori insights into the catchment of interest and refined based on calibration results. The model equations are implemented using robust numerical approaches, and the entire SUPERFLEX system is integrated into a Bayesian inference suite, permitting hypotheses regarding forcing/response data to be evaluated and refined as part of the model inference. The application of the SUPERFLEX approach to a range of European catchments is presented, demonstrating how the ability to adjust the model structure to specific catchments allows an improved representation of their key hydrological processes and, consequently, improved model performance.

  14. Instrumentation report 1: specification, design, calibration, and installation of instrumentation for an experimental, high-level, nuclear waste storage facility

    SciTech Connect

    Brough, W.G.; Patrick, W.C.

    1982-01-01

    The Spent Fuel Test-Climax (SFT-C) is being conducted 420 m underground at the Nevada Test Site under the auspices of the US Department of Energy. The test facility houses 11 spent fuel assemblies from an operating commercial nuclear reactor and numerous other thermal sources used to simulate the near-field effects of a large repository. We developed a large-scale instrumentation plan to ensure that a sufficient quality and quantity of data were acquired during the three- to five-year test. These data help satisfy scientific, operational, and radiation safety objectives. Over 800 data channels are being scanned to measure temperature, electrical power, radiation, air flow, dew point, stress, displacement, and equipment operation status (on/off). This document details the criteria, design, specifications, installation, calibration, and current performance of the entire instrumentation package.

  15. Integrating a Gravity Simulation and Groundwater Modeling on the Calibration of Specific Yield for Choshui Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Chang, Liang Cheng; Tsai, Jui pin; Chen, Yu Wen; Way Hwang, Chein; Chung Cheng, Ching; Chiang, Chung Jung

    2014-05-01

    For sustainable management, accurate estimation of recharge provides critical information. The accuracy of this estimation is highly related to the uncertainty of the specific yield (Sy). Because Sy values are traditionally obtained from multi-well pumping tests, the available Sy values are usually limited by the high installation cost. This scarcity of Sy data can therefore cause high uncertainty in recharge estimates. Because gravity is a function of material mass and the inverse square of distance, gravity measurements can help determine the mass variation of a shallow groundwater system. Thus, groundwater level observations and gravity measurements are used together to calibrate Sy for a groundwater model. The calibration procedure includes four steps. First, gravity variations at three groundwater-monitoring wells, Si-jhou, Tu-ku and Ke-cuo, were observed in May, August and November 2012. To isolate the gravity signal caused by groundwater variation, this study filters out contributions from other sources, such as ocean tides and land subsidence, from the collected data. The refined data are termed the gravity residual. Second, this study develops a groundwater model using MODFLOW 2005 to simulate the water mass variation of the groundwater system. Third, Newton's gravity integral is used to simulate the gravity variation caused by the simulated water mass variation during each of the observation periods. Fourth, the gravity variations of the two data sets, the observed gravity residuals and the simulated gravities, are compared. The value of Sy is modified iteratively until the gravity variation ratios of the two data sets agree. The Sy value of Si-jhou, 0.216, was obtained from a multi-well pumping test and assigned to the simulation model. The simulation results show that the simulated gravity fits the observed gravity residual well without further parameter calibration. This result indicates
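
    The third step, simulating gravity from the modelled water mass change, can be sketched with a point-mass approximation of Newton's integral as below. The station location, cell geometry and Sy values are hypothetical, and a real application would integrate over the full MODFLOW grid rather than two cells.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change_microgal(station_xyz, cell_centers, cell_areas, dh, sy, rho=1000.0):
    """Downward gravity change (microGal) at a station caused by groundwater-level
    changes dh (m) in model cells, using a point-mass approximation of Newton's integral.
    The water mass added to each cell is rho * Sy * dh * cell_area."""
    dg = 0.0
    for (x, y, z), area, dhi, syi in zip(cell_centers, cell_areas, dh, sy):
        mass = rho * syi * dhi * area                      # kg of water gained or lost in the cell
        dx, dy, dz = x - station_xyz[0], y - station_xyz[1], z - station_xyz[2]
        r = np.sqrt(dx**2 + dy**2 + dz**2)
        dg += G * mass * (-dz) / r**3                      # downward component (gravimeter convention)
    return dg * 1e8                                        # m/s^2 -> microGal

# Hypothetical example: one station 10 m above a small aquifer block of two cells.
station = (0.0, 0.0, 10.0)
centers = [(0.0, 0.0, -5.0), (100.0, 0.0, -5.0)]
areas = [100.0 * 100.0, 100.0 * 100.0]   # m^2 per cell
dh = [1.0, 0.5]                          # water-table rise in m
sy_values = [0.2, 0.2]

print(f"simulated gravity change: {gravity_change_microgal(station, centers, areas, dh, sy_values):.2f} microGal")
```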

  16. An episodic specificity induction enhances means-end problem solving in young and older adults

    PubMed Central

    Madore, Kevin P.; Schacter, Daniel L.

    2014-01-01

    Episodic memory plays an important role not only in remembering past experiences, but also in constructing simulations of future experiences and solving means-end social problems. We recently found that an episodic specificity induction (brief training in recollecting details of past experiences) enhances performance of young and older adults on memory and imagination tasks. Here we tested the hypothesis that this specificity induction would also positively impact a means-end problem solving task on which age-related changes have been linked to impaired episodic memory. Young and older adults received the specificity induction or a control induction before completing a means-end problem solving task as well as memory and imagination tasks. Consistent with previous findings, older adults provided fewer relevant steps on problem solving than did young adults, and their responses also contained fewer internal (i.e., episodic) details across the three tasks. There was no difference in the number of other (e.g., irrelevant) steps on problem solving or external (i.e., semantic) details generated on the three tasks as a function of age. Critically, the specificity induction increased the number of relevant steps and internal details (but not other steps or external details) that both young and older adults generated in problem solving compared with the control induction, as well as the number of internal details (but not external details) generated for memory and imagination. Our findings support the idea that episodic retrieval processes are involved in means-end problem solving, extend the range of tasks on which a specificity induction targets these processes, and show that the problem solving performance of older adults can benefit from a specificity induction as much as that of young adults. PMID:25365688

  17. On the use of problem-specific candidate generators for the hybrid optimization of multi-objective production engineering problems.

    PubMed

    Weinert, K; Zabel, A; Kersting, P; Michelitsch, T; Wagner, T

    2009-01-01

    In the field of production engineering, various complex multi-objective problems are known. In this paper we focus on the design of mold temperature control systems, the reconstruction of digitized surfaces, and the optimization of NC paths for the five-axis milling process. For all these applications, efficient problem-specific algorithms exist that only consider a subset of the desirable objectives. In contrast, modern multi-objective evolutionary algorithms are able to cope with many conflicting objectives, but they require a long runtime due to their general applicability. Therefore, we propose hybrid algorithms for the three applications mentioned. In each case, the problem-specific algorithms are used to determine promising initial solutions for the multi-objective evolutionary approach, whose variation concepts are used to generate diversity in the objective space. We show that the combination of these techniques provides great benefits. Since the final solution is chosen by a decision maker based on this Pareto front approximation, appropriate visualizations of the high-dimensional solutions are presented. PMID:19916775

  18. Genetic and environmental vulnerabilities underlying adolescent substance use and problem use: general or specific?

    PubMed

    Young, Susan E; Rhee, Soo Hyun; Stallings, Michael C; Corley, Robin P; Hewitt, John K

    2006-07-01

    Are genetic and environmental risks for adolescent substance use specific to individual substances or general across substance classes? We examined this question in 645 monozygotic twin pairs, 702 dizygotic twin pairs, 429 biological sibling pairs, and 96 adoptive (biologically unrelated) sibling pairs ascertained from community-based samples, and ranging in age from 12 to 18 years. Substance use patterns and symptoms were assessed using structured psychiatric interviews. Biometrical model fitting was carried out using age- and sex-specific thresholds for (a) repeated use and (b) problem use, defined as one or more DSM-IV symptoms of abuse or dependence. We hypothesized that problem use would be more heritable than use in adolescence, and that both genetic and environmental risks underlying tobacco, alcohol, and marijuana use and problem use would be significantly correlated. Results of univariate analyses suggested significant heritable factors for use and problem use for all substances with the exception of alcohol use. Shared environmental factors were important in all cases and special twin environmental factors were significant for tobacco use, tobacco problem use, and alcohol use. Multivariate analyses yielded significant genetic correlations between each of the substances (for both levels studied), and significant shared environmental correlations among use variables only. Our results suggest that tobacco, alcohol, and marijuana problem use are mediated by common genetic influences, but shared environmental influences may be more substance-specific for problem use. PMID:16619135

  19. Preschool Sleep Problems and Differential Associations With Specific Aspects of Executive Control in Early Elementary School.

    PubMed

    Nelson, Timothy D; Nelson, Jennifer Mize; Kidwell, Katherine M; James, Tiffany D; Espy, Kimberly Andrews

    2015-01-01

    This study examined the differential associations between parent-reported child sleep problems in preschool and specific aspects of executive control in early elementary school in a large sample of typically developing children (N = 215). Consistent with expectations, sleep problems were negatively associated with performance on tasks assessing working memory and interference suppression inhibition, even after controlling for general cognitive abilities, but not with flexible shifting or response inhibition. The findings add to the literature on cognitive impairments associated with pediatric sleep loss and highlight the need for early intervention for children with sleep problems to promote healthy cognitive development. PMID:26151614

  20. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  1. Preschool Children with Intellectual Disability: Syndrome Specificity, Behaviour Problems, and Maternal Well-Being

    PubMed Central

    Eisenhower, A. S.; Baker, Bruce L.; Blacher, J.

    2011-01-01

    Background Children with intellectual disability (ID) are at heightened risk for behaviour problems and diagnosed mental disorder. Likewise, mothers of children with ID are more stressed than mothers of typically-developing children. Research on behavioural phenotypes suggests that different syndromes of ID may be associated with distinct child behavioural risks and maternal well-being risks. In the present study, maternal reports of child behaviour problems and maternal well-being were examined for syndrome-specific differences. Methods The present authors studied the early manifestation and continuity of syndrome-specific behaviour problems in 215 preschool children belonging to 5 groups (typically-developing, undifferentiated developmental delays, Down syndrome, autism, cerebral palsy), as well as the relation of syndrome group to maternal well-being. Results At age 3, children with autism and cerebral palsy showed the highest levels of behaviour problems, and children with Down syndrome and typically-developing children showed the lowest levels. Mothers of children with autism reported more parenting stress than all other groups. These syndrome-specific patterns of behaviour and maternal stress were stable across ages 3, 4 and 5 years, except for relative increases in behaviour problems and maternal stress in the Down syndrome and cerebral palsy groups. Child syndrome contributed to maternal stress even after accounting for differences in behaviour problems and cognitive level. Conclusions These results, although based on small syndrome groups, suggest that phenotypic expressions of behaviour problems are manifested as early as age 3. These behavioural differences were paralleled by differences in maternal stress, such that mothers of children with autism are at elevated risk for high stress. In addition, there appear to be other unexamined characteristics of these syndromes, beyond behaviour problems, which also contribute to maternal stress. PMID:16108983

  2. Using Multiple Calibration Indices in Order to Capture the Complex Picture of What Affects Students' Accuracy of Feeling of Confidence

    ERIC Educational Resources Information Center

    Boekaerts, Monique; Rozendaal, Jeroen S.

    2010-01-01

    The present study used multiple calibration indices to capture the complex picture of fifth graders' calibration of feeling of confidence in mathematics. Specifically, the effects of gender, type of mathematical problem, instruction method, and time of measurement (before and after problem solving) on calibration skills were investigated. Fourteen…

  3. Authoring Effective Embedded Tutors: An Overview of the Extensible Problem Specific Tutor (xPST) System

    ERIC Educational Resources Information Center

    Gilbert, Stephen B.; Blessing, Stephen B.; Guo, Enruo

    2015-01-01

    The Extensible Problem Specific Tutor (xPST) allows authors who are not cognitive scientists and not programmers to quickly create an intelligent tutoring system that provides instruction akin to a model-tracing tutor. Furthermore, this instruction is overlaid on existing software, so that the learner's interface does not have to be made from…

  4. The Role of Problem Specification Workshops in Extension: An IPM Example.

    ERIC Educational Resources Information Center

    Foster, John; And Others

    1995-01-01

    Of three extension models--top-down technology transfer, farmers-first approach, and participatory research--the latter extends elements of the other two into a more comprehensive analysis of a problem and specification of solution strategies. An Australian integrated pest management (IPM) example illustrates how structured workshops are a useful…

  5. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    SciTech Connect

    Rinaldi, I; Parodi, K; Krah, N

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiograph (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between the proton radiography and the DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment relative to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for them. The performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to a FWHM = 3 mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground-truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA = 0.5 mm and RD = 0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting the effects of spatial blurring reaches only ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to improve
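
    The core optimization idea, adjusting the HU-RSP calibration curve so that the water-equivalent path length computed through the CT matches the proton radiograph, can be sketched in one dimension as below. A single ray, a handful of HU layers and a piecewise-linear curve are illustrative assumptions; with one ray the fit is underdetermined, and the actual method works on full radiographs and includes the blurring and alignment terms discussed above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical slab phantom: HU value and physical thickness (cm) of each layer
# along one proton radiography ray.
layer_hu = np.array([-700.0, -50.0, 40.0, 300.0, 900.0])
layer_cm = np.array([2.0, 3.0, 5.0, 2.0, 1.0])

# "Ground truth" RSP per layer, used here only to simulate the measured WEPL.
true_rsp = np.array([0.30, 0.96, 1.04, 1.15, 1.46])
measured_wepl = np.sum(true_rsp * layer_cm)          # what a proton radiograph would give

# HU nodes of the piecewise-linear calibration curve; the RSP at each node is optimized.
hu_nodes = np.array([-1000.0, -500.0, 0.0, 500.0, 1500.0])
initial_rsp_nodes = np.array([0.0, 0.5, 1.0, 1.2, 1.8])   # generic starting curve

def wepl_from_curve(rsp_nodes):
    rsp = np.interp(layer_hu, hu_nodes, rsp_nodes)   # HU -> RSP via the candidate curve
    return np.sum(rsp * layer_cm)

def cost(rsp_nodes):
    return (wepl_from_curve(rsp_nodes) - measured_wepl) ** 2

result = minimize(cost, initial_rsp_nodes, method="Nelder-Mead")
print("optimized RSP at HU nodes:", np.round(result.x, 3))
print("WEPL mismatch after optimization:", abs(wepl_from_curve(result.x) - measured_wepl))
```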

  6. Preliminary specificity study of the Bestel-Clément-Sorine electromechanical model of the heart using parameter calibration from medical images.

    PubMed

    Marchesseau, S; Delingette, H; Sermesant, M; Sorine, M; Rhode, K; Duckett, S G; Rinaldi, C A; Razavi, R; Ayache, N

    2013-04-01

    Patient-specific cardiac modelling can help in understanding pathophysiology and in predicting therapy effects. This requires the personalization of the geometry, kinematics, electrophysiology and mechanics. We use the Bestel-Clément-Sorine (BCS) electromechanical model of the heart, which provides reasonable accuracy with a reduced number of parameters, commensurate with the clinical data available at the organ level. We propose a preliminary specificity study to determine the relevant global parameters able to differentiate pathological cases from healthy controls. To this end, a calibration algorithm based on global measurements is developed. This calibration method was tested successfully on 6 volunteers and 2 heart failure cases and enabled up to 7 of the 14 necessary parameters of the BCS model to be tuned from the volume and pressure curves. This specificity study confirmed the domain knowledge that the relaxation rate is impaired in post-myocardial-infarction heart failure and that myocardial stiffness is increased in dilated cardiomyopathy heart failure. PMID:23499249

  7. The problem with total error models in establishing performance specifications and a simple remedy.

    PubMed

    Krouwer, Jan S

    2016-08-01

    A recent issue of this journal revisited performance specifications since the Stockholm conference. Of the three recommended methods, two use total error models to establish performance specifications. It is shown that the most commonly used total error model, the Westgard model, is deficient, and that even more complete models fail to capture all errors that comprise total error. Moreover, total error specifications are often set to cover 95% of results, which leaves 5% of results unspecified. Glucose meter performance standards are used to illustrate these problems. The Westgard model is useful for assessing assay performance but not for setting performance specifications. Total error can be used to set performance specifications if the specifications include 100% of the results. PMID:26974143
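
    For readers unfamiliar with the model being criticized, the following sketch evaluates a Westgard-style total error (commonly written as |bias| + 1.65·CV for a one-sided 95% limit) for hypothetical glucose meter figures, and shows how widening the multiplier moves a specification toward covering all results, which is in the spirit of the remedy the abstract argues for.

```python
# Minimal illustration of the Westgard-style total error model discussed above.
# Hypothetical glucose meter performance figures; the 1.65 multiplier corresponds to
# a one-sided 95% limit, which is why about 5% of results fall outside the stated total error.
bias_percent = 2.0          # systematic error
cv_percent = 3.0            # imprecision (coefficient of variation)
z_95 = 1.65

total_error_95 = abs(bias_percent) + z_95 * cv_percent
print(f"Westgard total error (95% of results): {total_error_95:.1f}%")

# Widening the multiplier (e.g., to a one-sided 99.9% limit) is one simple way to move
# a specification closer to covering essentially all results.
z_999 = 3.09
print(f"Total error at ~99.9% coverage: {abs(bias_percent) + z_999 * cv_percent:.1f}%")
```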

  8. Problems in estimating age-specific survival rates from recovery data of birds ringed as young

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, Kenneth P.; White, Gary C.

    1985-01-01

    (1) The life table model is frequently employed in the analysis of ringer samples of young in bird populations. The basic model is biologically unrealistic and of little use in making inferences concerning age-specific survival probabilities. (2) This model rests on a number of restrictive assumptions, the failure of which causes serious biases. Several important assumptions are not met with real data and the estimators of age-specific survival are not robust enough to these failures. (3) Five major problems in the use of the life table method are reviewed. Examples are provided to illustrate several of the problems involved in using this method in making inferences about survival rates and its age-specific nature. (4) We conclude that this is an invalid procedure and it should not be used. Furthermore, ringing studies involving only young birds are pointless as regards survival estimation because no valid method exists for estimating age-specific or time-specific survival rates from such data. (5) In our view, inferences about age-specific survival rates are possible only if both young and adult (or young, subadult and adult) age classes are ringed each year for k years (k ≥ 2).

  9. Self-calibration and biconvex compressive sensing

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang; Strohmer, Thomas

    2015-11-01

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By ‘lifting’ this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.
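
    SparseLift itself lifts the bilinear problem to a convex program solved by linear programming; the sketch below instead illustrates the underlying biconvex structure y = DAx with a much simpler alternating least-squares loop on random synthetic data. The dimensions, gain model and scale-fixing step are assumptions made for illustration and are not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Problem setup: y = D A x with unknown signal x and unknown diagonal calibration D.
m, n = 60, 20
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
d_true = 1.0 + 0.2 * rng.normal(size=m)     # per-sensor gains close to 1
y = d_true * (A @ x_true)

# Alternating least squares: fix d and solve for x, then fix x and solve for d.
d = np.ones(m)
for _ in range(50):
    x, *_ = np.linalg.lstsq(d[:, None] * A, y, rcond=None)   # y ~ diag(d) A x with d fixed
    Ax = A @ x
    d = y * Ax / (Ax**2 + 1e-12)                             # per-row 1-D least squares for d

# The factorization is only determined up to a scalar; rescale before comparing.
scale = np.dot(x, x_true) / np.dot(x, x)
print("relative error in x:", np.linalg.norm(scale * x - x_true) / np.linalg.norm(x_true))
print("relative error in d:", np.linalg.norm(d / scale - d_true) / np.linalg.norm(d_true))
```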

  10. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to manufacture or initially transfer calibration or reference sources containing plutonium, for...) Chemical and physical form and maximum quantity of plutonium in the source; (ii) Details of construction and design; (iii) Details of the method of incorporation and binding of the plutonium in the...

  11. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... to manufacture or initially transfer calibration or reference sources containing plutonium, for...) Chemical and physical form and maximum quantity of plutonium in the source; (ii) Details of construction and design; (iii) Details of the method of incorporation and binding of the plutonium in the...

  12. Worrying about the future: An episodic specificity induction impacts problem solving, reappraisal, and well-being.

    PubMed

    Jing, Helen G; Madore, Kevin P; Schacter, Daniel L

    2016-04-01

    Previous research has demonstrated that an episodic specificity induction--brief training in recollecting details of a recent experience--enhances performance on various subsequent tasks thought to draw upon episodic memory processes. Existing work has also shown that mental simulation can be beneficial for emotion regulation and coping with stressors. Here we focus on understanding how episodic detail can affect problem solving, reappraisal, and psychological well-being regarding worrisome future events. In Experiment 1, an episodic specificity induction significantly improved participants' performance on a subsequent means-end problem solving task (i.e., more relevant steps) and an episodic reappraisal task (i.e., more episodic details) involving personally worrisome future events compared with a control induction not focused on episodic specificity. Imagining constructive behaviors with increased episodic detail via the specificity induction was also related to significantly larger decreases in anxiety, perceived likelihood of a bad outcome, and perceived difficulty to cope with a bad outcome, as well as larger increases in perceived likelihood of a good outcome and indicated use of active coping behaviors compared with the control. In Experiment 2, we extended these findings using a more stringent control induction, and found preliminary evidence that the specificity induction was related to an increase in positive affect and decrease in negative affect compared with the control. Our findings support the idea that episodic memory processes are involved in means-end problem solving and episodic reappraisal, and that increasing the episodic specificity of imagining constructive behaviors regarding worrisome events may be related to improved psychological well-being. PMID:26820166

  13. Graphical R-matrix atomic collision environment (G RACE): the problem specification stage

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; McMinn, A.; Burke, P. G.; Burke, V. M.; Noble, C. J.

    1993-12-01

    In this paper we introduce the concept of a graphical R-matrix atomic collision environment (GRACE). GRACE couples the graphical capability of powerful workstations with the processing power of supercomputers to provide an environment for the study of atomic collision properties and processes. At the core of GRACE is a new generation R-matrix program package, which is used to compute properties characterising electron-atom and electron-ion collisions. One of the motivations behind the project is to render this package simple to use by novice and experienced users alike, thereby significantly improving its usefulness to the physics community. GRACE is composed of a problem specification stage, a computation stage, and an interpretation stage. The focus of this paper is a description of the X Window graphical user interface which constitutes the problem specification stage of GRACE.

  14. School Vandalism and Break-Ins. Problem-Oriented Guides for Police. Problem-Specific Guides Series, No. 35

    ERIC Educational Resources Information Center

    Johnson, Kelly Dedel

    2005-01-01

    This guide addresses school vandalism and break-ins, describing the problem and reviewing the risk factors. It also discusses the associated problems of school burglaries and arson. The guide then identifies a series of questions to help analyze each local problem. Finally, it reviews responses to the problem, and what is known about them from…

  15. Mental Health Problems during Puberty: Tanner Stage-Related Differences in Specific Symptoms. The TRAILS Study

    ERIC Educational Resources Information Center

    Oldehinkel, Albertine J.; Verhulst, Frank C.; Ormel, Johan

    2011-01-01

    The aim of this study was to investigate associations between specific mental health problems and pubertal stage in (pre)adolescents participating in the Dutch prospective cohort study TRAILS (first assessment: N = 2230, age 11.09 ± 0.56, 50.8% girls; second assessment: N = 2149, age 13.56 ± 0.53, 51.0% girls). Mental…

  16. TH-C-BRD-05: Reducing Proton Beam Range Uncertainty with Patient-Specific CT HU to RSP Calibrations Based On Single-Detector Proton Radiography

    SciTech Connect

    Doolan, P; Sharp, G; Testa, M; Lu, H-M; Bentefour, E; Royle, G

    2014-06-15

    Purpose: Beam range uncertainty in proton treatment comes primarily from converting the patient's X-ray CT (xCT) dataset to relative stopping power (RSP). Current practices use a single curve for this conversion, produced by a stoichiometric calibration based on tissue composition data for average, healthy, adult humans, but not for the individual in question. Proton radiographs produce water-equivalent path length (WEPL) maps, dependent on the RSP of tissues within the specific patient. This work investigates the use of such WEPL maps to optimize patient-specific calibration curves for reducing beam range uncertainty. Methods: The optimization procedure works on the principle of minimizing the difference between the known WEPL map, obtained from a proton radiograph, and a digitally-reconstructed WEPL map (DRWM) through an RSP dataset, by altering the calibration curve that is used to convert the xCT into an RSP dataset. DRWMs were produced with Plastimatch, an in-house developed software, and an optimization procedure was implemented in Matlab. Tests were made on a range of systems including simulated datasets with computed WEPL maps and phantoms (anthropomorphic and real biological tissue) with WEPL maps measured by single detector proton radiography. Results: For the simulated datasets, the optimizer showed excellent results. It was able to either completely eradicate or significantly reduce the root-mean-square-error (RMSE) in the WEPL for the homogeneous phantoms (to zero for individual materials or from 1.5% to 0.2% for the simultaneous optimization of multiple materials). For the heterogeneous phantom the RMSE was reduced from 1.9% to 0.3%. Conclusion: An optimization procedure has been designed to produce patient-specific calibration curves. Test results on a range of systems with different complexities and sizes have been promising for accurate beam range control in patients. This project was funded equally by the Engineering and Physical Sciences Research

  17. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  18. Dose Calculation on KV Cone Beam CT Images: An Investigation of the Hu-Density Conversion Stability and Dose Accuracy Using the Site-Specific Calibration

    SciTech Connect

    Rong Yi

    2010-10-01

    Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patient positioning, but will potentially be used for dose calculation. The impacts of varying 3 imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We proposed a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water-equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using a Varian Trilogy™ system in a precalibrated mode with fixed tube voltage (125 kVp) but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Reducing the cone angle, however, significantly decreases the HU discrepancy. The HU-density table was affected accordingly. A dose comparison between CT and CBCT image-based plans showed that using the site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to ~2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if the site-specific

  19. Emergence of Coding and its Specificity as a Physico-Informatic Problem

    NASA Astrophysics Data System (ADS)

    Wills, Peter R.; Nieselt, Kay; McCaskill, John S.

    2015-06-01

    We explore the origin-of-life consequences of the view that biological systems are demarcated from inanimate matter by their possession of referential information, which is processed computationally to control choices of specific physico-chemical events. Cells are cybernetic: they use genetic information in processes of communication and control, subjecting physical events to a system of integrated governance. The genetic code is the most obvious example of how cells use information computationally, but the historical origin of the usefulness of molecular information is not well understood. Genetic coding made information useful because it imposed a modular metric on the evolutionary search and thereby offered a general solution to the problem of finding catalysts of any specificity. We use the term "quasispecies symmetry breaking" to describe the iterated process of self-organisation whereby the alphabets of distinguishable codons and amino acids increased, step by step.

  20. The roles of mothers' neighborhood perceptions and specific monitoring strategies in youths' problem behavior.

    PubMed

    Byrnes, Hilary F; Miller, Brenda A; Chen, Meng-Jinn; Grube, Joel W

    2011-03-01

    The neighborhood context can interfere with parents' abilities to effectively monitor their children, but may be related to specific monitoring strategies in different ways. The present study examines the importance of mothers' perceptions of neighborhood disorganization for the specific monitoring strategies they use and how each of these strategies are related to youths' alcohol use and delinquency. The sample consists of 415 mother-child dyads recruited from urban and suburban communities in Western New York state. Youths were between 10 and 16 years of age (56% female), and were mostly Non-Hispanic White and African American (45.3 and 36.5%, respectively). Structural equation modeling shows that mothers who perceive greater neighborhood problems use more rule-setting strategies, but report lower levels of knowledge of their children's whereabouts. Knowledge of whereabouts is related to less youth alcohol use and delinquency through its association with lowered peer substance use, whereas rule-setting is unrelated to these outcomes. Thus, mothers who perceive greater problems in their neighborhoods use less effective monitoring strategies. Prevention programs could address parental monitoring needs based upon neighborhood differences, tailoring programs for different neighborhoods. Further, parents could be apprised of the limitations of rule-setting, particularly in the absence of monitoring their child's whereabouts. PMID:20414711

  1. Problems with tense marking in children with specific language impairment: not how but when

    PubMed Central

    Bishop, Dorothy V. M.

    2014-01-01

    Many children with specific language impairment (SLI) have persisting problems in the correct use of verb tense, but there has been disagreement as to the underlying reason. When we take into account studies using receptive as well as expressive language tasks, the data suggest that the difficulty for children with SLI is in knowing when to inflect verbs for tense, rather than how to do so. This is perhaps not surprising when we consider that tense does not have a transparent semantic interpretation, but depends on complex relationships between inflections and hierarchically organized clauses. An explanation in terms of syntactic limitations contrasts with a popular morpho-phonological account, the Words and Rules model. This model, which attributes problems to difficulties with applying a rule to generate regular inflected forms, has been widely applied to adult-acquired disorders. There are striking similarities in the pattern of errors in adults with anterior aphasia and children with SLI, suggesting that impairments in appreciation of when to mark tense may apply to acquired as well as developmental disorders. PMID:24324242

  2. Student Party Riots. Problem-Oriented Guides for Police. Problem-Specific Guides Series. Guide Number 39

    ERIC Educational Resources Information Center

    Madensen, Tamara D.; Eck, John E.

    2006-01-01

    Alcohol-related riots among university students pose a significant problem for police agencies that serve college communities. The intensity of the disturbances may vary. However, the possible outcomes include property destruction and physical violence and are a serious threat to community and officer safety. This report provides a framework for…

  3. Bomb Threats in Schools. Problem-Oriented Guides for Police. Problem-Specific Guides Series. Guide Number 32

    ERIC Educational Resources Information Center

    Newman, Graeme R.

    2005-01-01

    This guide addresses the problem of bomb threats in schools, public or private, kindergarten through 12th grade. Colleges and universities are excluded because they generally differ from schools. The guide reviews the factors that increase the risk of bomb threats in schools and then identifies a series of questions that might assist departments…

  4. Phonological working memory impairments in children with specific language impairment: where does the problem lie?

    PubMed Central

    Alt, Mary

    2010-01-01

    Purpose The purpose of this study was to determine which factors contribute to the lexical learning deficits of children with Specific Language Impairment (SLI). Method Participants were 40 children aged 7-8 years, half of whom were diagnosed with SLI and half of whom had normal language skills. We tested hypotheses about the contributions to word learning of the initial encoding of phonological information and of the link to long-term memory. Children took part in a computer-based fast-mapping task which manipulated word length and phonotactic probability to address the hypotheses. The task had a recognition and a production component. Data were analyzed using mixed ANOVAs with post-hoc testing. Results Results indicate that the main problem for children with SLI lies in initial encoding, with implications for limited capacity. There was no strong evidence for specific deficits in the link to long-term memory. Conclusions We were able to ascertain which aspects of lexical learning are most problematic for children with SLI in terms of fast-mapping. These findings may allow clinicians to focus intervention on known areas of weakness. Future directions include extending these findings to slow-mapping scenarios. PMID:20943232

  5. Unit-specific calibration of Actigraph accelerometers in a mechanical setup – Is it worth the effort? The effect on random output variation caused by technical inter-instrument variability in the laboratory and in the field

    PubMed Central

    Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L; Andersen, Lars B; Wedderkopp, Niels; Froberg, Karsten

    2008-01-01

    Background Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefore, the primary aim of this study was to calculate and apply unit-specific calibration factors in multiple accelerometers in order to examine the impact on random output variation caused by inter-instrument variability. Methods Instrument-specific calibration factors were estimated in 25 MTI and 53 CSA accelerometers in a mechanical setup using four different settings varying in frequency and/or amplitude. The calibration effect was analysed by comparing raw and calibrated data after applying unit-specific calibration factors to data obtained during quality checks in a mechanical setup and to data collected during free-living conditions. Results Calibration reduced inter-instrument variability considerably in the mechanical setup, both in the MTI instruments (between-unit SD: raw = 195 counts*min-1 vs. calibrated = 65 counts*min-1) and in the CSA instruments (between-unit SD: raw = 343 counts*min-1 vs. calibrated = 67 counts*min-1). However, applying the derived calibration to children's and adolescents' free-living physical activity data did not alter the coefficient of variation (CV) (children: raw CV = 30.2% vs. calibrated CV = 30.4%; adolescents: raw CV = 36.3% vs. calibrated CV = 35.7%). High correlations (r = 0.99 and r = 0.98, respectively) were observed between raw and calibrated field data, and the proportion of the total variation caused by the MTI and CSA monitors was estimated to be only 1.1% and 4.2%, respectively. Compared to the CSA instruments, a significantly increased (9.95%) mean acceleration response was observed post hoc in the batch of MTI instruments, in which a significantly reduced inter-instrumental reliability
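
    A minimal sketch of the unit-specific calibration idea is given below, using hypothetical mechanical-setup outputs: each unit's output is rescaled to the batch reference, and the between-unit coefficient of variation is computed before and after calibration.

```python
import numpy as np

# Hypothetical mechanical-setup outputs (counts/min) for a batch of accelerometer units
# shaken under one identical frequency/amplitude setting; the batch mean serves as reference.
unit_output = np.array([2110.0, 1820.0, 2045.0, 1950.0, 2230.0, 1895.0])
reference = unit_output.mean()

# Unit-specific calibration factor: scale each unit so its output matches the reference.
calibration_factor = reference / unit_output
calibrated_output = unit_output * calibration_factor

def cv_percent(x):
    """Coefficient of variation (%), the between-unit variability measure used above."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

print(f"between-unit CV raw:        {cv_percent(unit_output):.1f}%")
print(f"between-unit CV calibrated: {cv_percent(calibrated_output):.1f}%")   # ~0% by construction
```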

  6. The Contribution of Domain-Specific Knowledge in Predicting Students' Proportional Word Problem-Solving Performance

    ERIC Educational Resources Information Center

    Jitendra, Asha K.; Lein, Amy E.; Star, Jon R.; Dupuis, Danielle N.

    2013-01-01

    This study explored the extent to which domain-specific knowledge predicted proportional word problem-solving performance. We tested 411 seventh-grade students on conceptual and procedural fraction knowledge, conceptual and procedural proportion knowledge, and proportional word problem solving. Multiple regression analyses indicated that all four…

  7. Calibration of neutron-sensitive devices

    NASA Astrophysics Data System (ADS)

    Gressier, V.; Taylor, G. C.

    2011-12-01

    The calibration of a neutron-sensitive device can range from a simple calibration factor at a single energy or energy distribution to a full response characterization over the entire energy range to which the device is sensitive. As the responses of neutron-sensitive devices and the fluence-to-dose-equivalent conversion coefficients can vary with neutron energy and incident angle, both simulation and experiments in standard neutron fields are required. Although several ISO standards present calibration principles in general and detailed discussion on many specific areas, there are certain omissions and limitations that this paper intends to highlight, along with some new recommendations derived from the recent literature, mainly focused on the effective centre, corrections for geometry and neutron scattering, as well as the problem of calibrating in terms of personal dose equivalent.

  8. Integrating a Gravity Simulation and Groundwater Numerical Modeling on the Calibration of Specific Yield for Choshui Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Hsu, C. Y.

    2014-12-01

    In Taiwan, groundwater resources play a vital role in regional supply management. Because groundwater has been used without proper management for decades, several kinds of natural hazards, such as land subsidence, have occurred. The Choshui alluvial fan is one of the hot spots in Taiwan. For sustainable management, accurate estimation of recharge is the most important information. The accuracy is highly related to the uncertainty of the specific yield (Sy). Moreover, because the value of Sy must be determined via a multi-well pumping test, the installation cost of the multi-well system limits the number of field tests. The low spatial density of field tests for Sy therefore leaves the estimation of recharge highly uncertain. The proposed method combines MODFLOW with a numerical integration procedure that calculates the gravity variations. Heterogeneous parameters (Sy) can be assigned to MODFLOW cells. An inverse procedure is then applied to interpret and identify the Sy value around the gravity station. The proposed methodology is applied to the Choshui alluvial fan, one of the most important groundwater basins in Taiwan. Three gravity measurement stations, "GS01", "GS02" and "GS03", were established. The location of GS01 is in the neighborhood of a groundwater observation well where pumping test data are available. The Sy value estimated from the gravity measurements collected at GS01 compares favorably with that obtained from the traditional pumping test. The comparison verifies the correctness and accuracy of the proposed method. We then use the gravity measurements collected at GS02 and GS03 to estimate the Sy values in areas where no pumping test data exist. Using the estimates obtained from gravity measurements, the spatial distribution of specific yield for the aquifer can be further refined. The proposed method is a cost-saving and accurate alternative for the estimation of specific yield in
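    The physical relation behind such a gravimetric estimate can be illustrated with a short sketch (Python; this is not the study's MODFLOW-coupled inversion). For a laterally extensive water-table change, the gravity change at a station is approximately that of an infinite slab of water of thickness Sy times the head change, so Sy can be backed out from co-located gravity and head observations; the example numbers are invented.

        import numpy as np

        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        RHO_WATER = 1000.0   # density of water, kg m^-3

        def specific_yield_from_gravity(delta_g_microgal, delta_head_m):
            # Bouguer (infinite-slab) approximation: dg = 2*pi*G*rho_w*Sy*dh
            delta_g = delta_g_microgal * 1e-8   # 1 microGal = 1e-8 m s^-2
            return delta_g / (2.0 * np.pi * G * RHO_WATER * delta_head_m)

        # a 1 m water-table rise accompanied by an 8.4 microGal gravity increase
        print(specific_yield_from_gravity(delta_g_microgal=8.4, delta_head_m=1.0))  # ~0.20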

  9. [Specific features and problems in the pharmacotherapy of schizophrenic psychoses in children and adolescents].

    PubMed

    Fegert, J M

    2002-10-01

    The present article reviews the specific problems of the off-label use of atypical and typical neuroleptics in the treatment of adolescent patients with schizophrenia. There is a considerable gap in the empirical knowledge of treatment efficacy and long term safety in adult populations as compared to children and adolescents. This means that in most in-patients with early onset schizophrenia some sort of typical or atypical neuroleptic drug is currently used beyond license. From a legal point of view there is no protection for treated children or adolescents and their parents as the manufacturer of these pharmaceutical products does not assume liability for off-label use. Whether the doctor can be held liable in these cases, depends on the quality of the information he provides to his patients and the patients' consent. Legal changes such as the Food and Drug Administration Modernization Act (FDAMA) and Pediatric Rule in the USA have brought forward more research to the benefit of children and adolescents. In 2002 a trend has been noticed in both the European and the German Parliament to improve the general conditions for using drugs that are well-established in adult medicine for the treatment of children. PMID:12474310

  10. Problem Specification for FY12 Modeling of UNF During Extended Storage

    SciTech Connect

    Clarno, Kevin T; Howard, Rob L

    2012-03-01

    The Nuclear Energy Advanced Modeling and Simulation (NEAMS) program of the Advanced Modeling and Simulation Office (AMSO) of the US Department of Energy, Office of Nuclear Energy (DOE/NE) has invested in the initial extension and application of advanced nuclear simulation tools to address relevant needs in evaluating the performance of used nuclear fuel (UNF) during extended periods of dry storage. There are many significant challenges associated with the prediction of the behavior of used fuel during extended periods of dry storage and subsequent transportation. The initial activities are focused on integration with the Used Fuel Disposition (UFD) Campaign of the DOE/NE and on a demonstration of the Advanced Multi-Physics (AMP) Nuclear Fuel Performance code (AMPFuel) for modeling the mechanical state of the cladding after decades of storage. This initial focus will model the long-term storage of the UNF, account for the effect, and generation, of radially and circumferentially oriented hydride precipitates within the cladding, and predict the end-of-storage (EOS) mechanical state (stress, strain) of the cladding. Predicting the EOS state of the cladding is significant because it (1) provides an estimate of the margin to failure of the cladding during nominal storage operation and (2) establishes the initial state of the fuel for post-storage transportation. Because there are significant uncertainties associated with the storage conditions, hydride precipitate formation, and the beginning-of-storage (BOS) condition of the UNF, this work will also allow for the development of a rigorous capability to evaluate the relative sensitivities of the uncertainties and can help to guide the experimental and analysis efforts of the UFD Campaign. This document is focused on specifying the problem that will be solved with AMPFuel. An associated report documents the specifics of the constitutive model that will be developed and implemented in AMPFuel to account for the presence

  11. Workers' Education in Industrialised Countries and Its Specific Problems in Relation to Development.

    ERIC Educational Resources Information Center

    Labour Education, 1986

    1986-01-01

    Examines several problems that need to be addressed concerning world crisis: war, poverty, unemployment, overpopulation, environmental issues, and housing; developed versus developing countries; and social justice. The task for workers' education in relation to these problems is discussed. (CT)

  12. 3D reconstruction of a patient-specific surface model of the proximal femur from calibrated x-ray radiographs: A validation study

    SciTech Connect

    Zheng Guoyan; Schumann, Steffen

    2009-04-15

    Twenty-three femurs (one plastic bone and twenty-two cadaver bones) with both nonpathologic and pathologic cases were considered to validate a statistical shape model based technique for three-dimensional (3D) reconstruction of a patient-specific surface model from calibrated x-ray radiographs. The 3D reconstruction technique is based on an iterative nonrigid registration of the features extracted from a statistically instantiated 3D surface model to those interactively identified from the radiographs. The surface models reconstructed from the radiographs were compared to the associated ground truths derived either from a 3D CT-scan reconstruction method or from a 3D laser-scan reconstruction method, and an average error distance of 0.95 mm was found. Compared to existing work, our approach has the advantage of seamlessly handling both nonpathologic and pathologic cases, even when the statistical shape model that we used was constructed from surface models of nonpathologic bones.

  13. SAR calibration: A technology review

    NASA Technical Reports Server (NTRS)

    Larson, R. W.; Politis, D. T.; Shuchman, R. A.

    1983-01-01

    Various potential applications of amplitude-calibrated SAR systems are briefly described, along with an estimate of calibration performance requirements. A review of the basic SAR calibration problem is given. For background purposes and to establish consistent definition of terms, various conventional SAR performance parameters are reviewed along with three additional parameters which are directly related to calibrated SAR systems. Techniques for calibrating a SAR are described. Included in the results presented are: calibration philosophy and procedures; review of the calibration signal generator technology development with results describing both the development of instrumentation and internal calibration measurements for two SAR systems; summary of analysis and measurements required to determine optimum retroreflector design and configuration for use as a reference for the absolute calibration of a SAR system; and summary of techniques for in-flight measurements of SAR antenna response.

  14. English for Specific Purposes (ESP) for Jordanian Tourist Police in Their Workplace: Needs and Problems

    ERIC Educational Resources Information Center

    Aldohon, Hatem Ibrahim

    2014-01-01

    With the rapid development of the global tourism industry, designing ESP-based curricula is now more vitally needed than ever. To work towards this goal, analyzing learners' problems and needs has become unavoidable. Therefore, this study aimed at examining the needs, functions and problems of 46 tourist police serving in different…

  15. Calibration of a plant uptake model with plant- and site-specific data for uptake of chlorinated organic compounds into radish.

    PubMed

    Trapp, Stefan

    2015-01-01

    The uptake of organic pollutants by plants is an important process for the exposure of humans to toxic chemicals. The objective of this study was to calibrate the parameters of a common plant uptake model by comparison to experimental results from the literature. Radish was grown in contaminated soil (maximum concentration 2.9 mg/kg dw) and on a control plot. Uptake of HCHs, HCB, PCBs, and DDT plus metabolites was studied (log K(ow) 3.66 to 7.18). Measured root-to-soil BCFs were near 1 g/g dw on the control plot and about a factor of 10 lower for the contaminated soil. With the default data set, uptake into roots of most substances was underpredicted by up to a factor of 100. The use of site-specific data improved the predictions. Consideration of uptake from air into radish bulbs was relevant for PCBs. Measured shoot BCFs ranged from <0.1 to >10 g/g dw and were much better predicted by the standard model. The results with default data and site-specific data were similar. Deposition from air was the major uptake mechanism into shoots. Transport from soil with resuspended particles was only relevant for the contaminated plot. The calculation results (in dry weight) were most sensitive to changes in the water content of plant tissue. PMID:25426767

  16. The Effect of General and Drug-Specific Family Environments on Comorbid and Drug-Specific Problem Behavior: A Longitudinal Examination

    PubMed Central

    Epstein, Marina; Hill, Karl G.; Bailey, Jennifer A.; Hawkins, J. David

    2013-01-01

    Previous research has shown that the development of alcohol and tobacco dependence is linked, and that both are influenced by family environmental and intrapersonal factors, many of which likely interact over the life course. The current study identifies general and substance-specific predictors of comorbid problem behavior, tobacco dependence, and alcohol abuse and dependence. Specifically, we examine the effects of general and alcohol- and tobacco-specific environmental influences in the family of origin (ages 10–18) and family of cohabitation (ages 27–30) on problem behavior and alcohol- and tobacco-specific outcomes at age 33. General environmental factors include family monitoring, conflict, bonding, and involvement. Alcohol environment includes parental alcohol use, parents’ attitudes toward alcohol, and children’s involvement in family drinking. Tobacco-specific environment is assessed analogously. Additionally, analyses include the effect of childhood behavioral disinhibition and control for demographics and initial behavior problems. Analyses were based on 469 participants drawn from the Seattle Social Development Project (SSDP) sample. Results indicated that (a) environmental factors within the family of origin and the family of cohabitation are both important predictors of problem behavior at age 33; (b) family of cohabitation influences partially mediate the effects of family of origin environments; (c) considerable continuity exists between adolescent and adult general and tobacco (but not alcohol) environments; age 18 alcohol and tobacco use partially mediates these relationships; and (d) childhood behavioral disinhibition contributed to age 33 outcomes, over and above the effects of family of cohabitation mediators. Implications for preventive interventions are discussed. PMID:22799586

  17. Performance in chemistry problem solving: A study of expert/novice strategies and specific cognitive factors

    NASA Astrophysics Data System (ADS)

    Engemann, Joseph Francis

    The purpose of this study was (a) to determine whether any relationships exist between chemistry problem-solving performance and field dependent-independent cognitive style, logical reasoning ability, mental capacity, age, gender, and/or academic level, and (b) to compare the problem-solving strategies employed by novices, advanced novices, and experts in chemistry. The Group Assessment of Logical Thinking (GALT), the Group Embedded Figures Test (GEFT), and the Figural Intersection Test (FIT) were administered to 29 freshman and junior university chemistry students and 19 Regents and Advanced Placement high school chemistry students. In addition, six mole concept problems were given to these participants, as well as to another 25 participants classified as advanced novices or experts in chemistry. All six solutions for each participant were evaluated in order to obtain a problem-solving performance score. Participants were audiotaped as they "talked aloud" during the problem-solving session. Tapes were transcribed into protocols, 37 of which were selected and analyzed for choice of problem-solving strategy and time to solution. Analyses of variance were conducted to look for significant effects of gender or academic level on field dependent-independent cognitive style, logical reasoning ability, mental capacity, and problem-solving performance. These analyses provided evidence of a significant relationship between the conservation subtest of the GALT and gender (p < .05), between the proportional reasoning subtest of the GALT and gender (p < .05), and between mental capacity and academic level (p < .01). A multiple regression analysis reported that problem-solving performance is related to an interaction between logical reasoning ability and mental capacity. A relationship between academic level and chemistry problem-solving performance was also reported. From an analysis of verbal protocols of successful problem solvers at all three levels of experience, the

  18. Adolescents' internalizing and externalizing problems predict their affect-specific HPA and HPG axes reactivity.

    PubMed

    Han, Georges; Miller, Jonas G; Cole, Pamela M; Zahn-Waxler, Carolyn; Hastings, Paul D

    2015-09-01

    We examined psychopathology-neuroendocrine associations in relation to the transition into adolescence within a developmental framework that acknowledged the interdependence of the HPA and HPG hormone systems in the regulation of responses to everyday affective contexts. Saliva samples were collected during anxiety and anger inductions from 51 young adolescents (M = 13.47, SD = 0.60 years) to evaluate cortisol, DHEA, and testosterone responses. Internalizing and externalizing problems were assessed at pre-adolescence (M = 9.27, SD = 0.58 years) while youths were in elementary school and concurrently with hormones in early adolescence. Externalizing problems from elementary school predicted adolescents' reduced DHEA reactivity during anxiety induction. Follow-up analyses simultaneously examining the contributions of elementary school and adolescent problems showed a trend suggesting that youths with higher levels of internalizing problems during elementary school eventuated in a profile of heightened DHEA reactivity as adolescents undergoing anxiety induction. For both the anxiety and the anger inductions, it was normative for DHEA and testosterone to be positively coupled. Adolescents with high externalizing problems but low internalizing problems marshaled dual axes co-activation during anger induction in the form of positive cortisol-testosterone coupling. This is some of the first evidence suggesting that affective context determines whether dual axes coupling is reflective of normative or problematic functioning in adolescence. PMID:25604092

  19. The effect of general and drug-specific family environments on comorbid and drug-specific problem behavior: a longitudinal examination.

    PubMed

    Epstein, Marina; Hill, Karl G; Bailey, Jennifer A; Hawkins, J David

    2013-06-01

    Previous research has shown that the development of alcohol and tobacco dependence is linked and that both are influenced by environmental and intrapersonal factors, many of which likely interact over the life course. The present study examines the effects of general and alcohol- and tobacco-specific environmental influences in the family of origin (ages 10-18) and family of cohabitation (ages 27-30) on problem behavior and alcohol- and tobacco-specific outcomes at age 33. General environmental factors include family management, conflict, bonding, and involvement. Alcohol environment includes parental alcohol use, parents' attitudes toward alcohol, and children's involvement in family drinking. Tobacco-specific environment is assessed analogously. Additionally, analyses include the effects of childhood behavioral disinhibition, initial behavior problems, and age 18 substance use. Analyses were based on 469 participants drawn from the Seattle Social Development Project (SSDP) sample. Results indicated that (a) environmental factors within the family of origin and the family of cohabitation are both important predictors of problem behavior at age 33; (b) family of cohabitation influences partially mediate the effects of family of origin environments; (c) considerable continuity exists between adolescent and adult general and tobacco (but not alcohol) environments; age 18 alcohol and tobacco use partially mediates these relationships; and (d) childhood behavioral disinhibition contributed to age 33 outcomes, over and above the effects of family of cohabitation mediators. Implications for preventive interventions are discussed. PMID:22799586

  20. Neural and cognitive correlates of the common and specific variance across externalizing problems in young adolescence.

    PubMed

    Castellanos-Ryan, Natalie; Struve, Maren; Whelan, Robert; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Bromberg, Uli; Büchel, Christian; Flor, Herta; Fauth-Bühler, Mira; Frouin, Vincent; Gallinat, Juergen; Gowland, Penny; Heinz, Andreas; Lawrence, Claire; Martinot, Jean-Luc; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Robbins, Trevor W; Smolka, Michael N; Schumann, Gunter; Garavan, Hugh; Conrod, Patricia J

    2014-12-01

    Longitudinal and family-based research suggests that conduct disorder, substance misuse, and ADHD involve common forms of dysfunction as well as more specific dysfunctions unique to each condition. Using direct measures of brain function, this study likewise found evidence of both common and disorder-specific perturbations. PMID:25073448

  1. Increasing the Teacher Rate of Behaviour Specific Praise and its Effect on a Child with Aggressive Behaviour Problems

    ERIC Educational Resources Information Center

    Moffat, Thecla Kudakwashe

    2011-01-01

    A single subject design was used to investigate the effectiveness of an increase in teacher behaviour-specific praise statements to address anti-social behaviours demonstrated by a student who displays aggressive behaviours. Researchers agree that praise is effective in improving problem behaviours. They also agree that training teachers to use…

  2. Brain Hyper-Connectivity and Operation-Specific Deficits during Arithmetic Problem Solving in Children with Developmental Dyscalculia

    ERIC Educational Resources Information Center

    Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B.; Geary, David C.; Menon, Vinod

    2015-01-01

    Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who…

  3. Non-Word Repetition in Dutch-Speaking Children with Specific Language Impairment with and without Reading Problems

    ERIC Educational Resources Information Center

    Rispens, Judith; Parigger, Esther

    2010-01-01

    Recently, English studies have shown a relationship between non-word repetition (NWR) and the presence of reading problems (RP). Children with specific language impairment (SLI) but without RP performed similarly to their typically developing (TD) peers, whereas children with SLI and RP performed significantly worse on an NWR task. The current…

  4. Toward Greater Specificity in Identifying Associations among Interparental Aggression, Child Emotional Reactivity to Conflict, and Child Problems

    ERIC Educational Resources Information Center

    Davies, Patrick T.; Cicchetti, Dante; Martin, Meredith J.

    2012-01-01

    This study examined specific forms of emotional reactivity to conflict and temperamental emotionality as explanatory mechanisms in pathways among interparental aggression and child psychological problems. Participants of the multimethod, longitudinal study included 201 two-year-old children and their mothers who had experienced elevated violence…

  5. On Thinking and Feeling Bad: Do Client Problems Derive from a Common Irrationality or Specific Irrational Beliefs?

    ERIC Educational Resources Information Center

    Erickson, Chris D.; And Others

    Two studies have reported that low self-esteem is related to the holding of four specific irrational beliefs; further studies have suggested that these and other irrational beliefs are associated with different client problems. This study attempted to replicate the self-esteem findings with a younger population and improved controls and to explore…

  6. Relations between Young Students' Strategic Behaviours, Domain-Specific Self-Concept, and Performance in a Problem-Solving Situation

    ERIC Educational Resources Information Center

    Dermitzaki, Irini; Leondari, Angeliki; Goudas, Marios

    2009-01-01

    This study aimed at investigating the relations between students' strategic behaviour during problem solving, task performance and domain-specific self-concept. A total of 167 first- and second-graders were individually examined in tasks involving cubes assembly and in academic self-concept in mathematics. Students' cognitive, metacognitive, and…

  7. A comparison of alternative multiobjective calibration strategies for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Savenije, Hubert H. G.; Matgen, Patrick; Pfister, Laurent

    2007-03-01

    A conceptual hydrological model structure contains several parameters that have to be estimated through matching observed and modeled watershed behavior in a calibration process. The requirement that a model simulation matches different aspects of system response at the same time has led the calibration problem toward a multiobjective approach. In this work we compare two multiobjective calibration approaches, each of which represents a different calibration philosophy. The first calibration approach is based on the concept of Pareto optimality and consists of calibrating all parameters with respect to a common set of objectives in one calibration stage. This approach results in a set of Pareto-optimal solutions representing the trade-offs between the selected calibration objectives. The second is a stepped calibration approach (SCA), which implies a stepwise calibration of sets of parameters that are associated with specific aspects of the system response. This approach replicates the steps followed by a hydrologist in manual calibration and develops a single solution. The comparison is performed considering the same set of objectives for the two approaches and two model structures of different levels of complexity. The differences between the two approaches, their reciprocal utility, and the practical implications involved in their application are analyzed and discussed using the Hesperange catchment case, an experimental basin in the Alzette River basin in Luxembourg. We show that the two approaches are not necessarily conflicting but can be complementary. The first approach provides useful information about the deficiencies of a model structure and therefore helps the model development, while the second attempts to determine a solution that is consistent with the data available. We also show that with increasing model complexity it becomes possible to reproduce the observations more accurately. As a result, the solutions for the different calibration objectives
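    To illustrate the first (Pareto) strategy, the sketch below (Python; not the authors' code) filters candidate parameter sets down to the non-dominated ones for two calibration objectives that are both minimized, e.g. an error measure on high flows and one on low flows; the objective values are invented.

        import numpy as np

        def pareto_front(objectives):
            # boolean mask of non-dominated rows, with every objective to be minimized
            obj = np.asarray(objectives, dtype=float)
            keep = np.ones(obj.shape[0], dtype=bool)
            for i in range(obj.shape[0]):
                # row i is dominated if some row is <= on all objectives and < on at least one
                dominated = np.any(np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1))
                keep[i] = not dominated
            return keep

        objs = np.array([[0.30, 0.90],
                         [0.45, 0.40],
                         [0.80, 0.35],
                         [0.50, 0.50]])   # the last set is dominated by [0.45, 0.40]
        print(pareto_front(objs))          # [ True  True  True False]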

  8. General and Specific Predictors of Behavioral and Emotional Problems Among Adolescents

    ERIC Educational Resources Information Center

    Windle, M.; Mason, W. A.

    2004-01-01

    Based on a sample of 1,218 students in the 10th and 11th grades, 14 variables measuring behavioral and emotional problems were modeled as four factors via confirmatory factor analysis. The factors were labeled Polydrug Use, Delinquency, Negative Affect, and Academic Orientation. A similar four-factor structure was supported 1 year later, and a…

  9. Children's Use of Domain-Specific Knowledge and Domain-General Strategies in Novel Problem Solving.

    ERIC Educational Resources Information Center

    English, Lyn D.

    Seventy-two Australian children aged from 4 years 6 months to 9 years 10 months were individually administered a set of six combinatorial problems involving the dressing of toy bears in all possible combinations of clothing items. Six age groups were represented: eight children were in each of the 4, 5, and 6 year categories; and 16 children were…

  10. Vocabulary Notebook: A Digital Solution to General and Specific Vocabulary Learning Problems in a CLIL Context

    ERIC Educational Resources Information Center

    Bazo, Plácido; Rodríguez, Romén; Fumero, Dácil

    2016-01-01

    In this paper, we will introduce an innovative software platform that can be especially useful in a Content and Language Integrated Learning (CLIL) context. This tool is called Vocabulary Notebook, and has been developed to solve all the problems that traditional (paper) vocabulary notebooks have. This tool keeps focus on the personalisation of…

  11. The Effect of General Versus Specific Heuristics in Mathematical Problem-Solving Tasks.

    ERIC Educational Resources Information Center

    Smith, James Philip

    This study investigated differences in problem-solving performance following instruction varying in the type of heuristic advice given. The subjects, 176 college students with two years of high school mathematics experience, were provided programed instruction over a three-week period in three topic areas: finite geometry, Boolean algebra, and…

  12. Preschool Children with Intellectual Disability: Syndrome Specificity, Behaviour Problems, and Maternal Well-Being

    ERIC Educational Resources Information Center

    Eisenhower, A. S.; Baker, B. L.; Blacher, J.

    2005-01-01

    Background: Children with intellectual disability (ID) are at heightened risk for behaviour problems and diagnosed mental disorder. Likewise, mothers of children with ID are more stressed than mothers of typically developing children. Research on behavioural phenotypes suggests that different syndromes of ID may be associated with distinct child…

  13. A Genre-Specific Investigation of Video Game Engagement and Problem Play in the Early Life Course.

    PubMed

    Ream, Geoffrey L; Elliott, Luther C; Dunlap, Eloise

    2013-05-21

    This study explored predictors of engagement with specific video game genres, and degree of problem play experienced by players of specific genres, during the early life course. Video game players ages 18-29 (n = 692) were recruited in and around video game retail outlets, arcades, conventions, and other video game related contexts in New York City. Participants completed a Computer-Assisted Personal Interview (CAPI) of contemporaneous demographic and personality measures and a Life-History Calendar (LHC) measuring video gaming, school/work engagement, and caffeine and sugar consumption for each year of life from age 6 to the present. Findings were that likelihood of engagement with most genres rose during childhood, peaked at some point during the second decade of life, and declined through emerging adulthood. Cohort effects on engagement also emerged, which were probably attributable to changes in the availability and popularity of various genres over the 12-year age range of our participants. The relationship between age and problem play of most genres was either negative or non-significant. Sensation-seeking was the only consistent positive predictor of problem play. Relationships between other variables and engagement with and problem play of specific genres are discussed in detail. PMID:24688802

  14. A Genre-Specific Investigation of Video Game Engagement and Problem Play in the Early Life Course

    PubMed Central

    Ream, Geoffrey L.; Elliott, Luther C.; Dunlap, Eloise

    2013-01-01

    This study explored predictors of engagement with specific video game genres, and degree of problem play experienced by players of specific genres, during the early life course. Video game players ages 18–29 (n = 692) were recruited in and around video game retail outlets, arcades, conventions, and other video game related contexts in New York City. Participants completed a Computer-Assisted Personal Interview (CAPI) of contemporaneous demographic and personality measures and a Life-History Calendar (LHC) measuring video gaming, school/work engagement, and caffeine and sugar consumption for each year of life from age 6 to the present. Findings were that likelihood of engagement with most genres rose during childhood, peaked at some point during the second decade of life, and declined through emerging adulthood. Cohort effects on engagement also emerged, which were probably attributable to changes in the availability and popularity of various genres over the 12-year age range of our participants. The relationship between age and problem play of most genres was either negative or non-significant. Sensation-seeking was the only consistent positive predictor of problem play. Relationships between other variables and engagement with and problem play of specific genres are discussed in detail. PMID:24688802

  15. Calibration validation revisited or how to make better use of available data: Sub-period calibration

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Hrachowitz, M.; Fenicia, F.; Savenije, H.

    2012-12-01

    Parameter identification of conceptual hydrological models depends largely on calibration, as model parameters are typically non-measurable quantities. For hydrological modeling, the identification of "realistic" parameter sets is a key objective. As a model is intended to be used for prediction in the future, it is also crucial that the model parameters be time transposable. However, previous studies showed that the "best" parameter set can vary significantly over time. Instead of using the "best fit", this study introduces sub-period (SuPer) calibration as a new framework to identify the most "time consistent" parameterization, although potentially sub-optimal in the calibration period. The SuPer calibration framework includes two steps. First, the time series is split into different sub-periods, such as years or seasons. Then the model is calibrated separately for each sub-period and a Pareto front is obtained as the "best fit" for every sub-period. In the second step, those parameter sets are selected that minimize the distance to the Pareto front of each sub-period, which involves an additional multi-objective optimization problem with dimensions equal to the number of sub-periods. The performance of the SuPer calibration framework is evaluated and compared with traditional calibration-validation frameworks for two sub-period combinations: 1) two consecutive years and 2) eight consecutive years as sub-periods. For this evaluation we used the HyMOD model applied to the Wark catchment in the Grand Duchy of Luxembourg. We show that besides being a calibration framework, this approach also has diagnostic capabilities. It can in fact indicate the parameter sets that perform consistently well for all the sub-periods, while it does not require subjective thresholds for defining behavioral parameter sets. It appears that SuPer calibration leads to feasible parameter ranges for the individual sub-periods which differ from parameter ranges defined by traditional model
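    A deliberately simplified sketch of the selection step (Python): with a single error measure per sub-period, the "distance to the sub-period optimum" reduces to the gap between a candidate's error in that sub-period and the best error achieved there, and a time-consistent candidate keeps all of its gaps small. The actual framework works with full Pareto fronts per sub-period and a multi-objective search over those distances; the error values below are invented.

        import numpy as np

        # errors[i, k] = error of candidate parameter set i evaluated on sub-period k
        errors = np.array([[0.21, 0.35, 0.28],
                           [0.19, 0.52, 0.25],
                           [0.26, 0.33, 0.31]])

        gaps = errors - errors.min(axis=0)   # distance to each sub-period's best error
        worst_gap = gaps.max(axis=1)         # each candidate's largest departure from an optimum
        print("most time-consistent candidate:", int(np.argmin(worst_gap)))   # candidate 0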

  16. Specificity of Anti-Tau Antibodies when Analyzing Mice Models of Alzheimer's Disease: Problems and Solutions

    PubMed Central

    Petry, Franck R.; Pelletier, Jérôme; Bretteville, Alexis; Morin, Françoise; Calon, Frédéric; Hébert, Sébastien S.; Whittington, Robert A.; Planel, Emmanuel

    2014-01-01

    Aggregates of hyperphosphorylated tau protein are found in a group of diseases called tauopathies, which includes Alzheimer's disease. The causes and consequences of tau hyperphosphorylation are routinely investigated in laboratory animals. Mice are the models of choice as they are easily amenable to transgenic technology; consequently, their tau phosphorylation levels are frequently monitored by Western blotting using a panel of monoclonal/polyclonal anti-tau antibodies. Given that mouse secondary antibodies can recognize endogenous mouse immunoglobulins (Igs) and the possible lack of specificity with some polyclonal antibodies, non-specific signals are commonly observed. Here, we characterized the profiles of commonly used anti-tau antibodies in four different mouse models: non-transgenic mice, tau knock-out (TKO) mice, 3xTg-AD mice, and hypothermic mice, the latter a positive control for tau hyperphosphorylation. We identified 3 tau monoclonal antibody categories: type 1, characterized by high non-specificity (AT8, AT180, MC1, MC6, TG-3), type 2, demonstrating low non-specificity (AT270, CP13, CP27, Tau12, TG5), and type 3, with no non-specific signal (DA9, PHF-1, Tau1, Tau46). For polyclonal anti-tau antibodies, some displayed non-specificity (pS262, pS409) while others did not (pS199, pT205, pS396, pS404, pS422, A0024). With monoclonal antibodies, most of the interfering signal was due to endogenous Igs and could be eliminated by different techniques: i) using secondary antibodies designed to bind only non-denatured Igs, ii) preparation of a heat-stable fraction, iii) clearing Igs from the homogenates, and iv) using secondary antibodies that only bind the light chain of Igs. All of these techniques removed the non-specific signal; however, the first and the last methods were easier and more reliable. Overall, our study demonstrates a high risk of artefactual signal when performing Western blotting with routinely used anti-tau antibodies, and proposes several

  17. Calibration of the Urbana lidar system

    NASA Technical Reports Server (NTRS)

    Cerny, T.; Sechrist, C. F., Jr.

    1980-01-01

    A method for calibrating data obtained by the Urbana sodium lidar system is presented. First, an expression relating the number of photocounts originating from a specific altitude range to the sodium concentration is developed. This relation is then simplified by normalizing the sodium photocounts with photocounts originating from the Rayleigh region of the atmosphere. To evaluate the calibration expression, the laser linewidth must be known; therefore, a method for measuring the laser linewidth using a Fabry-Perot interferometer is given. The laser linewidth was found to be 6 ± 2.5 pm. Problems due to photomultiplier tube overloading are discussed. Finally, calibrated data are presented. The sodium column abundance exhibits something close to a sinusoidal variation throughout the year, with the winter months showing an enhancement of a factor of 5 to 7 over the summer months.
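    The normalization idea can be sketched as follows (Python; a generic Rayleigh-normalization expression rather than the exact form developed for the Urbana system). Sodium photocounts from altitude z are ratioed against Rayleigh photocounts from a lower reference altitude where the air density is known, which cancels unknown system constants such as transmitted power and receiver efficiency; all numerical values are placeholders.

        def sodium_density(counts_na, z_na, counts_ray, z_ray, n_air_ray,
                           sigma_rayleigh, sigma_na_eff):
            # Na number density from the Rayleigh-normalized photocount ratio,
            # with the 1/z^2 range dependence of the lidar return removed
            ratio = (counts_na * z_na**2) / (counts_ray * z_ray**2)
            return ratio * n_air_ray * sigma_rayleigh / sigma_na_eff

        # placeholder numbers purely to show the call (units: m, m^-3, m^2)
        print(sodium_density(counts_na=1.2e3, z_na=92e3,
                             counts_ray=4.0e4, z_ray=30e3,
                             n_air_ray=3.8e23,          # air number density near 30 km
                             sigma_rayleigh=3.0e-31,    # Rayleigh cross section near 589 nm
                             sigma_na_eff=1.0e-16))     # effective Na resonance cross section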

  18. A new general approach for solving the self-calibration problem on large area 2D ultra-precision coordinate measurement machines

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; Stiblert, Lars; Mattsson, Lars

    2014-05-01

    The manufacturing of flat panel displays requires a number of photomasks for the placement of pixel patterns and supporting transistor arrays. For large area photomasks, dedicated ultra-precision writers have been developed for the production of these chromium patterns on glass or quartz plates. The dimensional tolerances in X and Y for absolute pattern placement on these plates, with areas measured in square meters, are in the range of 200-300 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used having even tighter tolerance requirements. This paper will present how the world standard metrology tool used for verifying large masks, the Micronic Mydata MMS15000, is calibrated without any other references than the wavelength of the interferometers in an extremely well-controlled temperature environment. This process is called self-calibration and is the only way to calibrate the metrology tool, as no square-meter-sized large area 2D traceable artifact is available. The only parameter that cannot be found using self-calibration is the absolute length scale. To make the MMS15000 traceable, a 1D reference rod, calibrated at a national metrology lab, is used. The reference plates used in the calibration of the MMS15000 may have sizes up to 1 m2 and a weight of 50 kg. Therefore, standard methods for self-calibration on a small scale with exact placements cannot be used in the large area case. A new, more general method had to be developed for the purpose of calibrating the MMS15000. Using this method, it is possible to calibrate the measurement tool down to an uncertainty level of <90 nm (3σ) over an area of (0.8 × 0.8) m2. The method used, which is based on the concept of iteration, does not introduce any more noise than the random noise introduced by the measurements, resulting in the lowest possible noise level that can be achieved by any self-calibration method.

  19. Phonological Working Memory Impairments in Children with Specific Language Impairment: Where Does the Problem Lie?

    ERIC Educational Resources Information Center

    Alt, Mary

    2011-01-01

    Purpose: The purpose of this study was to determine which factors contribute to the lexical learning deficits of children with specific language impairment (SLI). Method: Participants included 40 7-8-year old participants, half of whom were diagnosed with SLI and half of whom had normal language skills. We tested hypotheses about the contributions…

  20. A Theory-Based Framework for Assessing Domain-Specific Problem-Solving Ability.

    ERIC Educational Resources Information Center

    Sugrue, Brenda

    1995-01-01

    A more fragmented approach to assessment of global ability concepts than is generally advocated is suggested, based on the assumption that decomposing a complex ability into cognitive components and tracking performance across multiple measures will yield valid and instructionally useful information. Specifications are suggested for designing…

  1. Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem

    SciTech Connect

    Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.

    2015-12-21

    This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.

  2. Problems in planning bimanually incongruent grasp postures relate to simultaneous response specification processes.

    PubMed

    Hughes, Charmayne M L; Seegelke, Christian; Reissig, Paola

    2014-06-01

    The purpose of the current experiments was to examine whether the problems associated with grasp posture planning during bimanually incongruent movements are due to crosstalk at the motor programming level. Participants performed a grasping and placing task in which they grasped two objects from a table and placed them onto a board at targets that required identical (congruent) or non-identical degrees of rotation (incongruent). The interval between the presentation of the first stimulus and the second stimulus (stimulus onset asynchrony: SOA) was manipulated. Results demonstrate that the problems associated with bimanually incongruent grasp posture planning are reduced at SOA durations longer than 1000 ms, indicating that the costs associated with bimanually incongruent movements arise from crosstalk at the motor programming level. In addition, reach-to-grasp times were shorter, and interlimb coupling was higher, for congruent, compared to incongruent, object end-orientation conditions in both Experiments 1 and 2. The bimanual interference observed during reach-to-grasp execution is postulated to arise from limitations in the visual motor system or from conceptual language representations. The present results emphasize that bimanual interference arises from constraints active at multiple levels of the neurobiological-cognitive system. PMID:24650762

  3. Hepatitis C virus infection: Are there still specific problems with genotype 3?

    PubMed Central

    Gondeau, Claire; Pageaux, Georges Philippe; Larrey, Dominique

    2015-01-01

    Hepatitis C virus (HCV) infection is one of the most common causes of chronic liver disease and the main indication for liver transplantation worldwide. As promising specific treatments have been introduced for genotype 1, clinicians and researchers are now focusing on patients infected by non-genotype 1 HCV, particularly genotype 3. Indeed, in the golden era of direct-acting antiviral drugs, genotype 3 infections are no longer considered easy to treat and are associated with a higher risk of developing severe liver injuries, such as cirrhosis and hepatocellular carcinoma. Moreover, HCV genotype 3 accounts for 40% of all HCV infections in Asia and is the most frequent genotype among HCV-positive injecting drug users in several countries. Here, we review recent data on HCV genotype 3 infection/treatment, including clinical aspects and the underlying genotype-specific molecular mechanisms. PMID:26576095

  4. Kinetics of Hydrogen Radical Reactions with Toluene Including Chemical Activation Theory Employing System-Specific Quantum RRK Theory Calibrated by Variational Transition State Theory.

    PubMed

    Bao, Junwei Lucas; Zheng, Jingjing; Truhlar, Donald G

    2016-03-01

    Pressure-dependent reactions are ubiquitous in combustion and atmospheric chemistry. We employ a new calibration procedure for quantum Rice-Ramsperger-Kassel (QRRK) unimolecular rate theory within a chemical activation mechanism to calculate the pressure-falloff effect of a radical association with an aromatic ring. The new theoretical framework is applied to the reaction of H with toluene, which is a prototypical reaction in the combustion chemistry of aromatic hydrocarbons present in most fuels. Both the hydrogen abstraction reactions and the hydrogen addition reactions are calculated. Our system-specific (SS) QRRK approach is adjusted with SS parameters to agree with multistructural canonical variational transition state theory with multidimensional tunneling (MS-CVT/SCT) at the high-pressure limit. The new method avoids the need for the usual empirical estimations of the QRRK parameters, and it eliminates the need for variational transition state theory calculations as a function of energy, although in this first application we do validate the falloff curves by comparing SS-QRRK results without tunneling to multistructural microcanonical variational transition state theory (MS-μVT) rate constants without tunneling. At low temperatures, the two approaches agree well with each other, but at high temperatures, SS-QRRK tends to overestimate falloff slightly. We also show that the variational effect is important in computing the energy-resolved rate constants. Multiple-structure anharmonicity, torsional-potential anharmonicity, and high-frequency-mode vibrational anharmonicity are all included in the rate computations, and torsional anharmonicity effects on the density of states are investigated. Branching fractions, which are both temperature- and pressure-dependent (and for which only limited data is available from experiment), are predicted as a function of pressure. PMID:26841076

  5. Protocols for calibrating multibeam sonar.

    PubMed

    Foote, Kenneth G; Chu, Dezhang; Hammar, Terence R; Baldwin, Kenneth C; Mayer, Larry A; Hufnagle, Lawrence C; Jech, J Michael

    2005-04-01

    Development of protocols for calibrating multibeam sonar by means of the standard-target method is documented. Particular systems used in the development work included three that provide the water-column signals, namely the SIMRAD SM2000/90- and 200-kHz sonars and RESON SeaBat 8101 sonar, with operating frequency of 240 kHz. Two facilities were instrumented specifically for the work: a sea well at the Woods Hole Oceanographic Institution and a large, indoor freshwater tank at the University of New Hampshire. Methods for measuring the transfer characteristics of each sonar, with transducers attached, are described and illustrated with measurement results. The principal results, however, are the protocols themselves. These are elaborated for positioning the target, choosing the receiver gain function, quantifying the system stability, mapping the directionality in the plane of the receiving array and in the plane normal to the central axis, measuring the directionality of individual beams, and measuring the nearfield response. General preparations for calibrating multibeam sonars and a method for measuring the receiver response electronically are outlined. Advantages of multibeam sonar calibration and outstanding problems, such as that of validation of the performance of multibeam sonars as configured for use, are mentioned. PMID:15898644

  6. The COS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hodge, Philip E.; Kaiser, M. E.; Keyes, C. D.; Ake, T. B.; Aloisi, A.; Friedman, S. D.; Oliveira, C. M.; Shaw, B.; Sahnow, D. J.; Penton, S. V.; Froning, C. S.; Beland, S.; Osterman, S.; Green, J.; COS/STIS STScI Team; IDT, COS

    2008-05-01

    The Cosmic Origins Spectrograph, COS, (Green, J, et al., 2000, Proc SPIE, 4013) will be installed in the Hubble Space Telescope (HST) during the next servicing mission. This will be the most sensitive ultraviolet spectrograph ever flown aboard HST. The program (CALCOS) for pipeline calibration of HST/COS data has been developed by the Space Telescope Science Institute. As with other HST pipelines, CALCOS uses an association table to list the data files to be included, and it employs header keywords to specify the calibration steps to be performed and the reference files to be used. COS includes both a cross delay line detector for the far ultraviolet (FUV) and a MAMA detector for the near ultraviolet (NUV). CALCOS uses a common structure for both channels, but the specific calibration steps differ. The calibration steps include pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. A 1-D spectrum will be extracted and flux calibrated. Data will normally be taken in TIME-TAG mode, recording the time and location of each detected photon, although ACCUM mode will also be supported. The wavelength calibration uses an on-board spectral line lamp. To enable precise wavelength calibration, default operations will simultaneously record the science target and lamp spectrum by executing brief (tag-flash) lamp exposures at least once per external target exposure.

  7. Anemometer calibrator

    NASA Technical Reports Server (NTRS)

    Bate, T.; Calkins, D. E.; Price, P.; Veikins, O.

    1971-01-01

    Calibrator generates accurate flow velocities over wide range of gas pressure, temperature, and composition. Both pressure and flow velocity can be maintained within 0.25 percent. Instrument is essentially closed loop hydraulic system containing positive displacement drive.

  8. Problems in the experimental determination of substrate-specific H+/O ratios during respiration.

    PubMed

    Hendler, R W; Shrager, R I

    1987-10-01

    Krab et al. (1984) have recently tried to resolve the long-standing controversy as to whether the mechanistic H+/O coupling ratio for electrons passing through sites II and III of the mammalian electron transport chain to O2 is 6 or 8. Using a mathematical model they concluded that the higher number reported by Costa et al. (1984) was an overestimate because of the unaccounted for delayed response of the O2 electrode. Responding to criticisms of Lehninger et al. (1985), they have recently used (Krab and Wikström, 1986) an improved mathematical model which shows that the higher number found by Costa et al. was probably due to an inadequate accounting for the effects of the proton leak process which accompanies the translocation process. The impression is left that the situation is now resolved in favor of the lower number. We agree that the procedures of Costa et al. do not properly account for the leak process, and provide further evidence in this paper of the magnitude of the problem. However, we disagree that the number 6.0, favored by Wikström et al., rests on any more solid experimental support. We provide evidence here for this conclusion and raise the question as to whether or not any unique, fixed, integral number exists for the H+/O ratio accompanying the oxidation of a particular substrate. PMID:2826412

  9. Comprehension problems in children with specific language impairment: literal and inferential meaning.

    PubMed

    Bishop, D V; Adams, C

    1992-02-01

    A group of 61 schoolchildren with specific language impairment (SLI) was compared with a control group on a comprehension task, in which the child was questioned about a story that had been presented either orally or as a series of pictures. Half the questions were literal, requiring the child to provide a detail that had been mentioned or shown explicitly in the story. The remainder required the child to make an inference about what had not been directly shown or stated. SLI children were impaired on this task, even after taking into account "comprehension age," as assessed on a multiple-choice test. However, the effects of mode of presentation and question type were similar for control and SLI groups. Children who fitted the clinical picture of semantic-pragmatic disorder had lower scores than other SLI children on this task. In addition, they were more prone to give answers that suggested they had not understood the question. However, as with the other SLI children, there was no indication that they had disproportionate difficulty with inferential questions. It is concluded that SLI children are impaired in constructing an integrated representation from a sequence of propositions, even when such propositions are presented nonverbally. PMID:1735960

  10. Calibration Under Uncertainty.

    SciTech Connect

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
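    The deterministic formulation that the report starts from can be sketched in a few lines (Python; the model, data, and error bars are invented for illustration). Calibration here means choosing the parameters that minimize the error-weighted squared difference between model predictions and measurements; the Bayesian CUU treatment described above would instead place probability distributions on the parameters and on the model and data errors.

        import numpy as np
        from scipy.optimize import least_squares

        def model(theta, x):
            a, b = theta
            return a * np.exp(-b * x)            # hypothetical computer model

        x_obs = np.linspace(0.0, 4.0, 9)
        y_obs = np.array([2.05, 1.62, 1.33, 1.09, 0.88, 0.74, 0.58, 0.49, 0.41])
        sigma = np.full_like(y_obs, 0.05)        # experimental error bars

        def residuals(theta):
            # error-weighted misfit between the model and the experiment
            return (model(theta, x_obs) - y_obs) / sigma

        fit = least_squares(residuals, x0=[1.0, 1.0])
        print("calibrated parameters:", fit.x)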

  11. Brain hyper-connectivity and operation-specific deficits during arithmetic problem solving in children with developmental dyscalculia.

    PubMed

    Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B; Geary, David C; Menon, Vinod

    2015-05-01

    Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who were matched on age, IQ, reading ability, and working memory. Children with DD were slower and less accurate during problem solving than TD children, and were especially impaired on their ability to solve subtraction problems. Children with DD showed significantly greater activity in multiple parietal, occipito-temporal and prefrontal cortex regions while solving addition and subtraction problems. Despite poorer performance during subtraction, children with DD showed greater activity in multiple intra-parietal sulcus (IPS) and superior parietal lobule subdivisions in the dorsal posterior parietal cortex as well as fusiform gyrus in the ventral occipito-temporal cortex. Critically, effective connectivity analyses revealed hyper-connectivity, rather than reduced connectivity, between the IPS and multiple brain systems including the lateral fronto-parietal and default mode networks in children with DD during both addition and subtraction. These findings suggest the IPS and its functional circuits are a major locus of dysfunction during both addition and subtraction problem solving in DD, and that inappropriate task modulation and hyper-connectivity, rather than under-engagement and under-connectivity, are the neural mechanisms underlying problem solving difficulties in children with DD. We discuss our findings in the broader context of multiple levels of analysis and performance issues inherent in neuroimaging studies of typical and atypical development. PMID:25098903

  12. [Burnout, work disruptions, interpersonal and psychosomatic problems--degree-specific comparison of students at a German university].

    PubMed

    Gumz, A; Brähler, E; Heilmann, V K; Erices, R

    2014-03-01

    In the context of the public debate on psychological strain among students, the prevalence of burnout, procrastination, test anxiety, other work disruptions, interpersonal problems and psychological symptoms was analyzed as a function of academic degree. The data of 358 college students (of Leipzig University) were examined. The academic degree had only a marginal effect on burnout- and work-disruption-related variables. In terms of interpersonal problems and psychological symptoms, differences between students were identified depending on the academic degree. Diploma students reported many complaints, whereas undergraduates aspiring to a State Examination were comparatively less affected. Knowledge of the population-specific psychological load is useful for developing preventive and therapeutic measures. PMID:23780858

  13. Autonomous Phase Retrieval Calibration

    NASA Technical Reports Server (NTRS)

    Estlin, Tara A.; Chien, Steve A.; Castano, Rebecca; Gaines, Daniel M.; Doubleday, Joshua R.; Schoolcraft, Josua B.; Oyake, Amalaye; Vaughs, Ashton G.; Torgerson, Jordan L.

    2011-01-01

    The Palomar Adaptive Optics System actively corrects for changing aberrations in light due to atmospheric turbulence. However, the underlying internal static error is unknown and uncorrected by this process. The dedicated wavefront sensor device necessarily lies along a different path than the science camera, and, therefore, doesn't measure the true errors along the path leading to the final detected imagery. This is a standard problem in adaptive optics (AO) called "non-common path error." The Autonomous Phase Retrieval Calibration (APRC) software suite performs automated sensing and correction iterations to calibrate the Palomar AO system to levels that were previously unreachable.

  14. HAWC Timing Calibration

    NASA Astrophysics Data System (ADS)

    Kelley-Hoskins, Nathan; Huentemeyer, Petra; Matthews, John; Dingus, Brenda; HAWC Collaboration

    2011-04-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks with 3 PMTs each is used. The cosmic ray's direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.1 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. The HAWC optical calibration system uses laser light pulses shorter than 1 ns, directed into two optical fiber networks. Each network will use optical fan-outs and switches to direct light to specific tanks. The first network is used to measure the light transit time out to each pair of tanks, and the second network sends light to each tank, calibrating each tank's 3 PMTs. Time slewing corrections will be made using neutral density filters to control the light intensity over 4 orders of magnitude. This system is envisioned to run either continuously at a low rate or at a high rate with many intensity levels. In this presentation, we present the design of the calibration system and first measurements of its performance.

  15. Calibration of 109Cd KXRF systems for in vivo bone lead measurements: the guiding role of the assumptions for least-squares regression in practical problem solving

    NASA Astrophysics Data System (ADS)

    de Brito, J. A. A.; de Carvalho, M. L.; Chettle, D. R.

    2009-02-01

    The use of least-squares regression to probe the level of lead contamination of plaster of Paris standards in the calibration of 109Cd KXRF systems for bone lead measurement, as well as the use of iteratively reweighted least-squares (IRLS) in the case of violation of the assumptions for ordinary least-squares (OLS), is discussed here. One common violation is non-uniform residual variance, which makes the use of OLS inappropriate due to strong influence of points with large variance on the calibration line and variance of the slope and intercept. Comparison between OLS and IRLS in that case showed that IRLS estimates of the intercept are significantly smaller and more precise than OLS estimates, while a less marked increase in the calibration slope is observed when IRLS is used. Moreover, OLS underestimates bone lead concentrations at low levels of lead exposure and overestimates those concentrations at higher levels. These discrepancies are smaller in magnitude than the measurement uncertainty of conventional systems, except for high concentrations. For the newly developed cloverleaf systems, the suggested differences at bone lead concentrations below 17 ppm are comparable to the minimum detection limit, but are larger than the measurement uncertainty for bone lead concentrations above 60 ppm.
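
    To make the contrast concrete, the sketch below applies a simple iteratively reweighted least-squares loop to a straight-line calibration with non-uniform residual variance; the data and the reweighting rule are illustrative only and do not reproduce the paper's bone-lead analysis.

    ```python
    # Minimal IRLS sketch for a straight-line calibration (illustrative data,
    # inverse-squared-residual weights; not the paper's 109Cd KXRF dataset).
    import numpy as np

    def irls_line(x, y, n_iter=10, eps=1e-8):
        w = np.ones_like(y)                        # first pass is ordinary least squares
        A = np.column_stack([np.ones_like(x), x])  # design matrix for intercept + slope
        beta = None
        for _ in range(n_iter):
            W = np.diag(w)
            # weighted normal equations: (A^T W A) beta = A^T W y
            beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
            r = y - A @ beta
            w = 1.0 / np.maximum(r**2, eps)        # downweight high-variance points
        return beta                                 # [intercept, slope]

    x = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    y = np.array([0.1, 5.3, 9.8, 21.0, 38.5, 83.0])
    print(irls_line(x, y))
    ```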

  16. Image Calibration

    NASA Technical Reports Server (NTRS)

    Peay, Christopher S.; Palacios, David M.

    2011-01-01

    Calibrate_Image calibrates images obtained from focal plane arrays so that the output image more accurately represents the observed scene. The function takes as input a degraded image along with a flat field image and a dark frame image produced by the focal plane array and outputs a corrected image. The three most prominent sources of image degradation are corrected for: dark current accumulation, gain non-uniformity across the focal plane array, and hot and/or dead pixels in the array. In the corrected output image the dark current is subtracted, the gain variation is equalized, and values for hot and dead pixels are estimated, using bicubic interpolation techniques.
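
    A minimal flat-field/dark-frame correction in the spirit of this description is sketched below; the arrays are placeholders, the flat field is assumed normalized, and bad pixels are filled with a local median rather than the bicubic interpolation used by Calibrate_Image.

    ```python
    # Sketch of dark subtraction, gain equalization, and bad-pixel filling
    # (hypothetical arrays; a simpler median fill stands in for bicubic
    # interpolation).
    import numpy as np
    from scipy.ndimage import median_filter

    def calibrate_image(raw, dark, flat, bad_mask):
        # subtract dark current, then equalize gain with a normalized flat field
        corrected = (raw - dark) / np.where(flat > 0, flat, 1.0)
        # replace hot/dead pixels with a local (3x3 median) estimate
        filled = median_filter(corrected, size=3)
        corrected[bad_mask] = filled[bad_mask]
        return corrected
    ```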

  17. Water content reflectometer calibration, field versus laboratory

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For soils with large amounts of high-charge clays, site-specific calibrations for the newer permittivity probes that operate at lower frequencies often have higher permittivity values than factory calibrations. The purpose of this study was to determine site-specific calibration of water content re...

  18. Preschool-Age Male Psychiatric Patients with Specific Developmental Disorders and Those Without: Do They Differ in Behavior Problems and Treatment Outcome?

    ERIC Educational Resources Information Center

    Achtergarde, Sandra; Becke, Johanna; Beyer, Thomas; Postert, Christian; Romer, Georg; Müller, Jörg Michael

    2014-01-01

    Specific developmental disorders of speech, language, and motor function in children are associated with a wide range of mental health problems. We examined whether preschool-age psychiatric patients with specific developmental disorders and those without differed in the severity of emotional and behavior problems. In addition, we examined whether…

  19. Is Poor Frequency Modulation Detection Linked to Literacy Problems? A Comparison of Specific Reading Disability and Mild to Moderate Sensorineural Hearing Loss

    ERIC Educational Resources Information Center

    Halliday, L. F.; Bishop, D. V. M.

    2006-01-01

    Specific reading disability (SRD) is now widely recognised as often being caused by phonological processing problems, affecting analysis of spoken as well as written language. According to one theoretical account, these phonological problems are due to low-level problems in auditory perception of dynamic acoustic cues. Evidence for this has come…

  20. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    PubMed Central

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608

  1. Concept of ASTER calibration requirement

    NASA Technical Reports Server (NTRS)

    Ono, A.

    1992-01-01

    The document of ASTER Calibration Requirement specifies the following items related to spectral and radiometric characteristics of the ASTER instrument: (1) characteristics whose knowledge is specified, (2) requirement for knowledge of the characteristics, (3) methodology for characteristics evaluation, and (4) supplementary information and data related with characteristics evaluation. This document is applicable to the document of the ASTER Instrument Specification on Observational Performances, and will be a part of the ASTER Calibration Plan. ASTER Calibration Requirement is scheduled to establish the concept and framework by March 1992 when the 5th Calibration and Data Validation Panel Meeting is held, and to determine details including requirement values and evaluation methodologies by October 1992 around which the Calibration Peer Review may be held. The ASTER Calibration Plan is planned to finish by the same time.

  2. Calibration validation revisited or how to make better use of available data: Sub-period calibration

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Hrachowitz, M.; Fenicia, F.; Savenije, H. H. G.

    2012-04-01

    Parameter identification of conceptual hydrological models depends largely on calibration, as model parameters are typically non-measurable quantities. For hydrological modeling, the identification of "realistic" parameter sets is a key objective. As a model is intended to be used for prediction in the future, it is also crucial that the model parameters be time transposable. However, previous studies showed that the "best" parameter set can significantly vary over time. Instead of using the "best fit", this study introduces sub-period (SuPer) calibration as a new framework to identify the most "realistic" parameterization, although potentially sub-optimal in the calibration period. The SuPer calibration framework includes two steps. First, the time series is split into different sub-periods, such as years or seasons. Then the model is calibrated separately for each sub-period and a Pareto front is obtained as the "best fit" for every sub-period. In the second step those parameter sets are selected which minimize the distance to the Pareto front of each sub-period, which involves an additional multi-objective optimization problem with dimensions equal to the number of sub-periods. The performance of the SuPer calibration framework is evaluated and compared with traditional calibration-validation frameworks for three consecutive years for the Wark catchment in the Grand Duchy of Luxembourg, using the conceptual rainfall/runoff model HyMOD. We show that besides being a calibration framework, this approach also has diagnostic capabilities. It can in fact indicate the parameter sets that perform consistently well for all the sub-periods and does not require subjective thresholds for defining behavioral parameter sets. For the parameters that show similar feasible ranges for the individual sub-periods, SuPer calibration focuses on the overlap range while for the parameters which vary significantly (although sometimes well identifiable in individual sub-period) SuPer calibration
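
    A schematic, single-objective simplification of the selection step is sketched below; the candidate parameter sets, data splits, and objective function are placeholders, and the actual framework works with Pareto fronts and a multi-objective distance minimization rather than a single error measure.

    ```python
    # Schematic sub-period selection sketch (single-objective simplification;
    # model, objective, and data are hypothetical).
    import numpy as np

    def select_super_params(param_candidates, subperiod_data, objective):
        # objective(params, data) -> error (lower is better), e.g. 1 - NSE
        errors = np.array([[objective(p, d) for d in subperiod_data]
                           for p in param_candidates])
        best_per_period = errors.min(axis=0)            # "best fit" of each sub-period
        # distance of each candidate to the per-sub-period best performances
        distance = np.linalg.norm(errors - best_per_period, axis=1)
        return param_candidates[np.argmin(distance)]    # most consistent parameter set
    ```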

  3. Towards Greater Specificity in Identifying Associations Among Interparental Aggression, Child Emotional Reactivity to Conflict, and Child Problems

    PubMed Central

    Davies, Patrick T.; Cicchetti, Dante; Martin, Meredith J.

    2012-01-01

    This study examined specific forms of emotional reactivity to conflict and temperamental emotionality as explanatory mechanisms in pathways among interparental aggression and child psychological problems. Participants of the multi-method, longitudinal study included 201 two-year-old children and their mothers who had experienced elevated violence in the home. Consistent with emotional security theory, autoregressive structural equation model analyses indicated that children’s fearful reactivity to conflict was the only consistent mediator in the associations among interparental aggression and their internalizing and externalizing symptoms one year later. Pathways remained significant across maternal and observer ratings of children’s symptoms and with the inclusion of other predictors and mediators, including children’s sad and angry forms of reactivity to conflict, temperamental emotionality, gender, and socioeconomic status. PMID:22716918

  4. Integration of the Two-Dimensional Power Spectral Density into Specifications for the X-ray Domain -- Problems and Opportunities

    SciTech Connect

    McKinney, Wayne R.; Howells, M. R.; Yashchuk, V. V.

    2008-09-30

    An implementation of the two-dimensional statistical scattering theory of Church and Takacs for the prediction of scattering from x-ray mirrors is presented with a graphical user interface. The process of this development has clarified several problems which are of significant interest to the synchrotron community. These problems have been addressed to some extent, for example, for large astronomical telescopes, and at the National Ignition Facility for normal incidence optics, but not in the synchrotron community for grazing incidence optics. Since it is based on the Power Spectral Density (PSD) to provide a description of the deviations from ideal shape of the surface, accurate prediction of the scattering requires an accurate estimation of the PSD. Specifically, the spatial frequency range of measurement must be the correct one for the geometry of use of the optic--including grazing incidence and coherence effects, and the modifications to the PSD of the Optical Transfer Functions (OTF) of the measuring instruments must be removed. A solution for removal of OTF effects has been presented previously, the Binary Pseudo-Random Grating. Typically, the frequency range of a single instrument does not cover the range of interest, requiring the stitching together of PSD estimations. This combination generates its own set of difficulties in two dimensions. Fitting smooth functions to two dimensional PSDs, particularly in the case of spatial non-isotropy of the surface, which is often the case for optics in synchrotron beam lines, can be difficult. The convenient, and physically accurate fractal for one dimension does not readily transfer to two dimensions. Finally, a completely statistical description of scattering must be integrated with a deterministic low spatial frequency component in order to completely model the intensity near the image. An outline for approaching these problems, and our proposed experimental program is given.

  5. The Effectiveness of Self-regulatory Speech Training for Planning and Problem Solving in Children with Specific Language Impairment.

    PubMed

    Abdul Aziz, Safiyyah; Fletcher, Janet; Bayliss, Donna M

    2016-08-01

    Self-regulatory speech has been shown to be important for the planning and problem solving of children. Our intervention study, including comparisons to both wait-list and typically developing controls, examined the effectiveness of a training programme designed to improve self-regulatory speech, and consequently, the planning and problem solving performance of 87 (60 males, 27 females) children aged 4-7 years with Specific Language Impairment (SLI) who were delayed in their self-regulatory speech development. The self-regulatory speech and Tower of London (TOL) performance of children with SLI who received the intervention initially or after a waiting period was compared with that of 80 (48 male, 32 female) typically developing children who did not receive any intervention. Children were tested at three time points: Time 1- prior to intervention; Time 2 - after the first SLI group had received training and the second SLI group provided a wait-list control; and Time 3 - when the second SLI group had received training. At Time 1 children with SLI produced less self-regulatory speech and were impaired on the TOL relative to the typically developing children. At Time 2, the TOL performance of children with SLI in the first training group improved significantly, whereas there was no improvement for the second training group (the wait-list group). At Time 3, the second training group improved their TOL performance and the first group maintained their performance. No significant differences in TOL performance were evident between typically developing children and those with SLI at Time 3. Moreover, decreases in social speech and increases in inaudible muttering following self-regulatory speech training were associated with improvements in TOL performance. Together, the results show that self-regulatory speech training was effective in increasing self-regulatory speech and in improving planning and problem solving performance in children with SLI. PMID:26678398

  6. A problem-specific inverse method for two-dimensional doping profile determination from capacitance-voltage measurements

    NASA Astrophysics Data System (ADS)

    Ouwerling, G. J. L.

    1991-02-01

    The nondestructive and experimentally straightforward capacitance-voltage method for doping profile determination has always been impaired by certain weaknesses. The abrupt depletion approximation introduces error for steep profiles; the required differentiation of the measurement data causes a considerable noise sensitivity. More fundamentally, the method is restricted to one spatial dimension, perpendicular to the wafer surface. To overcome these limitations, in this paper the use of a numerical inverse method for the interpretation of the measurement data is presented. The method is inspired by the use of similar methods for material profiling problems in biophysics and geophysics. It is specific for doping profiling problems and involves the iterative solution of a linear least squares system of equations. In this system, the known vector is formed by the measured capacitance values, and the unknown vector by the discretization of the doping profile on a grid in the measurement device. The matrix elements are found by the solution of Poisson's equation for each measurement bias case. To resolve possible ill-posedness, the system is solved by singular value decomposition of the least squares matrix. The validity of the method is verified and its error sensitivity studied by applying it to the reconstruction of both one- and two-dimensional doping profiles from synthetic measurement data. A test structure suitable for two-dimensional doping profiling, the Trimos device, is proposed and investigated by numerical experiments.
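
    The core numerical step, solving an ill-posed linear least-squares system by singular value decomposition, can be sketched as below; the matrix, data vector, and truncation threshold are placeholders rather than the Poisson-derived quantities used in the paper.

    ```python
    # Truncated-SVD solve of an ill-posed least-squares system A n = c
    # (c: measured values, n: discretized unknown profile; all inputs here
    # are placeholders).
    import numpy as np

    def tsvd_solve(A, c, rel_cutoff=1e-3):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rel_cutoff * s[0]              # drop small singular values
        inv_s = np.where(keep, 1.0 / s, 0.0)      # regularized pseudo-inverse spectrum
        return Vt.T @ (inv_s * (U.T @ c))         # regularized least-squares solution
    ```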

  7. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

    This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epigallocatechin gallate, amentoflavone). These GPC-predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. PMID:21930278
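
    For illustration, a GPC calibration curve of this kind is typically built by regressing log molecular weight of the standards on retention time and then predicting unknowns, as in the sketch below; the retention times and molecular weights shown are made up and are not the published sainfoin data.

    ```python
    # Generic GPC calibration-curve sketch (illustrative standards only).
    import numpy as np

    t_std = np.array([14.2, 15.1, 16.0, 16.8, 17.5])          # retention times (min)
    mw_std = np.array([2300.0, 1700.0, 940.0, 600.0, 450.0])  # known molecular weights (Da)

    coeff = np.polyfit(t_std, np.log10(mw_std), deg=1)        # single calibration line
    predict_mw = lambda t: 10 ** np.polyval(coeff, t)         # predict MW from retention time
    print(round(predict_mw(15.5)))
    ```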

  8. Nondisease-Specific Problems and All-Cause Mortality in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    PubMed Central

    Bowling, C. Barrett; Booth, John N.; Safford, Monika; Whitson, Heather E.; Ritchie, Christine; Wadley, Virginia G.; Cushman, Mary; Howard, Virginia; Allman, Richard M.; Muntner, Paul

    2013-01-01

    Background/Objectives: Problems that cross multiple domains of health are frequently assessed in older adults. We evaluated the association between six of these nondisease-specific problems and mortality among middle-aged and older adults. Design: Prospective, observational cohort. Setting: U.S. population sample. Participants: 23,669 black and white US adults ≥ 45 years of age enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study. Measurements: Nondisease-specific problems included cognitive impairment, depressive symptoms, falls, polypharmacy, impaired mobility and exhaustion. Age-stratified (<65, 65-74, and ≥ 75 years) hazard ratios for all-cause mortality were calculated for each problem individually and by number of problems. Results: Among participants < 65, 65-74, and ≥ 75 years old, one or more nondisease-specific problems occurred in 40%, 45% and 55% of participants, respectively. Compared to those with none of these problems, the multivariable adjusted hazard ratios and 95% confidence intervals for all-cause mortality associated with each additional nondisease-specific problem were 1.34 (1.23–1.46), 1.24 (1.15–1.35) and 1.30 (1.21–1.39) among participants < 65, 65–74 years, and ≥ 75 years of age, respectively. Conclusion: Nondisease-specific problems were associated with mortality across a wide age spectrum. Future studies should determine if treating these problems will improve survival and identify innovative healthcare models to address multiple nondisease-specific problems simultaneously. PMID:23617688

  9. ALTEA calibration

    NASA Astrophysics Data System (ADS)

    Zaconte, V.; Altea Team

    The ALTEA project is aimed at studying the possible functional damage to the Central Nervous System (CNS) due to particle radiation in the space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is a helmet-shaped device that will study concurrently the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of the cortical activity. The basic instrumentation is composed of six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes are able to detect the passage of each particle, measuring its energy, trajectory and the energy released into the brain, and identifying nuclear species. The EEG and the Visual Stimulator are able to measure the functional status of the visual system and the cortical electrophysiological activity, and to look for a correlation between incident particles, brain activity and Light Flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly to the International Space Station (ISS) on November 15, 2004. In this paper the calibration of the Flight Model of the silicon telescopes (Silicon Detector Units - SDUs) will be shown. These measurements were taken at the GSI heavy ion accelerator in Darmstadt. A first calibration was carried out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 MeV/n. We performed a complete beam scan of the SDU-FM1 to check the functionality and homogeneity of all strips of the silicon detector planes; for each beam energy we collected data to achieve good statistics, and finally we put two different thicknesses of aluminium and Plexiglas in front of the detector in order to study fragmentation. This test has been carried out with a Test Equipment to simulate the Digital Acquisition Unit (DAU). We are scheduled to

  10. Teaching Methods for Modelling Problems and Students' Task-Specific Enjoyment, Value, Interest and Self-Efficacy Expectations

    ERIC Educational Resources Information Center

    Schukajlow, Stanislaw; Leiss, Dominik; Pekrun, Reinhard; Blum, Werner; Muller, Marcel; Messner, Rudolf

    2012-01-01

    In this study which was part of the DISUM-project, 224 ninth graders from 14 German classes from middle track schools (Realschule) were asked about their enjoyment, interest, value and self-efficacy expectations concerning three types of mathematical problems: intra-mathematical problems, word problems and modelling problems. Enjoyment, interest,…

  11. LOFAR Facet Calibration

    NASA Astrophysics Data System (ADS)

    van Weeren, R. J.; Williams, W. L.; Hardcastle, M. J.; Shimwell, T. W.; Rafferty, D. A.; Sabater, J.; Heald, G.; Sridhar, S. S.; Dijkema, T. J.; Brunetti, G.; Brüggen, M.; Andrade-Santos, F.; Ogrean, G. A.; Röttgering, H. J. A.; Dawson, W. A.; Forman, W. R.; de Gasperin, F.; Jones, C.; Miley, G. K.; Rudnick, L.; Sarazin, C. L.; Bonafede, A.; Best, P. N.; Bîrzan, L.; Cassano, R.; Chyży, K. T.; Croston, J. H.; Ensslin, T.; Ferrari, C.; Hoeft, M.; Horellou, C.; Jarvis, M. J.; Kraft, R. P.; Mevius, M.; Intema, H. T.; Murray, S. S.; Orrú, E.; Pizzo, R.; Simionescu, A.; Stroe, A.; van der Tol, S.; White, G. J.

    2016-03-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ∼5″ resolution, meeting the specifications of the LOFAR Tier-1 northern survey.

  12. Multivariate Regression with Calibration*

    PubMed Central

    Liu, Han; Wang, Lie; Zhao, Tuo

    2014-01-01

    We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models. Compared to existing methods, CMR calibrates the regularization for each regression task with respect to its noise level so that it is simultaneously tuning insensitive and achieves an improved finite-sample performance. Computationally, we develop an efficient smoothed proximal gradient algorithm which has a worst-case iteration complexity O(1/ε), where ε is a pre-specified numerical accuracy. Theoretically, we prove that CMR achieves the optimal rate of convergence in parameter estimation. We illustrate the usefulness of CMR by thorough numerical simulations and show that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR on a brain activity prediction problem and find that CMR is as competitive as the handcrafted model created by human experts. PMID:25620861

  13. Calibration method for radiometric and wavelength calibration of a spectrometer

    NASA Astrophysics Data System (ADS)

    Granger, Edward M.

    1998-12-01

    A new calibration target or Certified Reference Material (CRM) has been designed that uses violet, orange, green and cyan dyes on cotton paper. This paper type was chosen because it has a relatively flat spectral response from 400 nm to 700 nm and good keeping properties. These specific dyes were chosen because the difference signal between the orange, cyan, green and purple dyes has certain characteristics that allow the calibration of an instrument. The ratio between the difference readings is a direct function of the center wavelength of a given spectral band. Therefore, the radiometric and spectral calibration can be determined simultaneously from the physical properties of the reference materials.

  14. Cosmogenic Chlorine-36 Global Production Rate Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Marrero, S.; Borchers, B.; Phillips, F. M.; Aumer, R.; Stone, J.

    2010-12-01

    As part of the CRONUS-Earth project, geological calibrations of in-situ production rates of cosmogenic nuclides, including chlorine-36, are being conducted as part of a larger effort to improve the accuracy of techniques employing cosmogenic nuclides. Previous chlorine-36 production rate calibrations have been particularly difficult, likely due to the multiple production pathways. We are performing a step-wise calibration in order to specifically address the uncertainties and problems in previous studies. The low-energy neutrons will be constrained first using a depth profile analysis and then the spallation rates will be calibrated using surface and depth profile samples from five additional sites. This study will produce production rate parameters for each of the main spallation reactions (K, Ca) as well as the production by low-energy neutrons from Cl. Muon production rates are based on Heisinger (2002) and are not calibrated in this study. The geological calibration locations include the Peruvian Andes; Lake Bonneville, UT; Isle of Skye, Scotland; Hawaii; Dry Valleys of Antarctica; and Copper Canyon, NM.

  15. Role of Task-Specific Adapted Feedback on a Computer-Based Collaborative Problem-Solving Task. CSE Report 684

    ERIC Educational Resources Information Center

    Chuang, San-hui; O'Neil, Harold F.

    2006-01-01

    Collaborative problem solving and collaborative skills are considered necessary skills for success in today's world. Collaborative problem solving is defined as problem solving activities that involve interactions among a group of individuals. Large-scale and small-scale assessment programs increasingly use collaborative group tasks in which…

  16. Extracting the MESA SR4000 calibrations

    NASA Astrophysics Data System (ADS)

    Charleston, Sean A.; Dorrington, Adrian A.; Streeter, Lee; Cree, Michael J.

    2015-05-01

    Time-of-flight range imaging cameras are capable of acquiring depth images of a scene. Some algorithms require these cameras to be run in "raw mode", where any calibrations from the off-the-shelf manufacturers are lost. The calibration of the MESA SR4000 is herein investigated, with an attempt to reconstruct the full calibration. Possession of the factory calibration enables calibrated data to be acquired and manipulated even in "raw mode." This work is motivated by the problem of motion correction, in which the calibration must be separated into component parts to be applied at different stages in the algorithm. There are also other applications, in which multiple frequencies are required, such as multipath interference correction. The other frequencies can be calibrated in a similar way, using the factory calibration as a base. A novel technique for capturing the calibration data is described; a retro-reflector is used on a moving platform, which acts as a point source at a distance, resulting in planar waves on the sensor. A number of calibrations are retrieved from the camera, and are then modelled and compared to the factory calibration. When comparing the factory calibration to both the "raw mode" data and the calibration described herein, a root mean squared error improvement of 51.3 mm was seen, with a standard deviation improvement of 34.9 mm.

  17. Insecure attachment styles, relationship-drinking contexts, and marital alcohol problems: Testing the mediating role of relationship-specific drinking-to-cope motives.

    PubMed

    Levitt, Ash; Leonard, Kenneth E

    2015-09-01

    Research and theory suggest that romantic couple members are motivated to drink to cope with interpersonal distress. Additionally, this behavior and its consequences appear to be differentially associated with insecure attachment styles. However, no research has directly examined drinking to cope that is specific to relationship problems, or with relationship-specific drinking outcomes. Based on alcohol motivation and attachment theories, the current study examines relationship-specific drinking-to-cope processes over the early years of marriage. Specifically, it was hypothesized that drinking to cope with a relationship problem would mediate the associations between insecure attachment styles (i.e., anxious and avoidant) and frequencies of drinking with and apart from one's partner and marital alcohol problems in married couples. Multilevel models were tested via the actor-partner interdependence model using reports of both members of 470 couples over the first nine years of marriage. As expected, relationship-specific drinking-to-cope motives mediated the effects of actor anxious attachment on drinking apart from one's partner and on marital alcohol problems, but, unexpectedly, not on drinking with the partner. No mediated effects were found for attachment avoidance. Results suggest that anxious (but not avoidant) individuals are motivated to use alcohol to cope specifically with relationship problems in certain contexts, which may exacerbate relationship difficulties associated with attachment anxiety. Implications for theory and future research on relationship-motivated drinking are discussed. PMID:25799439

  18. Calibration of radionuclide calibrators in Canadian hospitals

    SciTech Connect

    Santry, D.C.

    1986-01-01

    The major user of radioactive isotopes in Canada is the medical profession. Because of this, a program has been initiated at the National Research Council of Canada (NRCC) to assist the nuclear medicine community in determining more accurately the rather large amounts of radioactive material administered to patients for either therapeutic or diagnostic purposes. Since radiation exposure to the human body has deleterious effects, it is important for the patient that the correct amount of radioactive material is administered to minimize the induction of a fatal cancer at a later time. Hospitals in many other countries have a legal requirement to have their instruments routinely calibrated and have previously entered into intercomparisons with other hospitals or their national standards laboratories. In Canada, hospitals and clinics can participate on a voluntary basis to have the proper operation of measuring devices (radionuclide calibrators in particular) examined through intercomparisons. The program looks primarily at laboratory performance. This includes not only the instrument's performance but also the performance of the individual doing the procedure and the technical procedure or method employed. In an effort to provide personal assistance to those having problems, it is essential that the comparisons be pertinent to the daily work of the laboratory and that the most capable technologist not be selected to carry out the assay.

  19. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
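
    As a related illustration only (the paper's sequential Davenport algorithm for IRU calibration is not reproduced here), the sketch below shows the classic batch Davenport q-method, which estimates an attitude quaternion from weighted vector observations; all vectors and weights are placeholders.

    ```python
    # Batch Davenport q-method sketch for attitude determination from vector
    # observations (illustrative only; not the paper's sequential IRU algorithm).
    import numpy as np

    def davenport_q(body_vecs, ref_vecs, weights):
        # attitude profile matrix B = sum of w * b r^T
        B = sum(w * np.outer(b, r) for b, r, w in zip(body_vecs, ref_vecs, weights))
        S = B + B.T
        sigma = np.trace(B)
        Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[:3, :3] = S - sigma * np.eye(3)
        K[:3, 3] = Z
        K[3, :3] = Z
        K[3, 3] = sigma
        eigvals, eigvecs = np.linalg.eigh(K)
        return eigvecs[:, -1]    # eigenvector of largest eigenvalue: quaternion [qx, qy, qz, qw]

    body = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
    ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
    print(davenport_q(body, ref, [0.5, 0.5]))   # identical frames -> identity quaternion
    ```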

  20. Mercury Continuous Emmission Monitor Calibration

    SciTech Connect

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, mercury emissions regulations in the future will require NIST traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  1. CALIBRATION IS BOTH FUNCTIONAL AND ANATOMICAL

    PubMed Central

    Bingham, Geoffrey P.; Pan, Jing S.; Mon-Williams, Mark A.

    2014-01-01

    Bingham and Pagano (1998) described calibration as a mapping from embodied perceptual units to an embodied action unit and suggested that it is an inherent component of perception/action that yields accurate targeted actions. We tested two predictions of this ‘Mapping Theory’. First, calibration should transfer between limbs, because it involves a mapping from perceptual units to an action unit, and thus is functionally specific to the action (Pan, et al., submitted). We used distorted haptic feedback to calibrate feedforward right hand reaches and tested right and left hand reaches after calibration. The calibration transferred. Second, the Mapping Theory predicts that limb specific calibration should be possible because the units are embodied and anatomy contributes to their scaling. Limbs must be calibrated to one another given potential anatomical differences among limbs. We used distorted haptic feedback to calibrate feedforward reaches with right and left arms simultaneously in opposite directions relative to a visually specified target. Reaches tested after calibration revealed reliable limb specific calibration. Both predictions were confirmed. This resolves a prevailing controversy as to whether calibration is functional (Rieser, et al., 1995; Bruggeman & Warren, 2010) or anatomical (Durgin, et al., 1999; 2003). Necessarily, it is both. PMID:23855525

  2. Emotion-recognition abilities and behavior problem dimensions in preschoolers: evidence for a specific role for childhood hyperactivity.

    PubMed

    Chronaki, Georgia; Garner, Matthew; Hadwin, Julie A; Thompson, Margaret J J; Chin, Cheryl Y; Sonuga-Barke, Edmund J S

    2015-01-01

    Facial emotion-recognition difficulties have been reported in school-aged children with behavior problems; little is known, however, about either this association in preschool children or with regard to vocal emotion recognition. The current study explored the association between facial and vocal emotion recognition and behavior problems in a sample of 3 to 6-year-old children. A sample of 57 children enriched for risk of behavior problems (41 were recruited from the general population while 16 had been referred for behavior problems to local clinics) were each presented with a series of vocal and facial stimuli expressing different emotions (i.e., angry, happy, and sad) of low and high intensity. Parents rated children's externalizing and internalizing behavior problems. Vocal and facial emotion recognition accuracy was negatively correlated with externalizing but not internalizing behavior problems independent of emotion type. The effects with the externalizing domain were independently associated with hyperactivity rather than conduct problems. The results highlight the importance of using vocal as well as facial stimuli when studying the relationship between emotion-recognition and behavior problems. Future studies should test the hypothesis that difficulties in responding to adult instructions and commands seen in children with attention deficit/hyperactivity disorder (ADHD) may be due to deficits in the processing of vocal emotions. PMID:24344768

  3. Parenting Specificity: An Examination of the Relation Between Three Parenting Behaviors and Child Problem Behaviors in the Context of a History of Caregiver Depression

    PubMed Central

    McKee, Laura; Forehand, Rex; Rakow, Aaron; Reeslund, Kristen; Roland, Erin; Hardcastle, Emily; Compas, Bruce

    2009-01-01

    The aim of this study was to advance our understanding of the relations between three specific parenting behaviors (warmth, monitoring, and discipline) and two child outcomes (internalizing and externalizing problems) within the context of parental depression. Using an approach recommended by A. Caron, B. Weiss, V. Harris, and T. Carron (2006), unique and differential specificity were examined. Ninety-seven parents with a history of depression and 136 of their 9- to 15-year-old children served as participants. Children reported parenting behaviors and parents reported child problem behaviors. The findings indicated that warmth/involvement, but not monitoring or discipline, was uniquely related to externalizing problems and differentially related to internalizing and externalizing problems. The findings suggest that parental warmth has implications for interventions conducted with children living in families with a history of parental depression. PMID:18391048

  4. Virtual camera calibration using optical design software.

    PubMed

    Poulin-Girard, Anne-Sophie; Dallaire, Xavier; Thibault, Simon; Laurendeau, Denis

    2014-05-01

    Camera calibration is a critical step in many vision applications. It is a delicate and complex process that is highly sensitive to environmental conditions. This paper presents a novel virtual calibration technique that can be used to study the impact of various factors on the calibration parameters. To highlight the possibilities of the method, the calibration parameters' behavior has been studied regarding the effects of tolerancing and temperature for a specific lens. This technique could also be used in many other promising areas to make calibration in the laboratory or in the field easier. PMID:24921866
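
    For context, the conventional laboratory procedure that produces the calibration parameters studied here is the checkerboard method sketched below with OpenCV; the image paths and board size are hypothetical, and the paper itself performs this kind of calibration virtually inside optical design software.

    ```python
    # Standard checkerboard camera calibration sketch (hypothetical image set;
    # not the paper's virtual workflow).
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners per row/column
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for path in glob.glob("calib_images/*.png"):       # hypothetical image files
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]

    # returns reprojection error, camera matrix, distortion coefficients, and poses
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    print("camera matrix:\n", K)
    ```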

  5. The Effect of General and Drug-Specific Family Environments on Comorbid and Drug-Specific Problem Behavior: A Longitudinal Examination

    ERIC Educational Resources Information Center

    Epstein, Marina; Hill, Karl G.; Bailey, Jennifer A.; Hawkins, J. David

    2013-01-01

    Previous research has shown that the development of alcohol and tobacco dependence is linked and that both are influenced by environmental and intrapersonal factors, many of which likely interact over the life course. The present study examines the effects of general and alcohol- and tobacco-specific environmental influences in the family of…

  6. Calibrated Properties Model

    SciTech Connect

    H. H. Liu

    2003-02-14

    This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of data and the prior information in inversions can further increase the reliability of the developed parameters compared with those for the prior information. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using the 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow because of perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to accurately determine, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.

  7. Calibration of sound calibrators: an overview

    NASA Astrophysics Data System (ADS)

    Milhomem, T. A. B.; Soares, Z. M. D.

    2016-07-01

    This paper presents an overview of calibration of sound calibrators. Initially, traditional calibration methods are presented. Next, the international standard IEC 60942 is discussed, emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Finally, comparisons among Regional Metrology Organizations are summarized.

  8. Mercury Calibration System

    SciTech Connect

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    U.S. Environmental Protection Agency (EPA) Performance Specification 12 in the Clean Air Mercury Rule (CAMR) states that a mercury CEM must be calibrated with National Institute of Standards and Technology (NIST)-traceable standards. In early 2009, a NIST traceable standard for elemental mercury CEM calibration still does not exist. Despite the vacatur of CAMR by a Federal appeals court in early 2008, a NIST traceable standard is still needed for whatever regulation is implemented in the future. Thermo Fisher is a major vendor providing complete integrated mercury continuous emissions monitoring (CEM) systems to the industry. WRI is participating with EPA, EPRI, NIST, and Thermo Fisher towards the development of the criteria that will be used in the traceability protocols to be issued by EPA. An initial draft of an elemental mercury calibration traceability protocol was distributed for comment to the participating research groups and vendors on a limited basis in early May 2007. In August 2007, EPA issued an interim traceability protocol for elemental mercury calibrators. Various working drafts of the new interim traceability protocols were distributed in late 2008 and early 2009 to participants in the Mercury Standards Working Committee project. The protocols include sections on qualification and certification. The qualification section describes in general terms tests that must be conducted by the calibrator vendors to demonstrate that their calibration equipment meets the minimum requirements to be established by EPA for use in CAMR monitoring. Variables to be examined include linearity, ambient temperature, back pressure, ambient pressure, line voltage, and effects of shipping. None of the procedures were described in detail in the draft interim documents; however, they describe what EPA would like to eventually develop. WRI is providing the data and results to EPA for use in developing revised experimental procedures and realistic acceptance criteria based on

  9. Autonomous Attitude Sensor Calibration (ASCAL)

    NASA Technical Reports Server (NTRS)

    Peterson, Chariya; Rowe, John; Mueller, Karl; Ziyad, Nigel

    1998-01-01

    In this paper, an approach to increase the degree of autonomy of flight software is proposed. We describe an enhancement of the Attitude Determination and Control System by augmenting it with self-calibration capability. Conventional attitude estimation and control algorithms are combined with higher level decision making and machine learning algorithms in order to deal with the uncertainty and complexity of the problem.

  10. An update on laboratory measurements of Dabigatran: Smart specific and calibrated dedicated assays for measuring anti-IIa activity in plasma.

    PubMed

    Amiral, Jean; Dunois, Claire; Amiral, Cédric; Seghatchian, Jerard

    2016-06-01

    Use of Direct Oral Anticoagulants (DOACs) is continuously increasing for clinical application. The first product released was Dabigatran, which was proposed for many preventive and curative applications, especially for prevention of stroke in patients with non-valvular atrial fibrillation. Although measurement of Dabigatran Anti-IIa activity in plasma is not requested on a routine basis, in some situations its measurement is clinically useful, especially before emergency surgery in treated patients, where its presence at high concentrations, which would expose the patient to an increased bleeding risk, has to be excluded. Hence, smart, specific, rapid and accurate quantitative assays are warranted as an essential requirement. Hemoclot™ Thrombin Inhibitors and Biophen® DTI were specifically designed for these applications, and can be used on all automated instruments with a standard range protocol for measuring concentrations at peak, or with a low range protocol for testing residual concentrations. Both functional assays have a good correlation with the reference LC-MS/MS method, and concentrations measured are similar. Performances of these assays and interferences of various substances or drugs are discussed. Some differences in variations of clotting times are observed between mechanical and optical clot detection instruments, which could be explained by the fibrin clot structure, altered by direct Factor Xa inhibitors, and more especially Rivaroxaban. Both clotting and chromogenic assays offer a safe and accurate quantitative measurement of Dabigatran in plasma in all situations where this determination is requested. In short, this manuscript provides an in-depth update on current opinions on laboratory aspects of measuring Dabigatran concentrations in plasma, when required. PMID:27216543

  11. Specific treatment of problems of the spine (STOPS): design of a randomised controlled trial comparing specific physiotherapy versus advice for people with subacute low back disorders

    PubMed Central

    2011-01-01

    Background Low back disorders are a common and costly cause of pain and activity limitation in adults. Few treatment options have demonstrated clinically meaningful benefits apart from advice which is recommended in all international guidelines. Clinical heterogeneity of participants in clinical trials is hypothesised as reducing the likelihood of demonstrating treatment effects, and sampling of more homogenous subgroups is recommended. We propose five subgroups that allow the delivery of specific physiotherapy treatment targeting the pathoanatomical, neurophysiological and psychosocial components of low back disorders. The aim of this article is to describe the methodology of a randomised controlled trial comparing specific physiotherapy treatment to advice for people classified into five subacute low back disorder subgroups. Methods/Design A multi-centre parallel group randomised controlled trial is proposed. A minimum of 250 participants with subacute (6 weeks to 6 months) low back pain and/or referred leg pain will be classified into one of five subgroups and then randomly allocated to receive either physiotherapy advice (2 sessions over 10 weeks) or specific physiotherapy treatment (10 sessions over 10 weeks) tailored according to the subgroup of the participant. Outcomes will be assessed at 5 weeks, 10 weeks, 6 months and 12 months following randomisation. Primary outcomes will be activity limitation measured with a modified Oswestry Disability Index as well as leg and back pain intensity measured on separate 0-10 Numerical Rating Scales. Secondary outcomes will include a 7-point global rating of change scale, satisfaction with physiotherapy treatment, satisfaction with treatment results, the Sciatica Frequency and Bothersomeness Scale, quality of life (EuroQol-5D), interference with work, and psychosocial risk factors (Orebro Musculoskeletal Pain Questionnaire). Adverse events and co-interventions will also be measured. Data will be analysed according to

  12. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
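
    As a small illustration of the kind of computation involved, the sketch below combines standard uncertainty components in quadrature and checks a four-to-one ratio against an instrument specification; the component values and tolerance are made up and are not the United Force Machine budget.

    ```python
    # Toy calibration uncertainty budget: root-sum-of-squares combination and
    # a 4:1 test-uncertainty-ratio check (all numbers illustrative).
    import math

    components = {                       # standard uncertainties, in newtons
        "reference standard": 0.010,
        "repeatability": 0.006,
        "resolution": 0.003,
    }
    u_combined = math.sqrt(sum(u**2 for u in components.values()))
    U_expanded = 2.0 * u_combined        # expanded uncertainty, coverage factor k = 2

    instrument_tolerance = 0.05          # hypothetical instrument specification, newtons
    print("expanded uncertainty (N):", round(U_expanded, 4))
    print("meets 4:1 ratio:", instrument_tolerance / U_expanded >= 4.0)
    ```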

  13. Exploring the Domain Specificity of Creativity in Children: The Relationship between a Non-Verbal Creative Production Test and Creative Problem-Solving Activities

    ERIC Educational Resources Information Center

    Mohamed, Ahmed; Maker, C. June; Lubart, Todd

    2012-01-01

    In this study, we explored whether creativity was domain specific or domain general. The relationships between students' scores on three creative problem-solving activities (math, spatial artistic, and oral linguistic) in the DISCOVER assessment (Discovering Intellectual Strengths and Capabilities While Observing Varied Ethnic Responses) and the…

  14. Parenting Specificity: An Examination of the Relation between Three Parenting Behaviors and Child Problem Behaviors in the Context of a History of Caregiver Depression

    ERIC Educational Resources Information Center

    McKee, Laura; Forehand, Rex; Rakow, Aaron; Reeslund, Kristen; Roland, Erin; Hardcastle, Emily; Compas, Bruce

    2008-01-01

    The aim of this study was to advance our understanding of the relations between three specific parenting behaviors (warmth, monitoring, and discipline) and two child outcomes (internalizing and externalizing problems) within the context of parental depression. Using an approach recommended by A. Caron, B. Weiss, V. Harris, and T. Carron (2006),…

  15. Technical Problem Solving among 10-Year-Old Students as Related to Science Achievement, Out-of-School Experience, Domain-Specific Control Beliefs, and Attribution Patterns.

    ERIC Educational Resources Information Center

    Baumert, Jurgen; Evans, Robert H.; Geiser, Helmut

    1998-01-01

    Ten-year-old students (n=531) from the U.S. and Germany were studied to determine the relationships between everyday experience, domain-specific control beliefs, acquisition of science knowledge, and solving of everyday technical problems. A causal model, developed and tested through structural equation modeling, showed that domain-specific…

  16. Online Sensor Calibration Assessment in Nuclear Power Systems

    SciTech Connect

    Coble, Jamie B.; Ramuhalli, Pradeep; Meyer, Ryan M.; Hashemian, Hash

    2013-06-01

    Safe, efficient, and economic operation of nuclear systems (nuclear power plants, fuel fabrication and storage, used fuel processing, etc.) relies on transmission of accurate and reliable measurements. During operation, sensors degrade due to age, environmental exposure, and maintenance interventions. Sensor degradation can affect the measured and transmitted signals in several ways, including sensor failure, signal drift, and changes in sensor response time. Currently, periodic sensor recalibration is performed to avoid these problems. Sensor recalibration activities include both calibration assessment and adjustment (if necessary). In nuclear power plants, periodic recalibration of safety-related sensors is required by the plant technical specifications. Recalibration typically occurs during refueling outages (about every 18 to 24 months). Non-safety-related sensors also undergo recalibration, though not as frequently. However, this approach to maintaining sensor calibration and performance is time-consuming and expensive, leading to unnecessary maintenance, increased radiation exposure to maintenance personnel, and potential damage to sensors. Online monitoring (OLM) of sensor performance is a non-invasive approach to assess instrument calibration. OLM can mitigate many of the limitations of the current periodic recalibration practice by providing more frequent assessment of calibration and identifying those sensors that are operating outside of calibration tolerance limits without removing sensors or interrupting operation. This can support extended operating intervals for unfaulted sensors and target recalibration efforts to only degraded sensors.

  17. Transportation Problems in Special Education Programs in Rural Areas - A Specific Solution and Some Suggestions for Delivery System Development.

    ERIC Educational Resources Information Center

    Brody, Z. H.

    The paper describes transportation problems encountered and solutions employed in delivering systems of comprehensive services to handicapped children in Anderson County, Tennessee, a predominantly rural area with considerable mountain area. Detailed are methods of transportation utilized in the four different program areas of the county special…

  18. Over- and Undercontrolled Clinic Referral Problems of Jamaican and American Children and Adolescents: The Culture General and the Culture Specific.

    ERIC Educational Resources Information Center

    Lambert, Michael C.; And Others

    1989-01-01

    Studied clinical referrals of two different societies (Jamaica, where Afro-British culture discourages child aggression, and United States, where uncontrolled child behavior appears more accepted) to determine influence of cultural factors in clinical referral patterns. Found significant difference in clinic-referred problems of American and…

  19. Collective efficacy as a task specific process: examining the relationship between social ties, neighborhood cohesion and the capacity to respond to violence, delinquency and civic problems.

    PubMed

    Wickes, Rebecca; Hipp, John R; Sargeant, Elise; Homel, Ross

    2013-09-01

    In the neighborhood effects literature, collective efficacy is viewed as the key explanatory process associated with the spatial distribution of a range of social problems. While many studies usefully focus on the consequences of collective efficacy, in this paper we examine the task specificity of collective efficacy and consider the individual and neighborhood factors that influence residents' perceptions of neighborhood collective efficacy for specific tasks. Utilizing survey and administrative data from 4,093 residents nested in 148 communities in Australia, we distinguish collective efficacy for particular threats to social order and assess the relative importance of social cohesion and neighborhood social ties to the development of collective efficacy for violence, delinquency and civic/political issues. Our results indicate that a model separating collective efficacy for specific problems from social ties and the more generalized notions of social cohesion is necessary when understanding the regulation potential of neighborhoods. PMID:23812906

  20. Large-scale learning of structure-activity relationships using a linear support vector machine and problem-specific metrics.

    PubMed

    Hinselmann, Georg; Rosenbaum, Lars; Jahn, Andreas; Fechner, Nikolas; Ostermann, Claude; Zell, Andreas

    2011-02-28

    The goal of this study was to adapt a recently proposed linear large-scale support vector machine to large-scale binary cheminformatics classification problems and to assess its performance on various benchmarks using virtual screening performance measures. We extended the large-scale linear support vector machine library LIBLINEAR with state-of-the-art virtual high-throughput screening metrics to train classifiers on whole large and unbalanced data sets. The formulation of this linear support vector machine has an excellent performance if applied to high-dimensional sparse feature vectors. An additional advantage is the average linear complexity in the number of non-zero features of a prediction. Nevertheless, the approach assumes that a problem is linearly separable. Therefore, we conducted an extensive benchmarking to evaluate the performance on large-scale problems up to a size of 175,000 samples. To examine the virtual screening performance, we determined the chemotype clusters using Feature Trees and integrated this information to compute weighted AUC-based performance measures and a leave-cluster-out cross-validation. We also considered the BEDROC score, a metric that was suggested to tackle the early enrichment problem. The performance on each problem was evaluated by a nested cross-validation and a nested leave-cluster-out cross-validation. We compared LIBLINEAR against a Naïve Bayes classifier, a random decision forest classifier, and a maximum similarity ranking approach. These reference approaches were outperformed in a direct comparison by LIBLINEAR. A comparison to literature results showed that the LIBLINEAR performance is competitive but without achieving results as good as the top-ranked nonlinear machines on these benchmarks. However, considering the overall convincing performance and computation time of the large-scale support vector machine, the approach provides an excellent alternative to established large-scale classification approaches. PMID
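
    As a rough illustration of the kind of workflow described above (not the authors' extended LIBLINEAR code), the sketch below trains a LIBLINEAR-backed linear SVM on a sparse, unbalanced binary data set and scores it with ROC AUC; the fingerprint matrix and labels are randomly generated placeholders, and scikit-learn's LinearSVC is used here simply because it wraps LIBLINEAR:

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random
    from sklearn.svm import LinearSVC                      # LIBLINEAR-backed linear SVM
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical sparse fingerprint matrix for an unbalanced actives/inactives set
    rng = np.random.default_rng(0)
    X = sparse_random(5000, 2048, density=0.01, random_state=0, format="csr")
    y = (rng.random(5000) < 0.05).astype(int)              # ~5% "actives" (random labels here)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = LinearSVC(C=1.0, class_weight="balanced", max_iter=5000)
    clf.fit(X_tr, y_tr)
    print(roc_auc_score(y_te, clf.decision_function(X_te)))  # ~0.5 for these random labels
    ```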

  1. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts of the setup is important. The use of direct georeferencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it is demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured with millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not a difficult task, but the definition of the correct centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used, so a gyro-based stabilized platform is recommended. This means that the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the lever arm from the IMU to the GPS antenna is then floating. An additional data stream, the motion of the stabilizer, must be used to correct the floating lever arm distances. If post-processing of the GPS-IMU data that takes the floating lever arms into account delivers the expected result, the lever arms between IMU and camera can be applied
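
    The lever-arm reduction described above amounts to rotating body-frame offset vectors into the mapping frame with the current attitude and adding them to the measured antenna position. The following sketch uses hypothetical lever arms, attitude angles, and coordinates, plus an assumed Euler-angle and sign convention, purely to illustrate the computation:

    ```python
    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        """Body-to-mapping-frame rotation from Euler angles in radians (convention assumed)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    # Hypothetical lever arms measured in the body frame (metres, mm-level accuracy needed)
    lever_gps_to_imu = np.array([0.512, -0.034, 1.207])
    lever_imu_to_camera = np.array([0.101, 0.220, -0.315])

    # Illustrative antenna position and current attitude
    antenna_pos = np.array([500000.0, 4200000.0, 1500.0])
    R = rotation_matrix(np.radians(1.2), np.radians(-0.8), np.radians(45.0))

    # Reduce the antenna position to the IMU centre, then to the camera projection centre
    # (sign conventions for the lever arms are assumptions for this sketch)
    imu_pos = antenna_pos - R @ lever_gps_to_imu
    camera_pos = imu_pos + R @ lever_imu_to_camera
    print(camera_pos)
    ```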

  2. Calibration and verification of environmental models

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Weinberg, N.; Hiser, H.

    1976-01-01

    The problems of calibration and verification of mesoscale models used for investigating power plant discharges are considered. The value of remote sensors for data acquisition is discussed as well as an investigation of Biscayne Bay in southern Florida.

  3. A BPM calibration procedure using TBT data

    SciTech Connect

    Yang, M.J.; Crisp, J.; Prieto, P.; /Fermilab

    2007-06-01

    Accurate BPM calibration is crucial for lattice analysis. It is also reassuring when the calibration can be independently verified. This paper outlines a procedure that can extract BPM calibration information from TBT orbit data. The procedure is developed as an extension to the Turn-By-Turn lattice analysis [1]. Its application to data from both the Recycler Ring and the Main Injector (MI) at Fermilab has produced very encouraging results. Some specifics of the hardware design are mentioned to provide contrast with the analysis results.

  4. Use of Imperfect Calibration for Seismic Location

    SciTech Connect

    Myers, S C; Schultz, C A

    2000-07-12

    Efforts to more effectively monitor nuclear explosions include the calibration of travel times along specific paths. Benchmark events are used to improve travel-time prediction by (1) improving models, (2) determining travel times empirically, or (3) using a hybrid approach. Even velocity models that are determined using geophysical analogy (i.e. models determined without the direct use of calibration data) require validation with calibration events. Ideally, the locations and origin times of calibration events would be perfectly known. However, the existing set of perfectly known events is spatially limited and many of these events occurred prior to the installation of current monitoring stations, thus limiting their usefulness. There are, however, large numbers of well (but not perfectly) located events that are spatially distributed, and many of these events may be used for calibration. Identifying the utility and limitations of the spatially distributed set of imperfect calibration data is of paramount importance to the calibration effort. In order to develop guidelines for calibration utility, we examine the uncertainty and correlation of location parameters under several network configurations that are commonly used to produce calibration-grade locations. We then map these calibration uncertainties through location procedures with network configurations that are likely in monitoring situations. By examining the ramifications of depth and origin-time uncertainty, we expand on previous studies that focus strictly on epicenter accuracy. Particular attention is given to examples where calibration events are determined with teleseismic or local networks and monitoring is accomplished with a regional network.

  5. A definitive calibration record for the Landsat-5 thematic mapper anchored to the Landsat-7 radiometric scale

    USGS Publications Warehouse

    Teillet, P.M.; Helder, D.L.; Ruggles, T.A.; Landry, R.; Ahern, F.J.; Higgs, N.J.; Barsi, J.; Chander, G.; Markham, B.L.; Barker, J.L.; Thome, K.J.; Schott, J.R.; Palluconi, Frank Don

    2004-01-01

    A coordinated effort on the part of several agencies has led to the specification of a definitive radiometric calibration record for the Landsat-5 thematic mapper (TM) for its lifetime since launch in 1984. The time-dependent calibration record for Landsat-5 TM has been placed on the same radiometric scale as the Landsat-7 enhanced thematic mapper plus (ETM+). It has been implemented in the National Landsat Archive Production Systems (NLAPS) in use in North America. This paper documents the results of this collaborative effort and the specifications for the related calibration processing algorithms. The specifications include (i) anchoring of the Landsat-5 TM calibration record to the Landsat-7 ETM+ absolute radiometric calibration, (ii) new time-dependent calibration processing equations and procedures applicable to raw Landsat-5 TM data, and (iii) algorithms for recalibration computations applicable to some of the existing processed datasets in the North American context. The cross-calibration between Landsat-5 TM and Landsat-7 ETM+ was achieved using image pairs from the tandem-orbit configuration period that was programmed early in the Landsat-7 mission. The time-dependent calibration for Landsat-5 TM is based on a detailed trend analysis of data from the on-board internal calibrator. The new lifetime radiometric calibration record for Landsat-5 will overcome problems with earlier product generation owing to inadequate maintenance and documentation of the calibration over time and will facilitate the quantitative examination of a continuous, near-global dataset at 30-m scale that spans almost two decades.

  6. Cardiac mechanical parameter calibration based on the unscented transform.

    PubMed

    Marchesseau, Stéphanie; Delingette, Hervé; Sermesant, Maxime; Rhode, Kawal; Duckett, Simon G; Rinaldi, C Aldo; Razavi, Reza; Ayache, Nicholas

    2012-01-01

    Patient-specific cardiac modelling can help in understanding pathophysiology and in planning therapy. However, it requires personalizing the model geometry, kinematics, electrophysiology and mechanics. Calibration aims at providing global (space-invariant) parameter values before the personalization stage, which involves solving an inverse problem to find regional values. We propose an automatic calibration method for the mechanical parameters of the Bestel-Clément-Sorine (BCS) electromechanical model of the heart based on the Unscented Transform algorithm. A sensitivity analysis is performed that reveals which observations of the volume and pressure evolution are significant for characterizing the global behaviour of the myocardium. We show that the calibration method gives satisfying results by optimizing up to 7 parameters of the BCS model in only one iteration. This method was evaluated on 7 volunteers and 2 heart failure patients, with a mean relative error from the real data of 11%. This calibration furthermore enabled a preliminary study of the parameters specific to the studied pathologies. PMID:23286030
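
    The Unscented Transform underlying the calibration propagates a small set of deterministically chosen sigma points through the forward model to estimate how the observations respond to the parameters. The sketch below is a generic, minimal sigma-point propagation with simple weights; it is not the BCS calibration code, and the weighting scheme and toy forward model are assumptions:

    ```python
    import numpy as np

    def unscented_transform(mean, cov, func, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through func using 2n+1 sigma points."""
        n = len(mean)
        lam = kappa                                    # simple scaling; real schemes vary
        S = np.linalg.cholesky((n + lam) * cov)        # matrix square root
        sigma = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        w[0] = lam / (n + lam)
        y = np.array([func(p) for p in sigma])         # one model run per sigma point
        y_mean = w @ y
        y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, y))
        return y_mean, y_cov

    # Toy forward model standing in for the heart model: parameters -> two observations
    f = lambda p: np.array([p[0] + 0.5 * p[1], p[0] * p[1]])
    m0, P0 = np.array([1.0, 2.0]), np.diag([0.1, 0.2])
    y_mean, y_cov = unscented_transform(m0, P0, f, kappa=1.0)
    print(np.round(y_mean, 3), np.round(y_cov, 3))
    ```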

  7. Assessing calibration of prognostic risk scores.

    PubMed

    Crowson, Cynthia S; Atkinson, Elizabeth J; Therneau, Terry M

    2016-08-01

    Current methods used to assess calibration are limited, particularly in the assessment of prognostic models. Methods for testing and visualizing calibration (e.g. the Hosmer-Lemeshow test and calibration slope) have been well thought out in the binary regression setting. However, extension of these methods to Cox models is less well known and could be improved. We describe a model-based framework for the assessment of calibration in the binary setting that provides natural extensions to the survival data setting. We show that Poisson regression models can be used to easily assess calibration in prognostic models. In addition, we show that a calibration test suggested for use in survival data has poor performance. Finally, we apply these methods to the problem of external validation of a risk score developed for the general population when assessed in a special patient population (i.e. patients with particular comorbidities, such as rheumatoid arthritis). PMID:23907781
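
    One way to read the Poisson-regression idea above is to regress the observed events on the model's expected events supplied as an offset, so that the fitted intercept tests overall calibration (a slope term on log-expected can be added to test the calibration slope). The sketch below uses simulated risks and statsmodels purely for illustration; it is not the authors' implementation:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    expected = rng.uniform(0.05, 0.5, size=500)                 # hypothetical predicted risks
    observed = rng.binomial(1, np.clip(1.2 * expected, 0, 1))   # slightly miscalibrated outcomes

    # Intercept-only Poisson model with log(expected) as offset:
    # an intercept near zero indicates good calibration-in-the-large
    X = np.ones((len(observed), 1))
    fit = sm.GLM(observed, X, family=sm.families.Poisson(),
                 offset=np.log(expected)).fit()
    print(fit.params, fit.conf_int())
    ```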

  8. Cognitive-Behavioral Treatment for Specific Phobias with a Child Demonstrating Severe Problem Behavior and Developmental Delays

    ERIC Educational Resources Information Center

    Davis, Thompson E., III; Kurtz, Patricia F.; Gardner, Andrew W.; Carman, Nicole B.

    2007-01-01

    Cognitive-behavioral treatments (CBTs) are widely used for anxiety disorders in typically developing children; however, there has been no previous attempt to administer CBT for specific phobia (in this case study, one-session treatment) to developmentally or intellectually disabled children. This case study integrates both cognitive-behavioral and…

  9. Thematic Mapper. Volume 1: Calibration report flight model, LANDSAT 5

    NASA Technical Reports Server (NTRS)

    Cooley, R. C.; Lansing, J. C.

    1984-01-01

    The calibration of the Flight 1 Model Thematic Mapper is discussed. Spectral response, scan profile, coherent noise, line spread profiles and white light leaks, square wave response, radiometric calibration, and commands and telemetry are specifically addressed.

  10. Improving self-calibration

    NASA Astrophysics Data System (ADS)

    Enßlin, Torsten A.; Junklewitz, Henrik; Winderling, Lars; Greiner, Maksim; Selig, Marco

    2014-10-01

    Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is thereby not taken into account by these schemes. Therefore, better schemes, in the sense of minimal squared error, can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise, the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that nonparametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
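
    The common self-calibration practice criticized above can be pictured as an alternating maximization: fix the calibration and solve for the signal, then fix the signal and refit the calibration, and iterate. The toy sketch below (known linear response, unknown per-channel gains, all data simulated) illustrates that classical alternation, not the improved uncertainty-aware scheme proposed in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_chan, n_per, n_sig = 4, 12, 6
    A = rng.normal(size=(n_chan * n_per, n_sig))        # known instrument response
    s_true = rng.normal(size=n_sig)                      # unknown signal
    g_true = 1.0 + 0.1 * rng.normal(size=n_chan)         # unknown per-channel gains
    chan = np.repeat(np.arange(n_chan), n_per)
    d = g_true[chan] * (A @ s_true) + 0.01 * rng.normal(size=chan.size)

    g = np.ones(n_chan)                                  # start from the external calibration
    for _ in range(30):
        # signal step: least squares with the current gain estimates applied to the response
        s, *_ = np.linalg.lstsq(g[chan, None] * A, d, rcond=None)
        # calibration step: per-channel gain that best explains the data given the signal
        model = A @ s
        for c in range(n_chan):
            m = chan == c
            g[c] = (model[m] @ d[m]) / (model[m] @ model[m])

    print(np.round(g, 3), np.round(g_true, 3))
    ```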

  11. Mercury CEM Calibration

    SciTech Connect

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD. The

  12. Implicit Spacecraft Gyro Calibration

    NASA Technical Reports Server (NTRS)

    Harman, Richard; Bar-Itzhack, Itzhack Y.

    2003-01-01

    This paper presents an implicit algorithm for spacecraft onboard instrument calibration, particularly to onboard gyro calibration. This work is an extension of previous work that was done where an explicit gyro calibration algorithm was applied to the AQUA spacecraft gyros. The algorithm presented in this paper was tested using simulated data and real data that were downloaded from the Microwave Anisotropy Probe (MAP) spacecraft. The calibration tests gave very good results. A comparison between the use of the implicit calibration algorithm used here with the explicit algorithm used for AQUA spacecraft indicates that both provide an excellent estimation of the gyro calibration parameters with similar accuracies.

  13. Parameter estimation in distributed hydrological catchment modelling using automatic calibration with multiple objectives

    NASA Astrophysics Data System (ADS)

    Madsen, Henrik

    A consistent framework for parameter estimation in distributed hydrological catchment modelling using automatic calibration is formulated. The framework focuses on the different steps in the estimation process from model parameterisation and selection of calibration parameters, formulation of calibration criteria, and choice of optimisation algorithm. The calibration problem is formulated in a general multi-objective context in which different objective functions that measure individual process descriptions can be optimised simultaneously. Within this framework it is possible to tailor the model calibration to the specific objectives of the model application being considered. A test example is presented that illustrates the use of the calibration framework for parameter estimation in the MIKE SHE integrated and distributed hydrological modelling system. A significant trade-off between the performance of the groundwater level simulations and the catchment runoff is observed in this case, defining a Pareto front with a very sharp structure. The Pareto optimum solution corresponding to a proposed balanced aggregated objective function is seen to provide a proper balance between the two objectives. Compared to a manual expert calibration, the balanced Pareto optimum solution provides generally better simulation of the runoff, whereas virtually similar performance is obtained for the groundwater level simulations.
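
    As a minimal illustration of the multi-objective idea above, the sketch below screens hypothetical (groundwater-level error, runoff error) pairs for Pareto optimality and picks a balanced solution by an equally weighted aggregation of normalized objectives; the cost values are invented and the aggregation is a simple stand-in for the balanced objective function used in the study:

    ```python
    import numpy as np

    def pareto_front(costs):
        """Return indices of non-dominated points (all objectives are minimized)."""
        costs = np.asarray(costs, dtype=float)
        keep = []
        for i, c in enumerate(costs):
            dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical (groundwater RMSE, runoff RMSE) pairs for candidate parameter sets
    costs = np.array([[0.30, 0.90], [0.35, 0.60], [0.50, 0.40], [0.80, 0.38], [0.55, 0.70]])
    front = pareto_front(costs)

    # Balanced aggregated objective: equal weights after normalising each objective
    balanced = (costs / costs.max(axis=0)).sum(axis=1)
    print(front, int(balanced.argmin()))
    ```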

  14. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
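
    The reprojection error reported in such calibrations is the pixel distance between detected calibration-pattern points and the points predicted by the estimated camera model. A minimal pinhole-only sketch of that computation (no lens distortion, with invented intrinsics, pose, and points):

    ```python
    import numpy as np

    def project(K, R, t, pts3d):
        """Pinhole projection of Nx3 world points to Nx2 pixel coordinates (no distortion)."""
        cam = R @ pts3d.T + t[:, None]                 # world -> camera frame
        uvw = K @ cam                                  # apply intrinsics
        return (uvw[:2] / uvw[2]).T                    # normalise by depth

    def reprojection_rmse(K, R, t, pts3d, pts2d):
        err = project(K, R, t, pts3d) - pts2d
        return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

    # Invented intrinsics, pose, and target points; a real calibration estimates K, R, t
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 0.5])
    pts3d = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.2]])
    pts2d = project(K, R, t, pts3d) + 0.3              # detections with ~0.3 px of error
    print(reprojection_rmse(K, R, t, pts3d, pts2d))    # about 0.42 px for this toy data
    ```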

  15. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
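
    Given 3D-to-2D correspondences like those ACAL extracts, one classical way to recover a camera model is the direct linear transform (DLT), which solves for the 3x4 projection matrix from six or more points. The sketch below is a generic DLT on synthetic, noise-free correspondences, not the camera model or solver used by ACAL:

    ```python
    import numpy as np

    def dlt_camera_matrix(pts3d, pts2d):
        """Direct linear transform: 3x4 projection matrix from >=6 3D-to-2D correspondences."""
        rows = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 4)                     # null-space vector as camera matrix

    # Synthetic ground-truth camera and fiducial points (not from any real calibration target)
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.2]])])
    pts3d = np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5], [1, 1, 6], [0.5, 0.2, 7], [0.2, 0.8, 6]], float)
    homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = P_true @ homog.T
    pts2d = (proj[:2] / proj[2]).T

    P_est = dlt_camera_matrix(pts3d, pts2d)
    # Agreement up to scale; the difference is ~0 for these noise-free correspondences
    print(np.max(np.abs(P_est / P_est[-1, -1] - P_true / P_true[-1, -1])))
    ```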

  16. Risk-based evaluation of technical specification problems at the La Salle County Nuclear Station: Final report

    SciTech Connect

    Bizzak, D.J.; Trainer, J.E.; McClymont, A.S.

    1987-06-01

    Probabilistic risk assessment (PRA) methods are used to evaluate alternatives to existing requirements for three operationally burdensome technical specifications at La Salle Nuclear Station. The study employs a decision logic to minimize the detailed analysis necessary to show compliance with given acceptance criteria; in this case, no risk increase resulting from a proposed change. The analyses provide insights for choosing among alternative options. The SOCRATES computer code was used for the probabilistic analysis. Results support a change to less frequent diesel generator testing, elimination of one reactor scram setpoint, and establishing an allowed out-of-service time for valves in a reactor scram system. In each case, the change would result in a safety improvement.

  17. Analytical multicollimator camera calibration

    USGS Publications Warehouse

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  18. Adaptive self-calibrating iterative GRAPPA reconstruction.

    PubMed

    Park, Suhyung; Park, Jaeseok

    2012-06-01

    Parallel magnetic resonance imaging in k-space, such as generalized auto-calibrating partially parallel acquisition, exploits spatial correlation among neighboring signals over multiple coils in calibration to estimate missing signals in reconstruction. It is often challenging to achieve accurate calibration information due to data corruption by noise and spatially varying correlation. The purpose of this work is to address these problems simultaneously by developing a new, adaptive iterative generalized auto-calibrating partially parallel acquisition with dynamic self-calibration. With increasing iterations, under a Kalman filter framework, spatial correlation is estimated dynamically by updating calibration signals in a measurement model and using a fixed-point state transition in a process model, while missing signals outside the step-varying calibration region are reconstructed, leading to adaptive self-calibration and reconstruction. Noise statistics are incorporated in the Kalman filter models, yielding coil-weighted de-noising in reconstruction. Numerical and in vivo studies are performed, demonstrating that the proposed method yields highly accurate calibration and thus reduces artifacts and noise even at high acceleration. PMID:21994010
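
    At the core of such a dynamic self-calibration is the standard Kalman measurement update with an identity (fixed-point) state transition. The sketch below applies that generic update to a toy problem of estimating two calibration weights from noisy linear measurements; the measurement model and noise levels are invented, and this is not the GRAPPA-specific formulation:

    ```python
    import numpy as np

    def kalman_update(x, P, H, z, R):
        """One Kalman measurement update with an identity (fixed-point) state transition."""
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x + K @ (z - H @ x)             # corrected state (calibration weights)
        P_new = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
        return x_new, P_new

    # Toy example: estimate 2 calibration weights from noisy scalar measurements
    rng = np.random.default_rng(0)
    w_true = np.array([0.7, -0.2])
    x, P = np.zeros(2), np.eye(2)               # prior on the weights
    for _ in range(200):
        h = rng.normal(size=(1, 2))             # stand-in for acquired neighbouring signals
        z = h @ w_true + 0.05 * rng.normal(size=1)
        x, P = kalman_update(x, P, h, z, np.array([[0.05 ** 2]]))
    print(np.round(x, 3))                       # converges toward w_true
    ```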

  19. Quality Management and Calibration

    NASA Astrophysics Data System (ADS)

    Merkus, Henk G.

    Good specification of a product’s performance requires adequate characterization of relevant properties. Particulate products are usually characterized by some PSD, shape or porosity parameter(s). For proper characterization, adequate sampling, dispersion, and measurement procedures should be available or developed, and skilful personnel should use appropriate, well-calibrated/qualified equipment. The characterization should be executed, in agreement with customers, in a well-organized laboratory. All related aspects should be laid down in a quality handbook. The laboratory should provide proof for its capability to perform the characterization of stated products and/or reference materials within stated confidence limits. This can be done either by internal validation and audits or by external GLP accreditation.

  20. ASCAL: Autonomous Attitude Sensor Calibration

    NASA Technical Reports Server (NTRS)

    Peterson, Chariya; Rowe, John; Mueller, Karl; Ziyad, Nigel

    1999-01-01

    In this paper, an approach to increase the degree of autonomy of flight software is proposed. We describe an enhancement of the Attitude Determination and Control System by augmenting it with self-calibration capability. Conventional attitude estimation and control algorithms are combined with higher level decision making and machine learning algorithms in order to deal with the uncertainty and complexity of the problem.

  1. Design and utilization of a portable seismic/acoustic calibration system

    SciTech Connect

    Stump, B.W.; Pearson, D.C.

    1996-10-01

    Empirical results from the current GSETT-3 illustrate the need for source specific information for the purpose of calibrating the monitoring system. With the specified location design goal of 1,000 km², preliminary analysis indicates the importance of regional calibration of travel times. This calibration information can be obtained in a passive manner utilizing locations derived from local seismic array arrival times and assumes the resulting locations are accurate. Alternatively, an active approach to the problem can be undertaken, attempting to make near-source observations of seismic sources of opportunity to provide specific information on the time, location and characteristics of the source. Moderate to large mining explosions are one source type that may be amenable to such calibration. This paper describes an active ground truthing procedure for regional calibration. A prototype data acquisition system that includes the primary ground motion component for source time and location determination, and secondary, optional acoustic and video components for improved source phenomenology is discussed. The system costs approximately $25,000 and can be deployed and operated by one to two people thus providing a cost effective system for calibration and documentation of sources of interest. Practical implementation of the system is illustrated, emphasizing the minimal impact on an active mining operation.

  2. 40 CFR 1066.920 - Enclosure calibrations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and refueling emissions must meet the calibration specifications described in 40 CFR 86.116-94 and 86... CONTROLS VEHICLE-TESTING PROCEDURES Evaporative Emission Test Procedures Test Equipment and...

  3. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  4. Automation of Resistance Bridge Calibrator

    NASA Astrophysics Data System (ADS)

    Podgornik, Tadej; Bojkovski, Jovan; Batagelj, Valentin; Drnovšek, Janko

    2008-02-01

    The article addresses the automation of the resistance bridge calibrator (RBC). The automation of the RBC is performed in order to facilitate the operation of the RBC, improve the reliability, and enable several additional possibilities compared to the tedious manual operation, thereby making the RBC a more practical device for routine use. The RBC is used to calibrate AC and DC resistance bridges, which are mainly used in a primary thermometry laboratory. It consists of a resistor network made up from four main resistors from which 35 different resistance values can be realized using toggle switches. Literature shows that the resistors’ non-zero temperature coefficient can influence the measurements, causing difficulties when calibrating resistance bridges with low uncertainty. Placing the RBC in a thermally stable environment can reduce this, but it does not solve the problem of the time-consuming manual selection of the resistance values. To solve this, an automated means to manipulate the switches, while the device is placed within a thermally stable environment, was created. Computer operation completely substitutes for any manual operation during which an operator would normally have to be present. The computer also acquires measurements from the bridge. In this way, repeated and reproducible calibration measurements inside a temperature-stable environment can be carried out with no active involvement of personnel. The automation process itself was divided into several stages. They included the construction of a servo-manipulator to move the switches, the design of a dedicated electronic controller that also provides a serial interface (RS-232) to the computer, and the development of custom computer software to configure the servo-manipulator and control the calibration process. Measurements show that automation does not affect the long-term stability and mechanical repeatability of the RBC. The repeatability and reproducibility of bridge calibration ratios

  5. The gender specific mediational pathways between parenting styles, neuroticism, pathological reasons for drinking, and alcohol-related problems in emerging adulthood.

    PubMed

    Patock-Peckham, Julie A; Morgan-Lopez, Antonio A

    2009-03-01

    Mediational links between parenting styles, neuroticism, pathological reasons for drinking, alcohol use and alcohol-related problems were tested. A two-group SEM path model with 441 (216 female, 225 male) college students was examined. In general, pathological reasons for drinking mediated the impact of neuroticism on alcohol use and alcohol-related problems. A different pattern of relationships was found for each of the two genders. Perceptions of having an authoritarian father were positively linked to higher levels of neuroticism among males but this pattern was not found among females. For males, neuroticism mediated the impact of having an authoritarian father on pathological reasons for drinking with pathological reasons for drinking mediating the impact of neuroticism on alcohol-related problems. Perceptions of having a permissive father were linked to lower levels of neuroticism in females (but have been found as a consistent risk factor for other pathways to alcohol use elsewhere). Compared with other work in this area, these findings indicate parental influences regarding vulnerabilities for alcohol use may be specific to parent-child gender matches for some pathways and specific to one parent (irrespective of child gender) for other pathways. PMID:19000941

  6. Specific absorption rate variation in a brain phantom due to exposure by a 3G mobile phone: problems in dosimetry.

    PubMed

    Behari, J; Nirala, Jay Prakash

    2013-12-01

    A specific absorption rate (SAR) measurement system has been developed for compliance testing of a personal mobile phone against a brain phantom material contained in a Perspex box. The volume of the box was chosen to correspond to the volume of a small rat; it was illuminated at a 3G mobile phone frequency (1718.5 MHz), with the emitted radiation directed toward the brain phantom, and the induced fields in the phantom material were measured. The setup that lifts the plane carrying the mobile phone is driven by a pulley whose motion is controlled by a stepper motor; the platform moves at a predetermined rate of 2 degrees per minute, limited to 20 degrees. The measured induced fields at various locations are used to compute corresponding SAR values, which are then intercompared. These data are also compared with those obtained when the mobile phone is placed horizontally with respect to the position of the animal. The SAR is additionally obtained experimentally by measuring the temperature rise due to the mobile phone exposure, and these data are compared with those from the previous set. To allow comparison with the safety criteria, the same set of measurements is performed in 10 g of phantom material contained in a cubical box; these results are higher than those derived from the induced-field measurements. It is concluded that SAR values are sensitive to the angular position of the moving platform and are well below the safety criteria prescribed for human exposure. The data suggest that a fresh look is needed to understand the mode of electromagnetic field-bio interaction. PMID:24579373

  7. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and ratios of 14/28 for nitrogen and 16/32 for oxygen.

  8. Black-box calibration for complex-system simulation.

    PubMed

    Forrester, Alexander I J

    2010-08-13

    Predicting or measuring the output of complex systems is an important and challenging part of many areas of science. If multiple observations are required for parameter studies and optimization, accurate, computationally intensive predictions or expensive experiments are intractable. This paper looks at the use of Gaussian-process-based correlations to correct simple computer models with sparse data from physical experiments or more complex computer models. In essence, physics-based computer codes and experiments are replaced by fast problem-specific statistics-based codes. Two aerodynamic design examples are presented. First, a cheap two-dimensional potential-flow solver is calibrated to represent the flow over the wing of an unmanned air vehicle. The rear wing of a racing car is then optimized using rear-wing simulations calibrated to include the effects of the flow over the whole car. PMID:20603368
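
    The correction idea above can be sketched as fitting a Gaussian process to the discrepancy between sparse expensive observations and a cheap model, then adding that learned correction to new cheap-model predictions. The example below uses a toy one-dimensional function and scikit-learn's GP regressor; the kernel, data, and library choice are assumptions, not the paper's aerodynamic setup:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Cheap model (stand-in for a potential-flow solver) and sparse "expensive" observations
    cheap = lambda x: np.sin(3.0 * x)
    x_obs = np.array([[0.05], [0.3], [0.55], [0.8], [0.95]])
    y_obs = np.sin(3.0 * x_obs) + 0.25 * x_obs ** 2       # hypothetical truth with missing physics

    # Fit a Gaussian process to the discrepancy between observations and the cheap model
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                                  alpha=1e-6, normalize_y=True)
    gp.fit(x_obs, y_obs - cheap(x_obs))

    # Calibrated prediction = cheap model + learned correction
    x_new = np.linspace(0, 1, 5).reshape(-1, 1)
    y_cal = cheap(x_new) + gp.predict(x_new).reshape(-1, 1)
    print(np.round(y_cal.ravel(), 3))
    ```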

  9. Energy calibration of a multilayer photon detector

    SciTech Connect

    Johnson, R.A.

    1983-01-01

    The job of energy calibration was broken into three parts: gain normalization of all equivalent elements; determination of the functions for conversion of pulse height to energy; and gain stabilization. It is found that calorimeter experiments are no better than their calibration systems - calibration errors will be the major source of error at high energies. Redundancy is found to be necessary - the system should be designed such that every element could be replaced during the life of the experiment. It is found to be important to have enough data taken during calibration runs and during the experiment to be able to sort out where the calibration problems were after the experiment is over. Each layer was normalized independently with electrons, and then the pulse height to energy conversion was determined with photons. The primary method of gain stabilization used the light flasher system. (LEW)

  10. A derivative standard for polarimeter calibration

    SciTech Connect

    Mulhollan, G.; Clendenin, J.; Saez, P.

    1996-10-01

    A long-standing problem in polarized electron physics is the lack of a traceable standard for calibrating electron spin polarimeters. While several polarimeters are absolutely calibrated to better than 2%, the typical instrument has an inherent accuracy no better than 10%. This variability among polarimeters makes it difficult to compare advances in polarized electron sources between laboratories. The authors have undertaken an effort to establish 100 nm thick molecular beam epitaxy grown GaAs(110) as a material which may be used as a derivative standard for calibrating systems possessing a solid state polarized electron source. The near-bandgap spin polarization of photoelectrons emitted from this material has been characterized for a variety of conditions and several laboratories which possess well calibrated polarimeters have measured the photoelectron polarization of cathodes cut from a common wafer. Despite instrumentation differences, the spread in the measurements is sufficiently small that this material may be used as a derivative calibration standard.

  11. Residual gas analyzer calibration

    NASA Technical Reports Server (NTRS)

    Lilienkamp, R. H.

    1972-01-01

    A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectra from the RGA are recorded for each gas mixture. These mass spectra and the mixture composition data each form a matrix. From the two matrices the calibration matrix may be computed. The matrix mathematics requires that the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. This model included shot noise errors in the mass spectra. Errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
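
    A minimal numerical sketch of the matrix step described above: known mixture compositions and their recorded spectra are used to fit a calibration matrix by least squares (which is why at least as many mixtures as gases are needed), and that matrix is then inverted in the least-squares sense to recover the composition of an unknown sample. All spectra and compositions below are synthetic placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_gases, n_mixtures, n_masses = 3, 5, 8

    # Hypothetical pure-gas spectra (columns) and known calibration mixture compositions
    pure_spectra = np.abs(rng.normal(size=(n_masses, n_gases)))
    compositions = np.abs(rng.normal(size=(n_gases, n_mixtures)))      # known mixtures
    spectra = pure_spectra @ compositions + 0.01 * rng.normal(size=(n_masses, n_mixtures))

    # Calibration matrix: least-squares fit of recorded spectra onto known compositions
    # (requires n_mixtures >= n_gases, as the abstract notes)
    cal, *_ = np.linalg.lstsq(compositions.T, spectra.T, rcond=None)
    cal = cal.T                                                        # n_masses x n_gases

    # Unknown sample: recover partial pressures from its measured spectrum
    unknown = pure_spectra @ np.array([0.2, 0.5, 0.3])
    estimate, *_ = np.linalg.lstsq(cal, unknown, rcond=None)
    print(np.round(estimate, 3))                                       # close to [0.2, 0.5, 0.3]
    ```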

  12. Calibration of multi-camera photogrammetric systems

    NASA Astrophysics Data System (ADS)

    Detchev, I.; Mazaheri, M.; Rondeel, S.; Habib, A.

    2014-11-01

    Due to the low-cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a-priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.

  13. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Calibration of other equipment. 86.526... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.526-90 Calibration of other equipment... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph...

  14. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Calibration of other equipment. 86.526... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.526-90 Calibration of other equipment... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph...

  15. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Calibration of other equipment. 86.126... Complete Heavy-Duty Vehicles; Test Procedures § 86.126-90 Calibration of other equipment. Other test... according to good practice. Specific equipment requiring calibration are the gas chromatograph and...

  16. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Calibration of other equipment. 86.126... Complete Heavy-Duty Vehicles; Test Procedures § 86.126-90 Calibration of other equipment. Other test... according to good practice. Specific equipment requiring calibration are the gas chromatograph and...

  17. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Calibration of other equipment. 86.526... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.526-90 Calibration of other equipment... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph...

  18. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Calibration of other equipment. 86.126... Complete Heavy-Duty Vehicles; Test Procedures § 86.126-90 Calibration of other equipment. Other test... according to good practice. Specific equipment requiring calibration are the gas chromatograph and...

  19. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Calibration of other equipment. 86.526... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.526-90 Calibration of other equipment... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph...

  20. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Calibration of other equipment. 86.526... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.526-90 Calibration of other equipment... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph...

  1. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Calibration of other equipment. 86.126... Complete Heavy-Duty Vehicles; Test Procedures § 86.126-90 Calibration of other equipment. Other test... according to good practice. Specific equipment requiring calibration are the gas chromatograph and...

  2. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Calibration of other equipment. 86.126... Complete Heavy-Duty Vehicles; Test Procedures § 86.126-90 Calibration of other equipment. Other test... according to good practice. Specific equipment requiring calibration are the gas chromatograph and...

  3. 46 CFR 164.009-13 - Furnace calibration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 6 2013-10-01 2013-10-01 false Furnace calibration. 164.009-13 Section 164.009-13...: SPECIFICATIONS AND APPROVAL MATERIALS Noncombustible Materials for Merchant Vessels § 164.009-13 Furnace calibration. A calibration is performed on each new furnace and on each existing furnace as often as...

  4. 46 CFR 164.009-13 - Furnace calibration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 6 2012-10-01 2012-10-01 false Furnace calibration. 164.009-13 Section 164.009-13...: SPECIFICATIONS AND APPROVAL MATERIALS Noncombustible Materials for Merchant Vessels § 164.009-13 Furnace calibration. A calibration is performed on each new furnace and on each existing furnace as often as...

  5. 46 CFR 164.009-13 - Furnace calibration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Furnace calibration. 164.009-13 Section 164.009-13...: SPECIFICATIONS AND APPROVAL MATERIALS Noncombustible Materials for Merchant Vessels § 164.009-13 Furnace calibration. A calibration is performed on each new furnace and on each existing furnace as often as...

  6. 46 CFR 164.009-13 - Furnace calibration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 6 2011-10-01 2011-10-01 false Furnace calibration. 164.009-13 Section 164.009-13...: SPECIFICATIONS AND APPROVAL MATERIALS Noncombustible Materials for Merchant Vessels § 164.009-13 Furnace calibration. A calibration is performed on each new furnace and on each existing furnace as often as...

  7. 46 CFR 164.009-13 - Furnace calibration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 6 2014-10-01 2014-10-01 false Furnace calibration. 164.009-13 Section 164.009-13...: SPECIFICATIONS AND APPROVAL MATERIALS Noncombustible Materials for Merchant Vessels § 164.009-13 Furnace calibration. A calibration is performed on each new furnace and on each existing furnace as often as...

  8. Cherenkov Source for PMT Calibrations

    NASA Astrophysics Data System (ADS)

    Kaptanoglu, Tanner; SNO+ at UC Berkeley Collaboration

    2013-10-01

    My research is focused on building a deployable source for PMT calibrations in the SNO+ detector. I work for the SNO+ group at UC Berkeley headed by Gabriel Orebi Gann. SNO+ is an addition to the SNO project, and its main goal is to search for neutrinoless double beta decay. The detector will be monitored by over 9500 photomultiplier tubes (PMTs). In order to characterize the PMTs, several calibration sources are being constructed. One of these, the Cherenkov Source, will provide a well-understood source of non-isotropic light for calibrating the detector response. My goal is to design and construct multiple aspects of the Cherenkov Source. However, several questions arose in its design. How do we keep the scintillation light inside the Cherenkov source so it does not contaminate calibration? How do we properly build the Cherenkov source: a hollow acrylic sphere with a neck? Can we maintain a clean source throughout these processes? These are some of the problems I have been working on, and will continue to work on, until the deployment of the source. Additionally, I have worked to accurately simulate the physics inside the source, mainly the energy deposition of alphas.

  9. TOD to TTP calibration

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

    The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test with which to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected by military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.

  10. An Example Multi-Model Analysis: Calibration and Ranking

    NASA Astrophysics Data System (ADS)

    Ahlmann, M.; James, S. C.; Lowry, T. S.

    2007-12-01

    Modeling solute transport is a complex process governed by multiple site-specific parameters like porosity and hydraulic conductivity as well as many solute-dependent processes such as diffusion and reaction. Furthermore, it must be determined whether a steady or time-variant model is most appropriate. A problem arises because over-parameterized conceptual models may be easily calibrated to exactly reproduce measured data, even if these data contain measurement noise. During preliminary site investigation stages where available data may be scarce it is often advisable to develop multiple independent conceptual models, but the question immediately arises: which model is best? This work outlines a method for quickly calibrating and ranking multiple models using the parameter estimation code PEST in conjunction with the second-order-bias-corrected Akaike Information Criterion (AICc). The method is demonstrated using the twelve analytical solutions to the one- dimensional convective-dispersive-reactive solute transport equation as the multiple conceptual models (van Genuchten M. Th. and W. J. Alves, 1982. Analytical solutions of the one-dimensional convective- dispersive solute transport equation, USDA ARS Technical Bulletin Number 1661. U.S. Salinity Laboratory, 4500 Glenwood Drive, Riverside, CA 92501.). Each solution is calibrated to three data sets, each comprising an increasing number of calibration points that represent increased knowledge of the modeled site (calibration points are selected from one of the analytical solutions that provides the "correct" model). The AICc is calculated after each successive calibration to the three data sets yielding model weights that are functions of the sum of the squared, weighted residuals, the number of parameters, and the number of observations (calibration data points) and ultimately indicates which model has the highest likelihood of being correct. The results illustrate how the sparser data sets can be modeled
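
    For reference, the ranking step can be sketched as computing, for each calibrated model, the second-order corrected AIC from its weighted sum of squared residuals, parameter count, and number of observations, and then converting the AICc differences into Akaike weights. The residual and parameter values below are invented, and the exact AICc variant used in the study may differ in detail:

    ```python
    import numpy as np

    def aicc(sse, n, k):
        """Second-order corrected AIC for a least-squares model with k parameters, n observations."""
        aic = n * np.log(sse / n) + 2 * k
        return aic + 2 * k * (k + 1) / (n - k - 1)

    # Hypothetical calibration results: (weighted SSE, number of parameters) per candidate model
    n_obs = 20
    models = {"steady, 2 params": (4.1, 2),
              "transient, 4 params": (3.2, 4),
              "transient+reaction, 6 params": (3.0, 6)}

    scores = {name: aicc(sse, n_obs, k) for name, (sse, k) in models.items()}
    best = min(scores.values())
    weights = {name: np.exp(-0.5 * (s - best)) for name, s in scores.items()}
    total = sum(weights.values())
    for name in models:
        print(f"{name}: AICc = {scores[name]:.2f}, weight = {weights[name] / total:.2f}")
    ```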

  11. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology is reviewed, including a general description of the primary calibration techniques and some of the factors that affect the performance of calibrated SAR systems. The use of reference reflectors for measurement of the total system transfer function, along with an on-board calibration signal generator for monitoring the temporal variations from the receiver to the processor output, is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  12. Radiometer Calibration and Characterization

    Energy Science and Technology Software Center (ESTSC)

    1994-12-31

    The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating solar radiometers using techniques that reduce measurement uncertainty and better characterize a radiometer’s response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instrument’s responsivity.

  13. Calibration methods for rotating shadowband irradiometers and evaluation of calibration duration

    NASA Astrophysics Data System (ADS)

    Jessen, W.; Wilbert, S.; Nouri, B.; Geuder, N.; Fritz, H.

    2015-10-01

    Resource assessment for Concentrated Solar Power (CSP) needs accurate Direct Normal Irradiance (DNI) measurements. An option for such measurement campaigns is the use of thoroughly calibrated Rotating Shadowband Irradiometers (RSIs). Calibration of RSIs, and of Si-sensors in general, is complex because of the inhomogeneous spectral response of such sensors and incorporates the use of several correction functions. A calibration for a given atmospheric condition and air mass might not work well for a different condition. This paper covers procedures and requirements for two methods for the calibration of Rotating Shadowband Irradiometers. The necessary duration of acquisition of test measurements is examined with regard to the site-specific conditions at Plataforma Solar de Almeria (PSA) in Spain. Data sets from several long-term calibration periods at PSA are used to evaluate how results from calibrations of varying duration deviate from the long-term result. The findings show that seasonal changes of environmental conditions cause small but noticeable fluctuations in calibration results. Certain periods (i.e. November to January and April to May) show a higher likelihood of particularly adverse calibration results. These effects can be partially compensated by including more measurements from outside these periods. Consequently, the duration of calibrations at PSA can now be selected depending on the time of year in which measurements commence.
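
    A minimal sketch of this kind of evaluation is given below, assuming DNI time series from the RSI and a reference instrument are available in a pandas DataFrame with a DatetimeIndex; the column names and the simple ratio-based calibration factor are illustrative assumptions, not the calibration procedure used at PSA.

    import pandas as pd

    def calibration_factor(df):
        """Toy calibration factor: mean ratio of reference DNI to RSI DNI."""
        return (df["dni_reference"] / df["dni_rsi"]).mean()

    def deviation_vs_duration(df, durations_days, start):
        """Deviation (%) of short-duration calibrations from the long-term factor."""
        long_term = calibration_factor(df)
        rows = []
        for d in durations_days:
            window = df.loc[start : start + pd.Timedelta(days=d)]
            rows.append({"duration_days": d,
                         "deviation_pct": 100 * (calibration_factor(window) / long_term - 1)})
        return pd.DataFrame(rows)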

  14. The Science of Calibration

    NASA Astrophysics Data System (ADS)

    Kent, S. M.

    2016-05-01

    This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.

  15. LWIR polarimeter calibration

    NASA Astrophysics Data System (ADS)

    Blumer, Robert V.; Miller, Miranda A.; Howe, James D.; Stevens, Mark A.

    2002-01-01

    Previously reported efforts to calibrate a MWIR imaging polarimeter met with moderate success. Recent efforts to calibrate a LWIR sensor using a different technique have been much more fruitful. For our sensor, which is based on a rotating retarder, we have improved system calibration substantially by including nonuniformity correction at all measurement positions of the retarder in our polarization data analysis. This technique can account for effects, such as spurious optical reflections within the camera system, that had been masquerading as false polarization in our previous data analysis methodology. Our techniques will be described and our calibration results will be quantified. Data from field testing will be presented.
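
    The idea of folding a per-position non-uniformity correction into the polarimetric reduction can be sketched as follows; the array shapes, retarder positions, and analysis matrix are illustrative assumptions rather than the calibration actually used for the sensor described above.

    import numpy as np

    def correct_frame(raw, gain, offset):
        """Per-pixel two-point NUC for one retarder position."""
        return gain * raw + offset

    def reduce_to_stokes(frames, analysis_matrix):
        """Least-squares Stokes estimate per pixel from N corrected frames."""
        m = np.asarray(analysis_matrix)            # shape (n_positions, 4)
        pinv = np.linalg.pinv(m)                   # shape (4, n_positions)
        return np.tensordot(pinv, frames, axes=1)  # shape (4, height, width)

    # frames[k] is acquired at retarder position k and corrected with the gain and
    # offset maps measured at that same position before the Stokes reduction:
    # corrected = np.stack([correct_frame(f, gains[k], offsets[k])
    #                       for k, f in enumerate(raw_frames)])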

  16. Energy calibration via correlation

    NASA Astrophysics Data System (ADS)

    Maier, Daniel; Limousin, Olivier

    2016-03-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and determines the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV, systematic errors were measured to be less than ~0.1 keV. Energy calibration via correlation can be applied to any kind of calibration spectrum and shows robust behavior at low counting statistics. It enables a fast and accurate calibration that can be used to monitor the spectroscopic properties of a detector system in near real time.
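
    A minimal sketch of the correlation approach is shown below, assuming a linear channel-to-energy relation and Gaussian synthetic peaks at the 241Am line energies; the resolution, the grid of trial parameters, and the function names are assumptions for illustration only.

    import numpy as np

    LINES_KEV = np.array([13.9, 17.8, 26.3, 59.5])   # prominent 241Am emissions

    def synthetic_spectrum(channels, gain, offset, sigma_channels=3.0):
        """Synthetic pulse-height spectrum: Gaussian peaks placed where the known
        line energies fall for this trial (gain, offset), with E = gain*ch + offset."""
        peak_channels = (LINES_KEV - offset) / gain
        spec = np.zeros_like(channels, dtype=float)
        for c in peak_channels:
            spec += np.exp(-0.5 * ((channels - c) / sigma_channels) ** 2)
        return spec

    def calibrate_by_correlation(measured, gains, offsets):
        """Return the (gain, offset) whose synthetic spectrum correlates best
        with the measured pulse-height spectrum."""
        channels = np.arange(measured.size)
        best = max((np.corrcoef(measured, synthetic_spectrum(channels, g, o))[0, 1], g, o)
                   for g in gains for o in offsets)
        return best[1], best[2]

    # Usage (illustrative grids):
    # gain, offset = calibrate_by_correlation(counts,
    #                                         gains=np.linspace(0.02, 0.03, 101),
    #                                         offsets=np.linspace(-1.0, 1.0, 21))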

  17. The COS Calibration Pipeline

    NASA Astrophysics Data System (ADS)

    Hodge, Philip E.; Keyes, C.; Kaiser, M.

    2007-12-01

    The COS calibration pipeline (CALCOS) includes three main components: basic calibration, wavelength calibration, and spectral extraction. Calibration of modes using the far ultraviolet (FUV) and near ultraviolet (NUV) detectors shares a common structure, although the individual reference files differ and there are some additional steps for the FUV channel. The pipeline is designed to calibrate data acquired in either ACCUM or time-tag mode. The basic calibration includes pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. Wavelength calibration can be done either by using separate lamp exposures or by taking several short lamp exposures concurrently with a science exposure. For time-tag data, the latter mode ("tagflash") will allow better correction of potential drift of the spectrum on the detector. One-dimensional spectra will be extracted and saved in a FITS binary table. Separate columns will be used for the flux-calibrated spectrum, the error estimate, and the associated wavelengths. CALCOS is written in Python, with some functions in C. It is similar in style to other HST pipeline code in that it uses an association table to specify which files are to be included, while the calibration steps to be performed and the reference files to use are specified by header keywords. Currently, in conjunction with the Instrument Definition Team (led by J. Green), the ground-based reference files are being refined, delivered, and tested with the pipeline.
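
    The keyword-driven control of calibration steps can be illustrated schematically as below. This is not the CALCOS interface; the switch names and values are assumptions used only to show how header keywords might select which steps run.

    from astropy.io import fits

    # Illustrative switch names mapped to human-readable step descriptions.
    STEPS = {
        "DEADCORR": "deadtime correction",
        "FLATCORR": "flat-field correction",
        "DOPPCORR": "Doppler correction",
        "WAVECORR": "wavelength calibration",
        "X1DCORR":  "1-D spectral extraction",
    }

    def planned_steps(raw_file):
        """List the calibration steps whose header switches are set to PERFORM."""
        header = fits.getheader(raw_file, 0)
        return [name for key, name in STEPS.items()
                if str(header.get(key, "OMIT")).upper() == "PERFORM"]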

  18. Laser interferometer calibration station

    NASA Astrophysics Data System (ADS)

    Campolmi, R. W.; Krupski, S. J.

    1981-10-01

    The laser interferometer is a versatile tool, used for calibration over both long and short distances. It is considered traceable to the National Bureau of Standards. The system developed under this project was to be capable of providing for the calibration of many types of small linear measurement devices. The logistics of the original concept of one location for calibration of all mics, calipers, etc. at a large manufacturing facility proved unworkable. The equipment was instead used for the calibration of the large machines used to manufacture cannon tubes.

  19. Series: The research agenda for general practice/family medicine and primary health care in Europe. Part 4. Results: specific problem solving skills.

    PubMed

    Hummers-Pradier, Eva; Beyer, Martin; Chevallier, Patrick; Eilat-Tsanani, Sophia; Lionis, Christos; Peremans, Lieve; Petek, Davorina; Rurik, Imre; Soler, Jean Karl; Stoffers, Henri Ejh; Topsever, Pinar; Ungan, Mehmet; van Royen, Paul

    2010-09-01

    The 'Research Agenda for General Practice/Family Medicine and Primary Health Care in Europe' summarizes the evidence relating to the core competencies and characteristics of the Wonca Europe definition of GP/FM, and its implications for general practitioners/family doctors, researchers and policy makers. The European Journal of General Practice publishes a series of articles based on this document. The previous articles presented background, objectives, and methodology, as well as results on 'primary care management' and 'community orientation' and the person-related core competencies of GP/FM. This article reflects on the general practitioner's 'specific problem solving skills'. These include decision making on diagnosis and therapy of specific diseases, accounting for the properties of primary care, but also research questions related to quality management and resource use, shared decision making, or professional education and development. Clinical research covers most specific diseases, but often lacks pragmatism and primary care relevance. Quality management is a stronghold of GP/FM research. Educational interventions can be effective when well designed for a specific setting and situation. However, their message that 'usual care' by general practitioners is insufficient may be problematic. GPs and their patients need more research into diagnostic reasoning with a step-wise approach to increase predictive values in a setting characterized by uncertainty and low prevalence of specific diseases. Pragmatic comparative effectiveness studies of new and established drugs or non-pharmaceutical therapy are needed. Multi-morbidity and complexity should be addressed. Studies on therapy, communication strategies and educational interventions should consider impact on health and sustainability of effects. PMID:20825274

  20. Calibration facility safety plan

    NASA Technical Reports Server (NTRS)

    Fastie, W. G.

    1971-01-01

    A set of requirements is presented to insure the highest practical standard of safety for the Apollo 17 Calibration Facility in terms of identifying all critical or catastrophic type hazard areas. Plans for either counteracting or eliminating these areas are presented. All functional operations in calibrating the ultraviolet spectrometer and the testing of its components are described.

  1. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  2. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  3. Radiometric Calibration of Osmi Imagery Using Solar Calibration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Han; Kim, Yong-Seung

    2000-12-01

    OSMI (Ocean Scanning Multi-Spectral Imager) raw image data (Level 0) were acquired and radiometrically corrected. We applied two methods to the radiometric correction of the OSMI raw image data: using the solar and dark calibration data from the OSMI sensor, and comparing with SeaWiFS data. First, we derived the gain and offset values for each pixel and each band by comparing the solar and dark calibration data with the solar input radiance values, calculated from the transmittance, the BRDF (Bidirectional Reflectance Distribution Function), and the solar incidence angle of the OSMI sensor. Applying these calibration data to the OSMI raw image data, we obtained two anomalous results: radiometrically corrected image values lower than expected, and a Venetian-blind effect in the corrected imagery. Second, we obtained reasonable results from comparing the OSMI raw image data with the SeaWiFS data, and identified a further problem with the OSMI sensor.
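
    The first method amounts to a per-pixel, per-band two-point calibration. The sketch below is a simplified illustration under that reading; the array names and shapes are assumptions, not the OSMI processing code.

    # Two-point calibration: the dark frame fixes the offset, and the solar frame
    # divided by the predicted solar input radiance fixes the gain, per pixel and band.
    def gain_offset(solar_counts, dark_counts, solar_radiance):
        gain = solar_radiance / (solar_counts - dark_counts)   # radiance per count
        offset = -gain * dark_counts
        return gain, offset

    def calibrate(raw_counts, gain, offset):
        """Convert raw Level-0 counts to radiance, pixel by pixel and band by band."""
        return gain * raw_counts + offset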

  4. Sandia WIPP calibration traceability

    SciTech Connect

    Schuhen, M.D.; Dean, T.A.

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  5. Definition of energy-calibrated spectra for national reachback

    SciTech Connect

    Kunz, Christopher L.; Hertz, Kristin L.

    2014-01-01

    Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
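
    A simple coverage check in the spirit of the validation described above might look like the following; the thresholds and function name are assumptions for illustration, not the operating procedure defined in the paper.

    import numpy as np

    def calibration_coverage_ok(peak_energies_kev, e_min_kev, e_max_kev,
                                min_peaks=3, max_gap_fraction=0.5):
        """True if the calibration lines are numerous enough, span the usable energy
        range, and leave no gap larger than max_gap_fraction of the full range."""
        e = np.sort(np.asarray(peak_energies_kev, dtype=float))
        if e.size < min_peaks:
            return False
        span = e_max_kev - e_min_kev
        edges = np.concatenate(([e_min_kev], e, [e_max_kev]))
        return bool(np.max(np.diff(edges)) <= max_gap_fraction * span)

    # A single 137Cs line (661.7 keV) fails this check over a 30-3000 keV range:
    print(calibration_coverage_ok([661.7], 30, 3000))   # False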

  6. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  7. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  8. Calibration methods for rotating shadowband irradiometers and optimizing the calibration duration

    NASA Astrophysics Data System (ADS)

    Jessen, Wilko; Wilbert, Stefan; Nouri, Bijan; Geuder, Norbert; Fritz, Holger

    2016-04-01

    Resource assessment for concentrated solar power (CSP) needs accurate direct normal irradiance (DNI) measurements. An option for such measurement campaigns is the use of thoroughly calibrated rotating shadowband irradiometers (RSIs). Calibration of RSIs and Si-sensors is complex because of the inhomogeneous spectral response of these sensors and incorporates the use of several correction functions. One calibration for a given atmospheric condition and air mass might not be suitable under different conditions. This paper covers procedures and requirements of two calibration methods for the calibration of rotating shadowband irradiometers. The necessary duration of acquisition of test measurements is examined with regard to the site-specific conditions at Plataforma Solar de Almería (PSA) in Spain. Seven data sets of long-term test measurements were collected. For each data set, calibration results of varying durations were compared to its respective long-term result. Our findings show that seasonal changes of environmental conditions are causing small but noticeable fluctuation of calibration results. Calibration results within certain periods (i.e. November to January and April to May) show a higher likelihood of deviation. These effects can partially be attenuated by including more measurements from outside these periods. Consequently, the duration of calibrations at PSA can now be selected depending on the time of year in which measurements commence.

  9. Gemini facility calibration unit

    NASA Astrophysics Data System (ADS)

    Ramsay-Howat, Suzanne K.; Harris, John W.; Gostick, David C.; Laidlaw, Ken; Kidd, Norrie; Strachan, Mel; Wilson, Ken

    2000-08-01

    High-quality, efficient calibration instruments are a prerequisite for the modern observatory. Each of the Gemini telescopes will be equipped with identical facility calibration units (GCALs) designed to provide wavelength and flat-field calibrations for the suite of instruments. The broad range of instrumentation planned for the telescopes heavily constrains the design of GCAL. Short calibration exposures are required over wavelengths from 0.3 micrometers to 5 micrometers, field sizes up to 7 arcminutes, and spectral resolutions from R = 5 to 50,000. The output from GCAL must mimic the f/16 beam of the telescope and provide uniform illumination of the focal plane. The calibration units are mounted on the Gemini Instrument Support Structure, two meters from the focal plane, necessitating the use of large optical components. We will discuss the opto-mechanical design of the Gemini calibration unit, with reference to those features which allow these stringent requirements to be met. A novel reflector/diffuser unit replaces the integrating sphere more normally found in calibration systems. The efficiency of this system is an order of magnitude greater than for an integrating sphere. A system of two off-axis mirrors reproduces the telescope pupil and provides the 7 arcminute field at the focal plane. The results of laboratory tests of the uniformity and throughput of GCAL will be presented.

  10. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background Recently, digital photography in medicine is considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 Different images of real wounds were acquired and a region of interest was selected in each image. 3 Rotated versions of each image were automatically calibrated and colour differences were calculated. Results 1st Experiment: Colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values respectively 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images has a median of 3.43 dE* for all calibrated images, 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE_ab! Wilcoxon sum-rank testing (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares were equal to 0 demonstrating a highly