Sample records for correction model demanda

  1. Zarit Burden Interview Psychometric Indicators Applied in Older People Caregivers of Other Elderly.

    PubMed

    Bianchi, Mariana; Flesch, Leticia Decimo; Alves, Erika Valeska da Costa; Batistoni, Samila Sathler Taveres; Neri, Anita Liberalesso

    2016-11-28

    To derive psychometric indicators of construct validity and internal consistency of the Zarit Burden Interview scale for caregivers, describing associations of the scale with measures of care demands, coping strategies and depression in aged caregivers. Cross-sectional, descriptive and correlational study. The convenience sample comprised one hundred and twenty-one older caregivers (mean = 70.5 ± 7.2 years; 73% women). They answered a questionnaire assessing the physical and cognitive demands of care, the Zarit Burden Interview (ZBI), the California Inventory of Coping Strategies and the Geriatric Depression Scale (GDS-15). The ZBI showed good internal consistency, both overall and for the three factors emerging from factor analysis, which explained 44% of the variability. The ZBI correlated positively with objective care demands (p < 0.001), depression (p = 0.006) and use of dysfunctional coping strategies (p = 0.0007). The ZBI is well suited for application to aged caregivers, and the association of higher degrees of burden, dysfunctional coping and depression indicates a particular scenario of vulnerability to which older people caring for other older people may be exposed.

  2. Technology Education Teacher Supply and Demand--A Critical Situation

    ERIC Educational Resources Information Center

    Moye, Johnny J.

    2009-01-01

    Technology education is an excellent format to integrate science, technology, engineering, and mathematics (STEM) studies by employing problem-based learning activities. However, the benefits of technology education are still generally "misunderstood by the public." The effects of technology education on increased student mathematics…

  3. Descentralizacion de la educacion: Financiamiento basado en la demanda. Tendencias del Desarrollo. (Decentralization of Education: Demand-Side Financing. Directions in Development.)

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Ariasingam, David Lakshmanan

    Central government's supply-side expansions of schooling have not equally benefited all members of society, especially girls, indigenous peoples, tribal groups, disadvantaged minorities, and the poor. Public spending on education is often inefficient, higher education is subsidized at primary education's expense, and costs are becoming…

  4. The Future Subjunctive in Galician-Portuguese: A Review of "Cantigas de Santa Maria" and "A Demanda do Santo Graal"

    ERIC Educational Resources Information Center

    Schultheis, Maria Luiza Carrano

    2009-01-01

    The usage and disappearance of the Central Ibero-Romance future subjunctive have been extensively researched through Old Spanish texts. Studies on the future subjunctive as it evolved in the farther Western Ibero-Romance languages, represented by Galician and Portuguese, have been scarce, if not incomplete. This dissertation partially fills the…

  5. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    PubMed

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  6. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume a flat FRW universe filled with interacting dark matter and dark energy. For the dark energy model, we consider the entropy-corrected holographic dark energy (ECHDE) and the entropy-corrected new agegraphic dark energy (ECNADE), assuming both logarithmic and power-law corrections to the entropy. For the ECHDE model, the length scale L is taken to be the Hubble horizon and the future event horizon. The ωde-ω′de analysis for the different horizons is discussed.
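
    For reference, a commonly used form of the logarithmic entropy-corrected holographic energy density in this strand of the literature is sketched below; the notation (L the IR cutoff, M_p the reduced Planck mass, c, α, β model constants) is assumed here, not quoted from the abstract. The power-law corrected variant instead adds a term proportional to a power of L.

    ```latex
    % Logarithmic entropy-corrected HDE energy density (sketch; symbols assumed):
    \rho_{\Lambda} = 3c^{2}M_{p}^{2}L^{-2}
                   + \alpha L^{-4}\ln\!\left(M_{p}^{2}L^{2}\right)
                   + \beta L^{-4}
    ```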

  7. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    The ZY-3 satellite, launched in 2012, is China's first civilian high-resolution stereo mapping satellite. This paper analyzes the positioning errors of ZY-3 satellite imagery and compensates for them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results reveal that systematic errors exist in the ZY-3 attitude observations and that positioning accuracy can be improved after attitude correction with the aid of ground control points. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvement than the linear correction model when only limited ground control points were available for a single scene.
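
    As a rough illustration of the difference between the two attitude-angle models compared above, the sketch below fits a constant offset and a linear drift to hypothetical attitude residuals at ground control points; all names and values are illustrative, not the paper's data.

    ```python
    import numpy as np

    # Hypothetical pitch-angle residuals observed at ground control points
    # across one scene (placeholder values, not ZY-3 measurements).
    t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])        # imaging time in scene [s]
    residual = np.array([1.2e-5, 1.3e-5, 1.1e-5, 1.4e-5, 1.2e-5])  # [rad]

    offset = residual.mean()             # offset model: constant attitude bias
    a, b = np.polyfit(t, residual, 1)    # linear model: drift rate plus bias

    corrected_offset = residual - offset           # offset-corrected residuals
    corrected_linear = residual - (a * t + b)      # linear-corrected residuals
    ```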

  8. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
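
    A minimal sketch of the idea behind a time-space variable catch correction: a wind- and temperature-dependent multiplicative factor applied per gauge per day. The functional form and coefficients below are placeholders for illustration, not the calibrated values used for the Danish gauge network.

    ```python
    import numpy as np

    def catch_correction_factor(wind_speed, temperature):
        """Illustrative dynamic correction factor for gauge undercatch.

        Solid precipitation (cold days) is corrected more strongly, and the
        correction grows with wind speed; coefficients are placeholders."""
        wind_speed = np.asarray(wind_speed, dtype=float)
        temperature = np.asarray(temperature, dtype=float)
        solid = temperature < 0.0                       # crude rain/snow split
        return np.where(solid,
                        1.05 + 0.10 * wind_speed,       # snow: strong wind effect
                        1.02 + 0.01 * wind_speed)       # rain: weak wind effect

    # TSV-style correction: one factor per gauge per day.
    precip_obs = np.array([2.0, 5.5, 0.8])    # measured precipitation [mm/day]
    wind = np.array([3.0, 6.0, 1.5])          # daily mean wind speed [m/s]
    temp = np.array([-2.0, 1.0, -5.0])        # daily mean temperature [deg C]
    precip_corrected = precip_obs * catch_correction_factor(wind, temp)
    ```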

  9. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction (i.e., within the model) scheme to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
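
    A minimal sketch of the online-correction idea, assuming increments are stored as analysis minus 6-hr forecast and using a toy relaxation model in place of the GFS; all array shapes and values are illustrative.

    ```python
    import numpy as np

    # Hypothetical stack of analysis increments (analysis minus 6-hr forecast),
    # shape (n_cycles, nlat, nlon); random placeholders rather than GFS output.
    rng = np.random.default_rng(0)
    increments = rng.normal(0.1, 1.0, size=(400, 10, 20))
    dt_hours = 6.0

    # Time-mean increment divided by the 6-hr window gives an estimated bias
    # tendency, assuming initial error growth is linear (per the abstract).
    bias_tendency = increments.mean(axis=0) / dt_hours   # state units per hour

    def step_with_online_correction(state, model_tendency, dt):
        """One forward-Euler step with the empirical bias tendency added as a
        forcing term; since increments are analysis minus forecast, adding the
        term nudges the model toward the analyses (a sketch, not GFS code)."""
        return state + dt * (model_tendency(state) + bias_tendency)

    # Trivial stand-in "model": relaxation of the state toward zero.
    state = rng.normal(size=(10, 20))
    state = step_with_online_correction(state, lambda s: -0.01 * s, dt_hours)
    ```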

  10. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountainous terrain induces retrieval distortion within identical land-cover types and in surface spectral curves. To improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM imagery, whose 30-meter spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as QuickBird and IKONOS, but there is little related research on the topographic correction of CBERS-02B imagery. In this study, mountainous terrain in Liaoning was taken as the study area. The original 15-meter digital elevation model data were interpolated step by step to 2.36 meters to match the image resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. Scatter diagrams between the image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and a separation factor were calculated. The analysis shows that shadows are weakened in the corrected images relative to the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced, with the Minnaert correction method giving the most effective result. These results demonstrate that the established correction methods can be successfully adapted to CBERS-02B images, and that DEM data can be interpolated step by step to approximately reach the required spatial resolution when high spatial resolution elevation data are hard to obtain.
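
    For concreteness, here is a sketch of two of the methods named above (the C correction and one common Minnaert variant), assuming per-pixel illumination cos(i) has already been derived from the interpolated DEM; array names and data are illustrative.

    ```python
    import numpy as np

    def c_correction(dn, cos_i, cos_sz):
        """C correction: regress DN on cos(i); the ratio c = intercept/slope
        moderates the cosine correction and limits overcorrection in shadow."""
        slope, intercept = np.polyfit(cos_i.ravel(), dn.ravel(), 1)
        c = intercept / slope
        return dn * (cos_sz + c) / (cos_i + c)

    def minnaert_correction(dn, cos_i, cos_sz):
        """Minnaert correction (one common variant): estimate k from a
        log-log regression, then normalize illumination with exponent k."""
        valid = (dn > 0) & (cos_i > 0)          # shadowed pixels need care
        k, _ = np.polyfit(np.log(cos_i[valid]), np.log(dn[valid]), 1)
        out = dn.astype(float).copy()
        out[valid] = dn[valid] * (cos_sz / cos_i[valid]) ** k
        return out

    rng = np.random.default_rng(5)
    cos_i = rng.uniform(0.1, 1.0, (100, 100))   # local solar incidence cosine
    dn = 120.0 * cos_i + rng.normal(0, 5, cos_i.shape) + 20.0
    cos_sz = np.cos(np.deg2rad(35.0))           # scene solar zenith cosine
    corrected_c = c_correction(dn, cos_i, cos_sz)
    corrected_m = minnaert_correction(dn, cos_i, cos_sz)
    ```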

  11. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

    We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
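
    A minimal sketch of a logistic mortality model of this kind, using synthetic stand-ins for the three ring-width indices rather than the study's Sierra Nevada data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical predictors for n trees: average growth, growth trend,
    # and count of abrupt growth declines (placeholder distributions).
    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([
        rng.gamma(2.0, 0.5, n),       # average growth (mm/yr)
        rng.normal(0.0, 0.05, n),     # growth trend (slope)
        rng.poisson(1.0, n),          # abrupt-decline count
    ])
    # Simulate status: lower growth and more declines raise mortality risk.
    logit = -1.0 - 2.0 * X[:, 0] - 10.0 * X[:, 1] + 0.8 * X[:, 2]
    dead = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    model = LogisticRegression().fit(X, dead)
    pred = model.predict(X)
    for label, name in ((1, "dead"), (0, "live")):
        rate = (pred[dead == label] == label).mean()
        print(f"{name} trees correctly classified: {rate:.1%}")
    ```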

  12. How does bias correction of RCM precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.

    2014-09-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the tested methods are small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome the limitations of the RCM in simulating precipitation sequences, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
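
    A minimal sketch of empirical quantile mapping, one of the reviewed bias correction approaches; the data are synthetic and the paper's implementation details may differ.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_fut):
        """Empirical quantile mapping: map each future modelled value through
        the historical model CDF onto the observed quantile function."""
        model_sorted = np.sort(model_hist)
        # Non-exceedance probability of each future value in the model climate.
        p = np.searchsorted(model_sorted, model_fut, side="right") \
            / len(model_sorted)
        p = np.clip(p, 0.0, 1.0)
        # Read the same probabilities off the observed distribution.
        return np.quantile(np.sort(obs_hist), p)

    rng = np.random.default_rng(2)
    model_hist = rng.gamma(1.5, 4.0, 5000)   # RCM daily precipitation, historic
    obs_hist = rng.gamma(2.0, 3.0, 5000)     # gauge observations, same period
    model_fut = rng.gamma(1.5, 4.4, 1000)    # RCM, future scenario
    corrected = quantile_map(model_hist, obs_hist, model_fut)
    ```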

  13. How does bias correction of regional climate model precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.

    2015-02-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequences, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.

  14. Roi-Orientated Sensor Correction Based on Virtual Steady Reimaging Model for Wide Swath High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.

    2017-09-01

    To meet the requirements of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between the panchromatic and multispectral images is achieved well and that image distortion caused by satellite jitter is also corrected efficiently.

  15. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

    Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) the correction errors were more than one pixel, sometimes several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; and the correction errors were less than one pixel when the collinearity equation was used. (2) With 6, 9, 25 and 35 GCPs selected randomly for geometric correction using the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best image contrast and the fastest resampling, but poor continuity of pixel gray values; cubic convolution gave the worst contrast and the longest computation time. Overall, bilinear resampling produced the best result.

  16. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu, E-mail: iyun@yonsei.ac.kr

    2014-11-07

    A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is built from the concepts of the threshold voltage shift (ΔV_TH^QM) and gate capacitance (C_g) degradation. First, the ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in a previously reported classical model, and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model is applicable to compact modeling of DG MOSFETs.

  17. Analysis of different models for atmospheric correction of meteosat infrared images. A new approach

    NASA Astrophysics Data System (ADS)

    Pérez, A. M.; Illera, P.; Casanova, J. L.

    A comparative study of several atmospheric correction models has been carried out. As primary data, atmospheric profiles of temperature and humidity obtained from radiosoundings on cloud-free days were used. Special attention has been paid to the model used operationally at the European Space Operations Centre (ESOC) for sea temperature calculations. The atmospheric correction results are expressed in terms of the increase in brightness temperature and in surface temperature. Differences of up to 1.4 degrees between the corrections obtained by the studied models have been observed. The radiances calculated by the models are also compared with those obtained directly from the satellite; the temperature corrections derived from the latter are greater than the former in practically every case. Consequently, the operational calibration coefficients should first be recalculated if an atmospheric correction model is to be applied to the satellite data. Finally, a new simplified calculation scheme which may be introduced into any model is proposed.

  18. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
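
    A toy sketch in the spirit of this maximum-entropy update: exponentially tilting a prior two-state transition matrix until a trajectory (edge) observable matches a target rate. This is the generic max-ent construction for a single constraint, not the authors' exact equations.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_correct(P, w, target):
        """Reweight a row-stochastic prior P so the stationary expectation of
        the edge observable w matches `target`. Tilting P'_ij ∝ P_ij *
        exp(lam * w_ij) is the max-ent form for one trajectory constraint."""
        def stationary(M):
            vals, vecs = np.linalg.eig(M.T)
            pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
            return pi / pi.sum()

        def expectation(lam):
            M = P * np.exp(lam * w)
            M /= M.sum(axis=1, keepdims=True)
            pi = stationary(M)
            return np.sum(pi[:, None] * M * w)     # per-step mean of w

        lam = brentq(lambda l: expectation(l) - target, -50.0, 50.0)
        M = P * np.exp(lam * w)
        return M / M.sum(axis=1, keepdims=True)

    # Two-state toy model: slow the 0 -> 1 transition relative to the prior.
    P = np.array([[0.9, 0.1], [0.2, 0.8]])
    w = np.array([[0.0, 1.0], [0.0, 0.0]])    # indicator of the 0 -> 1 jump
    P_corrected = maxent_correct(P, w, target=0.02)
    ```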

  19. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error caused by the optical window of a furnace in digital image correlation (DIC) deformation measurement in a high-temperature environment. First, a theoretical correction model with a corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenitic stainless steel specimen is performed to verify the effectiveness of the correction model through correlation calculations under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results affected by refraction are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of the DIC method by reducing the refraction errors in measured full-field displacements in a high-temperature environment.

  20. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of slope surfaces from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, accurate sensor calibration parameters and atmospheric conditions, which the physics-based topographic correction models require, are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not need accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data. We tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that, after correction, the correlation factor was reduced by almost 85% for the near-infrared bands, the overall classification accuracy increased by 14% for HJ, and the reflectance difference between slopes facing toward and away from the sun was reduced.

  1. Towards process-informed bias correction of climate change simulations

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.

    2017-11-01

    Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.

  2. Correction of ultrasonic wave aberration with a time delay and amplitude filter.

    PubMed

    Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn

    2003-04-01

    Two-dimensional simulations with propagation through two different heterogeneous human body wall models have been performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. The different models each produce most of the characteristic aberration effects such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed. Center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied, a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the arrival time fluctuations estimate, i.e., the time delay correction filter.

  3. Regional geoid computation by least squares modified Hotine's formula with additive corrections

    NASA Astrophysics Data System (ADS)

    Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.

    2018-03-01

    Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for the usage with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.
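
    Schematically, the additive scheme described above can be written as below, with symbols assumed: Ñ is the approximate geoid height from the modified Hotine or Stokes formula, and the four additive terms are the combined (direct plus indirect) downward-continuation, topographic, atmospheric and ellipsoidal corrections.

    ```latex
    % Additive-correction geoid estimate (sketch; notation assumed):
    \hat{N} = \tilde{N}
            + \delta N_{\mathrm{DWC}}
            + \delta N_{\mathrm{topo}}
            + \delta N_{\mathrm{atm}}
            + \delta N_{\mathrm{ell}}
    ```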

  4. Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Wollschläger, U.

    2014-01-01

    When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.

  5. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  6. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    PubMed

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ13C that were up to 2.8‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ13C values were best approximated by simply adding 5.74‰ to bulk δ13C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
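
    A sketch of fitting a simple linear C:N-based lipid-correction model of the general kind compared in such studies, using placeholder paired bulk and lipid-extracted measurements; the paper also evaluates nonlinear variants.

    ```python
    import numpy as np

    def fit_cn_lipid_model(d13c_bulk, d13c_extracted, cn_bulk):
        """Fit the illustrative linear form
            d13C_corrected = d13C_bulk + b0 + b1 * C:N
        to paired bulk / lipid-extracted measurements."""
        delta = d13c_extracted - d13c_bulk       # lipid effect per sample
        b1, b0 = np.polyfit(cn_bulk, delta, 1)
        return b0, b1

    # Placeholder paired measurements for dorsal muscle (not the study's data).
    d13c_bulk = np.array([-19.5, -18.9, -20.1, -19.0])
    d13c_ext = np.array([-17.6, -17.2, -18.0, -17.3])
    cn_bulk = np.array([3.4, 3.1, 3.9, 3.2])

    b0, b1 = fit_cn_lipid_model(d13c_bulk, d13c_ext, cn_bulk)
    corrected = d13c_bulk + b0 + b1 * cn_bulk    # apply to new bulk values
    ```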

  7. Staff Movements

    NASA Astrophysics Data System (ADS)

    Huber, M. C. E.

    1986-09-01

    ESO astronomers devote a considerable part of their time to preparing applications for observing time at La Silla. However, because of the high demand for the telescopes, a selection, at times drastic, must be made among the observing programmes submitted. The Observing Programmes Committee (OPC) has the task of evaluating the scientific merit of the submitted applications. Based on the recommendations of the OPC, ESO prepares an Observing Time Schedule in which the available telescope time is allocated to the best-rated programmes.

  8. Impact of a statistical bias correction on the projected simulated hydrological changes obtained from three GCMs and two hydrology models

    NASA Astrophysics Data System (ADS)

    Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio

    2010-05-01

    Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.

  9. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    PubMed

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which requires the length of the tubing between the patient and the detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces acquired using 1.5 m tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions based on experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed.
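
    A minimal sketch assuming a single-exponential transmission-dispersion kernel, under which the correction reduces to adding τ times the derivative of the measured curve; the data are synthetic and the paper's model and parameters may differ.

    ```python
    import numpy as np

    def dispersion_correct(t, measured, tau):
        """Correct a sampled input function for tubing dispersion, assuming
        the kernel (1/tau) * exp(-t/tau). Under that model the true curve is
            g_true(t) = g_meas(t) + tau * dg_meas/dt.
        tau grows with tubing length; smoothing first is advisable, since
        differentiation amplifies noise (a practical limit for long lines)."""
        return measured + tau * np.gradient(measured, t)

    # Illustrative gamma-variate-like blood curve sampled at 1 s intervals.
    t = np.arange(0.0, 180.0, 1.0)
    true = t ** 2 * np.exp(-t / 12.0)
    tau = 10.0                          # dispersion time constant [s], assumed
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    measured = np.convolve(true, kernel)[: t.size]   # dispersed measurement
    recovered = dispersion_correct(t, measured, tau)
    ```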

  10. Solar array model corrections from Mars Pathfinder lander data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewell, R.C.; Burger, D.R.

    1997-12-31

    The MESUR solar array power model initially assumed values for input variables. After landing, early surface variables such as array tilt and azimuth, or early environmental variables such as array temperature, can be corrected. Correction of later environmental variables such as tau versus time, spectral shift, dust deposition, and UV darkening depends on time, on-board science instruments, and the ability to separate the effects of variables. Engineering estimates had to be made for additional shadow losses and Voc sensor temperature corrections. Some variations had not been expected, such as tau versus time of day and spectral shift versus time of day. Additions needed to the model are the thermal mass of the lander petal and a correction between the Voc sensor and the temperature sensor. Conclusions are: the model works well; good battery predictions are difficult; inclusion of Isc and Voc sensors was valuable; and the IMP and MAE science experiments greatly assisted the data analysis and model correction.

  11. The Detection and Correction of Bias in Student Ratings of Instruction.

    ERIC Educational Resources Information Center

    Haladyna, Thomas; Hess, Robert K.

    1994-01-01

    A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)

  12. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannon and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
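
    For reference, the five criteria in their standard textbook forms (stated from standard definitions, not from the article), each to be minimized over candidate models given the maximized log-likelihood, parameter count k, and sample size n.

    ```python
    import numpy as np

    def information_criteria(log_lik, k, n):
        """Standard forms of the five criteria compared in the study."""
        aic = -2.0 * log_lik + 2.0 * k
        aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)    # small-sample correction
        caic = -2.0 * log_lik + k * (np.log(n) + 1.0)   # consistent AIC
        hqic = -2.0 * log_lik + 2.0 * k * np.log(np.log(n))
        bic = -2.0 * log_lik + k * np.log(n)
        return {"AIC": aic, "AICC": aicc, "CAIC": caic,
                "HQIC": hqic, "BIC": bic}

    # Example: choose the candidate with the smallest criterion value.
    candidates = {"M1": (-512.3, 6), "M2": (-508.9, 9)}  # (log-likelihood, k)
    n = 400
    for name, (ll, k) in candidates.items():
        print(name, information_criteria(ll, k, n))
    ```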

  13. Correcting for static shift of magnetotelluric data with airborne electromagnetic measurements: a case study from Rathlin Basin, Northern Ireland

    NASA Astrophysics Data System (ADS)

    Delhaye, Robert; Rath, Volker; Jones, Alan G.; Muller, Mark R.; Reay, Derek

    2017-05-01

    Galvanic distortions of magnetotelluric (MT) data, such as the static-shift effect, are a known problem that can lead to incorrect estimation of resistivities and erroneous modelling of geometries, with resulting misinterpretation of subsurface electrical resistivity structure. A wide variety of approaches have been proposed to account for these galvanic distortions, some depending on the target area, with varying degrees of success. The natural laboratory for our study is a hydraulically permeable volume of conductive sediment at depth, the internal resistivity structure of which can be used to estimate reservoir viability for geothermal purposes; however, static-shift correction is required in order to ensure robust and precise modelling accuracy. We present here a possible method to employ frequency-domain electromagnetic data in order to correct static-shift effects, illustrated by a case study from Northern Ireland. In our survey area, airborne frequency-domain electromagnetic (FDEM) data are regionally available with high spatial density. The spatial distributions of the derived static-shift corrections are analysed and applied to the uncorrected MT data prior to inversion. Two comparative inversion models are derived, one with and one without static-shift corrections, with instructive results. As expected from the one-dimensional analogy of static-shift correction, at shallow model depths, where the structure is controlled by a single local MT site, the correction of static-shift effects leads to vertical scaling of resistivity-thickness products in the model, with the corrected model showing improved correlation to existing borehole wireline resistivity data. In turn, as these vertical scalings are effectively independent of adjacent sites, lateral resistivity distributions are also affected, with up to half a decade of resistivity variation between the models estimated at depths down to 2000 m. Simple estimation of differences in bulk porosity, derived using Archie's law, between the two models reinforces our conclusion that the sub-order-of-magnitude resistivity contrasts induced by the correction of static shifts correspond to similar contrasts in estimated porosities; hence, for purposes of reservoir investigation or similar cases requiring accurate absolute resistivity estimates, galvanic distortion correction, especially static-shift correction, is essential.
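
    A minimal sketch of one simple way to estimate a frequency-independent static-shift factor from a reference curve such as one derived from FDEM data; the values are placeholders and the paper's actual procedure may differ.

    ```python
    import numpy as np

    def static_shift_factor(rho_mt, rho_ref):
        """Estimate a frequency-independent static-shift factor as the median
        log-space ratio between the distorted MT apparent-resistivity curve
        and a reference curve over the overlapping band. Because galvanic
        static shift multiplies apparent resistivity by a constant, the
        factor is best estimated in log space."""
        s = np.median(np.log10(rho_ref) - np.log10(rho_mt))
        return 10.0 ** s

    # Placeholder overlapping-band curves in ohm-m (not the Rathlin Basin data).
    rho_mt = np.array([32.0, 30.5, 28.0, 26.2])     # distorted MT curve
    rho_ref = np.array([100.0, 96.0, 90.0, 82.0])   # FDEM-derived reference
    rho_corrected = rho_mt * static_shift_factor(rho_mt, rho_ref)
    ```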

  14. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
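
    For reference, Halley's iteration named above, demonstrated on a toy equation rather than the actual whole-space logging response; its cubic convergence is what makes it attractive for inverting a smooth forward model.

    ```python
    import numpy as np

    def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
        """Halley's iteration:
            x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
        x = x0
        for _ in range(max_iter):
            fx, dfx, d2fx = f(x), df(x), d2f(x)
            step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Toy example: solve exp(-x) = x/5, loosely reminiscent of matching a
    # measured voltage decay to a resistivity-dependent model response.
    root = halley(lambda x: np.exp(-x) - x / 5.0,
                  lambda x: -np.exp(-x) - 0.2,
                  lambda x: np.exp(-x),
                  x0=1.0)
    ```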

  15. Regional ionospheric model for improvement of navigation position with EGNOS

    NASA Astrophysics Data System (ADS)

    Swiatek, Anna; Tomasik, Lukasz; Jaworski, Leszek

    The problem of the insufficient accuracy of the EGNOS correction for the territory of Poland, located at the edge of the EGNOS coverage, is well known. The EEI PECS project (EGNOS EUPOS Integration) aimed to improve the EGNOS correction by using GPS observations from Polish ASG-EUPOS stations. An ionospheric delay parameter is part of the EGNOS correction. A comparative analysis of TEC values obtained from EGNOS and from regional permanent GNSS stations showed a systematic shift: the TEC from the EGNOS correction is underestimated relative to the computed regional TEC value. New, improved corrections computed from the regional model were substituted for the EGNOS corrections in the corresponding message. Dynamic measurements carried out using the Mobile GPS Laboratory (MGL) showed an improvement of the navigation position with the regional TEC model.

  16. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach relies on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one perturbation to the deformable mirror (one correction per perturbation); the perturbation is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N perturbations of the deformable mirror for one aberration correction (one correction per N perturbations).

  17. Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Chegwidden, O.

    2017-12-01

    Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.

  18. Ionospheric propagation correction modeling for satellite altimeters

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.

    1981-01-01

    The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and on a technique in which the measured electron content (from Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, the RMS errors for correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.

  19. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  20. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few apply them in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum, exploiting its smoothness. In a background correction simulation, the spline interpolation method achieved the largest signal-to-background ratio (SBR) among the tested methods, and all of the methods improved on the uncorrected spectrum (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still yields a large SBR value, whereas polynomial fitting and the model-free method yield low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those obtained before background correction (the linear correlation coefficient before background correction is 0.9776; after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods it is 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method shows better linearity and smaller errors in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
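
    A sketch of the general spline-baseline idea: fit a smoothing spline through local minima of a synthetic spectrum (where emission lines are unlikely) and subtract it. The anchor-point selection here is an assumption for illustration, not the paper's exact algorithm.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.signal import argrelmin

    def spline_background(wavelengths, intensity, smooth=None):
        """Estimate the continuous background by fitting a smoothing spline
        through local minima of the spectrum, then subtract it."""
        idx = argrelmin(intensity, order=15)[0]       # candidate baseline points
        idx = np.concatenate(([0], idx, [intensity.size - 1]))
        spl = UnivariateSpline(wavelengths[idx], intensity[idx], s=smooth)
        background = spl(wavelengths)
        return intensity - background, background

    # Synthetic spectrum: smooth continuum plus Gaussian "lines" plus noise.
    rng = np.random.default_rng(3)
    wl = np.linspace(200.0, 800.0, 2000)
    continuum = 50.0 + 0.05 * (wl - 200.0)
    lines = sum(a * np.exp(-0.5 * ((wl - c) / 0.8) ** 2)
                for a, c in [(300, 324.7), (200, 327.4), (150, 510.5)])
    spectrum = continuum + lines + rng.normal(0.0, 2.0, wl.size)
    corrected, bg = spline_background(wl, spectrum)
    ```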

  1. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the estimated atmospheric corrections obtained from the network data are random and, furthermore, the interpolated corrections diverge from the actual corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one comes from the random errors of the estimated corrections at the reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of the interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of the interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
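
    A minimal sketch of the core idea above, treating an interpolated correction as a pseudo-observation with its own variance instead of a fixed quantity, is given below. The two-component variance mirrors the split described in the abstract; the scalar one-parameter model and all numbers are illustrative assumptions.

      # Minimal sketch: fuse a direct observation of one delay parameter with an
      # interpolated network correction used as a pseudo-observation. Its variance
      # combines the reference-station estimation error and the interpolation
      # discrepancy, echoing the paper's two-component split; numbers are invented.
      import numpy as np

      def fuse(obs, var_obs, pseudo, var_ref, var_interp):
          """Weighted least-squares estimate of a scalar delay parameter."""
          var_pseudo = var_ref + var_interp          # total pseudo-obs variance
          w = np.array([1.0 / var_obs, 1.0 / var_pseudo])
          x = np.sum(w * np.array([obs, pseudo])) / np.sum(w)   # BLUE for a scalar
          return x, 1.0 / np.sum(w)

      # Ionospheric delay seen by the user (m), and the network-interpolated value.
      x, var = fuse(obs=2.43, var_obs=0.05**2, pseudo=2.40,
                    var_ref=0.01**2, var_interp=0.03**2)
      print(f"fused delay {x:.3f} m, std {var**0.5:.3f} m")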

  3. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
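
    The inverse-standard-deviation weighting mentioned above can be illustrated with a toy computation; the station names and correction values below are invented.

      # Toy sketch of down-weighting arrivals from stations whose estimated
      # corrections are inconsistent, one of the weighting schemes the entry
      # says the authors explore.
      import numpy as np

      # Per-station correction estimates from a multiple-event inversion (s).
      corrections = {
          "STA1": [0.21, 0.19, 0.22, 0.20],   # consistent -> large weight
          "STA2": [0.05, 0.30, -0.12, 0.41],  # scattered  -> small weight
      }

      for sta, vals in corrections.items():
          sigma = np.std(vals, ddof=1)        # correction standard deviation
          weight = 1.0 / sigma                # inverse-sigma weighting of arrivals
          print(f"{sta}: correction {np.mean(vals):+.3f} s, "
                f"sigma {sigma:.3f} s, relative weight {weight:.1f}")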

  4. A corrected model for static and dynamic electromechanical instability of narrow nanotweezers: Incorporation of size effect, surface layer and finite dimensions

    NASA Astrophysics Data System (ADS)

    Koochi, Ali; Hosseini-Toudeshky, Hossein; Abadyan, Mohamadreza

    2018-03-01

    Herein, a corrected theoretical model is proposed for the static and dynamic behavior of electrostatically actuated narrow-width nanotweezers, incorporating corrections for finite dimensions, size dependency and surface energy. The Gurtin-Murdoch surface elasticity in conjunction with the modified couple stress theory is employed to consider the coupled effects of surface stresses and the size phenomenon. In addition, the model accounts for external force corrections by incorporating the impact of narrow width on the distribution of the Casimir attraction, the van der Waals (vdW) force and the fringing field effect. The proposed model is beneficial for the precise modeling of narrow nanotweezers at the nanoscale.

  5. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space
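
    The snippet above names multiplicative corrections of a low-fidelity model; a generic sketch of that technique follows (the specific functions are stand-ins, not the report's models, which the fragment does not give).

      # Generic multiplicative-correction surrogate: sample the ratio
      # beta(x) = f_hi(x)/f_lo(x) at a few expensive evaluations, fit a cheap
      # surrogate for beta, and use beta_hat(x) * f_lo(x) as the corrected model.
      import numpy as np

      f_hi = lambda x: np.sin(x) + 0.1 * x**2        # expensive "truth" (stand-in)
      f_lo = lambda x: np.sin(x)                      # cheap low-fidelity model

      xs = np.linspace(0.5, 2.5, 5)                   # high-fidelity sample points
      beta = f_hi(xs) / f_lo(xs)                      # correction ratio samples
      coef = np.polyfit(xs, beta, deg=2)              # low-order ratio surrogate

      corrected = lambda x: np.polyval(coef, x) * f_lo(x)
      x = 1.8
      print(f"low-fi {f_lo(x):+.3f}  corrected {corrected(x):+.3f}  true {f_hi(x):+.3f}")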

  6. Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Atanasov, Dimitar V.

    2012-01-01

    Many models of cognitive diagnosis, including the "least squares distance model" (LSDM), work under the "conjunctive" assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a "disjunctive" version of the LSDM under which a correct item response occurs when "at least one" of the latent attributes required by the item is correctly performed.
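
    The contrast between the two assumptions can be made concrete with independent attribute-mastery probabilities; the sketch below is a generic illustration, not the LSDM's distance-based formulation.

      # Conjunctive vs disjunctive response rules in cognitive diagnosis.
      # p[k] is the probability attribute k is performed correctly; q[k] marks
      # the attributes the item requires (generic sketch, invented numbers).
      import numpy as np

      p = np.array([0.9, 0.7, 0.8])     # attribute mastery probabilities
      q = np.array([1, 1, 0])           # item requires attributes 1 and 2

      req = p[q == 1]
      p_conjunctive = np.prod(req)            # all required attributes performed
      p_disjunctive = 1 - np.prod(1 - req)    # at least one required attribute

      print(f"conjunctive {p_conjunctive:.3f}, disjunctive {p_disjunctive:.3f}")
      # conjunctive 0.630, disjunctive 0.970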

  7. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, the shape deformation features of the disparity distribution are discussed and a method of disparity distortion correction is proposed, with a polynomial fitting method applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: an initial vision model and a residual compensation model. The initial vision model is derived from analysis of the direct mapping relationship between object and image points, and the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Estimating Uncertainties of Ship Course and Speed in Early Navigations using ICOADS3.0

    NASA Astrophysics Data System (ADS)

    Chan, D.; Huybers, P. J.

    2017-12-01

    Information on ship position and its uncertainty is potentially important for mapping out climatologies and changes in SSTs. Using the 2-hourly ship reports from the International Comprehensive Ocean Atmosphere Dataset 3.0 (ICOADS 3.0), we estimate the uncertainties of ship course, ship speed, and latitude/longitude corrections during 1870-1900. After reviewing the techniques used in early navigation, we build a forward navigation model that uses the dead reckoning technique, celestial latitude corrections, and chronometer longitude corrections. The modeled ship tracks exhibit jumps in longitude and latitude when a position correction is applied. These jumps are also seen in ICOADS3.0 observations. In this model, the position error at the end of each day grows following a 2D random walk; the latitude/longitude errors are reset when a latitude/longitude correction is applied. We fit the variance of the magnitude of latitude/longitude corrections in the observations against model outputs, and estimate that the standard deviation of uncertainty is 5.5 degrees for ship course, 32% for ship speed, 22 km for latitude corrections, and 27 km for longitude corrections. These estimates are informative priors for Bayesian methods that quantify the position errors of individual tracks.
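
    A minimal forward simulation of that error model is sketched below: daily dead-reckoning drift as a 2D random walk, reset by latitude and longitude fixes. The step size and fix schedule are invented; only the reset standard deviations echo the estimates quoted above.

      # Minimal sketch of the entry's error model: daily position error grows as
      # a 2D random walk and is reset when a celestial latitude fix or a
      # chronometer longitude fix is applied. Step sizes are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      days, step_km = 30, 15.0          # daily random-walk increment (assumed)
      err = np.zeros((days, 2))         # (lat, lon) position error in km

      for d in range(1, days):
          err[d] = err[d - 1] + rng.normal(0.0, step_km, 2)  # dead-reckoning drift
          if d % 3 == 0:                # latitude fix roughly every few clear days
              err[d, 0] = rng.normal(0.0, 22.0)   # residual ~22 km (paper's estimate)
          if d % 7 == 0:                # rarer chronometer longitude fix
              err[d, 1] = rng.normal(0.0, 27.0)   # residual ~27 km (paper's estimate)

      print("RMS error (lat, lon) km:", np.sqrt((err**2).mean(axis=0)).round(1))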

  9. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    Although it is an integral part of risk-based quality improvement efforts, the literature on improving the selection of corrective action priorities using the FMEA technique is still limited, and what exists does not consider robustness and risk in selecting among competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions that considers their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates, alongside the cost and benefit of the competing corrective actions. An example is provided to demonstrate the applicability of the proposed model.
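
    The abstract does not give the scoring formula; the sketch below is one hedged reading of a robust-design preference score, using a Taguchi-style larger-the-better signal-to-noise ratio as the robustness term and dividing by cost. All names and numbers are invented.

      # Hedged sketch: rank corrective actions by a score that rewards benefit
      # and robustness (stable benefit across risk scenarios) and penalizes
      # cost. The S/N ratio is Taguchi's larger-the-better form; this is an
      # illustration, not the paper's exact model.
      import numpy as np

      actions = {  # benefit under several risk scenarios, and implementation cost
          "A": {"benefit": [8.0, 7.5, 8.2], "cost": 3.0},
          "B": {"benefit": [9.5, 4.0, 9.0], "cost": 3.0},  # higher mean, less robust
      }

      for name, a in actions.items():
          b = np.array(a["benefit"])
          sn = -10 * np.log10(np.mean(1.0 / b**2))   # larger-the-better S/N (dB)
          score = sn / a["cost"]                     # preference per unit cost
          print(f"{name}: mean benefit {b.mean():.2f}, S/N {sn:.2f} dB, "
                f"score {score:.2f}")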

  10. Compressibility Considerations for kappa-omega Turbulence Models in Hypersonic Boundary Layer Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.

    2009-01-01

    The ability of kappa-omega models to predict compressible turbulent skin friction in hypersonic boundary layers is investigated. Although uncorrected two-equation models can agree well with correlations for hot-wall cases, they tend to perform progressively worse - particularly for cold walls - as the Mach number is increased in the hypersonic regime. Simple algebraic models such as Baldwin-Lomax perform better compared to experiments and correlations in these circumstances. Many of the compressibility corrections described in the literature are summarized here. These include corrections that have only a small influence for kappa-omega models, or that apply only in specific circumstances. The most widely-used general corrections were designed for use with jet or mixing-layer free shear flows. A less well-known dilatation-dissipation correction intended for boundary layer flows is also tested, and is shown to agree reasonably well with the Baldwin-Lomax model at cold-wall conditions. It exhibits a less dramatic influence than the free shear type of correction. There is clearly a need for improved understanding and better overall physical modeling for turbulence models applied to hypersonic boundary layer flows.
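
    For concreteness, one member of the dilatation-dissipation family mentioned above multiplies the k-equation dissipation by a function of the turbulent Mach number. The sketch below uses the commonly quoted Wilcox-form constants as an assumption; it is one variant among those the paper surveys, not its specific boundary-layer correction.

      # Hedged sketch of a Sarkar/Wilcox-family dilatation-dissipation
      # correction for k-omega models: the dissipation of k is augmented by a
      # function of the turbulent Mach number. Constants (xi* = 1.5, Mt0 = 0.25)
      # follow the commonly quoted Wilcox form and are assumptions here.
      import numpy as np

      def dilatation_dissipation_factor(k, a, xi_star=1.5, mt0=0.25):
          """Return the factor multiplying beta* in the k-equation dissipation."""
          mt = np.sqrt(2.0 * k) / a                  # turbulent Mach number
          f = np.maximum(mt**2 - mt0**2, 0.0)        # F(Mt) with Heaviside cutoff
          return 1.0 + xi_star * f

      # Example point: k = 20000 m^2/s^2, sound speed a = 500 m/s.
      print(dilatation_dissipation_factor(k=20000.0, a=500.0))  # ~1.15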

  11. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to be more realistic when compared with observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input to impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping) as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields with observation-driven wheat yield hindcasts from the UK and Germany in order to determine the ranges of yield uncertainty that result from different climate model simulation inputs and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are able to isolate the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations on preparing climate model output for crop models.
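
    Of the bias correction methods listed above, quantile-quantile mapping is the most involved; a minimal empirical version on synthetic data looks like this (calibration-period model and observation samples define the mapping):

      # Minimal empirical quantile-quantile mapping sketch, one of the bias
      # correction methods compared in the study: a model value is mapped to the
      # observed value with the same cumulative probability in the calibration
      # period. Data are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      obs_cal = rng.gamma(2.0, 2.0, 5000)          # observed daily precipitation
      mod_cal = rng.gamma(2.0, 2.5, 5000) + 0.5    # biased model, calibration period
      mod_fut = rng.gamma(2.0, 2.5, 1000) + 0.5    # model values to be corrected

      def quantile_map(x, mod_ref, obs_ref):
          """Map x through the model CDF, then the inverse observed CDF."""
          q = np.searchsorted(np.sort(mod_ref), x) / len(mod_ref)  # model CDF
          q = np.clip(q, 0.0, 1.0)
          return np.quantile(obs_ref, q)                           # observed inverse CDF

      corrected = quantile_map(mod_fut, mod_cal, obs_cal)
      print("means: obs", obs_cal.mean().round(2), "raw", mod_fut.mean().round(2),
            "corrected", corrected.mean().round(2))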

  12. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.

  13. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction.

    PubMed

    Wennberg, Berit M; Baumann, Pia; Gagliardi, Giovanna; Nyman, Jan; Drugge, Ninni; Hoyer, Morten; Traberg, Anders; Nilsson, Kristina; Morhed, Elisabeth; Ekberg, Lars; Wittgren, Lena; Lund, Jo-Åsmund; Levin, Nina; Sederholm, Christer; Lewensohn, Rolf; Lax, Ingmar

    2011-05-01

    In SBRT of lung tumours, no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters were: α/β = 3 Gy, D(0) = 1.0 Gy, n = 10, α = 0.206 Gy(-1) and d(T) = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This may give insight into the question of whether "high doses to small volumes" or "low doses to large volumes" are most important for lung toxicity. NTCP analysis with the LKB model using parameters m = 0.4 and D(50) = 30 Gy yielded a volume dependence parameter (n) of 0.87 with LQ correction and 0.71 with USC correction. Using parameters m = 0.3 and D(50) = 20 Gy, n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed for more reliable modelling.
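
    A compact sketch of the LKB calculation described above follows: LQ fractionation correction of DVH dose bins, reduction to a generalized EUD with volume parameter n, then a probit mapping with D50 and m. The toy DVH is invented; the parameter values echo the abstract.

      # Sketch of the Lyman-Kutcher-Burman NTCP calculation with LQ
      # fractionation correction (alpha/beta = 3 Gy). The DVH numbers are
      # invented; parameter values echo the abstract.
      import numpy as np
      from math import erf, sqrt

      def eqd2(d_total, n_fx, ab=3.0):
          """LQ fractionation correction to 2-Gy-fraction equivalent dose."""
          d = d_total / n_fx                       # dose per fraction
          return d_total * (d + ab) / (2.0 + ab)

      def lkb_ntcp(doses, volumes, n=0.87, m=0.4, d50=30.0):
          """LKB NTCP from a differential DVH (dose bins, volume fractions);
          gEUD = (sum v_i * D_i^(1/n))^n, then a probit with D50 and m."""
          geud = np.sum(volumes * doses ** (1.0 / n)) ** n
          t = (geud - d50) / (m * d50)
          return 0.5 * (1.0 + erf(t / sqrt(2.0)))

      # Toy lung DVH for a 3 x 15 Gy SBRT plan (volume fractions sum to 1).
      phys = np.array([45.0, 20.0, 5.0, 1.0])      # total physical dose (Gy)
      vol = np.array([0.05, 0.10, 0.25, 0.60])
      corr = eqd2(phys, n_fx=3)                    # LQ-corrected dose bins
      print(f"NTCP = {lkb_ntcp(corr, vol):.3f}")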

  14. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.

    2012-04-01

    Despite considerable progress in recent years, the output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for subsequent application in climate change impact studies, has become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable prejudgement by end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is an increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating openly the entire uncertainty range associated with climate change predictions and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and the end users of climate change impact studies.

  15. Retinal image contrast obtained by a model eye with combined correction of chromatic and spherical aberrations

    PubMed Central

    Ohnuma, Kazuhiko; Kayanuma, Hiroyuki; Lawu, Tjundewo; Negishi, Kazuno; Yamaguchi, Takefumi; Noda, Toru

    2011-01-01

    Correcting spherical and chromatic aberrations in vitro in human eyes provides substantial improvements in visual acuity and contrast sensitivity. We found the same improvement in retinal images using a model eye with and without correction of longitudinal chromatic aberrations (LCAs) and spherical aberrations (SAs). The model eye included an intraocular lens (IOL) and an artificial cornea with human ocular LCAs and average human SAs. The optotypes were illuminated using a D65 light source, and the images were obtained using a two-dimensional luminance colorimeter. The contrast improvement from the SA correction was greater than that from the LCA correction, indicating the benefit of an aspheric achromatic IOL. PMID:21698008

  16. Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Amaya, M.; Flach, R.

    2018-01-01

    The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.

  17. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precise GNSS positioning depends primarily on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of the ambiguities necessitate a proper definition of the mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of the corrections, especially during atmospheric disturbances, results in unaccounted biases, the so-called residual errors. Such errors can be taken into account in the Network RTK positioning model by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the results of the instantaneous solution, which utilizes only a single epoch of GPS observations, are analyzed. Due to its low number of degrees of freedom, this solution mode is very sensitive to an inappropriate mathematical model definition, so a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountainous area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.

  18. Solution to the spectral filter problem of residual terrain modelling (RTM)

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon

    2018-06-01

    In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ˜ 44 mGal (0.5 mGal RMS) for the HF correction and ˜ 33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ˜ 26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (˜ 6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF correction are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g. in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain and geophysical interpretations of RTM-reduced gravity measurements.

  19. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.

  20. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    PubMed

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

    The Moon may be considered the frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining the rock and mineral compositions of the lunar surface, but the lunar topographic relief is more pronounced than that of the Earth, so it is necessary to apply a topographic correction to lunar spectral data before they are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that considers the radiance effects of both macro and ambient topographic relief, and a reflectance correction model is derived from the Sandmeier model. The Spectral Profiler (SP) data from the KAGUYA satellite over the Sinus Iridum quadrangle are taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter are used to calculate the slope, aspect, incidence and emergence angles, and the terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was thus corrected by the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, high spectral reflectance on sun-facing slopes is decreased and low spectral reflectance on slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel numbers presents a Gaussian distribution. Therefore, the model is robust for correcting the lunar topographic effect and estimating lunar surface reflectance.
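
    The geometric core of such a correction, the local solar incidence angle computed from slope, aspect and sun position followed by a cosine rescaling, can be sketched as below. The full Sandmeier model additionally separates direct and diffuse irradiance and applies the terrain-viewing factor, which this sketch omits; all numbers are illustrative.

      # Minimal geometric sketch of topographic correction: local solar
      # incidence angle on a tilted facet and a simple cosine correction of
      # reflectance (the Sandmeier model in the paper is more complete).
      import numpy as np

      def cos_incidence(slope, aspect, sun_zenith, sun_azimuth):
          """cos(i) on a tilted facet (all angles in radians)."""
          return (np.cos(slope) * np.cos(sun_zenith) +
                  np.sin(slope) * np.sin(sun_zenith) * np.cos(sun_azimuth - aspect))

      def cosine_correct(refl, slope, aspect, sun_zenith, sun_azimuth):
          """Rescale reflectance from the tilted facet to a horizontal surface."""
          ci = cos_incidence(slope, aspect, sun_zenith, sun_azimuth)
          return refl * np.cos(sun_zenith) / np.clip(ci, 0.05, None)  # avoid /0

      # A sun-facing 10-degree slope appears brighter; correction reduces it.
      r = cosine_correct(refl=0.12, slope=np.radians(10), aspect=np.radians(180),
                         sun_zenith=np.radians(60), sun_azimuth=np.radians(180))
      print(round(float(r), 4))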

  1. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
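
    A hedged sketch of the idea of an edge-specific model component follows: overlay is fit as a regular linear field plus a radial term that is active only near the wafer edge, so sparse edge metrology suffices to estimate one extra coefficient. The basis, threshold radius and numbers are illustrative; the production model in the paper is not public.

      # Hedged sketch: regular linear overlay model extended with an edge term
      # that switches on beyond a threshold radius, fit by least squares.
      import numpy as np

      rng = np.random.default_rng(2)
      r_wafer, r_edge = 150.0, 120.0                    # mm
      x = rng.uniform(-r_wafer, r_wafer, 200)
      y = rng.uniform(-r_wafer, r_wafer, 200)
      m = x**2 + y**2 <= r_wafer**2                     # keep points on the wafer
      x, y = x[m], y[m]
      r = np.hypot(x, y)

      # Synthetic measured overlay: linear field + edge roll-off + noise.
      dx = 2e-3 * x + 1e-3 * y + 0.05 * np.maximum(r - r_edge, 0) + \
           rng.normal(0, 0.05, x.size)

      # Design matrix [1, x, y, edge term]; the edge column is zero in the bulk.
      A = np.column_stack([np.ones_like(x), x, y, np.maximum(r - r_edge, 0)])
      coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
      print("fitted edge coefficient:", round(coef[3], 3))   # ~0.05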

  2. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 97: Yucca Flat/Climax Mine Nevada National Security Site, Nevada, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farnham, Irene

    This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The UGTA strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach that comprises the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.

  3. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling opens the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
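
    The convolution model itself is easy to illustrate: observed intensity is gamma signal plus normal noise, with the noise parameters estimated from negative-control features. The moment-based recovery below only illustrates the model's structure; the paper's NormalGamma package fits the full model and derives the actual background correction.

      # Illustrative sketch of the normal-gamma convolution model: X = S + B
      # with gamma signal S and normal noise B; noise parameters come from
      # negative controls, gamma parameters by method of moments.
      import numpy as np

      rng = np.random.default_rng(3)
      shape, scale, mu, sigma = 2.0, 150.0, 100.0, 20.0
      signal = rng.gamma(shape, scale, 50000)
      observed = signal + rng.normal(mu, sigma, signal.size)
      controls = rng.normal(mu, sigma, 5000)          # negative-control features

      mu_hat, var_noise = controls.mean(), controls.var()
      mean_s = observed.mean() - mu_hat               # E[S] = E[X] - E[B]
      var_s = observed.var() - var_noise              # Var[S] = Var[X] - Var[B]
      shape_hat = mean_s**2 / var_s                   # gamma method-of-moments
      scale_hat = var_s / mean_s
      print(f"shape ~ {shape_hat:.2f} (true 2.0), scale ~ {scale_hat:.1f} (true 150)")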

  4. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
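
    The practical upshot of the two-temperature model, calibrating the voltage sensitivity at two temperatures and interpolating the slope in between rather than assuming a fixed slope, can be sketched as follows. The slope values and the linear interpolation are illustrative assumptions; the paper derives the temperature dependence from thermojunction theory.

      # Sketch of the two-temperature calibration idea for a thermocouple
      # psychrometer: interpolate the calibration slope (uV per MPa) between
      # two calibration temperatures. Numbers are invented.
      def slope(T, T1, s1, T2, s2):
          """Linearly interpolated calibration slope at temperature T (deg C)."""
          return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

      def water_potential(voltage_uV, T):
          """Convert a psychrometer reading to water potential (MPa)."""
          s = slope(T, T1=15.0, s1=0.47, T2=35.0, s2=0.62)  # uV per MPa (assumed)
          return -voltage_uV / s                            # more negative = drier

      print(round(water_potential(voltage_uV=1.1, T=25.0), 2))  # about -2 MPa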

  6. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  7. The Swiss cheese model of safety incidents: are there holes in the metaphor?

    PubMed Central

    Perneger, Thomas V

    2005-01-01

    Background Reason's Swiss cheese model has become the dominant paradigm for analysing medical errors and patient safety incidents. The aim of this study was to determine whether the components of the model are understood in the same way by quality and safety professionals. Methods Survey of a volunteer sample of persons who claimed familiarity with the model, recruited at a conference on quality in health care and on the internet through quality-related websites. The questionnaire proposed several interpretations of components of the Swiss cheese model: a) slice of cheese, b) hole, c) arrow, d) active error, e) how to make the system safer. Eleven interpretations were compatible with this author's interpretation of the model, 12 were not. Results Eighty five respondents stated that they were very or quite familiar with the model. They gave on average 15.3 (SD 2.3, range 10 to 21) "correct" answers out of 23 (66.5%) – significantly more than the 11.5 "correct" answers that would be expected by chance (p < 0.001). Respondents gave on average 2.4 "correct" answers regarding the slice of cheese (out of 4), 2.7 "correct" answers about holes (out of 5), 2.8 "correct" answers about the arrow (out of 4), 3.3 "correct" answers about the active error (out of 5), and 4.1 "correct" answers about improving safety (out of 5). Conclusion The interpretations of specific features of the Swiss cheese model varied considerably among quality and safety professionals. Reaching consensus about concepts of patient safety requires further work. PMID:16280077

  8. Assessments of higher-order ionospheric effects on GPS coordinate time series: A case study of CMONOC with longer time series

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang

    2014-05-01

    Higher-order ionospheric (HIO) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate in depth the impacts of HIO corrections on coordinate time series by reprocessing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HIO corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; (c) run ID, the same as IG but with a dipole magnetic model. Both spectral analysis and noise analysis are adopted to investigate these effects. The results show that for CMONOC stations, HIO corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing the more remarkable variation. Low-latitude sites are more strongly affected, and the impacts vary across coordinate components. The results of an analysis of stacked periodograms show a good match between the seasonal amplitudes and the HIO corrections, and the observed variations in the coordinate time series are related to HIO effects. HIO delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes of all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are even more strongly affected by the corrections. However, when the dipole model is used, the results are not as good as with the IGRF model: analysis of the dipole model indicates that HIO corrections computed with it increase the noise amplitudes and can generate false periodic signals, so that using the dipole model for the HIO terms introduces larger residuals and noise rather than effective improvements.

  9. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
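
    The bootstrap bias-correction relation described above, subtracting from the sample estimate the gap between the average of resample estimates and that sample estimate, is generic; the sketch below applies it to a deliberately biased variance estimator rather than to noncentrality parameters of covariance structure models.

      # Generic bootstrap bias-correction sketch: bias ~ mean of resample
      # estimates minus the sample estimate; corrected = estimate - bias.
      import numpy as np

      rng = np.random.default_rng(4)
      x = rng.normal(0.0, 2.0, 30)                 # small sample, true var = 4

      theta_hat = np.var(x)                        # biased (divides by n)
      boot = np.array([np.var(rng.choice(x, x.size, replace=True))
                       for _ in range(4000)])
      bias = boot.mean() - theta_hat               # bootstrap bias estimate
      theta_bc = theta_hat - bias                  # = 2*theta_hat - boot.mean()
      print(f"plug-in {theta_hat:.2f}, bias-corrected {theta_bc:.2f}")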

  10. Contribution of energy values to the analysis of global searching molecular dynamics simulations of transmembrane helical bundles.

    PubMed Central

    Torres, Jaume; Briggs, John A G; Arkin, Isaiah T

    2002-01-01

    Molecular interactions between transmembrane alpha-helices can be explored using global searching molecular dynamics simulations (GSMDS), a method that produces a group of probable low energy structures. We have shown previously that the correct model in various homooligomers is always located at the bottom of one of various possible energy basins. Unfortunately, the correct model is not necessarily the one with the lowest energy according to the computational protocol, which has resulted in overlooking of this parameter in favor of experimental data. In an attempt to use energetic considerations in the aforementioned analysis, we used global searching molecular dynamics simulations on three homooligomers of different sizes, the structures of which are known. As expected, our results show that even when the conformational space searched includes the correct structure, taking together simulations using both left and right handedness, the correct model does not necessarily have the lowest energy. However, for the models derived from the simulation that uses the correct handedness, the lowest energy model is always at, or very close to, the correct orientation. We hypothesize that this should also be true when simulations are performed using homologous sequences, and consequently lowest energy models with the right handedness should produce a cluster around a certain orientation. In contrast, using the wrong handedness the lowest energy structures for each sequence should appear at many different orientations. The rationale behind this is that, although more than one energy basin may exist, basins that do not contain the correct model will shift or disappear because they will be destabilized by at least one conservative (i.e. silent) mutation, whereas the basin containing the correct model will remain. This not only allows one to point to the possible handedness of the bundle, but can be used to overcome ambiguities arising from the use of homologous sequences in the analysis of global searching molecular dynamics simulations. In addition, because clustering of lowest energy models arising from homologous sequences only happens when the estimation of the helix tilt is correct, it may provide a validation for the helix tilt estimate. PMID:12023229

  11. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with respect to the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction at all angles is essential. The new super-global distortion correction consists of a model that continuously corrects II distortion not only at each location in the image but for every rotational angle of the C-arm. Calibration bead images were acquired with a standard C-arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of selected calibration images at a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the dependence of the SG model's accuracy on various factors, such as the single-plane global fitting order, the SG order, and the angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10° sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated and found comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to its lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
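
    The two-stage structure, fitting a distortion polynomial at each sampled angle and then fitting each polynomial coefficient as a polynomial in angle, can be sketched in one dimension; a cubic distortion stands in for the paper's fifth-order 2D correction and all data are synthetic.

      # 1D sketch of the super-global idea: per-angle inverse-distortion
      # polynomials, whose coefficients are then fit as polynomials in angle,
      # giving a correction usable at any rotation angle.
      import numpy as np

      angles = np.arange(0, 181, 10)                    # 10-degree sampling
      xs = np.linspace(-1.0, 1.0, 41)                   # normalized detector axis

      per_angle_coefs = []
      for a in angles:
          # Synthetic angle-dependent S-distortion observed on calibration beads.
          distorted = xs + (0.02 + 0.01 * np.sin(np.radians(a))) * xs**3
          per_angle_coefs.append(np.polyfit(distorted, xs, deg=3))  # inverse map
      per_angle_coefs = np.array(per_angle_coefs)       # (n_angles, 4)

      # Super-global stage: each coefficient as a polynomial in (normalized) angle.
      an = angles / 180.0                               # normalize for conditioning
      sg = [np.polyfit(an, per_angle_coefs[:, k], deg=6) for k in range(4)]

      def correct(x, angle):
          c = [np.polyval(p, angle / 180.0) for p in sg]  # coefficients at this angle
          return np.polyval(c, x)

      print(round(correct(0.8, angle=45.0), 4))         # observed 0.8 -> ~0.786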

  12. A model-free method for mass spectrometer response correction. [for oxygen consumption and cardiac output calculation

    NASA Technical Reports Server (NTRS)

    Shykoff, Barbara E.; Swanson, Harvey T.

    1987-01-01

    A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
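
    A sketch of the regression step is below: find two-sided FIR taps such that the measured step response, filtered, best reproduces an ideal step. The first-order response, window length and sample counts are invented.

      # Sketch of building a two-sided time-domain deconvolution filter from a
      # step response by regression: solve least squares for FIR taps h so that
      # the response convolved with h reproduces the ideal step.
      import numpy as np

      n, tau = 400, 8.0
      t = np.arange(n)
      step = (t >= 100).astype(float)                       # ideal step input
      resp = np.where(t >= 100, 1.0 - np.exp(-(t - 100) / tau), 0.0)  # slow system

      half = 15                                             # two-sided: +/-15 taps
      # Each row of A holds the response samples surrounding one output sample.
      pad = np.pad(resp, half)
      A = np.array([pad[i:i + 2 * half + 1] for i in range(n)])
      h, *_ = np.linalg.lstsq(A, step, rcond=None)          # regression filter taps

      sharpened = A @ h
      print("rms error raw %.3f -> corrected %.3f" %
            (np.sqrt(np.mean((resp - step)**2)),
             np.sqrt(np.mean((sharpened - step)**2))))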

  13. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter-shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all Head and Neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, giving a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
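
    The frame-level operation itself is simple to sketch: scale the MLC-shielded pixels of each frame by 1.3, then sum the frames. The frames, masks and 2% transmission value below are synthetic stand-ins.

      # Minimal sketch of the frame-by-frame correction described above: pixels
      # in the MLC-shielded region of each frame are scaled by 1.3, then the
      # corrected frames are summed into an integrated image.
      import numpy as np

      rng = np.random.default_rng(5)
      frames = rng.uniform(0.8, 1.0, (10, 64, 64))      # acquired EPID frames
      # Per-frame boolean mask of pixels under the MLC leaves (from leaf positions).
      masks = np.zeros_like(frames, dtype=bool)
      for i in range(10):
          masks[i, :, : 6 * i] = True                   # leaves sweep across field
          frames[i][masks[i]] *= 0.02                   # MLC transmission ~2%

      corrected = frames.copy()
      corrected[masks] *= 1.3                           # boost under-responding pixels
      image = corrected.sum(axis=0)                     # corrected integrated image
      print(image.shape, float(image.max().round(2)))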

  14. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed at the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states, such as those that occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the procedure for generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  15. Designing multifocal corneal models to correct presbyopia by laser ablation

    NASA Astrophysics Data System (ADS)

    Alarcón, Aixa; Anera, Rosario G.; Del Barco, Luis Jiménez; Jiménez, José R.

    2012-01-01

    Two multifocal corneal models and an aspheric model designed to correct presbyopia by corneal photoablation were evaluated. The design of each model was optimized to achieve the best possible visual quality for both near and distance vision. In addition, we evaluated the effect of miosis and pupil decentration on visual quality. The model with the central zone dedicated to near vision provided the best results, since it requires less ablated corneal surface area, permits higher addition values, presents more stable visual quality under pupil-size variations, and produces lower high-order aberrations.

  16. Problems and Limitations of Satellite Image Orientation for Determination of Height Models

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2017-05-01

    The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres are determined without problems today, but the attitudes impose an accuracy limit. Very high resolution satellites today are very agile, able to shift the imaged area by more than 200 km within 10 to 11 seconds. The correspondingly fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry would help, but usually this is not done. The first indication of jitter problems is given by systematic errors of the y-parallaxes (py) at the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye; some of them show clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of the computed height models may be caused by the limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs is usually not a limitation, but the identification of the GCPs in the images may be difficult. Two-dimensional bias-corrected RPC orientation by affinity transformation may cause tilts of the generated height models, but because of large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation. Instead of a two-dimensional RPC orientation, a three-dimensional orientation is also possible, which respects the object height more than the two-dimensional orientation does. The three-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of a poor GCP distribution it may also cause negative effects. For some of the satellites used, the bias correction by affinity transformation showed advantages, while for others the bias correction by shift led to a better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the data sets used, accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence is caused by tilts of the height models. Some height model undulations reach up to 50% of the ground sampling distance (GSD); this is not negligible, but it is hardly visible in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate possible negative effects of the type of bias correction or of the two-dimensional orientation compared with the three-dimensional handling.
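
    A minimal sketch of the bias-correction step discussed above, assuming the RPC projection of the GCPs into image space has already been computed: a six-parameter affinity (or, as the simpler alternative, a pure shift) is fit by least squares between RPC-projected and measured image coordinates. The function names are illustrative.

      import numpy as np

      def fit_affine_bias(xy_rpc, xy_measured):
          """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y from GCPs.

          xy_rpc      : (n, 2) image coordinates projected via the RPC
          xy_measured : (n, 2) image coordinates measured at the GCPs
          """
          n = xy_rpc.shape[0]
          A = np.column_stack([np.ones(n), xy_rpc])   # (n, 3) design matrix
          coeff_x, *_ = np.linalg.lstsq(A, xy_measured[:, 0], rcond=None)
          coeff_y, *_ = np.linalg.lstsq(A, xy_measured[:, 1], rcond=None)
          return coeff_x, coeff_y

      def fit_shift_bias(xy_rpc, xy_measured):
          # Pure shift: the mean offset; more robust with very few GCPs.
          return (xy_measured - xy_rpc).mean(axis=0)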

  17. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in the turbidity water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

    Suspended particulate material is the main factor affecting the remote-sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended-particle contribution to the spectrum above the turbid water surface. The linear baseline is defined as the line connecting the reflectance values at 450 and 750 nm, and the baseline correction subtracts this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, in April 2011 and March 2010 shows that the linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectra is 4.11 mg x m(-3), whereas that of the model built from the baseline-corrected spectra is 3.58 mg x m(-3). Meanwhile, the residual distribution and homoscedasticity of the model built from baseline-corrected spectra are improved markedly. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as a spectral preprocessing method to improve Chla inversion accuracy in turbid water without algal bloom.
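
    A minimal NumPy sketch of the baseline correction as defined above: the baseline is the straight line through the reflectance values at 450 and 750 nm, subtracted from the whole spectrum. The input arrays are illustrative.

      import numpy as np

      def baseline_correct(wavelengths, reflectance, lo=450.0, hi=750.0):
          """Subtract the line joining R(450 nm) and R(750 nm)."""
          r_lo = np.interp(lo, wavelengths, reflectance)
          r_hi = np.interp(hi, wavelengths, reflectance)
          slope = (r_hi - r_lo) / (hi - lo)
          baseline = r_lo + slope * (wavelengths - lo)
          return reflectance - baseline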

  18. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  19. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find that: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting that the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, or large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  20. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

    The analytical form of quantum corrections to the quasi-periodic solution of the sine-Gordon model and the periodic solution of the phi4 model is obtained through zeta-function regularisation, taking into account all remaining variables of a d-dimensional theory. The qualitative dependence of the quantum corrections on the parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b^2 f(bx) + C, with b and C arbitrary real constants.

  1. Assessment of a Bidirectional Reflectance Distribution Correction of Above-Water and Satellite Water-Leaving Radiance in Coastal Waters

    DTIC Science & Technology

    2012-01-10

    A model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ ... of the proposed model over the current one, demonstrating the need for a specific case 2 water BRDF correction algorithm as well as the feasibility of enhancing ...

  2. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    NASA Astrophysics Data System (ADS)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of two sub-models: (1) a PM10 concentration forecasting model; and (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. An extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is used to forecast the components generated by FEEMD and VMD. To demonstrate the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are used in an empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
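
    A minimal sketch of the error-correction idea only (not of the full FEEMD/VMD plus CS-ELM pipeline): one model forecasts the series, a second model is trained on the first model's residuals, and the final forecast is the sum of the two. Gradient boosting stands in for the decomposition-ensemble forecasters purely for illustration.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      def lagged(series, n_lags):
          # Design matrix of past values and the next-step targets.
          X = np.array([series[i:i + n_lags]
                        for i in range(len(series) - n_lags)])
          y = np.asarray(series)[n_lags:]
          return X, y

      def fit_with_error_correction(series, n_lags=7):
          X, y = lagged(series, n_lags)
          main = GradientBoostingRegressor().fit(X, y)    # sub-model (1)
          residuals = y - main.predict(X)                 # error sequence
          Xe, ye = lagged(residuals, n_lags)
          err = GradientBoostingRegressor().fit(Xe, ye)   # sub-model (2)
          return main, err   # final forecast = main + err predictions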

  3. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
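
    A small NumPy/SciPy sketch of the standardized-returns check mentioned above: if the volatility estimates are accurate, returns divided by the corresponding volatilities should look Gaussian, so their variance and kurtosis can be compared with the standard-normal values of 1 and 3. The inputs are illustrative.

      import numpy as np
      from scipy import stats

      def standardized_return_moments(returns, volatility):
          """Moments of z_t = r_t / sigma_t; near-Gaussian z implies
          variance close to 1 and kurtosis close to 3."""
          z = np.asarray(returns) / np.asarray(volatility)
          return {
              "variance": np.var(z),
              "kurtosis": stats.kurtosis(z, fisher=False),  # 3 if Gaussian
          }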

  4. Assessment of spill flow emissions on the basis of measured precipitation and waste water data

    NASA Astrophysics Data System (ADS)

    Hochedlinger, Martin; Gruber, Günter; Kainz, Harald

    2005-09-01

    Combined sewer overflows (CSOs) are substantial contributors to the total emissions into surface water bodies. The emitted pollution results from dry-weather waste water loads, surface runoff pollution and the remobilisation of sewer deposits and sewer slime during storm events. One possibility for estimating overflow loads is calculation with load quantification models. Input data for these models are pollution concentrations, e.g. total chemical oxygen demand (COD tot), total suspended solids (TSS) or soluble chemical oxygen demand (COD sol), rainfall series and flow measurements for model calibration and validation. Reliable input data are essential for modelling overflow loads; otherwise the results will inevitably be poor. In this paper, the correction of precipitation measurements and of sewer online measurements is presented to satisfy the load quantification model requirements described above. The main focus is on tipping bucket gauge measurements and their correction. The results demonstrate the importance of these corrections, given their effect on load quantification modelling, and show the difference between corrected and uncorrected data for storm events with high rain intensities.
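
    As a hedged sketch of one common way to correct tipping-bucket intensities (the paper does not spell out its correction formula): a power-law calibration curve I_ref = a * I_meas**b is fit to laboratory calibration pairs and then applied to the field measurements. All inputs here are illustrative.

      import numpy as np

      def fit_tipping_bucket_calibration(i_meas, i_ref):
          """Fit I_ref = a * I_meas**b by least squares in log-log space."""
          b, log_a = np.polyfit(np.log(i_meas), np.log(i_ref), 1)
          return np.exp(log_a), b

      def correct_intensity(i_meas, a, b):
          return a * np.asarray(i_meas) ** b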

  5. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution general circulation model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggests that the proposed method corrects the daily bias in rainfall more effectively than monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that do not adjust wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
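
    A minimal NumPy sketch of the sliding-window idea described above, using plain linear scaling as the underlying correction (the study applies the window to five different methods): for each calendar day, the correction factor is the ratio of observed to modeled mean rainfall within a centered window of days pooled over all years. The 31-day width and day-of-year indexing are illustrative assumptions.

      import numpy as np

      def daily_scaling_factors(obs, mod, doy, window=31):
          """Correction factor per calendar day from a sliding window.

          obs, mod : observed and modeled daily rainfall (same length)
          doy      : day-of-year (1..365) for each entry
          """
          half = window // 2
          factors = np.ones(366)
          for d in range(1, 366):
              # Days within +/- half of d, wrapping the year boundary.
              dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
              sel = dist <= half
              if mod[sel].mean() > 0:
                  factors[d] = obs[sel].mean() / mod[sel].mean()
          return factors

      def apply_scaling(mod, doy, factors):
          return mod * factors[doy]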

  6. Determining spherical lens correction for astronaut training underwater.

    PubMed

    Porter, Jason; Gibson, C Robert; Strauss, Samuel

    2011-09-01

    To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971) with 70% of eyes having a difference in magnitude of <0.25 D between values. We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
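
    The reported relationship lends itself to a one-line helper; the function name is illustrative and the constant is the value quoted in the abstract.

      def underwater_correction(f_air_diopters):
          """Spherical correction to wear underwater, from the in-air
          spectacle-plane correction: FW = FAir + 2.356 D."""
          return f_air_diopters + 2.356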

  7. Determining spherical lens correction for astronaut training underwater

    PubMed Central

    Porter, Jason; Gibson, C. Robert; Strauss, Samuel

    2013-01-01

    Purpose To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration (NASA) astronauts while training underwater. The replica space suit’s helmet contains curved visors that induce refractive power when submersed in water. Methods Anterior surface powers and thicknesses were measured for the helmet’s protective and inside visors. The impact of each visor on the helmet’s refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet’s total induced spherical power underwater and the astronaut’s manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. Results The helmet visors induced a total power of −2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (R = 0.971) with 70% of eyes having a difference in magnitude of < 0.25 D between values. Conclusions We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater. PMID:21623249

  8. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic the current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection-permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long simulation periods, thus limiting their usefulness for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land-atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.

  9. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2018-06-01

    Additive manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-dimensional (3D) computer-aided design (CAD), computed tomography (CT), magnetic resonance imaging (MRI) and 3D scanner data. From CT/MRI data, biomedical additive manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better lead in the preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the standard triangle language (STL) model and the Bio-AM model of a dry mandible, and derives a correction factor for Bio-AM models built with the fused deposition modelling (FDM) technique. In the present work, CT images of a dry mandible are acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions is taken as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of the preplanning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003, and the dimensional error is limited to 0.3%.

  10. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    NASA Astrophysics Data System (ADS)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-dimensional (3D) computer-aided design (CAD), computed tomography (CT), magnetic resonance imaging (MRI) and 3D scanner data. From CT/MRI data, biomedical additive manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better lead in the preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the standard triangle language (STL) model and the Bio-AM model of a dry mandible, and derives a correction factor for Bio-AM models built with the fused deposition modelling (FDM) technique. In the present work, CT images of a dry mandible are acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions is taken as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of the preplanning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003, and the dimensional error is limited to 0.3%.

  11. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulations with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable method for climate change data because it is simple and does not add artificial uncertainties to the impact assessment of climate change scenarios for applications such as agricultural production changes. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the magnitude of change in the future scenarios, which varies from one model to another. Two bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved method, a cumulative distribution function (CDF; Weibull distribution)-based quantile-mapping algorithm. This study shows that the percentile-based quantile-mapping method gives results similar to the CDF (Weibull)-based quantile-mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation for the present climate and for future projected data, which are then used in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield over India under the RCP4.5 and RCP8.5 scenarios. The results revealed a better convergence of model projections in the bias-corrected data than in the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic or a simple statistical model. The statistical model indicated that the total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario from 2001 until the end of 2100, and by approximately 90 kg/ha under the RCP8.5 scenario over the same period. Many studies use bias correction techniques, but this study applies bias correction to future climate scenario data from CMIP5 models and uses it with crop statistics to estimate future crop yield changes over the Indian region.
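
    A minimal NumPy sketch of percentile-based quantile mapping, technique (1) above: model values are mapped onto the observed distribution by matching empirical percentiles over the calibration period, and projected values are passed through the same transfer function. This is a generic illustration, not the paper's exact implementation.

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_future, n_q=100):
          """Map model values onto the observed distribution."""
          q = np.linspace(0, 100, n_q)
          mod_q = np.percentile(model_hist, q)  # model percentiles
          obs_q = np.percentile(obs_hist, q)    # observed percentiles
          # Locate each value in the model CDF, then read off the
          # observed value at the same percentile.
          return np.interp(model_future, mod_q, obs_q)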

  12. A Study of Two-Equation Turbulence Models on the Elliptic Streamline Flow

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.; Qin, Jim H.; Shariff, Karim; Rai, Man Mohan (Technical Monitor)

    1995-01-01

    Several two-equation turbulence models are compared to data from direct numerical simulations (DNS) of the homogeneous elliptic streamline flow, which combines rotation and strain. The models considered include standard two-equation models and models with corrections for rotational effects. Most of the rotational corrections modify the dissipation rate equation to account for the reduced dissipation rate in rotating turbulent flows; however, the DNS data show that the production term in the turbulent kinetic energy equation is not modeled correctly by these models. Nonlinear relations for the Reynolds stresses are considered as a means of modifying the production term. Implications for the modeling of turbulent vortices are discussed.

  13. Dynamic Aberration Correction for Conformal Window of High-Speed Aircraft Using Optimized Model-Based Wavefront Sensorless Adaptive Optics.

    PubMed

    Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin

    2016-09-02

    For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large dynamic aberrations that vary with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, the aberration is captured using Lukosz modes, and we use the low-spatial-frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with optimized correction and 1.427 × 10^-5 with un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
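
    A minimal NumPy sketch of the kind of metric function described above: the image power spectral density is computed with an FFT and summed over a low-spatial-frequency annulus (excluding the DC term); maximizing such a metric while probing Lukosz modes drives the sensorless correction. The annulus radii are illustrative.

      import numpy as np

      def low_freq_metric(image, r_lo=2, r_hi=12):
          """Normalized spectral density in a low-frequency annulus."""
          F = np.fft.fftshift(np.fft.fft2(image))
          psd = np.abs(F) ** 2
          ny, nx = image.shape
          y, x = np.ogrid[:ny, :nx]
          r = np.hypot(y - ny / 2, x - nx / 2)
          mask = (r >= r_lo) & (r <= r_hi)    # exclude the DC peak
          return psd[mask].sum() / psd.sum()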

  14. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification of the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. In practice, however, p can be rather large while N is always limited by the number of available participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correcting T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict the type I errors of T_ML reported in the literature, and they perform well.
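
    A hedged sketch of a Bartlett-type rescaling in the spirit described above: a scaling constant is estimated so that the mean of the corrected statistic matches the nominal degrees of freedom. The paper derives its correction empirically from model characteristics; the simulation-based stand-in below (replications of T_ML under the correct model) is only illustrative.

      import numpy as np

      def bartlett_type_correction(t_ml_replications, df):
          """Return a function rescaling T_ML so its mean equals df.

          t_ml_replications : T_ML values simulated under the correct model
          df                : degrees of freedom of the nominal chi-square
          """
          c_hat = np.mean(t_ml_replications) / df   # scaling constant
          return lambda t_ml: t_ml / c_hat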

  15. Extension of the Haseman-Elston regression model to longitudinal data.

    PubMed

    Won, Sungho; Elston, Robert C; Park, Taesung

    2006-01-01

    We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model with several random effects. As response variables, we investigate the sibship sample-mean-corrected cross-product (smHE) and the BLUP-mean-corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall-mean-corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. The model can also test for gene x time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well.

  16. Workload and associated factors: a study in maritime port in Brazil.

    PubMed

    Cezar-Vaz, Marta Regina; Bonow, Clarice Alves; Almeida, Marlise Capa Verde de; Sant'Anna, Cynthia Fontella; Cardoso, Leticia Silveira

    2016-11-28

    to identify the effect of mental, physical, temporal, performance, total effort and frustration demands on the overall workload, and to analyze the overall workload of port labor and the associated factors that contribute most to its decrease or increase. a cross-sectional, quantitative study carried out with 232 dock workers. For data collection, a structured questionnaire was applied, covering descriptive and occupational variables, smoking and illicit drug use, as well as variables on the load of the tasks undertaken at work, based on the NASA Task Load Index questionnaire. For data analysis, a Poisson regression model was used. physical demand and total effort showed the greatest effect on the overall workload, indicating a high overall load in port work (134 workers - 58.8%). The following remained statistically associated with high levels of workload: age (p = 0.044), being a wharfage worker (p = 0.006), working only at night (p = 0.025), smoking (p = 0.037) and use of illegal drugs (p = 0.029). the workload in this type of activity was high; the professional category and the work shift were the factors that contributed to its increase, while age proved to be a factor associated with its decrease.

  17. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    NASA Astrophysics Data System (ADS)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model improves significantly on corrections based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine-resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable based on the pixel shadow fraction into the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variant of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning represented the general additive effects of diffuse radiation better than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced the correction results by reducing the within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements through model comparison on the red and near-infrared bands. The advantages of SCnS + C and SCnS + W in both bands are expected to facilitate forest classification and change detection applications.
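
    A minimal NumPy sketch of the classic C-correction that the SCnS + C variant adapts: the C parameter comes from a linear regression of radiance against the illumination factor (the cosine of the local solar incidence angle), and the correction rescales each pixel toward flat-terrain illumination. This is the generic C-correction, not the paper's crown-geometry version.

      import numpy as np

      def c_correction(radiance, cos_i, cos_sz):
          """Generic C-correction for topographic illumination effects.

          radiance : observed band radiance per pixel (1-D array)
          cos_i    : cosine of the local solar incidence angle per pixel
          cos_sz   : cosine of the solar zenith angle (scalar)
          """
          slope, intercept = np.polyfit(cos_i, radiance, 1)  # L = a*cos_i + b
          c = intercept / slope
          return radiance * (cos_sz + c) / (cos_i + c)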

  18. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of empirically defined model parameters. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through iterative tuning of the model parameters based on a systematic evaluation of the performance of the correction. The evaluation is based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very poorly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.

  19. Chronological and geomorphological investigation of fossil debris-covered glaciers in relation to deglaciation processes: A case study in the Sierra de La Demanda, northern Spain

    NASA Astrophysics Data System (ADS)

    Fernández-Fernández, José M.; Palacios, David; García-Ruiz, José M.; Andrés, Nuria; Schimmelpfennig, Irene; Gómez-Villar, Amelia; Santos-González, Javier; Álvarez-Martínez, Javier; Arnáez, José; Úbeda, José; Léanni, Laëtitia; Aumaître, Georges; Bourlès, Didier; Keddadouche, Karim; Aster Team

    2017-08-01

    In this study, fossil debris-covered glaciers in the Sierra de la Demanda, northern Spain, are investigated and dated. They are located in glacial valleys approximately 1 km in length, where several moraines represent distinct phases of the deglaciation period. Several boulders in the moraines and fossil debris-covered glaciers were selected for 10Be surface exposure dating. A minimum age of 17.8 ± 2.2 ka was obtained for the outermost moraine in the San Lorenzo cirque and was attributed to the global Last Glacial Maximum (LGM) or earlier glacial stages, based on deglaciation dates determined in other mountain areas of northern Spain. The youngest moraines were dated to approximately 16.7 ± 1.4 ka and hence correspond to the GS-2a stadial (Oldest Dryas). Given that the debris-covered glaciers fossilize intermediate moraines, it was deduced that they developed between the LGM and the Oldest Dryas, coinciding with a period of extensive deglaciation. During this deglaciation phase, the cirque headwalls likely discharged large quantities of boulders and blocks that covered the residual ice masses. The resulting debris-covered glaciers evolved slowly because the debris mantle preserved the ice core from rapid ablation, and consequently they remained active until the end of the Late Glacial or the beginning of the Holocene (in the San Lorenzo cirque) and the Holocene Thermal Maximum (in the Mencilla cirque). The north-facing aspect of the Mencilla cirque ensured longer preservation of the ice core.

  20. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics-model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
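
    A high-level sketch of the iterative framework described above, with the physics model abstracted behind a callable: each iteration estimates the forward and cross scatter from the current reconstruction, subtracts it from the measured projections, and reconstructs again. Both estimate_scatter and reconstruct are placeholders for the paper's analytic physics model and reconstruction algorithm.

      import numpy as np

      def iterative_scatter_correction(measured, estimate_scatter,
                                       reconstruct, n_iter=3):
          """Generic iterative scatter-correction loop.

          measured         : raw projections (primary plus scatter)
          estimate_scatter : callable(image) -> scatter in projection space
          reconstruct      : callable(projections) -> image
          """
          projections = np.asarray(measured, dtype=float)
          image = reconstruct(projections)
          for _ in range(n_iter):
              scatter = estimate_scatter(image)        # physics-model step
              corrected = np.clip(projections - scatter, 0, None)
              image = reconstruct(corrected)           # refine the image
          return image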

  1. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for regional impact studies of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias, it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis of the input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. The precipitation correction methods applied are linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while the temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data before being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show that (1) streamflow is sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased with respect to observed meteorological data, their use for streamflow simulation results in large biases with respect to observed streamflow, and all bias correction methods effectively improve these simulations; (3) for precipitation, the PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well; and (5) for simulated streamflow, the precipitation correction methods have a more significant influence than the temperature correction methods, and the performance of the streamflow simulations is consistent with that of the corrected precipitation, i.e., the PT and QM methods performed equally best in correcting the flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some of the results can be applied to other areas and models.
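
    A minimal NumPy sketch of local intensity scaling (LOCI), one of the precipitation methods compared above, in one common formulation: a model wet-day threshold is chosen so that the calibration-period model has as many wet days as the observations, and a scaling factor then matches the mean wet-day intensity. Details vary between implementations; this is an illustration, not the paper's code.

      import numpy as np

      def loci(obs, mod_hist, mod_future, wet_day=0.1):
          """Local intensity scaling for daily precipitation (mm/day)."""
          n_wet = int((obs >= wet_day).sum())
          thresh = np.sort(mod_hist)[-n_wet] if n_wet > 0 else np.inf
          s = (obs[obs >= wet_day].mean() - wet_day) / \
              (mod_hist[mod_hist >= thresh].mean() - thresh)
          return np.where(mod_future >= thresh,
                          wet_day + s * (mod_future - thresh), 0.0)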

  2. Robust distortion correction of endoscope

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.

    2008-03-01

    Endoscopic images suffer from a fundamental spatial distortion due to the wide-angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent computer-aided diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area must be large enough to cover all the regions that physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one that provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data, and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
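
    A minimal sketch of polynomial radial distortion correction with a look-up table, matching the pipeline above: given a distortion center and polynomial coefficients (assumed to be already estimated from calibration images), every output pixel is mapped once to its source position in the distorted image, and the resulting LUT is reused for every frame. The even-order radial model used here is the common textbook form, not necessarily the paper's polynomial.

      import numpy as np

      def build_undistort_lut(shape, center, k1, k2):
          """Map each corrected pixel to its source pixel.

          Uses r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4), the common
          radial distortion model, with nearest-neighbour rounding.
          """
          h, w = shape
          cy, cx = center
          y, x = np.mgrid[:h, :w].astype(float)
          dx, dy = x - cx, y - cy
          r2 = dx**2 + dy**2
          scale = 1 + k1 * r2 + k2 * r2**2
          src_x = np.clip(cx + dx * scale, 0, w - 1)
          src_y = np.clip(cy + dy * scale, 0, h - 1)
          return src_y.round().astype(int), src_x.round().astype(int)

      def undistort(image, lut):
          sy, sx = lut
          return image[sy, sx]   # remap one frame via the precomputed LUT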

  3. Assessment of radar altimetry correction slopes for marine gravity recovery: A case study of Jason-1 GM data

    NASA Astrophysics Data System (ADS)

    Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu

    2018-04-01

    Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. Radiometer measurements are preferable to model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when deriving sea surface slopes, since their inherent errors may cause abnormal sea surface slopes; along-track smoothing with uniform weights over a certain window width is an effective strategy for avoiding the introduction of extra noise. The slopes calculated from radiometer wet tropospheric corrections and from along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on the derived sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes occur mostly over marginal and island-surrounding seas, and additional tidal models with better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
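
    A minimal NumPy sketch of the along-track treatment described above: a correction is smoothed with a uniform (boxcar) window before being differentiated along track to obtain its slope contribution. The window width and sampling interval are illustrative.

      import numpy as np

      def correction_slope(correction, spacing_m, window=21):
          """Boxcar-smooth a correction, then take its along-track slope."""
          kernel = np.ones(window) / window           # uniform weights
          smoothed = np.convolve(correction, kernel, mode="same")
          return np.gradient(smoothed, spacing_m)     # m/m, ~ radians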

  4. Validation of drift and diffusion coefficients from experimental data

    NASA Astrophysics Data System (ADS)

    Riera, R.; Anteneodo, C.

    2010-04-01

    Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with the drift and diffusion components, may be estimated directly from the observed time series. Their correct characterization is crucial for determining the system quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may differ significantly from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have recently been found from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is made evident by its application to two contrasting real-data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected, while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis, even in more general diffusion processes.
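
    A minimal NumPy sketch of the finite-time estimators discussed above: the empirical drift and diffusion coefficients are the conditional first and second moments of the increments, binned over the state variable. With a finite sampling interval dt these estimates can deviate from the true functional forms, which is exactly the issue the paper examines.

      import numpy as np

      def finite_time_coefficients(x, dt, n_bins=30):
          """Empirical drift D1(x) and diffusion D2(x) from a sampled path."""
          dx = np.diff(x)
          state = x[:-1]
          edges = np.linspace(state.min(), state.max(), n_bins + 1)
          centers = 0.5 * (edges[:-1] + edges[1:])
          idx = np.clip(np.digitize(state, edges) - 1, 0, n_bins - 1)
          drift = np.full(n_bins, np.nan)
          diffusion = np.full(n_bins, np.nan)
          for b in range(n_bins):
              sel = idx == b
              if sel.sum() > 10:                      # enough samples only
                  drift[b] = dx[sel].mean() / dt              # <dX|x>/dt
                  diffusion[b] = (dx[sel] ** 2).mean() / (2 * dt)
          return centers, drift, diffusion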

  5. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    NASA Astrophysics Data System (ADS)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations that have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell (GAP) optimization method, which works by selecting and recombining high-performing parameter sets. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from the climate model simulations with that of biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data and can therefore correct across all time scales, a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties: the ability, or lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty to the modeling chain than the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.

  6. Are we using the right fuel to drive hydrological models? A climate impact study in the Upper Blue Nile

    NASA Astrophysics Data System (ADS)

    Liersch, Stefan; Tecklenburg, Julia; Rust, Henning; Dobler, Andreas; Fischer, Madlen; Kruschke, Tim; Koch, Hagen; Fokko Hattermann, Fred

    2018-04-01

    Climate simulations are the fuel to drive hydrological models that are used to assess the impacts of climate change and variability on hydrological parameters, such as river discharges, soil moisture, and evapotranspiration. Unlike with cars, where we know which fuel the engine requires, we never know in advance what unexpected side effects might be caused by the fuel we feed our models with. Sometimes we increase the fuel's octane number (bias correction) to achieve better performance, only to find that the model behaves differently, and not always as expected or desired. This study investigates the impacts of projected climate change on the hydrology of the Upper Blue Nile catchment using two model ensembles consisting of five global CMIP5 Earth system models and 10 regional climate models (CORDEX Africa). WATCH forcing data were used to calibrate an eco-hydrological model and to bias-correct both model ensembles using slightly differing approaches. On the one hand, it was found that the bias correction methods considerably improved the performance of average rainfall characteristics in the reference period (1970-1999) in most of the cases. This also holds true for non-extreme discharge conditions between Q20 and Q80. On the other hand, bias-corrected simulations tend to overemphasize the magnitudes of projected change signals and extremes. A general weakness of both uncorrected and bias-corrected simulations is the rather poor representation of high and low flows and their extremes, which was often worsened by bias correction. This inaccuracy is a crucial deficiency for regional impact studies dealing with water management issues, and it is therefore important to analyse model performance and characteristics and the effect of bias correction, and if necessary to exclude some climate models from the ensemble. However, the multi-model means of all ensembles project increasing average annual discharges in the Upper Blue Nile catchment and a shift in seasonal patterns, with decreasing discharges in June and July and increasing discharges from August to November.

  7. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    NASA Astrophysics Data System (ADS)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Information on rainfall is therefore very useful for the agriculture sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distributional biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model gives clearly better results: the corrected prediction model outperforms the raw forecast. In general, the bias correction performs better during the rainy season than during the dry season.
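
    Quantile mapping builds the transfer function empirically: each forecast value is mapped through its quantile in the forecast climatology onto the observed value at the same quantile. A minimal sketch, with synthetic gamma-distributed rainfall standing in for the real ECMWF and station climatologies:

```python
import numpy as np

def quantile_map(forecast, fcst_clim, obs_clim):
    """Empirical quantile-mapping transfer function: replace each forecast
    value by the observed value at the same climatological quantile."""
    fcst_sorted = np.sort(fcst_clim)
    obs_sorted = np.sort(obs_clim)
    q = np.searchsorted(fcst_sorted, forecast, side="right") / fcst_sorted.size
    q = np.clip(q, 0.0, 1.0)
    return np.interp(q, np.linspace(0.0, 1.0, obs_sorted.size), obs_sorted)

# Illustrative use: a wet-biased seasonal rainfall hindcast (monthly totals, mm)
rng = np.random.default_rng(2)
obs_clim = rng.gamma(2.0, 80.0, size=300)     # "observed" climatology
fcst_clim = rng.gamma(2.0, 110.0, size=300)   # model with a wet bias
print(quantile_map(np.array([260.0, 410.0, 90.0]), fcst_clim, obs_clim))
```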

  8. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    A recently developed analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described, along with a description of the UH-60A Black Hawk helicopter and the flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected, along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  9. The usefulness of "corrected" body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2014-05-06

    National data on body mass index (BMI), computed from self-reported height and weight, are readily available for many populations, including the Canadian population. Because self-reported weight is systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets that include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults aged 18-65 and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared self-reported and corrected BMI in terms of association with health outcomes using logistic regression. All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23-28 kg/m² BMI range. In terms of sensitivity/specificity when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association.
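
    The correction equations evaluated in such studies are linear regressions fitted on surveys containing both self-reported and measured values. The sketch below shows how the two kinds of equation are applied; the coefficients are placeholders for illustration only, not the published estimates.

```python
def corrected_bmi_from_weight(sr_weight_kg, sr_height_m, a=1.07, b=0.0):
    """Weight-only correction: predict measured weight from self-reported
    weight with a fitted linear equation (a, b are illustrative
    placeholders), then recompute BMI from the corrected weight."""
    return (a * sr_weight_kg + b) / sr_height_m ** 2

def corrected_bmi_direct(sr_bmi, c=1.05, d=-0.5):
    """BMI-only correction: predict measured BMI directly from
    self-reported BMI (again with placeholder coefficients)."""
    return c * sr_bmi + d

# Self-reported 70 kg at 1.75 m: BMI 22.9 self-reported, ~24.5 "corrected"
print(corrected_bmi_from_weight(70.0, 1.75), corrected_bmi_direct(22.9))
```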

  10. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    ERIC Educational Resources Information Center

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  11. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

  12. Dynamic Aberration Correction for Conformal Window of High-Speed Aircraft Using Optimized Model-Based Wavefront Sensorless Adaptive Optics

    PubMed Central

    Dong, Bing; Li, Yan; Han, Xin-li; Hu, Bin

    2016-01-01

    For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large dynamic aberrations that vary with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, the aberration is represented using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10−5 for optimized correction and 1.427 × 10−5 for unoptimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method. PMID:27598161
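
    The metric function named here, the low spatial frequency content of the image spectral density, can be computed directly from the image FFT. A minimal sketch, with the radial cutoff treated as an assumed parameter:

```python
import numpy as np

def low_freq_metric(image, cutoff=0.1):
    """Fraction of image spectral power inside a low radial spatial
    frequency band (DC excluded). Aberrations attenuate this band, so the
    metric is maximized as the wavefront is corrected."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = (r > 0) & (r < cutoff)
    return power[mask].sum() / power.sum()

# Shrinking the image to e.g. 128 x 128, as in the abstract, shrinks the
# FFT and thus the time needed to evaluate this metric at each DM setting.
```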

  13. Normal peer models and autistic children's learning.

    PubMed Central

    Egel, A L; Richman, G S; Koegel, R L

    1981-01-01

    Present research and legislation regarding mainstreaming autistic children into normal classrooms have raised the importance of studying whether autistic children can benefit from observing normal peer models. The present investigation systematically assessed whether autistic children's learning of discrimination tasks could be improved if they observed normal children perform the tasks correctly. In the context of a multiple baseline design, four autistic children worked on five discrimination tasks that their teachers reported were posing difficulty. Throughout the baseline condition the children evidenced very low levels of correct responding on all five tasks. In the subsequent treatment condition, when normal peers modeled correct responses, the autistic children's correct responding increased dramatically. In each case, the peer modeling procedure produced rapid acquisition, which was maintained after the peer models were removed. These results are discussed in relation to issues concerning observational learning and to the implications for mainstreaming autistic children into normal classrooms. PMID:7216930

  14. The magnetisation distribution of the Ising model - a new approach

    NASA Astrophysics Data System (ADS)

    Hakan Lundow, Per; Rosengren, Anders

    2010-03-01

    A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalised binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalised binomial and the Ising distributions. The results speak in favour of the generalised binomial distributions correctly capturing the general behaviour of the Ising model. A theoretical analysis of the distributions' moments also lends support to their being asymptotically correct, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not settled though.
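
    For the complete graph, where the abstract notes the distribution is exact, the magnetisation distribution can be written down directly as binomially weighted Boltzmann factors. A sketch under the usual 1/N coupling normalization (the mean-field Curie-Weiss convention); the generalised binomial form used for finite-dimensional lattices is not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def complete_graph_magnetisation(N, beta_J):
    """Exact magnetisation distribution of the Ising model on the complete
    graph with couplings J/N: P(M) is proportional to
    C(N, k) * exp(beta_J * (M^2 - N) / (2N)),
    where k is the number of down spins and M = N - 2k."""
    k = np.arange(N + 1)
    M = N - 2 * k
    log_binom = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
    log_w = log_binom + beta_J * (M.astype(float) ** 2 - N) / (2.0 * N)
    log_w -= log_w.max()                 # stabilize before exponentiating
    p = np.exp(log_w)
    return M, p / p.sum()

M, p = complete_graph_magnetisation(N=200, beta_J=1.2)  # beta_J > 1: bimodal
```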

  15. Optimal control in adaptive optics modeling of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Herrmann, J.

    The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction; aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane that minimizes the time-averaged area, or maximizes the fluence, in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. In these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimal correction.

  16. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
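
    For orientation, the conventional baseline probes one aberration-mode coefficient at a time and keeps whatever improves the signal metric. The sketch below is a generic coordinate search on a toy metric, not the authors' implementation; the point of the model-based NEWUOA and DONE methods is that a fitted surrogate (quadratic or random Fourier expansion) reaches comparable corrections with far fewer metric evaluations.

```python
import numpy as np

def coordinate_search(metric, x0, step=0.2, shrink=0.5, sweeps=20):
    """Probe each coefficient in +/- step, keep improvements, and shrink
    the step after an unsuccessful full sweep. Returns the best point,
    its metric value, and the number of metric evaluations used."""
    x = np.asarray(x0, dtype=float)
    best, evals = metric(x), 1
    for _ in range(sweeps):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                f = metric(trial); evals += 1
                if f > best:
                    x, best, improved = trial, f, True
        if not improved:
            step *= shrink
    return x, best, evals

# Toy stand-in for the OCT signal over 5 mode coefficients
truth = np.array([0.3, -0.5, 0.1, 0.0, 0.4])
metric = lambda c: float(np.exp(-np.sum((c - truth) ** 2)))
x_hat, best, n_meas = coordinate_search(metric, np.zeros(5))
```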

  17. Erratum: Erratum to: Maximally symmetric two Higgs doublet model with natural standard model alignment

    NASA Astrophysics Data System (ADS)

    Bhupal Dev, P. S.; Pilaftsis, Apostolos

    2015-11-01

    Here we correct some typesetting errors in ref. [1]. These corrections have been implemented in the latest version of [1] on arXiv and the corrected equations have also been reproduced in ref. [2] for the reader's convenience. We clarify that all numerical results presented in ref. [1] remain unaffected by these typographic errors.

  18. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    PubMed

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  19. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). Graph correction is more complex than string correction, as graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that a dictionary with only fifteen entries already properly corrects one quarter of the erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving an acceptance rate of up to 95% for the reconstructed models.
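
    A minimal sketch of the edit-dictionary mechanics, assuming roof topology graphs are stored as sets of undirected edges between labeled roof faces. A real system matches subgraphs up to isomorphism and searches heuristically over edit sequences; here patterns are matched literally and applied once, which is enough to show the idea.

```python
def norm(e):
    """Canonical form of an undirected edge."""
    return tuple(sorted(e))

# (erroneous subgraph, corrected subgraph) pairs -- illustrative entries only
GRAPH_EDIT_DICTIONARY = [
    ({("A", "B"), ("B", "C")}, {("A", "B"), ("B", "C"), ("A", "C")}),  # add a missing edge
    ({("D", "D")}, set()),                                             # drop a spurious self-loop
]

def correct_graph(edges):
    g = {norm(e) for e in edges}
    for bad, good in GRAPH_EDIT_DICTIONARY:
        bad = {norm(e) for e in bad}
        if bad <= g:                                  # erroneous pattern found
            g = (g - bad) | {norm(e) for e in good}   # apply the graph edit
    return g

print(correct_graph([("B", "A"), ("C", "B"), ("D", "D")]))
```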

  20. VizieR Online Data Catalog: 3D correction in 5 photometric systems (Bonifacio+, 2018)

    NASA Astrophysics Data System (ADS)

    Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kucinskas, A.; Prakapavicius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.

    2018-01-01

    We have used the CIFIST grid of CO5BOLD models to investigate the effects of granulation on fluxes and colours of stars of spectral type F, G, and K. We publish tables with 3D corrections that can be applied to colours computed from any 1D model atmosphere. For Teff>=5000K, the corrections are smooth enough, as a function of atmospheric parameters, that it is possible to interpolate the corrections between grid points; thus the coarseness of the CIFIST grid should not be a major limitation. However, at the cool end there are still far too few models to allow a reliable interpolation. (20 data files).
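
    Interpolating the tabulated corrections between grid points, which the description says is safe for Teff >= 5000 K, is a routine multidimensional grid interpolation. A sketch with randomly generated placeholder values standing in for the published tables:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder 3D-correction grid over (Teff, log g, [Fe/H]); the values are
# random stand-ins, not the catalogued corrections.
teff = np.array([5000.0, 5500.0, 6000.0, 6500.0])
logg = np.array([3.5, 4.0, 4.5])
feh = np.array([-2.0, -1.0, 0.0])
corr = np.random.default_rng(3).normal(0.0, 0.02, (4, 3, 3))  # Δcolour (mag)

interp = RegularGridInterpolator((teff, logg, feh), corr)
print(interp([[5750.0, 4.2, -0.5]]))   # correction at an off-grid star
```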

  1. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution in FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile comprising myocardial, background and blood activities, separated into three compartments by the endocardial radius and the myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly affect PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
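
    The core of the method is a forward model: a piecewise-constant LV activity image blurred by the scanner point spread function. The sketch below builds such a model with an isotropic Gaussian PSF (an assumed idealization); fitting the activity and geometry parameters of this model to the measured image is what yields the PV correction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_lv_model(myo_mask, blood_mask, a_myo, a_blood, a_bkg, psf_sigma):
    """Piecewise-constant LV activity (myocardium / blood pool / background)
    convolved with an isotropic Gaussian PSF (sigma in voxels, assumed)."""
    truth = np.full(myo_mask.shape, a_bkg, dtype=float)
    truth[blood_mask] = a_blood
    truth[myo_mask] = a_myo
    return gaussian_filter(truth, psf_sigma), truth

# Spherical-shell "myocardium" around a blood pool, on a 32^3 voxel grid
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
r = np.sqrt(x**2 + y**2 + z**2)
blurred, truth = forward_lv_model((r >= 6) & (r < 9), r < 6,
                                  a_myo=1.0, a_blood=0.2, a_bkg=0.1,
                                  psf_sigma=2.0)
# PV losses: the blurred peak myocardial value falls below the true 1.0
print(blurred.max())
```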

  2. Detection and Attribution of Simulated Climatic Extreme Events and Impacts: High Sensitivity to Bias Correction

    NASA Astrophysics Data System (ADS)

    Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.

    2015-12-01

    Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/

  3. A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making

    PubMed Central

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W.

    2013-01-01

    Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) increased values of the lens model components, b) reduced heterogeneity between studies, and c) increased success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping. PMID:24391781

  4. A critical meta-analysis of lens model studies in human judgment and decision-making.

    PubMed

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W

    2013-01-01

    Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) increased values of the lens model components, b) reduced heterogeneity between studies, and c) increased success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping.

  5. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method.

    PubMed

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-04-27

    Unlike for GPS, GLONASS, GALILEO and BeiDou-3, it has been confirmed that code multipath biases (CMBs), which originate at the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. To mitigate their adverse effects on precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which treats the correction values as orbit-type dependent (estimating one set of values for IGSO and one for MEO) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each individual BDS IGSO and MEO satellite, and we use a denser elevation node separation of 5° to model the CMB variations. Institutions such as IGS-MGEX currently operate over 120 stations providing daily BDS observations; these large amounts of data adequately support refining the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites of the same orbit type, clear differences are found in the CMB at the same node and frequency. The new correction model improves the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shortens the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improves the PPP positioning accuracy. With the improved correction model, the usage of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR shorten from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and the improved CMB models perform better in these respects than a model that does not account for the CMB correction.
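
    The correction itself is a piecewise linear function of elevation angle, evaluated per satellite and frequency. A sketch with the 5° node separation described above; the node values are made-up placeholders, since the real ones are estimated from large MGEX datasets.

```python
import numpy as np

# Elevation nodes every 5 degrees; placeholder correction values in metres
nodes_deg = np.arange(0, 91, 5)
node_corr_m = 0.3 * np.cos(np.radians(nodes_deg))   # fake elevation pattern

def cmb_correction(elevation_deg):
    """Code multipath bias correction (m) by piecewise linear interpolation
    between the elevation nodes of one satellite/frequency."""
    return np.interp(elevation_deg, nodes_deg, node_corr_m)

# Apply to a code pseudorange observed at 37.2 degrees elevation
corrected_pseudorange = 22_345_678.123 - cmb_correction(37.2)
```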

  6. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    PubMed Central

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-01-01

    Unlike for GPS, GLONASS, GALILEO and BeiDou-3, it has been confirmed that code multipath biases (CMBs), which originate at the satellite end and can exceed 1 m, are commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. To mitigate their adverse effects on precise applications that use the code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which treats the correction values as orbit-type dependent (estimating one set of values for IGSO and one for MEO) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate the corrections for each individual BDS IGSO and MEO satellite, and we use a denser elevation node separation of 5° to model the CMB variations. Institutions such as IGS-MGEX currently operate over 120 stations providing daily BDS observations; these large amounts of data adequately support refining the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites of the same orbit type, clear differences are found in the CMB at the same node and frequency. The new correction model improves the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shortens the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improves the PPP positioning accuracy. With the improved correction model, the usage of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR shorten from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and the improved CMB models perform better in these respects than a model that does not account for the CMB correction. PMID:29702559

  7. System Mean-Time-between-Failure versus Life-Cycle Logistics Cost Subject to the Introduction of Line-Replaceable-Unit Redundancy

    DTIC Science & Technology

    1988-03-18


  8. Active vibration control with model correction on a flexible laboratory grid structure

    NASA Technical Reports Server (NTRS)

    Schamel, George C., II; Haftka, Raphael T.

    1991-01-01

    This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated, a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.

  9. Third generation sfermion decays into Z and W gauge bosons: Full one-loop analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arhrib, Abdesslam; LPHEA, Departement de Physique, Faculte des Sciences-Semlalia, B.P. 2390 Marrakech; Benbrik, Rachid

    2005-05-01

    The complete one-loop radiative corrections to the decays of third-generation scalar fermions into the gauge bosons Z and W± are considered. We focus on f̃₂ → Z f̃₁ and f̃ᵢ → W± f̃′ⱼ, with f, f′ = t, b. We include SUSY-QCD, QED, and full electroweak corrections. It is found that the electroweak corrections can be of the same order as the SUSY-QCD corrections. The two sets of corrections interfere destructively in some regions of parameter space. The full one-loop correction can reach 10% in some supergravity scenarios, while in a model-independent analysis, such as the general minimal supersymmetric standard model, the one-loop correction can reach 20% for large tan β and large trilinear soft breaking terms A_b.

  10. Characterization, modeling and validation of the radiative transfer of non-standard atmospheres; impact on atmospheric corrections of remote sensing images

    NASA Astrophysics Data System (ADS)

    Zidane, Shems

    This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a diverse variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at 3 different altitudes into apparent reflectance and the insertion of the plume optics into an atmospheric correction model permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results were consistent with the validation reflectances derived from the lowest-altitude radiances, effectively confirming the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are obviously taken into account in most atmospheric correction models, but these are based on monotonically decreasing aerosol variations with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model. The main inputs of this study were those normally used in an atmospheric correction: apparent at-sensor radiance and the aerosol optical depth (AOD) acquired using ground-based sun photometry. The procedure we employed made use of a standard atmospheric correction code (CAM5S, for Canadian Modified 5S, which derives from the 5S radiative transfer model in the visible and near infrared); however, we also used other parameters and data to adapt and correctly model the special atmospheric situation that affected the multi-altitude images acquired during the St. Jean field campaign. We then developed a modeling protocol for these atmospheric perturbations in which auxiliary data were employed to complement our main data set. This allowed for the development of a robust and simple methodology adapted to this atmospheric situation. The auxiliary data, i.e. meteorological data, LIDAR profiles, various satellite images and sun photometer retrievals of the scattering phase function, were sufficient to accurately model the observed plume in terms of an unusual vertical distribution. This distribution was transformed into an aerosol optical depth profile that replaced the standard aerosol optical depth profile employed in the CAM5S atmospheric correction model. Based on this model, a comparison between the apparent ground reflectances obtained after atmospheric corrections and validation values of R*(0) obtained from the lowest-altitude data showed that the error between the two was less than 0.01 rms.
This correction was shown to be a significantly better estimate of the surface reflectance than that obtained using the standard atmospheric correction model. Significant differences were nevertheless observed in the non-standard solution: these were mainly caused by the difficulties brought about by the acquisition conditions, by disparities attributable to inconsistencies in the co-sampling / co-registration of different targets from three different altitudes, and possibly by modeling and/or calibration errors. There is accordingly room for improvement in our approach to dealing with such conditions. The modeling and forecasting of such a disturbance is explicitly described in this document, our goal being to permit the establishment of a better protocol for the acquisition of more suitable supporting data. The originality of this study stems from a new approach for incorporating a plume structure into an operational atmospheric correction model and then demonstrating that this approach is a significant improvement over one that ignores the perturbations in the vertical profile while employing the correct overall AOD. The profile model we employed was simple and robust but captured sufficient plume detail to achieve significant improvements in atmospheric correction accuracy. The overall process of addressing all the problems encountered in the analysis of our aerosol perturbation helped us to build an appropriate methodology for characterizing such events based on freely distributed data accessible to the scientific community. This makes our study adaptable and exportable to other types of non-standard atmospheric events. Keywords: non-standard atmospheric perturbation, multi-altitude apparent radiances, smoke plume, Gaussian plume modeling, radiance fit, AOD, CASI

  11. Corrective response times in a coordinated eye-head-arm countermanding task.

    PubMed

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
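
    The race framework can be made concrete with a toy simulation in the LATER spirit: GO and STOP units rise at normally distributed rates toward a threshold, and a failed inhibition occurs whenever GO finishes before STOP. All parameter values are illustrative; the corrective (CRT) stage of the best-ranked model, triggered after the expedited stop process completes, is only indicated in a comment.

```python
import numpy as np

def simulate_countermanding(n_trials=10_000, ssd=0.15, seed=5):
    """Race between GO and STOP accumulators with LATER-style rates.
    Returns RTs of failed-stop (erroneous) movements and the failure rate."""
    rng = np.random.default_rng(seed)
    theta = 1.0                                   # decision threshold
    go_rate = np.clip(rng.normal(5.0, 1.0, n_trials), 1e-6, None)   # 1/s
    stop_rate = np.clip(rng.normal(8.0, 1.5, n_trials), 1e-6, None)
    rt_go = theta / go_rate
    rt_stop = ssd + theta / stop_rate             # stop starts at the SSD
    failed = rt_go < rt_stop                      # GO wins: erroneous movement
    # In the best-ranked model, a second initiation process producing the
    # CRT would start only once the (expedited) stop process has finished.
    return rt_go[failed], failed.mean()

err_rts, p_fail = simulate_countermanding()
```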

  12. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
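
    A minimal sketch of the predict/constrain/correct cycle the patent describes, using a generic Kalman update with box bounds on the state. The patent's preemptive constraining also conditions the covariance matrix; this sketch only projects the state estimate, and all model matrices are assumed to be supplied by the plant model.

```python
import numpy as np

def constrained_kf_step(x, P, u, z, f, F, H, Q, R, lo, hi):
    """One predict / preemptively-constrain / measurement-correct cycle.
    f: state transition function, F: its Jacobian at the current estimate."""
    # prediction from the dynamic model
    x = f(x, u)
    P = F @ P @ F.T + Q
    # preemptive constraining: project the estimate onto physical bounds
    x = np.clip(x, lo, hi)
    # measurement correction from sensed plant outputs
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(x.size) - K @ H) @ P
    return np.clip(x, lo, hi), P
```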

  13. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  14. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound both for therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n = 4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.

  15. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    NASA Astrophysics Data System (ADS)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data are confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. First, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM by accounting for both vegetation height and the SRTM vegetation signal. Then, a newly released DEM, with both vegetation bias and random errors removed (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope, to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, the Bare-Earth DEM, the Multi-Error Removed DEM, and the hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.

  16. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
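
    As described, MBCn generalizes quantile mapping by repeatedly rotating the multivariate data, quantile-mapping each rotated coordinate, and rotating back. The following is a bare-bones sketch of that iteration under simplifying assumptions (no projection period, fixed iteration count, plain empirical mapping); it is not the published implementation.

```python
import numpy as np

def quantile_map_1d(model, obs):
    """Map each model value onto the observed value at the same quantile."""
    q = (np.argsort(np.argsort(model)) + 0.5) / model.size
    return np.quantile(obs, q)

def mbcn(obs, model, n_iter=20, seed=0):
    """Iterate: random orthogonal rotation of both datasets, univariate
    quantile mapping of each rotated model variable onto the rotated
    observations, rotation back."""
    rng = np.random.default_rng(seed)
    m = np.array(model, dtype=float)
    d = obs.shape[1]
    for _ in range(n_iter):
        R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random rotation
        o_rot, m_rot = obs @ R, m @ R
        for j in range(d):
            m_rot[:, j] = quantile_map_1d(m_rot[:, j], o_rot[:, j])
        m = m_rot @ R.T
    return m

# Example: three correlated variables from a "model" with wrong mean,
# spread, and inter-variable correlation
rng = np.random.default_rng(4)
cov = [[1.0, 0.6, 0.2], [0.6, 1.0, 0.4], [0.2, 0.4, 1.0]]
obs = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=1000)
mod = 1.5 * rng.standard_normal((1000, 3)) + 0.7
corrected = mbcn(obs, mod)
```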

  17. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionospheric correction algorithm called the NTCM-Klobpar model for single-frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and can consequently be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each shows that the 24-hour prediction performance of NTCM-Klobpar is better than that of the GPS Klobuchar model in the global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m smaller for NTCM-Klobpar than for the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single-frequency mass-market devices with only minor software modifications.
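
    For orientation, the GPS Klobuchar model represents the vertical ionospheric delay as a constant night-time level plus a cosine daytime bulge whose amplitude and period are derived from the broadcast alpha/beta coefficients. A heavily simplified sketch of that shape (the full ICA also includes the geomagnetic-latitude dependence and an obliquity factor for slant paths):

    ```python
    import numpy as np

    def vertical_iono_delay(local_time_s, amplitude_s, period_s,
                            night_level_s=5e-9):
        """Simplified Klobuchar-style vertical delay in seconds.

        local_time_s : local time at the ionospheric pierce point [s of day]
        amplitude_s  : daytime cosine amplitude (from alpha coefficients) [s]
        period_s     : period of the cosine (from beta coefficients) [s]
        """
        # Phase relative to the 14:00 local-time peak (50400 s of day).
        x = 2.0 * np.pi * (local_time_s - 50400.0) / period_s
        if abs(x) < np.pi / 2.0:             # daytime cosine bulge
            return night_level_s + amplitude_s * np.cos(x)
        return night_level_s                 # constant night-time level

    # Example: ~18 ns of delay near midday vs 5 ns at night.
    print(vertical_iono_delay(13 * 3600, 13e-9, 86400.0))
    ```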

  18. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate the ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  19. Modulation of Soil Initial State on WRF Model Performance Over China

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun

    2017-11-01

    The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical simulations were conducted using the Weather Research and Forecasting model: a control run with the CFSR soil state and a disturbed run with the corrected soil state. All simulations are initialized four times per day and run for 48 h. Model results show that the corrected soil state (for example, a warmer and drier surface over most parts of China) can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The resulting changes in the lifting condensation level, level of free convection, and water transport favor precipitation over wet regions while suppressing it over dry regions. Moreover, diagnoses indicate that the remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.

  20. Study on the physical and non-physical drag coefficients for spherical satellites

    NASA Astrophysics Data System (ADS)

    Man, Haijun; Li, Huijun; Tang, Geshi

    In this study, the physical and non-physical drag coefficients (C_D) for the spherical satellites of the ANDERR mission are retrieved from the number density of atomic oxygen and from orbit decay data, respectively. We examine how the retrieved physical and non-physical C_D should change as the accuracy of the atmospheric density model improves. First, Lomb-Scargle periodograms of these C_D series and of the environmental parameters indicate that: (1) there are obvious 5-, 7-, and 9-day periodic variations in the daily Ap indices and the solar wind speed at 1 AU, as well as in the model density, which have been reported to result from the interaction between the corotating solar wind and the magnetosphere; (2) the same short periods also exist in the retrieved C_D, although the significance level differs between C_D series; (3) the physical and non-physical C_D behave almost homogeneously with the model densities along the satellite trajectory. Second, corrections to each type of C_D are defined as the differences between the values derived from the NRLMSISE-00 density model and those from JB2008. It is shown that: (1) the larger the density corrections, the larger the corrections to both types of C_D; in addition, corrections to the physical C_D lie within a range of 0.05, about an order of magnitude smaller than the range spanned by corrections to the non-physical C_D (0.5). (2) Corrections to the non-physical C_D vary inversely with the density corrections, and a similar relationship also holds between corrections to the physical C_D and those of the model density. (3) At orbital altitudes below 200 km, corrections to the C_D and to the model density both decrease asymptotically to zero. The results of this study highlight that the physical C_D for spherical satellites should play an important role in improving techniques for accurate density corrections from orbital decay data, and in searching for a way to decouple the product of density and C_D wrapped up in the orbital decay data.

  1. Ionospheric Refraction Corrections in the GTDS for Satellite-To-Satellite Tracking Data

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.; Kozelsky, J. K.

    1976-01-01

    In satellite-to-satellite tracking (SST), both geographic and diurnal ionospheric effects must be contended with, since the line of sight between satellites can cross a day-night interface or lie within the equatorial ionosphere. These various effects were examined, and a method of computing ionospheric refraction corrections to range and range-rate measurements was devised with sufficient accuracy for use in orbit determination. The Bent Ionospheric Model is used for SST refraction corrections. Using this model, a method of computing corrections through large ionospheric gradients was devised and implemented in the Goddard Trajectory Determination System. The various considerations taken in designing and implementing this SST refraction correction algorithm are reported.

  2. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    NASA Astrophysics Data System (ADS)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East and North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates with pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to the WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to analyzing the influence of the dry-frequency bias (i.e., climate models simulating too few rain days) on the bias-corrected projections and the modification of the climate change signal by the DBS method.

  3. Further evidence for the increased power of LOD scores compared with nonparametric methods.

    PubMed

    Durner, M; Vieland, V J; Greenberg, D A

    1999-01-01

    In genetic analysis of diseases in which the underlying model is unknown, "model-free" methods, such as affected sib pair (ASP) tests, are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models, in fact, represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI); (b) ASP methods; and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.

  4. Evaluation of a new satellite-based precipitation dataset for climate studies in the Xiang River basin, Southern China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Hsu, K. L.

    2017-12-01

    A new satellite-based precipitation dataset, the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), with a long-term time series dating back to 1983, can be a valuable dataset for climate studies. This study investigates the feasibility of using PERSIANN-CDR as a reference dataset for climate studies. Sixteen CMIP5 models are evaluated over the Xiang River basin, southern China, by comparing their performance in precipitation projection and streamflow simulation, particularly for extreme precipitation and streamflow events. The results show that PERSIANN-CDR is a valuable dataset for climate studies, even for extreme precipitation events. The precipitation estimates and their extreme events from the CMIP5 models are improved significantly, relative to rain gauge observations, after bias correction against the PERSIANN-CDR precipitation estimates. Of the streamflows simulated with raw and bias-corrected precipitation estimates from the 16 CMIP5 models, 10 are improved after bias correction. The impact of bias correction on extreme streamflow events is less consistent: only eight of the 16 models show clear improvement. Concerning the performance of the raw CMIP5 models on precipitation, IPSL-CM5A-MR outperforms the other CMIP5 models, while MRI-CGCM3 stands out for extreme events with better performance on six extreme precipitation metrics. Case studies also show that raw CCSM4, CESM1-CAM5, and MRI-CGCM3 outperform the other models in streamflow simulation, while MIROC5-ESM-CHEM, MIROC5-ESM, and IPSL-CM5A-MR perform better than the other models after bias correction.

  5. Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model

    NASA Astrophysics Data System (ADS)

    Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.

    2017-12-01

    In the past decades, Regional Climate Models (RCMs) have developed significantly, allowing climate simulations to be conducted at higher resolution. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run for the period 1986-1995, and then validated for the period 1996-2005. Based on statistics of spatial correlation and intensity distributions, the QM method showed a significant improvement in rainfall compared to the uncorrected output. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT, and R99pTOT were much better after the correction. The results suggest that the QM correction method should be used in practice for projections of future precipitation over Vietnam.
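
    A minimal empirical quantile-mapping sketch of the kind used here, assuming daily precipitation arrays for the observations (e.g., VnGP), the model historical run, and the period to be corrected; the variable names are illustrative:

    ```python
    import numpy as np

    def quantile_map(obs_hist, mod_hist, mod_target, n_q=99):
        """Empirical quantile mapping: move each model value to the observed
        value occupying the same quantile in the historical distributions."""
        q = np.linspace(0.5 / n_q, 1.0 - 0.5 / n_q, n_q)
        mod_q = np.quantile(mod_hist, q)   # model transfer-function nodes
        obs_q = np.quantile(obs_hist, q)   # observed transfer-function nodes
        # Values outside the calibration range are clamped to the end nodes.
        return np.interp(mod_target, mod_q, obs_q)
    ```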

  6. Post processing of optically recognized text via second order hidden Markov model

    NASA Astrophysics Data System (ADS)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system for text generated by Optical Character Recognition (OCR). A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The second-order HMM was chosen to keep track of bigrams so that the model represents the system more accurately. In experiments with training data of 159,733 characters and test data of 5,688 characters, the model corrected 43.38% of the errors with a precision of 75.34%. The precision value indicates that the model also introduced some new errors, decreasing the net correction percentage to 26.4%.
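
    For orientation, decoding the most likely character sequence in an HMM-based post-processor uses the Viterbi recursion. A first-order sketch is given below (the thesis uses a second-order model, which conditions transitions on the two previous states, i.e., on bigrams):

    ```python
    import numpy as np

    def viterbi(obs, log_pi, log_A, log_B):
        """First-order Viterbi decoding.

        obs    : observed symbol indices (e.g., OCR output characters)
        log_pi : log initial state probabilities, shape (S,)
        log_A  : log transition matrix, shape (S, S)
        log_B  : log emission matrix, shape (S, V)
        """
        n, s = len(obs), len(log_pi)
        delta = np.empty((n, s))             # best log-prob ending in each state
        back = np.empty((n, s), dtype=int)   # backpointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, n):
            scores = delta[t - 1][:, None] + log_A
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):        # trace the best path backwards
            path.append(int(back[t][path[-1]]))
        return path[::-1]
    ```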

  7. Minimum Energy Routing through Interactive Techniques (MERIT) modeling

    NASA Technical Reports Server (NTRS)

    Wylie, Donald P.

    1988-01-01

    The MERIT program is designed to demonstrate the feasibility of fuel savings by airlines through improved route selection using wind observations from their own fleets. After a discussion of weather and aircraft data, manual correction of wind fields, automatic corrections to wind fields, and short-range prediction models, it is concluded that improvements in wind information are possible if a system is developed for analyzing wind observations and correcting the forecasts made by the major models. One data handling system, McIDAS, can easily collect and display wind observations and model forecasts. Changing the wind forecasts beyond the time of the most recent observations is more difficult; an Australian Mesoscale Model was tested with promising but not definitive results.

  8. Adjustment of spatio-temporal precipitation patterns in a high Alpine environment

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter

    2018-01-01

    This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits large spatio-temporal variability in Alpine areas. Additionally, the density of the monitoring network is low and measurements are subject to major errors. This can lead to significant deficits in water balance estimation and streamflow simulation, e.g. for flood forecasting models. Therefore, precipitation correction factors are frequently applied. For the present study, a multiplicative, stepwise-linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for local meteorological conditions, the correction model is derived for two elevation zones: (1) from the valley floors up to 2000 m a.s.l. and (2) from 2000 m a.s.l. up to the mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snowfall. Therefore, separate correction factors for winter and summer months are additionally estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
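
    A sketch of a multiplicative, stepwise-linear correction of this general form, with the two elevation zones split at 2000 m a.s.l. and separate winter/summer factors. All numerical factor values below are made-up placeholders, not the calibrated COSERO parameters:

    ```python
    def precip_correction_factor(elevation_m, is_winter):
        """Multiplicative correction factor, piecewise linear in elevation.

        Placeholder factors: 1.0 at the valley floor, rising linearly to the
        zone-boundary value at 2000 m, then rising further toward the peaks.
        Winter factors are larger because snowfall undercatch is larger.
        """
        f_2000 = 1.30 if is_winter else 1.15   # factor at the 2000 m boundary
        f_peak = 1.60 if is_winter else 1.30   # factor near the peak zone top
        if elevation_m <= 2000.0:
            return 1.0 + (f_2000 - 1.0) * elevation_m / 2000.0
        frac = min((elevation_m - 2000.0) / 1500.0, 1.0)
        return f_2000 + (f_peak - f_2000) * frac

    # Example: scale a 10 mm gauge value measured at 2600 m in winter.
    corrected_precip = 10.0 * precip_correction_factor(2600.0, is_winter=True)
    ```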

  9. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction; the quantification errors relative to a dose-calibrator-derived measurement were <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
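
    For reference, the triple-energy-window estimate approximates the scatter under the photopeak by a trapezoid spanned by two narrow flanking windows. A minimal sketch (the window widths are illustrative, and the patient-dependent weighting factor mentioned above would scale this estimate):

    ```python
    def tew_scatter(counts_low, counts_high, w_low, w_high, w_peak):
        """Trapezoidal scatter estimate under the photopeak window."""
        return (counts_low / w_low + counts_high / w_high) * w_peak / 2.0

    def tew_corrected(counts_peak, counts_low, counts_high,
                      w_low=6.0, w_high=6.0, w_peak=60.0):
        """Photopeak counts with the TEW scatter estimate subtracted.

        Window widths are in keV; negative results are clipped to zero.
        """
        scatter = tew_scatter(counts_low, counts_high, w_low, w_high, w_peak)
        return max(counts_peak - scatter, 0.0)
    ```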

  10. Analysis of Cricoid Pressure Force and Technique Among Anesthesiologists, Nurse Anesthetists, and Registered Nurses.

    PubMed

    Lefave, Melissa; Harrell, Brad; Wright, Molly

    2016-06-01

    The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify the anatomic landmarks for cricoid pressure and apply the correct amount of force. The project used a one-group pretest-posttest design with an educational intervention. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force on the pretest than other participants; however, posttest results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that, before the educational intervention, participants lacked knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  11. Correction of broadband snow albedo measurements affected by unknown slope and sensor tilts

    NASA Astrophysics Data System (ADS)

    Weiser, Ursula; Olefs, Marc; Schöner, Wolfgang; Weyss, Gernot; Hynek, Bernhard

    2016-04-01

    Geometric effects induced by the underlying terrain slope or by tilt errors of the radiation sensors lead to erroneous measurements of snow or ice albedo. Consequently, artificial diurnal albedo variations on the order of 1-20% are observed. The present paper proposes a general method to correct tilt errors in albedo measurements in cases where the tilts of both the sensors and the slopes are not accurately measured or known. We demonstrate that atmospheric parameters for this correction model can be taken either from a nearby well-maintained, horizontally levelled measurement of global radiation or, alternatively, from a solar radiation model. In the next step the model is fitted to the measured data to determine the tilts and directions of the sensors and the underlying terrain slope. This then allows us to correct the measured albedo, the radiative balance, and the energy balance. Depending on the direction of the slope and the sensors, a comparison between measured and corrected albedo values reveals obvious over- or underestimations of albedo. It is also demonstrated that differences between measured and corrected albedo are generally highest for large solar zenith angles.
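
    The geometric heart of such a correction is the ratio between the direct-beam irradiance received on a tilted plane and on a horizontal one. A sketch under the simplifying assumption of direct-beam-only illumination (the paper's model also handles diffuse radiation and fits the unknown tilt angles):

    ```python
    import numpy as np

    def tilt_factor(sza_deg, saz_deg, tilt_deg, tilt_az_deg):
        """Ratio of direct-beam irradiance on a tilted vs horizontal sensor.

        sza_deg, saz_deg      : solar zenith and azimuth angles [deg]
        tilt_deg, tilt_az_deg : sensor/slope tilt and its azimuth [deg]
        """
        sza, saz = np.radians(sza_deg), np.radians(saz_deg)
        tilt, taz = np.radians(tilt_deg), np.radians(tilt_az_deg)
        cos_inc = (np.cos(sza) * np.cos(tilt)
                   + np.sin(sza) * np.sin(tilt) * np.cos(saz - taz))
        return max(cos_inc, 0.0) / np.cos(sza)

    # A 5 deg tilt toward the sun at 70 deg solar zenith inflates the signal
    # by ~24%, consistent with the largest errors at large zenith angles.
    print(tilt_factor(70.0, 180.0, 5.0, 180.0))
    ```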

  12. Can a pseudo-Nambu-Goldstone Higgs lead to symmetry non-restoration?

    NASA Astrophysics Data System (ADS)

    Kilic, Can; Swaminathan, Sivaramakrishnan

    2016-01-01

    The calculation of finite temperature contributions to the scalar potential in a quantum field theory is similar to the calculation of loop corrections at zero temperature. In natural extensions of the Standard Model where loop corrections to the Higgs potential cancel between Standard Model degrees of freedom and their symmetry partners, it is interesting to contemplate whether finite temperature corrections also cancel, raising the question of whether a broken phase of electroweak symmetry may persist at high temperature. It is well known that this does not happen in supersymmetric theories because the thermal contributions of bosons and fermions do not cancel each other. However, for theories with same spin partners, the answer is less obvious. Using the Twin Higgs model as a benchmark, we show that although thermal corrections do cancel at the level of quadratic divergences, subleading corrections still drive the system to a restored phase. We further argue that our conclusions generalize to other well-known extensions of the Standard Model where the Higgs is rendered natural by being the pseudo-Nambu-Goldstone mode of an approximate global symmetry.
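
    As a hedged reminder of the mechanism (this is standard one-loop finite-temperature field theory, shown schematically, not a result specific to this paper):

    ```latex
    % One-loop high-temperature expansion of the thermal potential for a
    % scalar \phi coupled to bosons (b) and fermions (f) with n_i degrees
    % of freedom and field-dependent masses m_i(\phi):
    V_T(\phi) \;\approx\; \frac{T^2}{24}\Big[\sum_{b} n_b\, m_b^2(\phi)
          \;+\; \tfrac{1}{2}\sum_{f} n_f\, m_f^2(\phi)\Big]
          \;-\; \frac{T}{12\pi}\sum_{b} n_b\, m_b^3(\phi) \;+\; \dots
    % Bosons and fermions enter the T^2 term with the SAME sign, so a
    % zero-temperature cancellation of quadratic divergences between partners
    % does not by itself cancel the thermal mass; and even when the T^2
    % pieces do cancel, the subleading m^3 T and logarithmic terms remain
    % and can still drive the system to the restored phase.
    ```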

  13. Quantification of hepatic steatosis with T1-independent, T2*-corrected MR imaging with spectral modeling of fat: blinded comparison with MR spectroscopy.

    PubMed

    Meisamy, Sina; Hines, Catherine D G; Hamilton, Gavin; Sirlin, Claude B; McKenzie, Charles A; Yu, Huanzhou; Brittain, Jean H; Reeder, Scott B

    2011-03-01

    To prospectively compare an investigational version of a complex-based chemical shift-based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24-71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r²), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used (r² = 0.99; slope ± standard deviation = 1.00 ± 0.01, P = .77; intercept ± standard deviation = 0.2% ± 0.1, P = .19). T1-independent chemical shift-based water-fat separation MR imaging methods can accurately quantify fat over the entire liver, using MR spectroscopy as the reference standard, when T2* correction, spectral modeling of fat, and eddy current correction methods are used. © RSNA, 2011.

  14. Quantification of Hepatic Steatosis with T1-independent, T2*-corrected MR Imaging with Spectral Modeling of Fat: Blinded Comparison with MR Spectroscopy

    PubMed Central

    Hines, Catherine D. G.; Hamilton, Gavin; Sirlin, Claude B.; McKenzie, Charles A.; Yu, Huanzhou; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Purpose: To prospectively compare an investigational version of a complex-based chemical shift–based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. Materials and Methods: This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24–71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r2), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Results: Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used (r2 = 0.99; slope ± standard deviation = 1.00 ± 0.01, P = .77; intercept ± standard deviation = 0.2% ± 0.1, P = .19). Conclusion: T1-independent chemical shift–based water-fat separation MR imaging methods can accurately quantify fat over the entire liver, by using MR spectroscopy as the reference standard, when T2* correction, spectral modeling of fat, and eddy current correction methods are used. © RSNA, 2011 PMID:21248233

  15. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories in which quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph, and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: its memory lifetime increases with system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is to mediate an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  16. Wall interference correction improvements for the ONERA main wind tunnels

    NASA Technical Reports Server (NTRS)

    Vaucheret, X.

    1982-01-01

    This paper describes improved methods of calculating wall interference corrections for the large ONERA wind tunnels. The mathematical description of the model and its sting support has become more sophisticated. An increasing number of singularities is used until agreement between the theoretical and experimental signatures of the model and sting on the walls of the closed test section is obtained. The singularity decentering effects are calculated when the model reaches large angles of attack. The porosity factor cartography on the perforated walls, deduced from the measured signatures, now replaces the reference tests previously carried out in larger tunnels. The porosity factors obtained from the blockage terms (signatures at zero lift) and from the lift terms are in good agreement. In each case (model + sting + test section), wall corrections are now determined, before the tests, as a function of the fundamental parameters M, CS, CZ. During the wind tunnel tests, the corrections are quickly computed from these functions.

  17. Improved calibration-based non-uniformity correction method for uncooled infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao

    2017-08-01

    With the latest improvements in microbolometer focal plane arrays (FPAs), uncooled infrared (IR) cameras are becoming the most widely used devices in thermography, especially in handheld devices. However, the influences of changing ambient conditions and the non-uniform response of the sensors make it difficult to correct the non-uniformity of an uncooled infrared camera. In this paper, based on the infrared radiation characteristics of a TEC-less uncooled infrared camera, a novel model is proposed for calibration-based non-uniformity correction (NUC). In this model, we introduce the FPA temperature, together with the responses of the microbolometers under different ambient temperatures, to calculate the correction parameters. Based on the proposed model, the correction parameters can be determined from calibration measurements under controlled ambient conditions and a uniform blackbody. All correction parameters are determined after the calibration process and can then be used to correct the non-uniformity of the infrared camera in real time. This paper presents the details of the compensation procedure and the performance of the proposed calibration-based non-uniformity correction method. The method was evaluated on realistic IR images obtained by a 384 x 288 pixel uncooled long-wave infrared (LWIR) camera operated under changing ambient conditions. The results show that the method can exclude the influence of changing ambient conditions and ensure that the infrared camera has a stable performance.
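
    For contrast with the proposed FPA-temperature-dependent model, the classical calibration-based baseline is a per-pixel two-point (gain/offset) correction derived from two uniform blackbody views. A sketch of that baseline (the paper's model additionally parameterizes these coefficients by FPA/ambient temperature):

    ```python
    import numpy as np

    def two_point_nuc(raw, resp_cold, resp_hot, t_cold, t_hot):
        """Per-pixel two-point non-uniformity correction.

        raw       : raw frame to correct, shape (H, W)
        resp_cold : per-pixel response viewing a uniform blackbody at t_cold
        resp_hot  : per-pixel response viewing a uniform blackbody at t_hot
        Returns the frame mapped onto a common radiometric scale.
        """
        gain = (t_hot - t_cold) / (resp_hot - resp_cold)  # per-pixel gain
        offset = t_cold - gain * resp_cold                # per-pixel offset
        return gain * raw + offset
    ```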

  18. Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model

    NASA Astrophysics Data System (ADS)

    Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.

    2017-10-01

    The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low-uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.

  19. Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of satellite altimetry observations.

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires computing the path delay in the neutral atmosphere, which is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used a dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were then used to compute corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterizing the systematic errors caused by mismodeling the atmosphere.

  20. Entropy corrected holographic dark energy models in modified gravity

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila

    We consider the power law and the entropy corrected holographic dark energy (HDE) models with the Hubble horizon in the dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to the quintessence regime and the cold dark matter (ΛCDM) limit. The ωΛ-ωΛ′ plane approaches the ΛCDM limit and the freezing region in both entropy corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid for all values of the interaction parameter. It is interesting to mention that our results for the Hubble parameter, the EoS parameter, and the ωΛ-ωΛ′ plane are consistent with present observations such as Planck, WP, BAO, H0, SNLS, and nine-year WMAP.

  1. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time span. These include certain computational aspects of atmospheric correction, among them the modification of our hyperspectral atmospheric correction algorithm, Tafkaa, for various multi-spectral instruments such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing aerosol models into the tables used by the Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms that use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  2. Relationship of forces acting on implant rods and degree of scoliosis correction.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2013-02-01

    Adolescent idiopathic scoliosis is a complex spinal pathology characterized as a three-dimensional spine deformity combined with vertebral rotation. Various surgical techniques for correction of severe scoliotic deformity have evolved and become more advanced in applying the corrective forces. The objective of this study was to investigate the relationship between the corrective forces acting on deformed rods and the degree of scoliosis correction. Implant rod geometries of six adolescent idiopathic scoliosis patients were measured before and after surgery. An elasto-plastic finite element model of the implant rod before surgery was reconstructed for each patient. An inverse method based on finite element analysis was used to apply forces to the implant rod model such that its deformed shape matched the measured post-surgery geometry. The relationship between the magnitude of the corrective forces and the degree of correction, expressed as the change of Cobb angle, was evaluated. The effects of screw configuration on the corrective forces were also investigated. Corrective forces acting on rods and degree of correction were not correlated. Increasing the number of implant screws tended to decrease the magnitude of the corrective forces but did not provide a higher degree of correction. Although greater correction was achieved with higher screw density, the forces increased at some levels. The biomechanics of scoliosis correction depends not only on the corrective forces acting on implant rods but also on various parameters such as screw placement configuration and spine stiffness. Considering the magnitude of forces, increasing screw density is not guaranteed to be the safest surgical strategy. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Radius correction formula for capacitances and effective length vectors of monopole and dipole antenna systems

    NASA Astrophysics Data System (ADS)

    Macher, W.; Oswald, T. H.

    2011-02-01

    In the investigation of antenna systems which consist of one or several monopoles, realistic modeling of the monopole radii is not always feasible. In particular, physical scale models for electrolytic tank measurements of effective length vectors (rheometry) of spaceborne monopoles are so small that a correct scaling of the monopole radii often results in very thin, flexible antenna wires which bend too much under their own weight. So one has to use monopoles in the model which are thicker than the correctly scaled diameters. The opposite case, where the monopole radius has to be modeled too thin, arises with certain numerical antenna programs based on wire grid modeling. This problem occurs if the underlying algorithm assumes that the wire segments are much longer than their diameters; in such a case it may not be possible to use wires of the correct thickness to model the monopoles. So that these numerical and experimental techniques can nonetheless be applied to determine the capacitances and effective length vectors of such monopoles (with an inaccurate modeling of monopole diameters), an analytical correction method is devised. It enables one to calculate these quantities for the real antenna system from those obtained for the model antenna system with incorrect monopole radii. Since a typical application of the presented formalism is the analysis of spaceborne antenna systems, an illustration for the monopoles of the WAVES experiment on board the STEREO-A spacecraft is given.

  4. Bias correction of temperature produced by the Community Climate System Model using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Moghim, S.; Hsu, K.; Bras, R. L.

    2013-12-01

    General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that affect their applications. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
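
    A minimal sketch of the surrogate-model idea using a two-layer feed-forward network, assuming arrays of GCM predictors (air and skin temperature, specific humidity, shortwave and longwave radiation) and observed temperature for a training period. The network size, the placeholder data, and the scikit-learn usage are illustrative, not the authors' setup:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X_hist: GCM predictors in the historical period, shape (n_samples, 5)
    # y_obs : observed temperature for the same samples, shape (n_samples,)
    # X_fut : GCM predictors for the period to be bias-corrected
    rng = np.random.default_rng(0)
    X_hist = rng.standard_normal((1000, 5))          # placeholder data
    y_obs = X_hist @ rng.standard_normal(5) + 0.5    # placeholder target
    X_fut = rng.standard_normal((200, 5))

    # Feed-forward network trained against observations, then applied to
    # future GCM output to produce bias-corrected temperature.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(X_hist, y_obs)
    t_corrected = model.predict(X_fut)
    ```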

  5. EMC Global Climate And Weather Modeling Branch Personnel

    Science.gov Websites

    Comparison statistics include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; and CMC raw and bias-corrected control forecast domain-averaged bias reduction.

  6. Delivery of Education and Training in Oregon's Adult Corrections Institutions. An Analysis of Present and Future Directions.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR. Education and Work Program.

    This report on education and training in Oregon corrections institutions begins with a brief discussion of trends in correctional education and funding patterns. It then examines three general models of corrections education service delivery: educational programming under institutional superintendents, statewide programming facilitated by a state…

  7. The usefulness of “corrected” body mass index vs. self-reported body mass index: comparing the population distributions, sensitivity, specificity, and predictive utility of three correction equations using Canadian population-based data

    PubMed Central

    2014-01-01

    Background National data on body mass index (BMI), computed from self-reported height and weight, is readily available for many populations including the Canadian population. Because self-reported weight is found to be systematically under-reported, it has been proposed that the bias in self-reported BMI can be corrected using equations derived from data sets which include both self-reported and measured height and weight. Such correction equations have been developed and adopted. We aim to evaluate the usefulness (i.e., distributional similarity; sensitivity and specificity; and predictive utility vis-à-vis disease outcomes) of existing and new correction equations in population-based research. Methods The Canadian Community Health Surveys from 2005 and 2008 include both measured and self-reported values of height and weight, which allows for construction and evaluation of correction equations. We focused on adults age 18–65, and compared three correction equations (two correcting weight only, and one correcting BMI) against self-reported and measured BMI. We first compared population distributions of BMI. Second, we compared the sensitivity and specificity of self-reported BMI and corrected BMI against measured BMI. Third, we compared the self-reported and corrected BMI in terms of association with health outcomes using logistic regression. Results All corrections outperformed self-report when estimating the full BMI distribution; the weight-only correction outperformed the BMI-only correction for females in the 23–28 kg/m2 BMI range. In terms of sensitivity/specificity, when estimating obesity prevalence, corrected values of BMI (from any equation) were superior to self-report. In terms of modelling BMI-disease outcome associations, findings were mixed, with no correction proving consistently superior to self-report. Conclusions If researchers are interested in modelling the full population distribution of BMI, or estimating the prevalence of obesity in a population, then a correction of any kind included in this study is recommended. If the researcher is interested in using BMI as a predictor variable for modelling disease, then both self-reported and corrected BMI result in biased estimates of association. PMID:24885210
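
    The correction equations under comparison are simple regressions of measured values on self-reported ones, built in a calibration survey where both are available and then applied to self-report-only data. A sketch of the weight-only variant, using placeholder data and scikit-learn as an illustrative tool; the fitted coefficients here are not the published equations:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # In a CCHS-style calibration sample, both measured and self-reported
    # weight are available; placeholder data stand in for them here.
    rng = np.random.default_rng(0)
    measured_wt = rng.normal(75, 15, 500)                        # kg
    self_report_wt = measured_wt - 1.5 + rng.normal(0, 2, 500)   # under-report
    height_m = rng.normal(1.70, 0.10, 500)

    # Weight-only correction: regress measured weight on self-reported
    # weight, then recompute BMI from the corrected weight.
    eq = LinearRegression().fit(self_report_wt.reshape(-1, 1), measured_wt)
    corrected_wt = eq.predict(self_report_wt.reshape(-1, 1))
    corrected_bmi = corrected_wt / height_m ** 2
    ```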

  8. Relativistic Corrections to the Bohr Model of the Atom

    ERIC Educational Resources Information Center

    Kraft, David W.

    1974-01-01

    Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
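
    A hedged sketch of the standard result such a derivation produces (the Sommerfeld-type expansion for circular Bohr orbits; the article's exact expressions may differ):

    ```latex
    % Expand the relativistic kinetic energy in powers of p/mc:
    T = \sqrt{p^2c^2 + m^2c^4} - mc^2
      \;\approx\; \frac{p^2}{2m} - \frac{p^4}{8m^3c^2} + \dots
    % Carrying the first correction through the Bohr quantization for a
    % circular orbit yields energy levels of the form
    E_n \;\approx\; -\,\frac{mc^2\alpha^2}{2n^2}
          \left[\,1 + \frac{\alpha^2}{4n^2} + \mathcal{O}(\alpha^4)\right],
    % i.e. a fractional shift of order \alpha^2 \approx 5\times10^{-5}.
    ```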

  9. A study of ionospheric grid modification technique for BDS/GPS receiver

    NASA Astrophysics Data System (ADS)

    Liu, Xuelin; Li, Meina; Zhang, Lei

    2017-07-01

    For a single-frequency GPS receiver, ionospheric delay is an important factor affecting positioning performance. There are many kinds of ionospheric correction methods; common models are the Bent model, the IRI model, the Klobuchar model, the NeQuick model, and so on. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency GPS receiver, but this model can only reduce the ionospheric error by about 50% in the mid-latitudes. In the BeiDou system, the accuracy of the broadcast ionospheric correction is higher. Therefore, this paper proposes a method that uses the BDS grid information to correct the GPS ionospheric delay and thus improve the ionospheric correction for a BDS/GPS compatible positioning receiver. The principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS-only system and the BDS/GPS compatible positioning system is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.
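
    Grid techniques broadcast vertical delays at ionospheric grid points (IGPs); a receiver interpolates them to its ionospheric pierce point (IPP) and applies an obliquity factor for the slant path. A sketch of the standard four-point bilinear interpolation (SBAS-style; the BDS message details are omitted here):

    ```python
    def grid_iono_delay(x, y, d_sw, d_se, d_nw, d_ne):
        """Bilinear interpolation of vertical delays at the 4 IGPs
        surrounding the ionospheric pierce point.

        x, y : normalized IPP position within the grid cell, each in [0, 1]
               (x eastward from the west edge, y northward from the south edge)
        d_*  : vertical ionospheric delays at the cell corners [m]
        """
        w_sw = (1 - x) * (1 - y)
        w_se = x * (1 - y)
        w_nw = (1 - x) * y
        w_ne = x * y
        return w_sw * d_sw + w_se * d_se + w_nw * d_nw + w_ne * d_ne

    # Example: IPP located 30% east and 60% north within the cell.
    vertical_delay = grid_iono_delay(0.3, 0.6, 2.1, 2.4, 1.8, 2.0)
    ```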

  10. BP artificial neural network based wave front correction for sensor-less free space optics communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Zhao, Xiaohui

    2017-02-01

    Sensor-less adaptive optics (AO) is one of the most promising methods to compensate for strong wave front disturbance in free space optics communication (FSO). In this study, a back propagation (BP) artificial neural network is applied to the sensor-less AO system to design a distortion correction scheme. This method needs only one or a few online measurements to correct the wave front distortion, compared with other model-based approaches, by which the real-time capability of the system is enhanced and the Strehl ratio (SR) is largely improved. Comparisons in numerical simulation with the model-based and model-free correction methods proposed in Refs. [6,8,9,10] are given to show the validity and advantage of the proposed method.

  11. Cone-beam CT of traumatic brain injury using statistical reconstruction with a post-artifact-correction noise model

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and a smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point of care, such as in emergency, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
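
    For reference, the PWLS estimate and the kind of weight update described here can be written schematically as follows (first-order variance propagation with a Poisson-like noise assumption; the paper's exact noise model is more detailed):

    ```latex
    % Penalized weighted least-squares reconstruction:
    \hat{\mu} = \arg\min_{\mu}\; (l - A\mu)^{\top} W (l - A\mu) + \beta R(\mu),
    \qquad W = \mathrm{diag}\{w_i\},\quad w_i = 1/\operatorname{var}(l_i).
    % After scatter correction \bar{y}_i = y_i - \hat{s}_i and the log
    % transform l_i = \ln(y_0/\bar{y}_i), first-order propagation with
    % var(y_i) \approx y_i gives
    \operatorname{var}(l_i) \;\approx\;
        \frac{\operatorname{var}(y_i)}{\bar{y}_i^{\,2}}
        \;\approx\; \frac{y_i}{(y_i - \hat{s}_i)^2},
    % so the statistical weights must be recomputed after each correction.
    ```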

  12. Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches

    NASA Astrophysics Data System (ADS)

    Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael

    2011-09-01

    Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of 3D DDD simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.

  13. The QCD corrections of the process h → ηbZ

    NASA Astrophysics Data System (ADS)

    Zhu, Rong-Fei; Feng, Tai-Fu; Zhang, Hai-Bin

    2018-05-01

    We investigate the 125 GeV Higgs boson decay to a pseudoscalar quarkonium ηb and a Z boson. We calculate the quantum chromodynamics (QCD) one-loop corrections to the branching ratio of the process, Br(h → ηbZ), both in the Standard Model (SM) and in the two-Higgs-doublet model (THDM). Including the QCD one-loop corrections, the branching ratio of h → ηbZ in the SM is Br(h → ηbZ) = (4.739^{+0.276}_{-0.244}) × 10^{-5}. The relative correction of the QCD one-loop result with respect to the tree-level Br(h → ηbZ) is around 76% in the SM. Similarly, the relative correction in the THDM can also be around 75%. The key parameter, tan β, can affect the relative correction in the THDM.

  14. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
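
    The systematic-sampling correction that performed best here amounts to spatial filtering of the occurrence records, keeping at most one record per grid cell. A sketch of that idea (cell size and names are illustrative, not the paper's exact protocol):

    ```python
    import numpy as np

    def systematic_filter(lon, lat, cell_deg=0.5, seed=0):
        """Keep at most one occurrence record per grid cell to even out
        spatially uneven sampling effort before fitting an SDM."""
        rng = np.random.default_rng(seed)
        cells = {}
        for i, (x, y) in enumerate(zip(lon, lat)):
            key = (int(np.floor(x / cell_deg)), int(np.floor(y / cell_deg)))
            cells.setdefault(key, []).append(i)
        kept = [int(rng.choice(ids)) for ids in cells.values()]
        return sorted(kept)

    # Usage (hypothetical dataframe columns):
    # kept_idx = systematic_filter(df.longitude.values, df.latitude.values)
    ```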

  15. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    PubMed

    Malyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. The empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-averaged trace image correction was adequate only for the isotropic medium. Empirical scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
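
    A minimal sketch of the corrected b-value route described above: because the b-value scales with the squared gradient amplitude, a spatially varying nonlinearity scalar c(r) can be folded into the mono-exponential ADC estimate as an effective b-value. All numbers below are illustrative assumptions, not the paper's data.

      import numpy as np

      def adc_corrected(s0, sb, b_nominal, c):
          """Mono-exponential ADC with a gradient-nonlinearity-corrected b-value.

          s0, sb    : signals at b = 0 and b = b_nominal (arrays of equal shape)
          b_nominal : nominal b-value (s/mm^2)
          c         : spatially varying gradient scaling c(r); the b-value scales
                      with the squared gradient amplitude, so b_eff = c**2 * b.
          """
          b_eff = (c ** 2) * b_nominal
          return np.log(s0 / sb) / b_eff

      # Toy example: 10% gradient over-scaling at an off-center voxel
      s0 = np.array([1000.0, 1000.0])
      sb = np.array([367.9, 298.2])            # simulated DWI signals
      c = np.array([1.00, 1.10])               # 1.0 at isocenter, 1.1 off-center
      print(adc_corrected(s0, sb, 1000.0, c))  # ~1.0e-3 mm^2/s at both voxels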

  16. Comprehensive renormalization group analysis of the littlest seesaw model

    NASA Astrophysics Data System (ADS)

    Geib, Tanja; King, Stephen F.

    2018-04-01

    We present a comprehensive renormalization group analysis of the littlest seesaw model involving two right-handed neutrinos and a very constrained Dirac neutrino Yukawa coupling matrix. We perform the first χ2 analysis of the low energy masses and mixing angles, in the presence of renormalization group corrections, for various right-handed neutrino masses and mass orderings, both with and without supersymmetry. We find that the atmospheric angle, which is predicted to be near maximal in the absence of renormalization group corrections, may receive significant corrections for some nonsupersymmetric cases, bringing it into close agreement with the current best fit value in the first octant. By contrast, in the presence of supersymmetry, the renormalization group corrections are relatively small, and the prediction of a near maximal atmospheric mixing angle is maintained, for the studied cases. Forthcoming results from T2K and NOνA will decisively test these models at a precision comparable to the renormalization group corrections we have calculated.

  17. Some solutions for one of the cosmological constant problems

    NASA Astrophysics Data System (ADS)

    Nojiri, Shin'Ichi

    2016-11-01

    We propose several covariant models which may solve one of the cosmological constant problems. One of the models can be regarded as an extension of the sequestering model; the others can be regarded as extensions of the covariant formulation of unimodular gravity. The contributions to the vacuum energy from matter quantum corrections are absorbed into a redefinition of a scalar field, so that these corrections become irrelevant to the dynamics. Within the class of extended unimodular gravity models, we also consider models that can be regarded as topological field theories. These models can be extended so that not only the vacuum energy but any quantum corrections to the gravitational action become irrelevant to the dynamics. We find, however, that the BRS symmetry in the topological field theories is spontaneously broken, and therefore these models might not be consistent.

  18. Galactic evolution of oxygen. OH lines in 3D hydrodynamical model atmospheres

    NASA Astrophysics Data System (ADS)

    González Hernández, J. I.; Bonifacio, P.; Ludwig, H.-G.; Caffau, E.; Behara, N. T.; Freytag, B.

    2010-09-01

    Context. Oxygen is the third most abundant element in the Universe. The measurement of oxygen lines in metal-poor unevolved stars, in particular near-UV OH lines, can provide invaluable information about the properties of the early Galaxy. Aims: Near-UV OH lines constitute an important tool for deriving oxygen abundances in metal-poor dwarf stars. It is therefore important to correctly model the line formation of OH lines, especially in metal-poor stars, where 3D hydrodynamical models commonly predict cooler temperatures in the upper photosphere than plane-parallel hydrostatic models. Methods: We used a grid of 52 3D hydrodynamical model atmospheres for dwarf stars computed with the code CO5BOLD and extracted from the more extended CIFIST grid. The 52 models cover the effective temperature range 5000-6500 K, the surface gravity (log g) range 3.5-4.5, and the metallicity range -3 < [Fe/H] < 0. Results: We determine 3D-LTE abundance corrections in all 52 3D models for several OH lines and Fe I lines of different excitation potentials. These 3D-LTE corrections are generally negative and reach roughly -1 dex (for the OH 3167 Å line, with an excitation potential of approximately 1 eV) at the higher temperatures and surface gravities. Conclusions: We apply these 3D-LTE corrections to the individual O abundances derived from OH lines for a sample of metal-poor dwarf stars reported in Israelian et al. (1998, ApJ, 507, 805), Israelian et al. (2001, ApJ, 551, 833) and Boesgaard et al. (1999, AJ, 117, 492), by interpolating the stellar parameters of the dwarfs in the grid of 3D-LTE corrections. The new 3D-LTE [O/Fe] ratio keeps a trend similar to the 1D-LTE one, i.e., increasing towards lower [Fe/H] values. We also applied 1D-NLTE corrections to the 3D Fe I abundances and still see an increasing [O/Fe] ratio towards lower metallicities. However, the Galactic [O/Fe] ratio must be revisited once 3D-NLTE corrections become available for OH and Fe lines for a grid of 3D hydrodynamical model atmospheres.
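
    The interpolation step (looking up a star's correction within the grid of models) can be sketched with scipy; the grid nodes and correction values below are invented placeholders, not the published grid.

      import numpy as np
      from scipy.interpolate import griddata

      # Hypothetical grid of 3D-LTE corrections (dex) at model nodes
      # columns: Teff (K), log g, [Fe/H]
      nodes = np.array([
          [5000, 3.5, -3.0], [5000, 4.5, -3.0], [6500, 3.5, -3.0],
          [6500, 4.5, -3.0], [5000, 3.5,  0.0], [5000, 4.5,  0.0],
          [6500, 3.5,  0.0], [6500, 4.5,  0.0],
      ])
      corr = np.array([-0.55, -0.70, -0.80, -1.00, -0.05, -0.10, -0.15, -0.25])

      # Linear interpolation at the parameters of an observed dwarf
      star = np.array([[5800, 4.2, -2.1]])
      d_corr = griddata(nodes, corr, star, method="linear")
      print("3D-LTE correction:", d_corr[0], "dex")  # added to the 1D abundance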

  19. Criterion for correct recalls in associative-memory neural networks

    NASA Astrophysics Data System (ADS)

    Ji, Han-Bing

    1992-12-01

    A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with particular sets of learning weights. A necessary condition on the learning weights for the convergence of the WOPL model is obtained through neural dynamics, and a criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. An important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that each SNRG has a threshold value: a fundamental memory is correctly recalled whenever its SNRG is greater than or equal to its threshold. Furthermore, a theorem is given, and theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are obtained. In principle, when all chosen SNRGs or learning weights satisfy these conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate in a known stochastic sense for AMNNs, and the WOPL model can then achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
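
    A minimal numpy sketch of weighted outer-product learning as described: W = sum_k lambda_k x_k x_k^T with zero diagonal, so equal weights recover the Hopfield rule and integer weights recover multiple training. The network size, weight values, and recall dynamics below are illustrative assumptions.

      import numpy as np

      def wopl_weights(patterns, weights):
          """Weighted outer-product learning: W = sum_k lambda_k * x_k x_k^T, zero diagonal."""
          n = patterns.shape[1]
          W = np.zeros((n, n))
          for x, lam in zip(patterns, weights):
              W += lam * np.outer(x, x)
          np.fill_diagonal(W, 0.0)
          return W

      def recall(W, x, steps=20):
          """Synchronous sign-threshold recall dynamics."""
          for _ in range(steps):
              x_new = np.sign(W @ x)
              x_new[x_new == 0] = 1
              if np.array_equal(x_new, x):
                  break
              x = x_new
          return x

      rng = np.random.default_rng(1)
      mem = rng.choice([-1, 1], size=(3, 64))          # three fundamental memories
      W = wopl_weights(mem, weights=[1.0, 2.0, 1.5])   # hypothetical learning weights
      probe = mem[1].copy(); probe[:6] *= -1           # corrupt 6 bits
      print(np.array_equal(recall(W, probe), mem[1]))  # ideally True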

  1. Surface corrections for peridynamic models in elasticity and fracture

    NASA Astrophysics Data System (ADS)

    Le, Q. V.; Bobaru, F.

    2018-04-01

    Peridynamic models are derived by assuming that a material point is located in the bulk. Near a surface or boundary, material points do not have a full nonlocal neighborhood, so the effective material properties near the surface of a peridynamic model differ slightly from those in the bulk. A number of methods/algorithms have been proposed recently for correcting this peridynamic surface effect. In this study, we investigate the efficacy and computational cost of peridynamic surface correction methods for elasticity and fracture. We provide practical suggestions for reducing the peridynamic surface effect.

  2. Jet-boundary and Plan-form Corrections for Partial-Span Models with Reflection-Plane, End-Plate, or No End-Plate in a Closed Circular Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Sivells, James C; Deters, Owen J

    1946-01-01

    A method is presented for determining the jet-boundary and plan-form corrections necessary for application to test data for a partial-span model with a reflection plane, an end plate, or no end plate in a closed circular wind tunnel. Examples are worked out for a partial-span model with each of the three end conditions in the Langley 19-foot pressure tunnel and the corrections are applied to measured values of lift, drag, pitching-moment, rolling-moment, and yawing-moment coefficients.

  3. Properties of Vector Preisach Models

    NASA Technical Reports Server (NTRS)

    Kahler, Gary R.; Patel, Umesh D.; Torre, Edward Della

    2004-01-01

    This paper discusses rotational anisotropy and rotational accommodation of magnetic particle tape, effects that impact read and write performance in the recording process. We introduce the reduced vector model as the basis for the computations. Rotational magnetization models must accurately compute the anisotropic characteristics of ellipsoidally magnetizable media. An ellipticity factor is derived for these media that computes the two-dimensional magnetization trajectory for all applied fields. An orientation correction must then be applied to the computed rotational magnetization; we present such corrections for both isotropic and anisotropic materials.

  4. The Real-Time Wall Interference Correction System of the NASA Ames 12-Foot Pressure Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    1998-01-01

    An improved version of the Wall Signature Method was developed to compute wall interference effects in three-dimensional subsonic wind tunnel testing of aircraft models in real-time. The method may be applied to a full-span or a semispan model. A simplified singularity representation of the aircraft model is used. Fuselage, support system, propulsion simulator, and separation wake volume blockage effects are represented by point sources and sinks. Lifting effects are represented by semi-infinite line doublets. The singularity representation of the test article is combined with the measurement of wind tunnel test reference conditions, wall pressure, lift force, thrust force, pitching moment, rolling moment, and pre-computed solutions of the subsonic potential equation to determine first order wall interference corrections. Second order wall interference corrections for pitching and rolling moment coefficient are also determined. A new procedure is presented that estimates a rolling moment coefficient correction for wings with non-symmetric lift distribution. Experimental data obtained during the calibration of the Ames Bipod model support system and during tests of two semispan models mounted on an image plane in the NASA Ames 12 ft. Pressure Wind Tunnel are used to demonstrate the application of the wall interference correction method.

  5. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.

  6. An analysis of USSPACECOM's space surveillance network sensor tasking methodology

    NASA Astrophysics Data System (ADS)

    Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.

    1992-12-01

    This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayesian sequential-estimation algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites with 16 different observation rate/correction interval combinations. An ANOVA and confidence-interval analysis of the results shows that the model captures the differences in steady-state position error arising from varying observation rate and correction interval.

  7. Dilatation-dissipation corrections for advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper analyzes dilatation-dissipation based compressibility corrections for advanced turbulence models. Numerical computations verify that the dilatation-dissipation corrections devised by Sarkar and Zeman greatly improve both the k-omega and k-epsilon model predicted effect of Mach number on spreading rate. However, computations with the k-omega model also show that the Sarkar/Zeman terms cause an undesired reduction in skin friction for the compressible flat-plate boundary layer. A perturbation solution for the compressible wall layer shows that the Sarkar and Zeman terms reduce the effective von Karman constant in the law of the wall. This is the source of the inaccurate k-omega model skin-friction predictions for the flat-plate boundary layer. The perturbation solution also shows that the k-epsilon model has an inherent flaw for compressible boundary layers that is not compensated for by the dilatation-dissipation corrections. A compressibility modification for k-omega and k-epsilon models is proposed that is similar to those of Sarkar and Zeman. The new compressibility term permits accurate predictions for the compressible mixing layer, flat-plate boundary layer, and a shock-separated flow with the same values for all closure coefficients.

  8. Exploring the Predictors of Treatment Views of Private Correctional Staff: A Test of an Integrated Work Model

    ERIC Educational Resources Information Center

    Lambert, Eric G.; Hogan, Nancy L.

    2009-01-01

    Rehabilitation is a salient goal in the field of corrections. Correctional staff need to be supportive of rehabilitation efforts in order for them to be effective. Past studies that have examined correctional staff support for rehabilitation have produced conflicting results. Most studies have focused on personal characteristics, including age,…

  9. Results of a Nationwide Survey: Correctional Education Organizational Structure Trends.

    ERIC Educational Resources Information Center

    Gehring, Thom

    1990-01-01

    Describes a state-by-state three-part study of the generic types of organizational structures used to deliver correctional education services. The first product was a list of correctional education state directors; this report, including the models used in each state, is the second; and the third was an analysis of correctional school district…

  10. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with the various prediction models and used to examine issues in multi-model ensemble seasonal prediction, such as the best way of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite of statistically corrected individual predictions (corrected composite), are examined and compared to each other. The superensemble suffers more from overfitting than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
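
    The three ensemble types can be sketched in a few lines of numpy. The synthetic "truth", model biases, and the training/verification split below are assumptions for illustration, not the paper's simple climate model.

      import numpy as np

      rng = np.random.default_rng(2)
      T, M = 240, 5                                   # hindcast years, models
      truth = rng.standard_normal(T)
      # Each model: its own bias, amplitude error, and noise
      fc = np.array([1.2 * truth + b + 0.8 * rng.standard_normal(T)
                     for b in np.linspace(-1, 1, M)])           # shape (M, T)

      train = slice(0, 160)                           # training period
      simple = fc.mean(axis=0)                        # simple composite

      # Corrected composite: remove each model's bias and rescale, then average
      mu_f = fc[:, train].mean(axis=1, keepdims=True)
      mu_o = truth[train].mean()
      scale = truth[train].std() / fc[:, train].std(axis=1, keepdims=True)
      corrected = ((fc - mu_f) * scale + mu_o).mean(axis=0)

      # Superensemble: multiple linear regression weights fit on the training period
      A = np.vstack([fc[:, train], np.ones(160)]).T
      w, *_ = np.linalg.lstsq(A, truth[train], rcond=None)
      superens = w[:-1] @ fc + w[-1]

      for name, f in [("simple", simple), ("corrected", corrected), ("super", superens)]:
          print(name, np.corrcoef(f[160:], truth[160:])[0, 1].round(3))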

  11. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.

  12. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
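
    The low-rank recovery step can be illustrated with a generic robust-PCA sketch (inexact augmented Lagrangian). This is a standard stand-in, not necessarily the authors' exact solver, and the data below are synthetic.

      import numpy as np

      def shrink(X, tau):
          """Soft thresholding, the prox of the l1 norm."""
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

      def svt(X, tau):
          """Singular value thresholding, the prox of the nuclear norm."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(shrink(s, tau)) @ Vt

      def rpca(D, lam=None, mu=None, iters=200):
          """Robust PCA by inexact augmented Lagrangian: D ~ L (low rank) + S (sparse)."""
          m, n = D.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          mu = mu or 0.25 * m * n / np.abs(D).sum()
          Y = np.zeros_like(D); S = np.zeros_like(D)
          for _ in range(iters):
              L = svt(D - S + Y / mu, 1.0 / mu)
              S = shrink(D - L + Y / mu, lam / mu)
              Y += mu * (D - L - S)
          return L, S

      # Toy data: 8 images x 500 local color samples, true rank 2, 5% gross corruption
      rng = np.random.default_rng(3)
      L_true = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 500))
      D = L_true.copy()
      mask = rng.random(D.shape) < 0.05
      D[mask] += 10 * rng.standard_normal(mask.sum())

      L_hat, S_hat = rpca(D)
      # L_hat should be close to the uncorrupted L_true, with corruption isolated in S_hat
      print("mean abs error:", np.abs(L_hat - L_true).mean().round(4))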

  13. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], and sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
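
    For reference, the four criteria have simple closed forms; a small Python helper follows (the log-likelihoods in the example are made-up numbers, and lower values favor a model).

      import numpy as np

      def information_criteria(loglik, k, n):
          """AIC, AICC, BIC, and sample-size-adjusted BIC for a fitted model.

          loglik : maximized log-likelihood
          k      : number of free parameters
          n      : sample size
          """
          aic = -2 * loglik + 2 * k
          aicc = aic + 2 * k * (k + 1) / (n - k - 1)
          bic = -2 * loglik + k * np.log(n)
          sabic = -2 * loglik + k * np.log((n + 2) / 24.0)
          return {"AIC": aic, "AICC": aicc, "BIC": bic, "SABIC": sabic}

      # Example: comparing two latent-class solutions (hypothetical fits)
      print(information_criteria(loglik=-5420.3, k=25, n=1000))
      print(information_criteria(loglik=-5395.8, k=38, n=1000))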

  14. Effect of attenuation correction on image quality in emission tomography

    NASA Astrophysics Data System (ADS)

    Denisova, N. V.; Ondar, M. M.

    2017-10-01

    In this paper, mathematical modeling and computer simulations of myocardial perfusion SPECT imaging are performed. The main factors affecting the quality of reconstructed images in SPECT are anatomical structure, the diastolic volume of the myocardium, and the attenuation of gamma rays. The purpose of the present work is to study the effect of attenuation correction on image quality in emission tomography. A basic 2D model describing a Tc-99m distribution in a transaxial slice of the thoracic part of a patient body was designed. This model was used to construct four phantoms simulating various anatomical shapes: two male and two female patients of normal, obese, and slight physique were included in the study. A data acquisition model that includes the effects of non-uniform attenuation, the collimator-detector response, and Poisson statistics was developed. The projection data were calculated for 60 views in accordance with the standard myocardial perfusion SPECT imaging protocol. Reconstructions of images were performed using the OSEM algorithm, which is widely used in modern SPECT systems. Two types of patient examination procedures were simulated: SPECT without attenuation correction and SPECT/CT with attenuation correction. The obtained results indicate a significant effect of attenuation correction on SPECT image quality.
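
    A toy sketch of the reconstruction step: the MLEM update that OSEM accelerates by subsetting, with attenuation absorbed into the system matrix. The geometry and attenuation factors below are invented for illustration, not the paper's phantom.

      import numpy as np

      def mlem(A, y, iters=50):
          """Maximum-likelihood EM: x <- x / (A^T 1) * A^T (y / (A x)).

          A : system matrix (detector bins x image voxels); attenuation correction
              is modeled by scaling its elements with attenuation factors.
          y : measured counts per detector bin.
          """
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])          # sensitivity image
          for _ in range(iters):
              proj = A @ x
              x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
          return x

      # Toy 1D "SPECT": 40 voxels, 60 views; attenuation halves the deep-voxel response
      rng = np.random.default_rng(4)
      A = rng.random((60, 40)) * np.linspace(1.0, 0.5, 40)   # crude attenuation model
      x_true = np.zeros(40); x_true[15:25] = 1.0             # hot region
      y = rng.poisson(A @ x_true * 50) / 50.0
      print(np.round(mlem(A, y)[12:28], 2))   # roughly recovers the hot region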

  15. Corrected Four-Sphere Head Model for EEG Signals.

    PubMed

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  16. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  17. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

    Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study the three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using the divergence correction to constrain the iterative process when using a staggered-grid finite difference model, so as to accelerate 3D forward modelling and enhance computational accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and its derivation to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This approach is more intuitive, enabling a simple derivation of the discrete equation and calculation of the coefficients of the anisotropic divergence correction equation. We validate our 3D computational results by comparing them to results computed using an anisotropic controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.

  18. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  19. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and an associated computer code have been developed which compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve, so the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.

  1. Cerebellar input configuration toward object model abstraction in manipulation tasks.

    PubMed

    Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo

    2011-08-01

    It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

  2. In Our Lifetime: A Reformist View of Correctional Education.

    ERIC Educational Resources Information Center

    Arbenz, Richard L.

    1994-01-01

    Effective correctional education strategies should be used as models for comprehensive prison reform: prisons should become schools. Cognitive-democratic educational theory has been successfully applied in correctional education, demonstrating belief in people's intrinsic worth and ability to change. (SK)

  3. Erratum: Correction to: Toward an integrated genetic model for vent-distal SEDEX deposits

    NASA Astrophysics Data System (ADS)

    Sangster, D. F.

    2018-04-01

    The original version of this article unfortunately contained a mistake. Fig. 5(b) was originally published with an incorrect label. The correct version of this figure is provided here and the original article was corrected.

  4. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often rendering the data ineffective for analysis of the topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation-specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data with location-specific, point-by-point corrections, computed by nonparametric regression from lidar waveform-derived features, tidal-datum-based elevation, distance from shoreline, and other lidar digital elevation model based variables, will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image-based remote sensing data such as multi/hyperspectral imagery.
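
    A hedged sketch of the point-by-point correction idea using scikit-learn's stochastic gradient boosting (an open analogue of TreeNet); the feature set and the simulated vegetation bias are assumptions, not the study's data.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      # Hypothetical training table: waveform width, return amplitude, distance to
      # shoreline, and the lidar DEM elevation for each point.
      rng = np.random.default_rng(5)
      n = 2000
      X = np.column_stack([
          rng.gamma(2.0, 1.5, n),        # waveform width (ns)
          rng.random(n),                 # normalized return amplitude
          rng.uniform(0, 500, n),        # distance from shoreline (m)
          rng.normal(0.8, 0.3, n),       # lidar DEM elevation (m, tidal datum)
      ])
      # Simulated vegetation-induced positive bias that grows with waveform width
      bias = 0.05 + 0.03 * X[:, 0] + 0.05 * rng.standard_normal(n)
      z_ground = X[:, 3] - bias          # stand-in for RTK-GPS ground truth

      model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                        learning_rate=0.05, subsample=0.5,
                                        random_state=0)
      model.fit(X, z_ground)             # point-by-point elevation prediction
      z_corr = model.predict(X)
      print("residual sd:", np.round((z_corr - z_ground).std(), 3), "m")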

  5. Axial geometrical aberration correction up to 5th order with N-SYLC.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji

    2017-11-01

    We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by a 3-SYLC doublet. After that correction, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models, also including octupole and dodecapole fields, to correct these higher order aberrations without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. By computer simulations, we also show that for a beam energy of 5 keV and an initial angle of 10 mrad at the corrector object plane, a beam size of less than 0.5 nm is achieved at the corrector image plane.

  6. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h^-1 to 250 mm·h^-1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R^2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R^2 > 0.7) and a T vs. 1/Q model (R^2 > 0.98), were tested and found to be useful in situations where the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of the TBRs. Overall, the HS-TB3 model (with a better-protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (susceptible to clogging, with relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (highly susceptible to clogging and frequent changes in volumetric calibration, with highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs and may have major implications for field- and watershed-scale hydrologic studies.
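
    A minimal sketch of the intensity-based linear correction; the calibration pairs below are invented, not the paper's data.

      import numpy as np

      # Hypothetical lab calibration: simulated rainfall intensity (mm/h) vs the
      # under-catching reading of one tipping bucket gauge at that intensity.
      intensity = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
      measured  = np.array([4.95, 24.4, 48.1, 94.0, 138.9, 182.0, 225.0])

      # Simple linear correction model: actual = a * measured + b
      a, b = np.polyfit(measured, intensity, 1)
      pred = a * measured + b
      ss_res = ((intensity - pred) ** 2).sum()
      ss_tot = ((intensity - intensity.mean()) ** 2).sum()
      print(f"actual = {a:.4f} * measured + {b:.3f},  R^2 = {1 - ss_res/ss_tot:.4f}")

      # Applying the correction to field-recorded intensities
      field = np.array([12.0, 80.0, 190.0])
      print("corrected:", np.round(a * field + b, 1), "mm/h")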

  7. Exploring corrections to the Optomechanical Hamiltonian.

    PubMed

    Sala, Kamila; Tufarelli, Tommaso

    2018-06-14

    We compare two approaches for deriving corrections to the "linear model" of cavity optomechanics, in order to describe effects that are beyond first order in the radiation pressure coupling. In the regime where the mechanical frequency is much lower than the cavity one, we compare: (I) a widely used phenomenological Hamiltonian conserving the photon number; (II) a two-mode truncation of C. K. Law's microscopic model, which we take as the "true" system Hamiltonian. While these approaches agree at first order, the latter model does not conserve the photon number, resulting in challenging computations. We find that approach (I) allows for several analytical predictions, and significantly outperforms the linear model in our numerical examples. Yet, we also find that the phenomenological Hamiltonian cannot fully capture all high-order corrections arising from the C. K. Law model.
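
    For orientation, the "linear model" referred to here is conventionally written as the Hamiltonian below (a standard textbook form quoted as background, not taken from this paper); the corrected models under comparison add terms of higher order in the coupling g:

      \hat{H}_{\mathrm{lin}} = \hbar\omega_c\,\hat{a}^{\dagger}\hat{a}
                             + \hbar\omega_m\,\hat{b}^{\dagger}\hat{b}
                             - \hbar g\,\hat{a}^{\dagger}\hat{a}\,(\hat{b}+\hat{b}^{\dagger})

    Here ω_c and ω_m are the cavity and mechanical frequencies, a and b the corresponding mode operators, and g the radiation pressure coupling.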

  8. Baseline Correction of Diffuse Reflection Near-Infrared Spectra Using Searching Region Standard Normal Variate (SRSNV).

    PubMed

    Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro

    2015-12-01

    An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), is proposed. Standard normal variate (SNV) is an effective pretreatment for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for the SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixtures in the 10 000-4000 cm^-1 region at 4 cm^-1 intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus spectral window position obtained using the PLSR models revealed that the 8680-8364 cm^-1 region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680-8364 cm^-1 region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm^-1 and one due to glucose at 8364 cm^-1 was observed, owing to successful baseline correction by SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R^2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated over the full spectral region gave an R^2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These results indicate that SRSNV is effective for baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
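
    A compact sketch of the searching step: SNV within a moving window, scored by the cross-validated RMSE of a 1-LV PLS model. The spectra below are simulated, and the window and step sizes are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      def snv(X):
          """Standard normal variate: center and scale each spectrum (row)."""
          return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

      def srsnv(X, y, window=80, step=20):
          """Searching-region SNV: pick the window whose SNV spectra give the
          lowest cross-validated RMSE with a 1-latent-variable PLS model."""
          best = (np.inf, 0)
          for start in range(0, X.shape[1] - window + 1, step):
              Xw = snv(X[:, start:start + window])
              y_cv = cross_val_predict(PLSRegression(n_components=1), Xw, y, cv=5)
              rmse = np.sqrt(np.mean((y_cv.ravel() - y) ** 2))
              if rmse < best[0]:
                  best = (rmse, start)
          return best  # (RMSECV, start index of the optimal window)

      # Toy spectra: 60 samples x 1501 channels, per-sample baseline offset,
      # an analyte band near channel 900, and a little noise
      rng = np.random.default_rng(7)
      y = rng.uniform(0, 100, 60)                 # glucose content (%)
      base = rng.normal(0, 1, (60, 1))            # scattering baseline
      band = np.exp(-((np.arange(1501) - 900) / 40.0) ** 2)
      X = base + 0.01 * y[:, None] * band + 0.001 * rng.standard_normal((60, 1501))
      print(srsnv(X, y))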

  9. SU-F-T-69: Correction Model of NIPAM Gel and Presage for Electron and Proton PDD Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, C; Lin, C; Tu, P

    Purpose: The current standard equipment for proton PDD (percentage depth dose) measurement is the multilayer parallel ion chamber, which is expensive and complex to operate. NIPAM gel and Presage are alternative options for PDD measurement, but because of their different stopping powers their results need to be corrected. This study aims to create a correction model for NIPAM-gel and Presage PDD measurements. Methods: Standard water-based PDD profiles for 6 MeV electrons, 12 MeV electrons, and 90 MeV protons were acquired. An electron PDD profile was measured after 1 cm of NIPAM gel was added on top of the water, and another with an extra 1 cm of solid water (PTW RW3). The distance shifts among the standard PDD, NIPAM-gel PDD, and solid-water PDD at R50% were compared, and a water equivalent thickness correction factor (WET) was calculated. A similar process was repeated to obtain WETs for electrons with Presage, protons with NIPAM gel, and protons with Presage. PDD profiles of electrons and protons with NIPAM-gel and Presage columns were corrected with the corresponding WET, and the corrected profiles were compared with the standard profiles. Results: The WET for 12 MeV electrons with NIPAM gel was 1.135, and 1.034 for 12 MeV electrons with Presage. After correction, the PDD profiles matched the standard profiles well in the fall-off region; the differences at R50% were 0.26 mm shallower and 0.39 mm deeper, respectively. The same WETs were used to correct the 6 MeV electron profiles, and energy independence of the electron WET was observed: the difference at R50% was 0.17 mm deeper for NIPAM gel and 0.54 mm deeper for Presage. The WET for 90 MeV protons with NIPAM gel was 1.056, with a difference at R50% of 0.37 mm deeper. A quenching effect at the Bragg peak was revealed, with the dose at the Bragg peak underestimated by 27%. Conclusion: This correction model can be used to modify PDD profiles with a depth error within 1 mm. With this correction model, NIPAM gel and Presage can be practical for PDD profile measurement.

  10. Correction of electronic record for weighing bucket precipitation gauge measurements

    USDA-ARS?s Scientific Manuscript database

    Electronic sensors generate valuable streams of forcing and validation data for hydrologic models, but are often subject to noise, which must be removed as part of model input and testing database development. We developed the Automated Precipitation Correction Program (APCP) for weighing bucket preci...

  11. Towards automating measurements and predictions of Escherichia coli concentrations in the Cuyahoga River, Cuyahoga Valley National Park, Ohio, 2012–14

    USGS Publications Warehouse

    Brady, Amie M. G.; Meg B. Plona,

    2015-07-30

    A computer program was developed to manage the nowcasts by running the predictive models and posting the results to a publicly accessible Web site daily by 9 a.m. The nowcasts were able to correctly predict E. coli concentrations above or below the water-quality standard at Jaite for 79 percent of the samples compared with the measured concentrations. In comparison, the persistence model (using the previous day’s sample concentration) correctly predicted concentrations above or below the water-quality standard in only 68 percent of the samples. To determine if the Jaite nowcast could be used for the stretch of the river between Lock 29 and Jaite, the model predictions for Jaite were compared with the measured concentrations at Lock 29. The Jaite nowcast provided correct responses for 77 percent of the Lock 29 samples, which was a greater percentage than the percentage of correct responses (58 percent) from the persistence model at Lock 29.

  12. Corrections to Newton’s law of gravitation - application to hybrid Bloch brane

    NASA Astrophysics Data System (ADS)

    Almeida, C. A. S.; Veras, D. F. S.; Dantas, D. M.

    2018-02-01

    In this work, we present calculations of corrections to Newton's law of gravitation due to Kaluza-Klein gravitons in five-dimensional warped thick braneworld scenarios. We consider here a recently proposed model, namely, the hybrid Bloch brane, which couples two scalar fields to gravity and is engendered from a domain-wall-like defect. Two other models, the so-called asymmetric hybrid brane and the compact brane, are also considered. These models are deformations of the ϕ^4 and sine-Gordon topological defects, respectively; we consider the branes engendered by such defects and compute the corrections in their cases as well. In order to obtain the mass spectrum and its corresponding eigenfunctions, which are the essential quantities for computing the correction to the Newtonian potential, we develop a suitable numerical technique. The calculation of slight deviations in the gravitational potential may be used as a selection tool for braneworld scenarios, matching them against future experimental measurements in high energy collisions.
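
    Corrections of this kind are commonly summarized as Yukawa-type additions to the Newtonian potential. A generic hedged form (the specific masses and coefficients are what the paper computes numerically, not what is shown here) is

      V(r) = -\frac{G\,m_1 m_2}{r}\,\Big(1 + \sum_n \alpha_n\, e^{-m_n r}\Big),

    where the m_n are the Kaluza-Klein graviton masses and the coefficients α_n are determined by the corresponding eigenfunctions evaluated at the brane location.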

  13. Establishment and correction of an Echelle cross-prism spectrogram reduction model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng

    2017-11-01

    The accuracy of an echelle cross-prism spectrometer depends on how well the spectrogram reduction model matches the actual state of the instrument. Adjustment errors, however, can change the actual state of the spectrometer so that the reduction model no longer matches, producing an inaccurate wavelength calibration. Calibration of the spectrogram reduction model is therefore important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The dependence of image positions on the system parameters was simulated to assess how changes in prism refractive index, focal length, and other parameters influence the calculated results. The model was divided into different wavebands, and an iterative method based on the least squares principle, together with element lamps of known characteristic wavelengths, was used to calibrate the spectral model in each waveband and obtain the actual values of the system parameters. After correction, the deviations between the actual x- and y-coordinates and the coordinates calculated by the model are less than one pixel. A model corrected by this method thus reflects the current state of the spectrometer and supports accurate wavelength extraction, and repeated model correction can in turn guide instrument installation and adjustment, reducing alignment difficulty.

  14. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting, and pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal studies, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with those expected after PVE, whereas perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting, and may be promising for improving the reliability of perfusion quantification.
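
    For background, inflow corrections of this kind build on the steady-state spoiled gradient-echo (SPGR) signal model, with contrast concentration obtained from the T1 change via relaxivity; the correction itself amounts to adjusting this model for unsaturated inflowing spins. A minimal sketch with hypothetical parameter values follows.

      import numpy as np

      def spgr_signal(m0, t1, tr, alpha_deg):
          """Steady-state spoiled gradient-echo signal (T2* decay ignored)."""
          e1 = np.exp(-tr / t1)
          a = np.deg2rad(alpha_deg)
          return m0 * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)

      def concentration(t1, t10, r1=4.5):
          """Contrast concentration (mM) from 1/T1 = 1/T10 + r1*C (r1 in s^-1 mM^-1)."""
          return (1.0 / t1 - 1.0 / t10) / r1

      # Hypothetical numbers: TR = 4 ms, 12 deg flip angle, blood T10 = 1.66 s
      s_pre = spgr_signal(1.0, 1.66, 0.004, 12.0)
      s_post = spgr_signal(1.0, 0.30, 0.004, 12.0)   # post-contrast T1 of 0.30 s
      print("enhancement:", round(s_post / s_pre, 2),
            "C (mM):", round(concentration(0.30, 1.66), 2))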

  15. Sensitivity of the atmospheric water cycle to corrections of the sea surface temperature bias over southern Africa in a regional climate model

    NASA Astrophysics Data System (ADS)

    Weber, Torsten; Haensler, Andreas; Jacob, Daniela

    2017-12-01

    Regional climate models (RCMs) are used to dynamically downscale global climate projections at high spatial and temporal resolution in order to analyse the atmospheric water cycle. In southern Africa, precipitation patterns are strongly affected by moisture transport from the southeast Atlantic and southwest Indian Ocean and, consequently, by their sea surface temperatures (SSTs). However, global ocean models often have deficiencies in resolving regional- to local-scale ocean currents, e.g. in ocean areas offshore of the South African continent. When downscaling global climate projections with RCMs, the biased SSTs from the global forcing data are introduced into the RCMs and affect the results of the regional climate projections. In this work, the impact of an SST bias correction on precipitation, evaporation and moisture transport is analysed over southern Africa. For this analysis, several experiments were conducted with the regional climate model REMO using corrected and uncorrected SSTs. In these experiments, a global MPI-ESM-LR historical simulation was downscaled with REMO to high spatial resolutions of 50 × 50 km2 and 25 × 25 km2 for southern Africa using a double-nesting method. The results show a distinct impact of the corrected SSTs on the moisture transport, the meridional vertical circulation and the precipitation patterns in southern Africa. Furthermore, the experiment with corrected SSTs led to a reduction of the wet bias over southern Africa and to better agreement with observations than without SST bias correction.

  16. A Multidimensional B-Spline Correction for Accurate Modeling Sugar Puckering in QM/MM Simulations.

    PubMed

    Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M

    2017-09-12

    The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers, which limits their predictive value in the study of RNA and DNA systems. A method was introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods, even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable and efficient and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates that the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
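
    The core data structure is easy to picture: a grid of energy differences between a high-level reference and the semiempirical model over two coupled coordinates, interpolated by B-splines and added to the semiempirical energy. A hypothetical Python sketch of that idea follows; the grid values are synthetic placeholders, and a non-periodic scipy spline stands in for the periodic splines a production BMAP would use.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      theta1 = np.linspace(0.0, 360.0, 37)   # first coupled coordinate (deg)
      theta2 = np.linspace(0.0, 360.0, 37)   # second coupled coordinate (deg)
      # Placeholder for tabulated E_reference - E_semiempirical values.
      delta_E = np.random.default_rng(0).normal(size=(37, 37))

      bmap = RectBivariateSpline(theta1, theta2, delta_E, kx=3, ky=3)

      def corrected_energy(E_semiempirical, t1, t2):
          # Semiempirical energy plus the interpolated B-spline map correction.
          return E_semiempirical + bmap(t1 % 360.0, t2 % 360.0, grid=False)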

  17. Intercomparison of Downscaling Methods on Hydrological Impact for Earth System Model of NE United States

    NASA Astrophysics Data System (ADS)

    Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.

    2012-12-01

    Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM outputs through bias correction using novel techniques. For example, the Potsdam Institute for Climate Impact Research developed a series of bias-corrected GCMs for the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES model output (daily at 0.5 × 0.5 degree) to the 3' × 3' (longitude × latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation; 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011); 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climatic Research Unit) data; and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out to assess the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) as inputs to the Water Balance Model (WBMPlus; Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide an important complement to the statistical approaches tested.
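
    The bias-correction step shared by the BCSD-style methods above is quantile mapping between the model's and the observations' climatologies. A minimal Python sketch of that step for a single grid cell, assuming empirical CDF matching, is:

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_future):
          # Locate each future value within the historical model CDF...
          q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
          # ...then read off the same quantile from the observed distribution.
          return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))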

  18. An Inherent-Optical-Property-Centered Approach to Correct the Angular Effects in Water-Leaving Radiance

    DTIC Science & Technology

    2011-07-01

    10%. These results demonstrate that the IOP-based BRDF correction scheme (which is composed of the Rrs model along with the IOP retrieval...distribution was averaged over 10 min. 5. Validation of the IOP-Based BRDF Correction Scheme. The IOP-based BRDF correction scheme is applied to both...oceanic and coastal waters were very consistent qualitatively and quantitatively and thus validate the IOP-based BRDF correction system, at least

  19. How do Stability Corrections Perform in the Stable Boundary Layer Over Snow?

    NASA Astrophysics Data System (ADS)

    Schlögl, Sebastian; Lehning, Michael; Nishimura, Kouichi; Huwald, Hendrik; Cullen, Nicolas J.; Mott, Rebecca

    2017-10-01

    We assess sensible heat-flux parametrizations in stable conditions over snow surfaces by testing and developing stability correction functions for two alpine and two polar test sites. Five turbulence datasets are analyzed with respect to (a) the validity of Monin-Obukhov similarity theory, (b) the performance of well-established stability corrections, and (c) the development of new univariate and multivariate stability corrections. Applying a wide range of stability corrections reveals an overestimation of the turbulent sensible heat flux for high wind speeds and a generally poor performance of all investigated functions for large temperature differences between the snow and the atmosphere above (>10 K). Applying the Monin-Obukhov bulk formulation introduces a mean absolute error in the sensible heat flux of 6 W m^{-2} (compared with heat fluxes calculated directly from eddy covariance). The stability corrections add a further error of between 1 and 5 W m^{-2}, with the smallest error among published stability corrections found for the Holtslag scheme. We confirm the finding of previous studies that stability corrections need improvement for large temperature differences and high wind speeds, where sensible heat fluxes are distinctly overestimated. Under these atmospheric conditions our newly developed stability corrections slightly improve model performance. However, the differences between stability corrections are typically small compared to the residual error, which stems from the Monin-Obukhov bulk formulation itself.
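
    For readers unfamiliar with the bulk formulation being tested, a minimal Python sketch follows: the sensible heat flux from wind speed and the air-snow temperature difference, with a stable-side stability correction of the Holtslag and De Bruin type. The constants and default inputs are illustrative, not the fitted functions from this study.

      import numpy as np

      KAPPA, RHO, CP = 0.4, 1.2, 1005.0   # von Karman constant, air density, heat capacity

      def psi_stable(zeta, a=0.7, b=0.75, c=5.0, d=0.35):
          # Integrated stability correction for stable stratification, zeta = z/L.
          return -(a * zeta + b * (zeta - c / d) * np.exp(-d * zeta) + b * c / d)

      def sensible_heat_flux(u, dT, z=2.0, z0=1e-3, z0h=1e-4, L=50.0):
          # Bulk flux; dT = T_air - T_snow, so dT > 0 gives a downward (negative) flux.
          zeta = z / L
          fm = np.log(z / z0) - psi_stable(zeta)
          fh = np.log(z / z0h) - psi_stable(zeta)
          return -RHO * CP * KAPPA**2 * u * dT / (fm * fh)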

  20. Nonlinear Adjustment with or without Constraints, Applicable to Geodetic Models

    DTIC Science & Technology

    1989-03-01

    corrections are neglected, resulting in the familiar (linearized) observation equations. In matrix notation, the latter are expressed by V = AX + L...where A is the design matrix, X = Xa − X0 is the column-vector of parametric corrections, V = La − Lb is the column-vector of residuals, and L = L0 − Lb is the...X0 corresponds to the set u0 of model-surface coordinates describing the initial point P0. The final set of parametric corrections, X, then

  1. Two-compartment modeling of tissue microcirculation revisited.

    PubMed

    Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen

    2017-05-01

    Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies, although it is well known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from realistic. The aim of the present study was thus to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution spaces) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and the permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4)% (vs. (105 ± 21)% without correction) for the flow, (3.6 ± 6.1)% for the permeability-surface area product, (5.8 ± 4.9)% for the vascular volume and (2.5 ± 4.1)% for the interstitial volume, with individual deviations of more than 20% being rare exceptions. Increasing the duration of CA administration had statistically significant but opposite effects on the accuracy of the estimated flow (which declined) and the intravascular volume (which improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. Compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.

  2. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
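
    A conceptual Python sketch of the two-step scheme (not the authors' code) is given below: learn an unmixing from observations, correct each independent component as a univariate series, back-transform, then correct each grid cell marginally. The rank-based quantile mapping and the component count are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      def quantile_map(x, ref):
          # Map x onto the distribution of ref by rank (empirical CDF matching).
          q = (np.argsort(np.argsort(x)) + 0.5) / len(x)
          return np.quantile(ref, q)

      def ica_bias_correct(model, obs, n_components=10):
          # model, obs: (time, n_gridcells) arrays for one climate variable.
          ica = FastICA(n_components=n_components, random_state=0)
          s_obs = ica.fit_transform(obs)        # independent components of obs
          s_model = ica.transform(model)        # model data in the same basis
          s_corr = np.column_stack([quantile_map(s_model[:, k], s_obs[:, k])
                                    for k in range(n_components)])
          spatial = ica.inverse_transform(s_corr)   # step 1: spatial dependence
          return np.column_stack([quantile_map(spatial[:, j], obs[:, j])
                                  for j in range(obs.shape[1])])  # step 2: grid scale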

  3. Analytic model for ring pattern formation by bacterial swarmers

    NASA Astrophysics Data System (ADS)

    Arouh, Scott

    2001-03-01

    We analyze a model proposed by Medvedev, Kaper, and Kopell (the MKK model) for ring formation in two-dimensional bacterial colonies of Proteus mirabilis. We correct the model to formally include a feature crucial to the ring-generation mechanism: a bacterial density threshold in the nonlinear diffusivity of the MKK model. We numerically integrate the model equations and observe logarithmic profiles of the bacterial densities near the front. These lead us to define a consolidation front distinct from the colony radius. We find that this consolidation front propagates outward toward the colony radius with a nearly constant velocity. We then implement the corrected MKK equations in two dimensions and compare our results with biological experiment. Our numerical results indicate that the two-dimensional corrected MKK model yields smooth (rather than branched) rings, and that colliding colonies merge if grown in phase but not if grown out of phase. We also introduce a model, based on coupling the MKK model to a nutrient field, for simulating experimentally observed branched rings.

  4. Improved water-level forecasting for the Northwest European Shelf and North Sea through direct modelling of tide, surge and non-linear interaction

    NASA Astrophysics Data System (ADS)

    Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman

    2013-07-01

    In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.
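
    The conventional astronomical correction that DCSMv6 renders unnecessary reduces to one line: keep the modeled surge but replace the model's tide with the harmonically predicted tide. A minimal Python sketch (variable names assumed):

      def astronomically_corrected_forecast(tide_harmonic, model_total, model_tide):
          # Surge = modeled total water level minus modeled tide; the harmonic
          # tide from tide-gauge analysis replaces the model's tidal signal.
          return tide_harmonic + (model_total - model_tide)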

  5. Markov model of the loan portfolio dynamics considering influence of management and external economic factors

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana; Timofeeva, Galina

    2016-12-01

    A mathematical model of a loan portfolio in the form of a controlled discrete-time Markov chain is considered. It is assumed that the coefficients of the migration matrix depend on corrective actions and external factors. Corrective actions include the processing of new applications and interaction with existing solvent and insolvent clients. External factors are macroeconomic indicators, such as inflation and unemployment rates, exchange rates, consumer price indices, etc. Changes in corrective actions adjust the intensity of transitions in the migration matrix. A mathematical model for forecasting the credit portfolio structure that takes into account the cumulative impact of internal and external changes is obtained.
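
    A minimal Python sketch of the forecasting idea: the portfolio structure evolves as x(t+1) = x(t) P(t), where the migration matrix P(t) is perturbed by corrective actions and macroeconomic factors and then renormalized. The state set, matrix entries, and sensitivities below are illustrative placeholders, not calibrated values.

      import numpy as np

      P0 = np.array([[0.90, 0.08, 0.02],    # states: current / delinquent / default
                     [0.20, 0.60, 0.20],
                     [0.00, 0.00, 1.00]])   # default is absorbing here

      def adjusted_matrix(P, control, macro):
          # Shift cure and delinquency rates with internal and external factors.
          P = P.copy()
          P[1, 0] += 0.05 * control         # e.g. collections effort improves cures
          P[0, 1] += 0.03 * macro           # e.g. unemployment raises delinquency
          P = np.clip(P, 0.0, None)
          return P / P.sum(axis=1, keepdims=True)

      def forecast(x0, controls, macros):
          # Propagate the portfolio structure (a row vector of shares) forward.
          x = np.asarray(x0, dtype=float)
          for c, m in zip(controls, macros):
              x = x @ adjusted_matrix(P0, c, m)
          return x

      shares = forecast([0.85, 0.10, 0.05], [0.1] * 12, [0.0] * 12)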

  6. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
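
    At its core, symmetric orthogonalisation replaces the ROI time-courses with the closest set of mutually orthogonal time-courses, treating all ROIs on an equal footing rather than regressing one signal out of another. A simplified Python sketch of that idea follows; the published method additionally iterates over per-ROI scalings, so the single rescaling here is an assumption.

      import numpy as np

      def symmetric_orthogonalise(X):
          # X: (n_samples, n_rois) matrix of source-reconstructed time-courses.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)   # singular values s unused
          O = U @ Vt                  # closest matrix with orthonormal columns
          return O * np.linalg.norm(X, axis=0)   # restore per-ROI magnitudes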

  7. Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.

    PubMed

    Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N

    2007-09-01

    To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.

  8. Human-machine interaction to disambiguate entities in unstructured text and structured datasets

    NASA Astrophysics Data System (ADS)

    Ward, Kevin; Davenport, Jack

    2017-05-01

    Creating entity network graphs is a manual, time-consuming process for an intelligence analyst. Beyond the traditional big-data problem of information overload, individuals are often referred to by multiple names and shifting titles as they advance in their organizations over time, which quickly makes simple string or phonetic alignment methods for entities insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce questionable results with no way for users to vet results, correct mistakes, or influence the algorithm's future results. We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in Spark, allowing it to scale to operational-sized data. Resolution results are presented to the analyst complete with sourcing information for each mention and relationship, allowing analysts to quickly vet the correctness of results as well as correct mistakes. Corrected results are used by the system to refine the underlying model, allowing analysts to optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and correct the model to produce a system they can trust enables them to better focus their time on producing higher-quality analysis products.

  9. Evaluation of thermal network correction program using test temperature data

    NASA Technical Reports Server (NTRS)

    Ishimoto, T.; Fink, L. C.

    1972-01-01

    An evaluation process to determine the accuracy of a computer program for thermal network correction is discussed. The evaluation is required because factors such as temperature inaccuracies, an insufficient number of temperature points over a specified time period, a lack of one-to-one correspondence between temperature-sensor and nodal locations, and incomplete temperature measurements are not present in computer-generated information. The mathematical models used in the evaluation are those that describe a physical system composed of both a conventional and a heat-pipe platform. A description of the models used, the results of the evaluation of the thermal network correction, and input instructions for the thermal network correction program are presented.

  10. CD-SEM real time bias correction using reference metrology based modeling

    NASA Astrophysics Data System (ADS)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

    Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. The reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM; both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lies in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create an accurate (within the design and process space) model. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of a state-of-the-art CD-SEM was improved threefold and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.

  11. Do large-scale inhomogeneities explain away dark energy?

    NASA Astrophysics Data System (ADS)

    Geshnizjani, Ghazal; Chung, Daniel J.; Afshordi, Niayesh

    2005-07-01

    Recently, new arguments [E. Barausse, S. Matarrese, and A. Riotto, Phys. Rev. D 71, 063537 (2005), doi:10.1103/PhysRevD.71.063537; E. W. Kolb, S. Matarrese, A. Notari, and A. Riotto, hep-th/0503117, Phys. Rev. Lett. (to be published)] for how corrections from super-Hubble modes can explain the present-day acceleration of the universe have appeared in the literature. However, in this paper, we argue that, to second order in spatial gradients, these corrections only amount to a renormalization of local spatial curvature, and thus cannot account for the negative deceleration. Moreover, cosmological observations already put severe bounds on such corrections, at the level of a few percent, while in the context of inflationary models, these corrections are typically limited to ~10^{-5}. Currently there is no general constraint on the possible correction from higher-order gradient terms, but we argue that such corrections are even more constrained in the context of inflationary models.

  12. North Atlantic climate model bias influence on multiyear predictability

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on the multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). When a freshwater flux correction that strongly alleviates both the North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases is applied to the model, the corrected integration depicts significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit smaller SAT predictability over the North Atlantic sector.

  13. Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system.

    PubMed

    Song, H; Fraanje, R; Schitter, G; Kroese, H; Vdovin, G; Verhaegen, M

    2010-11-08

    In many scientific and medical applications, such as laser systems and microscopes, wavefront-sensor-less (WFSless) adaptive optics (AO) systems are used to improve the laser beam quality or the image resolution by correcting the wavefront aberration in the optical path. The lack of direct wavefront measurement in WFSless AO systems makes efficient aberration correction challenging. This paper presents an aberration correction approach for WFSless AO systems based on a model of the WFSless AO system and a small number of intensity measurements, where the model is identified from the input-output data of the WFSless AO system by black-box identification. This approach is validated in an experimental setup with 20 static aberrations having Kolmogorov spatial distributions. By correcting N = 9 Zernike modes (N is the number of aberration modes), an intensity improvement from 49% of the maximum value to 89% was achieved on average, based on N+5 = 14 intensity measurements. With the worst initial intensity, an improvement from 17% of the maximum value to 86% was achieved based on N+4 = 13 intensity measurements.

  14. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Kβ*, where K is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Kβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) − f(Kβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Kβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  15. Warm natural inflation

    NASA Astrophysics Data System (ADS)

    Mishra, Hiranmaya; Mohanty, Subhendra; Nautiyal, Akhilesh

    2012-04-01

    Warm inflation models require the generation of large dissipative couplings of the inflaton to radiation while, at the same time, not destabilising the flatness of the inflaton potential through radiative corrections. One way to achieve this without fine-tuning unrelated couplings is supersymmetry. In this Letter we show that if the inflaton and other light fields are pseudo-Nambu-Goldstone bosons, then the radiative corrections to the potential are suppressed and the thermal corrections are small as long as the temperature is below the symmetry-breaking scale. In such models it is possible to fulfil the contrary requirements of an inflaton potential that is stable under radiative corrections and the generation of a large dissipative coupling of the inflaton field to other light fields. We construct a warm inflation model that gives the observed CMB-anisotropy amplitude and spectral index, in which the symmetry breaking is at the GUT scale.

  16. Comment on: "Corrections to the Mathematical Formulation of a Backwards Lagrangian Particle Dispersion Model" by Gibson and Sailor (2012: Boundary-Layer Meteorology 145, 399-406)

    NASA Astrophysics Data System (ADS)

    Stöckl, Stefan; Rotach, Mathias W.; Kljun, Natascha

    2018-01-01

    We discuss the results of Gibson and Sailor (Boundary-Layer Meteorol 145:399-406, 2012) who suggest several corrections to the mathematical formulation of the Lagrangian particle dispersion model of Rotach et al. (Q J R Meteorol Soc 122:367-389, 1996). While most of the suggested corrections had already been implemented in the 1990s, one suggested correction raises a valid point, but results in a violation of the well-mixed criterion. Here we improve their idea and test the impact on model results using a well-mixed test and a comparison with wind-tunnel experimental data. The new approach results in similar dispersion patterns as the original approach, while the approach suggested by Gibson and Sailor leads to erroneously reduced concentrations near the ground in convective and especially forced convective conditions.

  17. Radiative corrections in the (varying power)-law modified gravity

    NASA Astrophysics Data System (ADS)

    Hammad, Fayçal

    2015-06-01

    Although the (varying power)-law modified gravity toy model has the attractive feature of unifying the early- and late-time expansions of the Universe, thanks to the peculiar dependence of the scalar field's potential on the scalar curvature, the model still suffers from a fine-tuning problem when used to explain the actually observed Hubble parameter. Indeed, a more careful estimate of the mass of the scalar field needed to comply with actual observations gives an unnaturally small value. On the other hand, for a massless scalar field the potential would have no minimum, and hence the field would always remain massless. What solves these issues are the radiative corrections that modify the field's effective potential. These corrections raise the field's effective mass, rendering the model free from fine-tuning, immune against positive fifth-force tests, and better suited to tackle the dark matter sector.

  18. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  19. VLBI height corrections due to gravitational deformation of antenna structures

    NASA Astrophysics Data System (ADS)

    Sarti, P.; Negusini, M.; Abbondanza, C.; Petrov, L.

    2009-12-01

    From an analysis of regional European VLBI data we evaluate the impact of a VLBI signal path correction model developed to account for gravitational deformations of the antenna structures. The model was derived from a combination of terrestrial surveying methods applied to the telescopes at Medicina and Noto in Italy. We find that the model corrections shift the derived height components of these VLBI telescopes' reference points downward by 14.5 and 12.2 mm, respectively. Neither other parameter estimates nor other station positions are affected. Such systematic height errors are much larger than the formal VLBI random errors and imply the possibility of significant VLBI frame scale distortions, of major concern for the International Terrestrial Reference Frame (ITRF) and its applications. This demonstrates the urgent need to investigate gravitational deformations in other VLBI telescopes and eventually correct for them in routine data analysis.

  20. Causes of model dry and warm bias over central U.S. and impact on climate projections.

    PubMed

    Lin, Yanluan; Dong, Wenhao; Zhang, Minghua; Xie, Yuanyu; Xue, Wei; Huang, Jianbin; Luo, Yong

    2017-10-12

    Climate models show a conspicuous summer warm and dry bias over the central United States. Using results from 19 climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5), we report a persistent dependence of the warm bias on the dry bias, with the precipitation deficit leading the warm bias over this region. The precipitation deficit is associated with the widespread failure of models to capture strong summer rainfall events over the central U.S. A robust linear relationship between the projected warming and the present-day warm bias enables us to empirically correct future temperature projections. By the end of the 21st century under the RCP8.5 scenario, the corrections substantially narrow the intermodel spread of the projections and reduce the projected temperature by 2.5 K, resulting mainly from the removal of the warm bias. Instead of a sharp decrease, the projected precipitation after this correction is nearly neutral for all scenarios. Climate models repeatedly show a warm and dry bias over the central United States, but the origin of this bias remains unclear. Here the authors associate this bias with precipitation deficits in models and, after applying a correction, projected precipitation in this region shows no significant changes.
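
    A minimal Python sketch of the empirical correction step, assuming one present-day bias value and one projected warming value per ensemble member: regress projected warming on present-day bias across the models, then remove the bias-explained component from each projection.

      import numpy as np

      def bias_corrected_projection(present_bias, projected_warming):
          # Fit the linear bias-warming relationship across the ensemble, then
          # shift each model's projection to its zero-bias value.
          slope, intercept = np.polyfit(present_bias, projected_warming, 1)
          return projected_warming - slope * present_bias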

  1. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  2. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear-medicine method used to examine metabolism and the function of internal organs, and it allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter-correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.
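
    Generic forms of two of the corrections mentioned, sketched in Python (assumed textbook formulas, not the authors' algorithms): a singles-based estimate of random coincidences and subtraction of the randoms and scatter estimates from the prompt sinogram.

      import numpy as np

      TAU = 6e-9   # coincidence window width in seconds (illustrative)

      def randoms_estimate(singles_i, singles_j):
          # Expected random coincidence rate for a detector pair: R = 2*tau*Si*Sj.
          return 2.0 * TAU * singles_i * singles_j

      def corrected_sinogram(prompts, randoms, scatter):
          # Subtract the randoms and scatter estimates, clipping at zero counts.
          return np.clip(prompts - randoms - scatter, 0.0, None)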

  3. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences (for example, between males and females, or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB)) more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of the ratio (for example, males), these ratios were higher under the model-based method, e.g. the male-to-female ratio of GMs. When estimated UCRs were lower for the group in the numerator (for example, NHW), these ratios were higher under the ratio-based method, e.g. the NHW-to-NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
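
    The contrast between the two methods fits in a few lines. A Python sketch under assumed variable names and a simple log-linear specification (illustrative, not the study's exact models):

      import numpy as np
      import statsmodels.api as sm

      def ratio_based(analyte, creatinine):
          # Classic creatinine correction: analyte per unit creatinine.
          return analyte / creatinine

      def model_based(analyte, creatinine, group):
          # Regress log(analyte) on log(creatinine) plus a group indicator, so
          # factors that affect UCR are adjusted for rather than divided out.
          X = sm.add_constant(np.column_stack([np.log(creatinine), group]))
          fit = sm.OLS(np.log(analyte), X).fit()
          return fit.params[2], fit.pvalues[2]   # group contrast and its p-value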

  4. Phase I Hydrologic Data for the Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada Test Site, Nye County, Nevada, Rev. No.: 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John McCord

    2006-06-01

    The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) initiated the Underground Test Area (UGTA) Project to assess and evaluate the effects of the underground nuclear weapons tests on groundwater beneath the Nevada Test Site (NTS) and vicinity. The framework for this evaluation is provided in Appendix VI, Revision No. 1 (December 7, 2000) of the Federal Facility Agreement and Consent Order (FFACO, 1996). Section 3.0 of Appendix VI, "Corrective Action Strategy", of the FFACO describes the process that will be used to complete corrective actions specifically for the UGTA Project. The objective of the UGTA corrective action strategy is to define contaminant boundaries for each UGTA corrective action unit (CAU) where groundwater may have become contaminated by the underground nuclear weapons tests. The contaminant boundaries are determined based on modeling of groundwater flow and contaminant transport. A summary of the FFACO corrective action process and the UGTA corrective action strategy is provided in Section 1.5. The FFACO (1996) corrective action process for the Yucca Flat/Climax Mine CAU 97 was initiated with the Corrective Action Investigation Plan (CAIP) (DOE/NV, 2000a). The CAIP included a review of existing data on the CAU and proposed a set of data collection activities to collect additional characterization data. These recommendations were based on a value of information analysis (VOIA) (IT, 1999), which evaluated the value of different possible data collection activities, with respect to reduction in uncertainty of the contaminant boundary, through simplified transport modeling. The Yucca Flat/Climax Mine CAIP identifies a three-step model development process to evaluate the impact of underground nuclear testing on groundwater and to determine a contaminant boundary (DOE/NV, 2000a). The three steps are as follows: (1) data compilation and analysis providing the necessary modeling data, completed in two parts, the first addressing the groundwater flow model and the second the transport model; (2) development of a groundwater flow model; and (3) development of a groundwater transport model. This report presents the results of the first part of the first step, documenting the data compilation, evaluation, and analysis for the groundwater flow model. The second part, documentation of the transport model data, will be the subject of a separate report. The purpose of this document is to present the compilation and evaluation of the available hydrologic data and information relevant to the development of the Yucca Flat/Climax Mine CAU groundwater flow model, which is a fundamental tool in the prediction of the extent of contaminant migration. Where appropriate, data and information documented elsewhere are summarized with reference to the complete documentation. The specific task objectives for hydrologic data documentation are as follows: (1) identify and compile available hydrologic data and supporting information required to develop and validate the groundwater flow model for the Yucca Flat/Climax Mine CAU; (2) assess the quality of the data and associated documentation, and assign qualifiers to denote levels of quality; and (3) analyze the data to derive expected values or spatial distributions and estimates of the associated uncertainty and variability.

  5. Retinal image mosaicing using the radial distortion correction model

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

    2008-03-01

    Fundus camera imaging can be used to examine the retina to detect disorders. Similar to looking through a small keyhole into a large room, imaging the fundus with an ophthalmologic camera allows only a limited view at a time. Thus, the generation of a retinal montage from multiple images has the potential to increase diagnostic accuracy by providing a larger field of view. A method of mosaicing multiple retinal images using the radial distortion correction (RADIC) model is proposed in this paper. Our method determines the inter-image connectivity by detecting feature correspondences. The connectivity information is converted to a tree structure that describes the spatial relationships between the reference and target images for pairwise registration. The montage is generated by a cascading pairwise registration scheme starting from the anchor image and proceeding downward through the connectivity tree hierarchy. The RADIC model corrects the radial distortion that is due to the spherical-to-planar projection during retinal imaging. Therefore, after radial distortion correction, individual images can be properly mapped onto a montage space by a linear geometric transformation, e.g. an affine transform. Compared to most existing montaging methods, ours is unique in that only a single registration per image is required, owing to the distortion-correction property of the RADIC model. As a final step, distance-weighted intensity blending is employed to correct the inter-image differences in illumination encountered when forming the montage. Visual inspection of the experimental results for three mosaicing cases shows that our method can produce satisfactory montages.
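
    The distortion-correction step itself is compact. A Python sketch of a single-coefficient radial model of the kind applied before the linear mapping; the coefficient and centre are illustrative assumptions, not the calibrated RADIC parameters.

      import numpy as np

      def undistort(points, centre, k1=-1e-7):
          # points: (N, 2) pixel coordinates; x' = c + (x - c) * (1 + k1 * r^2).
          d = points - centre
          r2 = np.sum(d**2, axis=1, keepdims=True)
          return centre + d * (1.0 + k1 * r2)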

  6. Bidirectional Contrast agent leakage correction of dynamic susceptibility contrast (DSC)-MRI improves cerebral blood volume estimation and survival prediction in recurrent glioblastoma treated with bevacizumab.

    PubMed

    Leu, Kevin; Boxerman, Jerrold L; Lai, Albert; Nghiemphu, Phioanh L; Pope, Whitney B; Cloughesy, Timothy F; Ellingson, Benjamin M

    2016-11-01

    To evaluate a leakage correction algorithm for T1 and T2* artifacts arising from contrast agent extravasation in dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) that accounts for bidirectional contrast agent flux, and to compare relative cerebral blood volume (rCBV) estimates and overall survival (OS) stratification from this model with those made with the unidirectional and uncorrected models in patients with recurrent glioblastoma (GBM). We determined median rCBV within contrast-enhancing tumor before and after bevacizumab treatment in patients (75 scans on 1.5T, 19 scans on 3.0T) with recurrent GBM, without leakage correction and with application of the unidirectional and bidirectional leakage correction algorithms, to determine whether rCBV stratifies OS. Decreased post-bevacizumab rCBV from baseline using the bidirectional leakage correction algorithm significantly correlated with longer OS (Cox, P = 0.01), whereas rCBV change using the unidirectional model (P = 0.43) or the uncorrected rCBV values (P = 0.28) did not. Estimates of rCBV computed with the two leakage correction algorithms differed on average by 14.9%. Accounting for T1 and T2* leakage contamination in DSC-MRI using a two-compartment, bidirectional rather than unidirectional exchange model might improve post-bevacizumab survival stratification in patients with recurrent GBM. J. Magn. Reson. Imaging 2016;44:1229-1237. © 2016 International Society for Magnetic Resonance in Medicine.
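
    For orientation, the unidirectional baseline that the bidirectional model extends is commonly fitted as a linear combination of a non-enhancing reference curve and its running integral. A Python sketch of that baseline correction follows (a simplified Boxerman-Weisskoff-style form, an assumption rather than this paper's bidirectional algorithm):

      import numpy as np

      def leakage_correct(dR2s, dR2s_ref, dt):
          # dR2s: voxel delta-R2* curve; dR2s_ref: mean curve of non-enhancing tissue.
          integ = np.cumsum(dR2s_ref) * dt
          A = np.column_stack([dR2s_ref, -integ])
          K1, K2 = np.linalg.lstsq(A, dR2s, rcond=None)[0]
          return dR2s + K2 * integ   # remove the leakage term before rCBV integration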

  7. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    NASA Astrophysics Data System (ADS)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ~1%, ~1%, ~2%, and ~8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.

  8. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, lidar's limited ability to penetrate dense vegetation hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, a 40–75% improvement in accuracy over the lidar bare-earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
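
    A minimal Python sketch of the LEAN idea, under the assumption of a simple linear error model: calibrate the lidar error against NDVI using RTK-GPS truth, then subtract the predicted vegetation bias from the DEM.

      import numpy as np

      def fit_lean(lidar_z, rtk_z, ndvi):
          # Calibrate: predict the lidar error (lidar minus true) from NDVI.
          slope, intercept = np.polyfit(ndvi, lidar_z - rtk_z, 1)
          return slope, intercept

      def apply_lean(dem, ndvi_grid, slope, intercept):
          # Correct the DEM by removing the NDVI-predicted vegetation bias.
          return dem - (slope * ndvi_grid + intercept)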

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.
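
    In the Alfonso formalism the correction enters as a multiplicative factor on the measured detector reading ratio. A one-line Python sketch (reading names are hypothetical):

      def field_output_factor(M_field, M_ref, k):
          # Output factor = (detector reading ratio) x (field correction factor k).
          return (M_field / M_ref) * k

      # e.g. for the 4 mm helmet with the microDiamond, per the reported result:
      # field_output_factor(m_4mm, m_reference, 0.969)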

  10. Higgs boson mass in the standard model at two-loop order and beyond

    DOE PAGES

    Martin, Stephen P.; Robertson, David G.

    2014-10-01

    We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.

  11. Recognition Memory zROC Slopes for Items with Correct versus Incorrect Source Decisions Discriminate the Dual Process and Unequal Variance Signal Detection Models

    ERIC Educational Resources Information Center

    Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.

    2014-01-01

    We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…

  12. Modelling geomorphic responses to human perturbations: Application to the Kander river, Switzerland

    NASA Astrophysics Data System (ADS)

    Ramirez, Jorge; Zischg, Andreas; Schürmann, Stefan; Zimmermann, Markus; Weingartner, Rolf; Coulthard, Tom; Keiler, Margreth

    2017-04-01

    Before 1714 the Kander river (Switzerland) flowed into the Aare river, causing massive flooding, and for this reason the Kander was diverted (the Kander correction) into Lake Thun. The Kander correction was a pioneering hydrological project and induced a major human change to the landscape, but it had unintended hydrological and geomorphic impacts that cascaded upstream and downstream: it doubled the catchment area of Lake Thun, which gave rise to major flood problems, cut off direct sediment delivery to the Aare, and drove sediment flux into Lake Thun, forming the Kander delta. More importantly, the Kander correction shortened the Kander river and substantially increased the slope and bed shear of the Kander upstream of the correction. Consequently, the impacts of the correction cascaded upstream as a migrating knickpoint that eroded the river channel at unprecedented rates. Today we may have at our disposal the theoretical and empirical foundations to foresee the consequences of human intervention in natural systems. One way to investigate such geomorphic changes is with numerical models that estimate the evolution of rivers by simulating the movement of water and sediment. Although much progress has been made in the development of these geomorphic models, few have been tested in circumstances with rare perturbations and extreme forcings. As such, it remains uncertain whether geomorphic models are useful and stable in extreme situations involving large movements of sediment and water. Here, in this study, we use historic maps and documents to develop a detailed geomorphic model of the Kander river starting in the year 1714. We use this model to simulate the extreme geomorphic events that followed the diversion of the Kander river into Lake Thun, and we simulate changes to the river until conditions became relatively stable. We test our model by replicating long-term impacts to the river, including 1) rates of incision within the correction, 2) knickpoint migration, and 3) delta formation in Lake Thun. In doing this we build confidence in the model and gain understanding of how the river system responded to anthropogenic perturbations.

  13. Local respiratory motion correction for PET/CT imaging: Application to lung cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamare, F., E-mail: frederic.lamare@chu-bordeaux.fr; Fernandez, P.; Fayad, H.

    Purpose: Despite the multiple methodologies already proposed to correct respiratory motion over the whole PET imaging field of view (FOV), such approaches have not found wide acceptance in clinical routine. An alternative is local respiratory motion correction (LRMC) of the data corresponding to a given volume of interest (VOI: organ or tumor). Advantages of LRMC include the use of a simple motion model, faster execution times, and organ-specific motion correction. The purpose of this study was to evaluate the performance of LRMC using various motion models for oncology (lung lesion) applications. Methods: Both simulated (NURBS-based 4D cardiac-torso phantom) and clinical studies (six patients) were used in the evaluation of the proposed LRMC approach. PET data were acquired in list mode and synchronized with respiration. The implemented approach consists first of defining a VOI on the reconstructed motion-average image. Gated PET images of the VOI are subsequently reconstructed using only lines of response passing through the selected VOI and are used in combination with a center-of-gravity or an affine/elastic registration algorithm to derive the transformation maps corresponding to the respiration effects. These are finally integrated in the reconstruction process to produce a motion-free image over the lesion region. Results: Although the center-of-gravity and affine algorithms achieved similar performance for individual lesion motion correction, the elastic model, applied either locally or to the whole FOV, led to an overall superior performance. The spatial tumor location was altered by 89% and 81% for the elastic model applied locally or to the whole FOV, respectively (compared to 44% and 39% for the center-of-gravity and affine models, respectively). This resulted in similar associated overall tumor volume changes of 84% and 80%, respectively (compared to 75% and 71% for the center-of-gravity and affine models, respectively). The application of the nonrigid deformation model in LRMC led to more than an order-of-magnitude gain in computational efficiency relative to applying the deformable model to the whole FOV. Conclusions: The results of this study support the use of LRMC as a flexible and efficient approach for correcting respiratory motion effects for single lesions in the thoracic area.
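
    The center-of-gravity variant described above lends itself to a compact illustration. The sketch below is a simplified, post-reconstruction stand-in (the paper integrates the transforms into the reconstruction itself): it estimates one translation per respiratory gate from the intensity centroid of the VOI and shifts each gate back before averaging. The array layout and function names are assumptions for illustration.

        # Hypothetical sketch: center-of-gravity motion estimation over a VOI.
        import numpy as np
        from scipy import ndimage

        def cog_motion_correct(gates, voi):
            """gates: list of 3D numpy arrays (gated PET images);
            voi: tuple of slices selecting the lesion volume of interest."""
            ref = np.array(ndimage.center_of_mass(gates[0][voi]))
            corrected = []
            for g in gates:
                cog = np.array(ndimage.center_of_mass(g[voi]))
                # Shift the gate so its VOI centroid matches the reference gate
                corrected.append(ndimage.shift(g, ref - cog, order=1))
            return np.mean(corrected, axis=0)  # motion-compensated average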

  14. Feasibility of self-correcting quantum memory and thermal stability of topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Beni, E-mail: rouge@mit.edu

    2011-10-15

    Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memories. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as a self-correcting quantum memory due to a certain topological constraint on the geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that such systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: We define a class of physically realizable quantum codes; we determine their coding and physical properties completely; we establish the connection between topological order and self-correcting memory; we find that they do not work as self-correcting quantum memory; and we find that they do not have thermally stable topological order.

  15. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically on the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of the current wavefields and on the anisotropy. As a result, the computational cost is independent of the nature of the anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angle-dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides wavefields that are accurate in phase and have more balanced amplitudes than a previous spherical decomposition; it is also free of SV-wave artefacts. Applications to a simple homogeneous transversely isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. Reverse time migration applied to a modified BP VTI model shows that anisotropic migration using the proposed modelling engine outperforms isotropic migration.

  16. Effects of ionic strength and ion pairing on (plant-wide) modelling of anaerobic digestion.

    PubMed

    Solon, Kimberly; Flores-Alsina, Xavier; Mbamba, Christian Kazadi; Volcke, Eveline I P; Tait, Stephan; Batstone, Damien; Gernaey, Krist V; Jeppsson, Ulf

    2015-03-01

    Plant-wide models of wastewater treatment (such as the Benchmark Simulation Model No. 2 or BSM2) are gaining popularity for use in holistic virtual studies of treatment plant control and operations. The objective of this study is to show the influence of ionic strength (as activity corrections) and ion pairing on modelling of anaerobic digestion processes in such plant-wide models of wastewater treatment. Using the BSM2 as a case study with a number of model variants and cationic load scenarios, this paper presents the effects of an improved physico-chemical description on model predictions and overall plant performance indicators, namely effluent quality index (EQI) and operational cost index (OCI). The acid-base equilibria implemented in the Anaerobic Digestion Model No. 1 (ADM1) are modified to account for non-ideal aqueous-phase chemistry. The model corrects for ionic strength via the Davies approach to consider chemical activities instead of molar concentrations. A speciation sub-routine based on a multi-dimensional Newton-Raphson (NR) iteration method is developed to address algebraic interdependencies. The model also includes ion pairs that play an important role in wastewater treatment. The paper describes: 1) how the anaerobic digester performance is affected by physico-chemical corrections; 2) the effect on pH and the anaerobic digestion products (CO2, CH4 and H2); and, 3) how these variations are propagated from the sludge treatment to the water line. Results at high ionic strength demonstrate that corrections to account for non-ideal conditions lead to significant differences in predicted process performance (up to 18% for effluent quality and 7% for operational cost) but that for pH prediction, activity corrections are more important than ion pairing effects. Both are likely to be required when precipitation is to be modelled. Copyright © 2014 Elsevier Ltd. All rights reserved.
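
    The activity correction named above follows the Davies equation, a standard extension of Debye-Hueckel theory. A minimal sketch, independent of the BSM2/ADM1 code base (function names are illustrative):

        # Davies activity correction: activity a_i = gamma_i * c_i.
        import math

        A_DH = 0.509  # Debye-Hueckel constant at 25 degC, (kg/mol)^0.5

        def ionic_strength(conc, charge):
            # I = 1/2 * sum(c_i * z_i^2)
            return 0.5 * sum(c * z * z for c, z in zip(conc, charge))

        def davies_gamma(z, I):
            sqrt_I = math.sqrt(I)
            log10_gamma = -A_DH * z * z * (sqrt_I / (1.0 + sqrt_I) - 0.3 * I)
            return 10.0 ** log10_gamma

        # Example: activity coefficient of a monovalent ion at I = 0.1 mol/kg
        gamma = davies_gamma(-1, 0.1)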

  17. A mass-conserving multiphase lattice Boltzmann model for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Niu, Xiao-Dong; Li, You; Ma, Yi-Ren; Chen, Mu-Feng; Li, Xiang; Li, Qiao-Zhong

    2018-01-01

    In this study, a mass-conserving multiphase lattice Boltzmann (LB) model is proposed for simulating multiphase flows. The proposed model improves the model of Shao et al. ["Free-energy-based lattice Boltzmann model for simulation of multiphase flows with density contrast," Phys. Rev. E 89, 033309 (2014)] by introducing a mass correction term into the lattice Boltzmann equation for the interface. The model of Shao et al. [the improved Zheng-Shu-Chew (Z-S-C) model] correctly considers the effect of local density variation in the momentum equation and clearly improves on the Zheng-Shu-Chew (Z-S-C) model ["A lattice Boltzmann model for multiphase flows with large density ratio," J. Comput. Phys. 218(1), 353-371 (2006)] in terms of solution accuracy. However, due to physical diffusion and numerical dissipation, the total mass of each fluid phase is not conserved correctly. To solve this problem, a mass correction term, similar to the one proposed by Wang et al. ["A mass-conserved diffuse interface method and its application for incompressible multiphase flows with large density ratio," J. Comput. Phys. 290, 336-351 (2015)], is introduced into the lattice Boltzmann equation for the interface to compensate for mass loss or offset mass gain. Meanwhile, to implement the wetting boundary condition and the contact angle, a geometric formulation and a local force are incorporated into the present mass-conserving LB model. The proposed model is validated by verifying the Laplace law and by simulating one and two aligned droplets splashing onto a liquid film, droplets standing on an ideal wall, droplets with different wettability splashing onto smooth wax, and bubbles rising under buoyancy. Numerical results show that the proposed model correctly simulates multiphase flows, and mass is well conserved in all cases considered. The developed model performs better than the improved Z-S-C model in this respect.

  18. Biophysical modelling of phytoplankton communities from first principles using two-layered spheres: Equivalent Algal Populations (EAP) model: erratum.

    PubMed

    Lain, Lisl Robertson; Bernard, Stewart; Matthews, Mark W

    2016-11-28

    We regret that the Rrs spectra shown for the EAP modelled high biomass validation in Fig. 7 [Opt. Express, 22, 16745 (2014)] are incorrect. They are corrected here. The closest match of modelled to measured effective diameter is for a generalised 16 μm dinoflagellate population and not a 12 μm one as previously stated. These corrections do not affect the discussion or the conclusions of the paper.

  19. An alternative ionospheric correction model for global navigation satellite systems

    NASA Astrophysics Data System (ADS)

    Hoque, M. M.; Jakowski, N.

    2015-04-01

    The ionosphere is recognized as a major error source for single-frequency operations of global navigation satellite systems (GNSS). To enhance single-frequency operations, the global positioning system (GPS) uses an ionospheric correction algorithm (ICA) driven by 8 coefficients broadcast in the navigation message every 24 h. Similarly, the global navigation satellite system Galileo uses the NeQuick electron density model for ionospheric correction; the Galileo satellite vehicles (SVs) transmit 3 ionospheric correction coefficients as driver parameters of the NeQuick model. In the present work, we propose an alternative ionospheric correction algorithm, the Neustrelitz TEC broadcast model NTCM-BC, which is also applicable to global satellite navigation systems. Like the GPS ICA or Galileo NeQuick, the NTCM-BC can be optimized on a daily basis by utilizing GNSS data obtained at monitor stations on the previous day. To drive the NTCM-BC, 9 ionospheric correction coefficients need to be uploaded to the SVs for broadcasting in the navigation message. Our investigation using GPS data from about 200 worldwide ground stations shows that the 24-h-ahead prediction performance of the NTCM-BC is better than the GPS ICA and comparable to the Galileo NeQuick model. We have found that the 95th percentiles of the prediction error are about 16.1, 16.1 and 13.4 TECU for the GPS ICA, Galileo NeQuick and NTCM-BC, respectively, during a selected quiet ionospheric period, whereas the corresponding numbers are about 40.5, 28.2 and 26.5 TECU during a selected geomagnetically perturbed period. In terms of complexity, however, the NTCM-BC is easier to handle than the Galileo NeQuick and in this respect comparable to the GPS ICA.
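
    The headline numbers above are 95th percentiles of the TEC prediction error. A one-function sketch of that metric, assuming NumPy arrays of model TEC and dual-frequency reference TEC (array names are illustrative):

        import numpy as np

        def p95_error(tec_model, tec_reference):
            """95th percentile of the absolute TEC prediction error, in TECU."""
            return np.percentile(np.abs(tec_model - tec_reference), 95)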

  20. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be more convenient in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
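
    The paper's exact constrained optimization is not reproduced here, but the idea of keeping the slave coefficients close in profile to the master coefficients while fitting a few slave-side spectra can be sketched as a ridge-style problem (a hedged stand-in; all names are illustrative):

        import numpy as np

        def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
            """b_master: (p,) master PLS regression vector;
            X_slave: (n, p) few spectra measured on the slave instrument;
            y_slave: (n,) their reference values; lam: closeness penalty."""
            p = b_master.size
            A = X_slave.T @ X_slave + lam * np.eye(p)
            b = X_slave.T @ y_slave + lam * b_master
            return np.linalg.solve(A, b)  # solution shrunk toward b_master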

  1. An integrated modeling approach to predict flooding on urban basin.

    PubMed

    Dey, Ashis Kumar; Kamioka, Seiji

    2007-01-01

    Correct prediction of flood extents in urban catchments has become a challenging issue. Traditional urban drainage models that consider only the sewerage network are able to simulate the drainage system correctly as long as there is no overflow from a network inlet or manhole. When such overflows exist, due to insufficient drainage capacity of downstream pipes or channels, it becomes difficult to reproduce the actual flood extents using these traditional one-phase simulation techniques. On the other hand, traditional 2D models that simulate surface flooding resulting from rainfall and/or levee breaks do not consider the sewerage network. As a result, the correct flooding situation is rarely captured by the available traditional 1D and 2D models. This paper presents an integrated model that simultaneously simulates the sewerage network, the river network and a 2D mesh network to obtain correct flood extents. The model has been successfully applied to the Tenpaku basin (Nagoya, Japan), which experienced severe flooding with a maximum flood depth of more than 1.5 m on September 11, 2000, when heavy rainfall of 580 mm in 28 hrs (return period > 100 yr) occurred over the catchment. Close agreement between the simulated flood depths and observed data shows that the present integrated modeling approach can reproduce urban flooding accurately, which can rarely be achieved with the traditional 1D and 2D modeling approaches.

  2. Threshold corrections to dimension-six proton decay operators in SUSY SU(5)

    NASA Astrophysics Data System (ADS)

    Kuwahara, Takumi

    2017-11-01

    Proton decay is a key phenomenon for testing supersymmetric grand unified theories (SUSY GUTs). To predict the proton lifetime precisely, it is important to include next-to-leading order (NLO) corrections to the proton decay operators. In this talk, we present threshold corrections to the dimension-six proton decay operators in the minimal SUSY SU(5) GUT, its extensions with extra matter, and the missing-partner SUSY SU(5) GUT. We find that the threshold effects give rise to corrections of a few percent in the minimal setup, and below 5% in its extension with extra matter, in spite of a large unified coupling at the GUT scale. In the missing-partner model, on the other hand, the correction suppresses the proton decay rate by about 60% due to the large number of component fields of the 75 representation and their mass splitting.

  3. Effects of two types of intra-team feedback on developing a shared mental model in Command & Control teams.

    PubMed

    Rasker, P C; Post, W M; Schraagen, J M

    2000-08-01

    In two studies, the effect of two types of intra-team feedback on developing a shared mental model in Command & Control teams was investigated. A distinction is made between performance monitoring and team self-correction. Performance monitoring is the ability of team members to monitor each other's task execution and give feedback during task execution. Team self-correction is the process in which team members evaluate their performance and determine their strategies after task execution. In two experiments, the opportunity to engage in performance monitoring and in team self-correction, respectively, was varied systematically. Both performance monitoring and team self-correction appeared beneficial for improving team performance. Teams that had the opportunity to engage in performance monitoring, however, performed better than teams that had the opportunity to engage in team self-correction.

  4. Temporal integration property of stereopsis after higher-order aberration correction

    PubMed Central

    Kang, Jian; Dai, Yun; Zhang, Yudong

    2015-01-01

    Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. The stereo threshold for line stimuli, viewed in 550 nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected, or monocularly corrected. Under all optical conditions, the stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined by a quadratic summation model, and the high goodness of fit suggested this model was reasonable. For normal subjects, the slope of stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010

  5. Correcting for batch effects in case-control microbiome studies

    PubMed Central

    Gibbons, Sean M.; Duvallet, Claire

    2018-01-01

    High-throughput data generation platforms, such as mass spectrometry, microarrays, and second-generation sequencing, are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure in which features (i.e., bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We examine how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values, and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
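
    The percentile-normalization step can be sketched directly (a minimal version, assuming samples in rows and taxa in columns; names are illustrative):

        import numpy as np
        from scipy.stats import percentileofscore

        def percentile_normalize(cases, controls):
            """cases: (n_case, n_taxa); controls: (n_ctrl, n_taxa).
            Each case value becomes its percentile among the control values
            of the same taxon, making studies comparable before pooling."""
            out = np.empty(cases.shape, dtype=float)
            for j in range(cases.shape[1]):
                for i in range(cases.shape[0]):
                    out[i, j] = percentileofscore(controls[:, j], cases[i, j])
            return out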

  6. Correction of broadband albedo measurements affected by unknown slope and sensor tilts

    NASA Astrophysics Data System (ADS)

    Weiser, Ursula; Olefs, Marc; Schöner, Wolfgang; Weyss, Gernot; Hynek, Bernhard

    2017-02-01

    Geometric effects induced by the underlying terrain slope or by tilt errors of radiation sensors lead to erroneous measurements of snow or ice albedo, and consequently to spurious diurnal albedo variations. We present a general method to correct albedo measurements in cases where the tilts of both the sensors and the slope are not accurately measured or known. Atmospheric parameters for this correction method can be taken either from a nearby well-maintained, horizontally levelled measurement of global radiation or from a solar radiation model. In a next step, the model is fitted to the measured data to determine the tilts and directions of the sensors and of the underlying terrain slope. This then allows the measured albedo, the radiative balance and the energy balance to be corrected. Depending on the direction of the slope and the sensors, a comparison between measured and corrected albedo values reveals obvious over- or underestimations of albedo.

  7. Wind-tunnel investigation of the flow correction for a model-mounted angle of attack sensor at angles of attack from -10 deg to 110 deg. [Langley 12-foot low speed wind tunnel test

    NASA Technical Reports Server (NTRS)

    Moul, T. M.

    1979-01-01

    A preliminary wind tunnel investigation was undertaken to determine the flow correction for a vane angle of attack sensor over an angle of attack range from -10 deg to 110 deg. The sensor was mounted ahead of the wing on a 1/5 scale model of a general aviation airplane. It was shown that the flow correction was substantial, reaching about 15 deg at an angle of attack of 90 deg. The flow correction was found to increase as the sensor was moved closer to the wing or closer to the fuselage. The experimentally determined slope of the flow correction versus the measured angle of attack below the stall angle of attack agreed closely with the slope of flight data from a similar full scale airplane.

  8. Explicit calculation of the two-loop corrections to the chiral magnetic effect with the NJL model

    NASA Astrophysics Data System (ADS)

    Chu, Kit-fai; Huang, Peng-hui; Liu, Hui

    2018-05-01

    The chiral magnetic effect (CME) is usually believed not to receive higher-order corrections due to the nonrenormalization of the AVV triangle diagram in the framework of quantum field theory. However, the CME-relevant triangle, which is obtained by expanding the current-current correlation, requires zero momentum on the axial vertex and is not equivalent to the general AVV triangle in the zero-momentum limit, owing to the infrared problem on the axial vertex. Therefore, it is still meaningful to check whether perturbative higher-order corrections to the current-current correlation exist. In this paper, we explicitly calculate the two-loop corrections to the CME within the Nambu-Jona-Lasinio model with a Chern-Simons term, which ensures a consistent μ5. The result shows that the two-loop corrections to the CME conductivity are zero, which confirms the nonrenormalization of the CME conductivity.

  9. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas-phase concentrations were scaled by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies, with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants in improved agreement with those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with the corrected reported atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.

  10. Efficient color correction method for smartphone camera-based health monitoring application.

    PubMed

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software. However, the color characteristics of images captured by different smartphone models are dissimilar, and this difference may yield non-identical health monitoring results when such applications extract physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected with this method exhibit much smaller color intensity errors than images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
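
    A common way to implement such a correction, consistent with the description (the paper's exact method may differ), is to fit a 3x3 matrix mapping one phone's RGB measurements of a color chart to reference values:

        import numpy as np

        def fit_color_correction(rgb_phone, rgb_reference):
            """rgb_phone, rgb_reference: (n_patches, 3) chart measurements."""
            M, *_ = np.linalg.lstsq(rgb_phone, rgb_reference, rcond=None)
            return M  # apply with: corrected = rgb_phone @ M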

  11. Does Planck really rule out monomial inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Kari; Karčiauskas, Mindaugas, E-mail: kari.enqvist@helsinki.fi, E-mail: mindaugas.karciauskas@helsinki.fi

    2014-02-01

    We consider the modifications of monomial chaotic inflation models due to radiative corrections induced by inflaton couplings to bosons and/or fermions necessary for reheating. To the lowest order, ignoring gravitational corrections and treating the inflaton as a classical background field, they are of the Coleman-Weinberg type and parametrized by the renormalization scale μ. In cosmology, there are not enough measurements to fix μ, so that we end up with a family of models, each having a slightly different slope of the potential. We demonstrate by explicit calculation that within the family of chaotic φ^2 models, some may be ruled out by Planck whereas some remain perfectly viable. In contrast, radiative corrections do not seem to help chaotic φ^4 models to meet the Planck constraints.

  12. Validation of a T1 and T2* leakage correction method based on multi-echo DSC-MRI using MION as a reference standard

    PubMed Central

    Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad

    2015-01-01

    Purpose: A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods: Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results: T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2* correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions: Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714

  13. Quantum decoration transformation for spin models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braz, F.F.; Rodrigues, F.C.; Souza, S.M. de

    2016-09-15

    The extension of decoration transformations to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties, such as the persistence of symmetry and symmetry breaking during the transformation. Although the proposed transformation, in principle, cannot be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative, the mapping is possible in the "classical" limit, establishing an equivalence between the two quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction terms influence the decoration transformation. These corrections, however, are of little use for improving the quantum decoration transformation, because they involve second- and further-nearest-neighbor couplings, which makes establishing the equivalence between the two lattice models a cumbersome task. The correction terms also provide valuable information about their own contribution: for most Heisenberg-type models they are irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising–Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.

  14. Quantum decoration transformation for spin models

    NASA Astrophysics Data System (ADS)

    Braz, F. F.; Rodrigues, F. C.; de Souza, S. M.; Rojas, Onofre

    2016-09-01

    The extension of decoration transformations to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties, such as the persistence of symmetry and symmetry breaking during the transformation. Although the proposed transformation, in principle, cannot be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative, the mapping is possible in the "classical" limit, establishing an equivalence between the two quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction terms influence the decoration transformation. These corrections, however, are of little use for improving the quantum decoration transformation, because they involve second- and further-nearest-neighbor couplings, which makes establishing the equivalence between the two lattice models a cumbersome task. The correction terms also provide valuable information about their own contribution: for most Heisenberg-type models they are irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising-Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.

  15. Station corrections for the Katmai Region Seismic Network

    USGS Publications Warehouse

    Searcy, Cheryl K.

    2003-01-01

    Most procedures for routinely locating earthquake hypocenters within a local network are constrained to laterally homogeneous velocity models of the Earth's crustal velocity structure. As a result, earthquake location errors may arise from actual lateral variations in the Earth's velocity structure. Station corrections can be used to compensate for heterogeneous velocity structure near individual stations (Douglas, 1967; Pujol, 1988). The HYPOELLIPSE program (Lahr, 1999) used by the Alaska Volcano Observatory (AVO) to locate earthquakes in Cook Inlet and the Aleutian Islands is a robust and efficient program that uses one-dimensional velocity models to determine hypocenters of local and regional earthquakes, and it can utilize station corrections within its earthquake location procedure. The velocity structures of Cook Inlet and the Aleutian volcanoes very likely contain laterally varying heterogeneities, so the accuracy of earthquake locations in these areas will benefit from the determination and addition of station corrections. In this study, I determine corrections for each station in the Katmai region, defined to lie between latitudes 57.5 and 59.00 degrees north and longitudes -154.00 and -156.00 (see Figure 1), which includes Mount Katmai, Novarupta, Mount Martin, Mount Mageik, Snowy Mountain, Mount Trident, and Mount Griggs volcanoes. Station corrections were determined using the computer program VELEST (Kissling, 1994). VELEST inverts arrival time data for one-dimensional velocity models and station corrections using a joint hypocenter determination technique; it can also be used to locate single events.

  16. Use of bias correction techniques to improve seasonal forecasts for reservoirs - A case-study in northwestern Mediterranean.

    PubMed

    Marcos, Raül; Llasat, Ma Carmen; Quintana-Seguí, Pere; Turco, Marco

    2018-01-01

    In this paper, we compare different bias correction methodologies to assess whether they can improve the performance of a seasonal prediction model for volume anomalies in the Boadella reservoir (northwestern Mediterranean). The bias correction adjustments are applied to precipitation and temperature from the European Centre for Medium-Range Weather Forecasts System 4 (S4). We use three bias correction strategies: two linear (mean bias correction, BC, and linear regression, LR) and one non-linear (Model Output Statistics analogs, MOS-analog). The results are compared with climatology and persistence. The volume-anomaly model is a previously computed multiple linear regression that ingests precipitation, temperature and in-flow anomaly data to simulate monthly volume anomalies. The potential utility for end-users is assessed using economic value curve areas. We study the S4 hindcast period 1981-2010 for each month of the year and up to seven months ahead, considering an ensemble of 15 members. We show that the MOS-analog and LR bias corrections can improve on the original S4. The application to volume anomalies points towards the possibility of introducing bias correction methods as a tool to improve water-resource seasonal forecasts in an end-user context of climate services. In particular, the MOS-analog approach generally gives better results than the other approaches in late autumn and early winter. Copyright © 2017 Elsevier B.V. All rights reserved.
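
    Under their usual definitions (the study's implementations may differ in detail), the two linear strategies can be sketched as follows:

        import numpy as np

        def mean_bias_correction(forecast, obs_clim_mean, fcst_clim_mean):
            # BC: shift the forecast by the climatological bias
            return forecast - (fcst_clim_mean - obs_clim_mean)

        def linear_regression_correction(fcst_train, obs_train, forecast):
            # LR: regress observations on hindcasts, then apply the fit
            slope, intercept = np.polyfit(fcst_train, obs_train, 1)
            return slope * forecast + intercept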

  17. How well can charge transfer inefficiency be corrected? A parameter sensitivity study for iterative correction

    NASA Astrophysics Data System (ADS)

    Israel, Holger; Massey, Richard; Prod'homme, Thibaut; Cropper, Mark; Cordes, Oliver; Gow, Jason; Kohley, Ralf; Marggraf, Ole; Niemi, Sami; Rhodes, Jason; Short, Alex; Verhoeve, Peter

    2015-10-01

    Radiation damage to space-based charge-coupled device detectors creates defects which result in an increasing charge transfer inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect, and damage continues to accumulate in orbit. To aid future development, we quantify the limitations of current approaches and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate 1.5 × 10^9 'worst-case' galaxy and 1.5 × 10^8 star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. Even if the model used to correct CTI is identical to that used to add it, only 99.68 per cent of spurious ellipticity is corrected in our setup, because readout noise is not subject to CTI but gets overcorrected during correction. Secondly, assuming the first issue to be solved, the charge trap density will need to be known to within Δρ/ρ = (0.0272 ± 0.0005) per cent, and the characteristic release time of the dominant trap species to within Δτ/τ = (0.0400 ± 0.0004) per cent. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
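
    The post-processing correction alluded to above is commonly implemented as a fixed-point iteration around a forward model of the trailing. A minimal sketch, assuming image arrays and an assumed function add_cti that applies the trailing model:

        def correct_cti(observed, add_cti, n_iter=5):
            """Iteratively undo trailing: find an image which, after the
            forward trailing model, reproduces the observed image."""
            corrected = observed.copy()
            for _ in range(n_iter):
                trailed = add_cti(corrected)                  # forward-model trailing
                corrected = corrected + (observed - trailed)  # push residual back
            return corrected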

  18. 75 FR 34527 - Volkswagen Petition for Exemption From the Vehicle Theft Prevention Standard; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-17

    ... for Exemption From the Vehicle Theft Prevention Standard; Correction AGENCY: National Highway Traffic... the Theft Prevention Standard. This document corrects the model year of the new Volkswagen vehicle... 543, Exemption from the Theft Prevention Standard. This petition is granted because the agency has...

  19. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

    A model and computation procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from the correction for attenuation. (TJH)
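
    For reference, the classical disattenuation formula underlying such procedures divides the observed correlation by the square roots of the two reliabilities:

        \[
          \hat{\rho}_{XY} \;=\; \frac{r_{xy}}{\sqrt{r_{xx'}\, r_{yy'}}}
        \]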

  20. 75 FR 39042 - Solicitation for a Cooperative Agreement: The Norval Morris Project Implementation Phase

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... model designed to provide correctional agencies with a step-by-step approach to promote systemic change..., evidence-based approaches, evaluate their potential to inform correctional policy and practice, create... outside the corrections field to develop interdisciplinary approaches and draw on professional networks...

  1. Implementing a Batterer's Intervention Program in a Correctional Setting: A Tertiary Prevention Model

    ERIC Educational Resources Information Center

    Yorke, Nada J.; Friedman, Bruce D.; Hurt, Pat

    2010-01-01

    This study discusses the pretest and posttest results of a batterer's intervention program (BIP) implemented within a California state prison substance abuse program (SAP), with a recommendation for further programs to be implemented within correctional institutions. The efficacy of utilizing correctional facilities to reach offenders who…

  2. Toward a more sophisticated response representation in theories of medial frontal performance monitoring: The effects of motor similarity and motor asymmetries.

    PubMed

    Hochman, Eldad Yitzhak; Orr, Joseph M; Gehring, William J

    2014-02-01

    Cognitive control in the posterior medial frontal cortex (pMFC) is formulated in models that emphasize adaptive behavior driven by a computation evaluating the degree of difference between two conflicting responses. These functions are manifested in an event-related brain potential component coined the error-related negativity (ERN). We hypothesized that the ERN represents a regulative rather than evaluative pMFC process, exerted over the error motor representation, expediting the execution of a corrective response. We manipulated the motor representations of the error and the correct response to varying degrees. The ERN was greater when 1) the error response was more potent than the correct response, 2) more errors were committed, 3) fewer and slower corrections were observed, and 4) the error response shared fewer motor features with the correct response. In their current forms, several prominent models of the pMFC cannot be reconciled with these findings. We suggest that a prepotent, unintended error is prone to reach the manual motor processor responsible for response execution before a nonpotent, intended correct response. In this case, the correct response is a correction, and its execution must wait until the error is aborted. The ERN may reflect pMFC activity aimed at suppressing the error.

  3. Development of a Pressure Sensitive Paint System with Correction for Temperature Variation

    NASA Technical Reports Server (NTRS)

    Simmons, Kantis A.

    1995-01-01

    Pressure Sensitive Paint (PSP) is known to provide a global image of pressure over a model surface. However, improvements in its accuracy and reliability are needed. Several factors contribute to the inaccuracy of PSP. One major factor is that luminescence is temperature dependent. To correct the luminescence of the pressure sensing component for changes in temperature, a temperature sensitive luminophore incorporated in the paint allows the user to measure both pressure and temperature simultaneously on the surface of a model. Magnesium Octaethylporphine (MgOEP) was used as a temperature sensing luminophore, with the pressure sensing luminophore, Platinum Octaethylporphine (PtOEP), to correct for temperature variations in model surface pressure measurements.

  4. A comparative study of several compressibility corrections to turbulence models applied to high-speed shear layers

    NASA Technical Reports Server (NTRS)

    Viegas, John R.; Rubesin, Morris W.

    1991-01-01

    Several recently published compressibility corrections to the standard k-epsilon turbulence model are used with the Navier-Stokes equations to compute the mixing region of a large variety of high speed flows. These corrections, specifically developed to address the weakness of higher order turbulence models to accurately predict the spread rate of compressible free shear flows, are applied to two stream flows of the same gas mixing under a large variety of free stream conditions. Results are presented for two types of flows: unconfined streams with either (1) matched total temperatures and static pressures, or (2) matched static temperatures and pressures, and a confined stream.

  5. Final state interactions at the threshold of Higgs boson pair production

    NASA Astrophysics Data System (ADS)

    Zhang, Zhentao

    2015-11-01

    We study the effect of final state interactions at the threshold of Higgs boson pair production in the Glashow-Weinberg-Salam model. We consider three major processes of the pair production in the model: lepton pair annihilation, ZZ fusion, and WW fusion. We find that the corrections caused by the effect for these processes are markedly different. According to our results, the effect can cause non-negligible corrections to the cross sections for lepton pair annihilation and small corrections for ZZ fusion, and this effect is negligible for WW fusion.

  6. Finite-size corrections to the excitation energy transfer in a massless scalar interaction model

    NASA Astrophysics Data System (ADS)

    Maeda, Nobuki; Yabuki, Tetsuo; Tobita, Yutaka; Ishikawa, Kenzo

    2017-05-01

    We study the excitation energy transfer (EET) for a simple model in which a massless scalar particle is exchanged between two molecules. We show that a finite-size effect appears in the EET interaction energy due to the overlap of the quantum waves over a short time interval. The effect generates finite-size corrections to Fermi's golden rule and modifies the EET probability relative to the standard formula of the Förster mechanism. The correction terms come from transition modes outside the resonance energy region and enhance the EET probability substantially.

  7. Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.

    2018-01-01

    The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. Detection and correction of the data use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method, an ARIMA model is fitted to the data containing AO, and the outlier effects are added to the original ARIMA model as coefficients obtained from the iterative process using regression methods. For the simulated data containing AO, the initial model was ARIMA(2,0,0) with MSE = 36.780; after detection and correction of the data, the iterated ARIMA(2,0,0) model with coefficients obtained from the regression Z_t = 0.106 + 0.204 Z_{t-1} + 0.401 Z_{t-2} - 329 X_1(t) + 115 X_2(t) + 35.9 X_3(t) gave MSE = 19.365. This shows an improvement in the forecasting error rate.
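
    The iterative procedure can be sketched with statsmodels (a simplification of the full outlier test statistic; the cutoff and model order here are illustrative):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def fit_with_ao(z, order=(2, 0, 0), crit=3.5, max_outliers=5):
            """Iteratively flag the largest standardized residual as an
            additive outlier, add a pulse regressor X(t) for it, refit."""
            exog = None
            for _ in range(max_outliers):
                res = ARIMA(z, order=order, exog=exog).fit()
                std_resid = res.resid / res.resid.std()
                t = int(np.abs(std_resid).argmax())
                if abs(std_resid[t]) < crit:
                    break
                pulse = np.zeros((len(z), 1))
                pulse[t, 0] = 1.0  # AO indicator for time t
                exog = pulse if exog is None else np.hstack([exog, pulse])
            return res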

  8. Fast Magnetotail Reconnection: Challenge to Global MHD Modeling

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M. M.; Hesse, M.; Rastaetter, L.; Toth, G.; de Zeeuw, D.; Gombosi, T.

    2005-05-01

    Representation of fast magnetotail reconnection rates during substorm onset is one of the major challenges to global MHD modeling. Our previous comparative study of collisionless magnetic reconnection in the GEM Challenge geometry demonstrated that the reconnection rate is controlled by ion nongyrotropic behavior near the reconnection site and that it can be described in terms of nongyrotropic corrections to the magnetic induction equation. To further test the approach, we performed MHD simulations with nongyrotropic corrections of forced reconnection for the Newton Challenge setup. As a next step, we employ the global MHD code BATSRUS and test different methods of modeling fast magnetotail reconnection rates by introducing non-ideal corrections to the induction equation in terms of nongyrotropic corrections, spatially localized resistivity, or current-dependent resistivity. The adaptive grid structure of BATSRUS allows global simulations with spatial resolution near the reconnection site comparable to that of local MHD simulations for the Newton Challenge. We select solar wind conditions that drive the accumulation of magnetic field in the tail lobes and subsequent magnetic reconnection and energy release. Testing the ability of global MHD models to describe magnetotail evolution during substorms is one element of the science-based validation efforts at the Community Coordinated Modeling Center.

  9. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated 5 years after talent selection, using a linear predictive model (discriminant analysis) and non-linear predictive models (Kohonen feature maps and a multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron classified 79.8% of the gymnasts correctly. Combining different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based on the different statistical analyses results in a 33.3% cost reduction, because the pool of selected athletes can be reduced from 138 gymnasts (as selected by the coaches) to 92. This reduction allows the limited resources to be fully invested in the high-potential athletes.
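
    Illustrative only (hypothetical feature matrix): the linear model used above is standard discriminant analysis, whose correct-classification rate can be estimated by cross-validation:

        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def dropout_classification_rate(X, y):
            """X: (n_gymnasts, n_tests) talent-test scores; y: dropout labels."""
            return cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()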

  10. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    NASA Astrophysics Data System (ADS)

    Yang, T.; Lee, C.

    2017-12-01

    The biases in the Global Circulation Models (GCMs) are crucial for understanding future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including the non-stationarities into the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified. The single model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
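
    A conceptual sketch of a residual-based bagging-tree correction (the paper's exact RBT formulation may differ): learn the residual between the multi-model ensemble mean and observations, then subtract the predicted residual:

        from sklearn.ensemble import BaggingRegressor
        from sklearn.tree import DecisionTreeRegressor

        def fit_rbt(X_models, y_obs):
            """X_models: (n_months, n_gcms) single-model temperatures;
            y_obs: (n_months,) observed temperatures."""
            residual = X_models.mean(axis=1) - y_obs
            rbt = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200)
            return rbt.fit(X_models, residual)

        def correct(rbt, X_models):
            # Corrected prediction = ensemble mean minus the learned residual
            return X_models.mean(axis=1) - rbt.predict(X_models)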

  11. Corrections to scaling for watersheds, optimal path cracks, and bridge lines

    NASA Astrophysics Data System (ADS)

    Fehr, E.; Schrenk, K. J.; Araújo, N. A. M.; Kadau, D.; Grassberger, P.; Andrade, J. S., Jr.; Herrmann, H. J.

    2012-07-01

    We study the corrections to scaling for the mass of the watershed, the bridge line, and the optimal path crack in two and three dimensions (2D and 3D). We disclose that these models have numerically equivalent fractal dimensions and leading correction-to-scaling exponents. We conjecture all three models to possess the same fractal dimension, namely, df=1.2168±0.0005 in 2D and df=2.487±0.003 in 3D, and the same exponent of the leading correction, Ω=0.9±0.1 and Ω=1.0±0.1, respectively. The close relations between watersheds, optimal path cracks in the strong disorder limit, and bridge lines are further supported by either heuristic or exact arguments.

  12. Evaluation of the Klobuchar model in Taiwan

    NASA Astrophysics Data System (ADS)

    Li, Jinghua; Wan, Qingtao; Ma, Guanyi; Zhang, Jie; Wang, Xiaolan; Fan, Jiangtao

    2017-09-01

    Ionospheric delay is the main error source in Global Navigation Satellite Systems (GNSS), and ionospheric models are one way to correct it: single-frequency GNSS users correct the delay using parameters broadcast by the satellites. The Klobuchar model is widely used in the Global Positioning System (GPS) and COMPASS because it is simple and convenient for real-time calculation. The model was established on observations mainly from Europe and the USA and does not describe the equatorial anomaly region. The south of China lies near the northern crest of the equatorial anomaly, where the ionosphere shows complex spatial and temporal variation, so assessing the validity of the Klobuchar model in this area is important for improving it. Eleven years (2003-2014) of data from a GPS receiver located at Taoyuan, Taiwan (121°E, 25°N) are used to assess the validity of the Klobuchar model there. Total electron content (TEC) derived from the dual-frequency GPS observations is used as the reference, and TEC from the Klobuchar model is compared against it. The residual, defined as the difference between the Klobuchar TEC and the reference, reflects the absolute correction of the model, while the RMS correction percentage presents the performance of the model relative to the observations. The long-term variation of the residuals, the RMS correction percentage, and their dependence on latitude are analyzed to assess the model. In some months the RMS correction did not reach the goal of 50% proposed by Klobuchar, especially in the winter of low-solar-activity years and at nighttime; the RMS correction depended neither on the 11-year solar activity cycle nor on latitude. Unlike the RMS correction, the residuals changed with solar activity, similarly to the variation of TEC: they were large in the daytime, during the equinox seasons, and in high-solar-activity years, and small at night, during the solstice seasons, and in low-activity years. During 1300-1500 BJT in high-solar-activity years, the mean bias was negative, implying that the model underestimated TEC on average; the maximum mean bias was 33 TECU in April 2014, and the maximum underestimation reached 97 TECU in October 2011. During 0000-0200 BJT, the residuals had a small mean bias, a small variation range, and a small standard deviation, suggesting that the model describes the nighttime ionosphere better than the daytime one. Besides varying with solar activity, the residuals also varied with latitude. The mean bias reached its maximum at 20-22°N, corresponding to the northern crest of the equatorial anomaly; at this latitude, the maximum mean bias was 47 TECU below the observations in high-activity years and 12 TECU below in low-activity years. The minimum variation range appeared at 30-32°N in both high- and low-activity years, but the minimum mean bias appeared at different latitudes: at 30-32°N in high-activity years and at 24-26°N in low-activity years. For an ideal model, the residuals should have a small mean bias and a small variation range; further study is needed to understand the distribution of the residuals and to improve the model.
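
    The two evaluation quantities can be sketched as follows; the exact definition of the RMS correction percentage here is an assumption consistent with Klobuchar's 50%-correction design goal:

        import numpy as np

        def residual(tec_klobuchar, tec_gps):
            return tec_klobuchar - tec_gps  # TECU; negative = underestimation

        def rms_correction_percentage(tec_klobuchar, tec_gps):
            rms_err = np.sqrt(np.mean((tec_klobuchar - tec_gps) ** 2))
            rms_obs = np.sqrt(np.mean(tec_gps ** 2))
            return 100.0 * (1.0 - rms_err / rms_obs)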

  13. 76 FR 29997 - Airworthiness Directives; Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2010... Airworthiness Directives; Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440) Airplanes AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule; correction. SUMMARY: The FAA is correcting...

  14. Hydrologic Data for the Groundwater Flow and Contaminant Transport Model of Corrective Action Units 101 and 102: Central and Western Pahute Mesa, Nye County, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drici, Warda

    2004-02-01

    This report documents the analysis of the available hydrologic data conducted in support of the development of a Corrective Action Unit (CAU) groundwater flow model for Central and Western Pahute Mesa: CAUs 101 and 102.

  15. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small-animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as the kidney, spleen, thymus, and subcutaneous tumors in mouse models. However, CLI has limitations for deep-tissue quantitative imaging, since the blue-weighted spectrum of Cerenkov radiation is strongly attenuated by mammalian tissue. Large organs such as the liver have also shown higher signal because light is emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity using a priori estimates of the depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation, placed radionuclide sources inside the phantom at different tissue depths, and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct the CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
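
    A minimal sketch of this kind of depth/thickness correction, assuming Beer-Lambert attenuation with a single effective coefficient and uniform emission across the organ thickness (the paper's calibration details are not reproduced here):

        import numpy as np

        def correct_cli(signal_measured, mu_eff, depth, thickness):
            """Correct a Cerenkov signal for overlying-tissue attenuation.

            mu_eff    : effective attenuation coefficient (1/cm), assumed known
            depth     : depth of the organ's top surface (cm)
            thickness : organ thickness along the viewing axis (cm)
            """
            # Mean transmission for light emitted uniformly between
            # depth and depth + thickness (integral of exp(-mu*z) / thickness).
            transmission = (np.exp(-mu_eff * depth)
                            * (1.0 - np.exp(-mu_eff * thickness))
                            / (mu_eff * thickness))
            return signal_measured / transmission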

  16. Developing Formal Correctness Properties from Natural Language Requirements

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of a program to transform natural-language specifications into formal notation, specifically to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural-language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification and design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, and the high learning curve for specification languages and associated tools, together with schedule and budget pressures that reduce training opportunities for engineers, limits their use; and (4) formulating correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content in space mission software systems. The presentation also discusses potential applications, accomplishments and/or technology transfer potential, and next steps.
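
    As a concrete (invented) example of the kind of translation targeted here, the natural-language requirement "whenever a fault is detected, the system shall eventually enter safe mode" maps to the LTL property below, where G is the "always" operator and F is "eventually":

        G( fault_detected -> F( safe_mode ) )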

  17. Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.

    1993-01-01

    The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point-charge models with cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from the respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of correlation effects in the peptide molecular charge distribution is discussed.
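
    A minimal sketch of the first step beyond isotropic charges: evaluating the electrostatic potential from atomic monopoles plus atomic dipoles (the lowest-order anisotropic correction). The coordinates, charges, and dipoles are hypothetical inputs; the full CAMM scheme carries higher multipoles as well.

        import numpy as np

        def potential(r, sites, charges, dipoles):
            """Electrostatic potential at point r (atomic units).

            sites   : (N, 3) atomic positions
            charges : (N,)   atomic monopoles
            dipoles : (N, 3) atomic dipole moments (first multipole correction)
            """
            v = 0.0
            for s, q, p in zip(sites, charges, dipoles):
                d = r - s
                dist = np.linalg.norm(d)
                v += q / dist                   # isotropic monopole term
                v += np.dot(p, d) / dist ** 3   # anisotropic dipole correction
            return v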

  18. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
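
    The metamodel idea can be illustrated with a toy noisy-channel decoder: given a speaker's phoneme confusion matrix and word priors from a language model, pick the word maximizing P(recognized phones | word) * P(word). The lexicon, priors, and confusion probabilities below are invented for illustration; the paper's metamodels and FST cascade also handle insertions and deletions.

        import numpy as np

        PHONES = ["b", "p", "t", "a"]
        CONF = np.array([  # CONF[i, j] = P(recognized phone j | intended phone i)
            [0.70, 0.20, 0.05, 0.05],
            [0.30, 0.60, 0.05, 0.05],
            [0.10, 0.10, 0.70, 0.10],
            [0.02, 0.02, 0.06, 0.90],
        ])
        LEXICON = {"bat": ["b", "a", "t"], "pat": ["p", "a", "t"]}
        PRIOR = {"bat": 0.4, "pat": 0.6}  # language-model probabilities

        def correct(recognized):
            """Return the word maximizing P(recognized | word) * P(word)."""
            idx = {p: i for i, p in enumerate(PHONES)}
            best, best_score = None, -np.inf
            for word, phones in LEXICON.items():
                if len(phones) != len(recognized):
                    continue  # toy model: substitutions only
                lik = sum(np.log(CONF[idx[i], idx[o]])
                          for i, o in zip(phones, recognized))
                score = lik + np.log(PRIOR[word])
                if score > best_score:
                    best, best_score = word, score
            return best

        print(correct(["p", "a", "t"]))  # "pat"; a speaker-specific CONF
                                         # could instead recover "bat"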

  19. A radiation model for calculating atmospheric corrections to remotely sensed infrared measurements, version 2

    NASA Technical Reports Server (NTRS)

    Boudreau, R. D.

    1973-01-01

    A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. Corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant, and the vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.

  20. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    PubMed Central

    Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan

    2013-01-01

    A large spectrum of medical devices exists to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses were consistent with published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated the patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426

  1. The impact of climatological model biases in the North Atlantic jet on predicted future circulation change

    NASA Astrophysics Data System (ADS)

    Simpson, I.

    2015-12-01

    A long-standing bias among global climate models (GCMs) is their incorrect representation of the wintertime circulation of the North Atlantic region. Specifically, models tend to exhibit a North Atlantic jet (and associated storm track) that is too zonal, extending across central Europe, when it should tilt northward toward Scandinavia. GCMs consistently predict substantial changes in the large-scale circulation in this region, consisting of a localized anticyclonic circulation centered over the Mediterranean, accompanied by increased aridity there and increased storminess over Northern Europe. Here, we present preliminary results from experiments designed to address what impact these climatological circulation biases might have on the predicted future response. Climate change experiments are compared in two versions of the Community Earth System Model: the first is a free-running version of the model, as typically used in climate prediction; the second is a bias-corrected version in which a seasonally varying cycle of bias-correction tendencies is applied to the wind and temperature fields. These bias-correction tendencies are designed to account for deficiencies in the fast parameterized processes, with the aim of pushing the model toward a more realistic climatology. While these experiments come with the caveat that they assume the bias-correction tendencies remain constant with time, they allow an initial assessment, through controlled experiments, of the impact that biases in the climatological circulation can have on future predictions in this region. They will also motivate future work that uses the bias-correction tendencies to understand the underlying physical processes responsible for the incorrect tilt of the jet.

  2. Corrected ROC analysis for misclassified binary outcomes.

    PubMed

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect-size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
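
    A minimal simulation (ours, not the paper's correction procedure) illustrating the phenomenon that motivates the method: flipping outcome labels at plausible false positive/negative rates pulls the observed AUC toward 0.5 relative to the true AUC.

        import numpy as np

        rng = np.random.default_rng(0)

        def auc(scores, labels):
            """Mann-Whitney estimate of the area under the ROC curve."""
            pos, neg = scores[labels == 1], scores[labels == 0]
            diff = pos[:, None] - neg[None, :]
            return (diff > 0).mean() + 0.5 * (diff == 0).mean()

        n = 2000
        y = (rng.random(n) < 0.1).astype(int)   # true outcome, 10% prevalence
        score = y + rng.normal(0, 1, n)         # risk score, true AUC ~ 0.76
        fpr, fnr = 0.05, 0.20                   # label misclassification rates
        flip = np.where(y == 1, rng.random(n) < fnr, rng.random(n) < fpr)
        y_obs = np.where(flip, 1 - y, y)        # observed (mislabeled) outcome

        print(auc(score, y), auc(score, y_obs))
        # The observed-label AUC is biased toward 0.5.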

  3. Simulations of the Mid-Pliocene Warm Period Using Two Versions of the NASA-GISS ModelE2-R Coupled Model

    NASA Technical Reports Server (NTRS)

    Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.

    2013-01-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA-GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration that the consequences of code improvements as seen in modern climate control runs do not necessarily portend their impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean model, has led to an Earth System Model that will produce more accurate projections of future climate.

  4. A Lagrangian Subgridscale Model for Particle Transport Improvement and Application in the Adriatic Sea Using the Navy Coastal Ocean Model

    DTIC Science & Technology

    2006-12-01

    based on input statistical parameters, such as the turbulent velocity fluctuation and correlation time scale, without the need of an underlying... [equation (8), a normalized squared mismatch between model and real parameters] which is zero if the model and real parameters coincide. The correlation coefficient between the... well correlated with the latter. [figure: parameters estimated from the corrected velocity; Real (top), Model (middle), Corrected (bottom)]

  5. The use of the logistic model in space motion sickness prediction

    NASA Technical Reports Server (NTRS)

    Lin, Karl K.; Reschke, Millard F.

    1987-01-01

    The one-equation and two-equation logistic models were used to predict subjects' susceptibility to motion sickness in KC-135 parabolic flights using data from other ground-based motion sickness tests. The results show that the logistic models correctly predicted substantially more cases (by an average of 13 percent) in the data subset used for model building. Overall, the logistic models achieved 53 to 65 percent correct predictions of the three endpoint parameters, whereas the Bayes linear discriminant procedure ranged from 48 to 65 percent correct for the cross-validation sample.

  6. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors, typically R<3, and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.

  7. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2015-08-01

    Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that accurately describes the noise characteristics associated with the two dominant artifact corrections (scatter and beam hardening) in CBCT and utilizes modified weights to compensate for the noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise, especially in challenging regions such as the skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
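
    A minimal sketch of the PWLS idea in dense linear algebra: minimize a statistically weighted data-fit term plus a quadratic roughness penalty by gradient descent. The simple finite-difference penalty and fixed step size are our simplifications; the paper's forward model, weights, and regularizer are more elaborate.

        import numpy as np

        def pwls(A, y, w, beta, iters=200, step=None):
            """Solve min_x ||y - A x||_W^2 + beta * ||D x||^2.

            A : (m, n) system (forward-projection) matrix
            y : (m,)  artifact-corrected log-projection data
            w : (m,)  statistical weights (e.g. inverse variance after
                      scatter and beam-hardening correction)
            """
            m, n = A.shape
            D = np.eye(n) - np.eye(n, k=1)        # finite-difference penalty
            H = A.T @ (w[:, None] * A) + beta * (D.T @ D)
            if step is None:
                step = 1.0 / np.linalg.norm(H, 2)  # step <= 1/L for convergence
            x = np.zeros(n)
            for _ in range(iters):
                grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
                x -= step * grad
            return x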

  8. Corrective Action Glossary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-07-01

    The glossary of technical terms was prepared to facilitate the use of the Corrective Action Plan (CAP) issued by OSWER on November 14, 1986. The CAP presents model scopes of work for all phases of a corrective action program, including the RCRA Facility Investigation (RFI), Corrective Measures Study (CMS), Corrective Measures Implementation (CMI), and interim measures. The Corrective Action Glossary includes brief definitions of the technical terms used in the CAP and explains how they are used. In addition, expected ranges (where applicable) are provided. Parameters or terms not discussed in the CAP, but commonly associated with site investigations or remediations, are also included.

  9. Expanding wave solutions of the Einstein equations that induce an anomalous acceleration into the Standard Model of Cosmology.

    PubMed

    Temple, Blake; Smoller, Joel

    2009-08-25

    We derive a system of three coupled equations that implicitly defines a continuous one-parameter family of expanding wave solutions of the Einstein equations, such that the Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology is embedded as a single point in this family. By approximating solutions near the center to leading order in the Hubble length, the family reduces to an explicit one-parameter family of expanding spacetimes, given in closed form, that represents a perturbation of the Standard Model. By introducing a comoving coordinate system, we calculate the correction to the Hubble constant as well as the exact leading order quadratic correction to the redshift vs. luminosity relation for an observer at the center. The correction to redshift vs. luminosity entails an adjustable free parameter that introduces an anomalous acceleration. We conclude (by continuity) that corrections to the redshift vs. luminosity relation observed after the radiation phase of the Big Bang can be accounted for, at the leading order quadratic level, by adjustment of this free parameter. The next order correction is then a prediction. Since nonlinearities alone could actuate dissipation and decay in the conservation laws associated with the highly nonlinear radiation phase and since noninteracting expanding waves represent possible time-asymptotic wave patterns that could result, we propose to further investigate the possibility that these corrections to the Standard Model might be the source of the anomalous acceleration of the galaxies, an explanation not requiring the cosmological constant or dark energy.

  10. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in existing studies of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with a KF-based correction scheme are susceptible to measurement errors, which decreases the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF)-based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with a unified reference-ellipsoid Earth model to improve navigation accuracy in middle-high latitude regions for marine applications. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference-ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with traditional algorithms, the proposed navigation algorithm can effectively improve navigation performance in middle-high latitude regions through the unified Earth models and the ARKF-based hybrid-correction scheme. PMID:29373549
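
    A scalar toy sketch of the adaptive robust idea: inflate the measurement covariance when the normalized innovation is large, so outlying DVL measurements are down-weighted. The threshold and inflation rule are our illustrative choices; the paper's ARKF and grid mechanization are more involved.

        import numpy as np

        def arkf_step(x, P, z, F, Q, H, R, thresh=3.0):
            """One predict/update step of a simple adaptive robust Kalman filter."""
            # Predict
            x, P = F * x, F * P * F + Q
            # Innovation and its predicted variance
            nu = z - H * x
            S = H * P * H + R
            ratio = abs(nu) / np.sqrt(S)
            if ratio > thresh:               # robust step: inflate R for outliers
                R = R * (ratio / thresh) ** 2
                S = H * P * H + R
            K = P * H / S                    # Kalman gain
            x = x + K * nu
            P = (1.0 - K * H) * P
            return x, P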

  11. Coordinated California Corrections: Institutions. Correctional System Study. Final Report.

    ERIC Educational Resources Information Center

    California State Human Relations Agency, Sacramento. Board of Corrections.

    This series of comprehensive task force reports on jails, prisons, and juvenile institutions presents overviews of corrective institutions in California, models, survey findings about the current systems, and a wide range of general and specific recommendations. Various tables and charts illustrate the data, which were collected by a review of the…

  12. Explanation of Two Anomalous Results in Statistical Mediation Analysis

    ERIC Educational Resources Information Center

    Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…

  13. A Three-Stage Model for Implementing Focused Written Corrective Feedback

    ERIC Educational Resources Information Center

    Chong, Sin Wang

    2017-01-01

    This article aims to show how the findings from written corrective feedback (WCF) research can be applied in practice. One particular kind of WCF--focused WCF--is brought into the spotlight. The article first summarizes major findings from focused WCF research to reveal the potential advantages of correcting a few preselected language items…

  14. Boomwhackers and End-Pipe Corrections

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    2014-01-01

    End-pipe corrections seldom come to mind as a suitable topic for an introductory physics lab. Yet, the end-pipe correction formula can be verified in an engaging and inexpensive lab that requires only two supplies: plastic-tube toys called boomwhackers and a meter-stick. This article describes a lab activity in which students model data from…
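
    The end-correction formula referred to above can be checked with a short script (ours; the ~0.61 r unflanged end correction per open end is the textbook value the lab verifies, and the tube dimensions are illustrative):

        import numpy as np

        def boomwhacker_frequency(length_m, radius_m, temp_c=20.0):
            """Predicted fundamental of a tube open at both ends."""
            v = 331.3 * np.sqrt(1.0 + temp_c / 273.15)  # speed of sound (m/s)
            l_eff = length_m + 2 * 0.61 * radius_m      # correction per open end
            return v / (2.0 * l_eff)                    # open-open fundamental

        print(boomwhacker_frequency(0.63, 0.02))        # ~262 Hz, near middle C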

  15. Nonlinear model for offline correction of pulmonary waveform generators.

    PubMed

    Reynolds, Jeffrey S; Stemple, Kimberly J; Petsko, Raymond A; Ebeling, Thomas R; Frazer, David G

    2002-12-01

    Pulmonary waveform generators consisting of motor-driven piston pumps are frequently used to test respiratory-function equipment such as spirometers and peak expiratory flow (PEF) meters. Gas compression within these generators can produce significant distortion of the output flow-time profile. A nonlinear model of the generator was developed along with a method to compensate for gas compression when testing pulmonary function equipment. The model and correction procedure were tested on an Assess Full Range PEF meter and a Micro DiaryCard PEF meter. The tests were performed using the 26 American Thoracic Society standard flow-time waveforms as the target flow profiles. Without correction, the pump loaded with the higher-resistance Assess meter produced ten waveforms having a mean square error (MSE) higher than 0.001 L²/s². Correction of the pump for these ten waveforms resulted in a mean decrease in MSE of 87.0%. When loaded with the Micro DiaryCard meter, the uncorrected pump outputs included six waveforms with MSE higher than 0.001 L²/s². Pump corrections for these six waveforms resulted in a mean decrease in MSE of 58.4%.

  16. Sound radiation of a railway rail in close proximity to the ground

    NASA Astrophysics Data System (ADS)

    Zhang, Xianying; Squicciarini, Giacomo; Thompson, David J.

    2016-02-01

    The sound radiation of a railway rail in close proximity to the ground (both rigid and absorptive) is predicted by the boundary element method (BEM) in two dimensions (2D). Results are given in terms of the radiation ratio for both vertical and lateral motion of the rail, with the acoustic boundary conditions due to the sleepers and ballast taken into account in the numerical models. Allowance is made for the effect of wave propagation along the rail by applying a correction to the 2D modelling. It is shown that the 2D correction is necessary at low frequency for both vertical and lateral motion of an unsupported rail, especially in the vicinity of the corresponding critical frequency. However, this correction is not applicable for a supported rail: for vertical motion no correction to the 2D result is needed, while for lateral motion the corresponding correction would depend on the pad stiffness. Finally, the numerical predictions of the sound radiation from a rail are verified by comparison with experimental results obtained using a 1/5-scale rail model in different configurations.

  17. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique that uses a fixed grounded dipole as an artificial signal source. Because CSAMT is measured at a finite distance between transmitter and receiver, the recorded field is a complex (non-plane) wave. In addition, static effects shift the electric field, moving the apparent resistivity curve up or down and distorting the measurements. The objective of this study was to obtain data corrected for source and static effects so that they share the plane-wave characteristics assumed for MT data. The corrected CSAMT data were then inverted to reveal the subsurface resistivity model. A source-effect correction was applied to eliminate the influence of the signal source, and the static effect was corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with a small misfit value, meaning the models describe subsurface conditions well. Based on the inversion results, the measurement area is interpreted as rock with high permeability containing hot fluid.

  18. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk-set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
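
    A minimal sketch of one of the methods named above, SIMEX, for a simple linear health-effect slope: add extra noise at increasing levels lambda, refit the slope, then extrapolate back to lambda = -1 (the no-error case). The quadratic extrapolant and lambda grid are conventional illustrative choices.

        import numpy as np

        rng = np.random.default_rng(1)

        def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), b=200):
            """SIMEX correction of a regression slope for measurement error.

            w       : exposure measured with error (w = x + u, u ~ N(0, sigma_u^2))
            y       : health outcome
            sigma_u : known or estimated measurement-error standard deviation
            """
            def slope(wv):
                return np.polyfit(wv, y, 1)[0]
            lams, betas = [0.0], [slope(w)]
            for lam in lambdas:
                sims = [slope(w + np.sqrt(lam) * sigma_u
                              * rng.normal(size=w.size)) for _ in range(b)]
                lams.append(lam)
                betas.append(np.mean(sims))
            coef = np.polyfit(lams, betas, 2)   # quadratic in lambda
            return np.polyval(coef, -1.0)       # extrapolate to lambda = -1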

  19. A Model for Assessing the Liability of Seemingly Correct Software

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.

    1991-01-01

    Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.

  20. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
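
    A minimal sketch of the classical correction described above, assuming a reliability study with two measurements per subject and non-differential error: estimate the reliability (attenuation) ratio from the repeats and divide the observed slope by it. This is our illustration of the standard approach, not the article's software.

        import numpy as np

        def corrected_slope(x_main, y_main, x_rep1, x_rep2):
            """Correct a regression slope for regression dilution bias.

            x_main, y_main : risk factor and outcome from the main study
            x_rep1, x_rep2 : two measurements per subject (reliability study)
            """
            beta_obs = np.polyfit(x_main, y_main, 1)[0]
            # Within-person error variance: Var(x1 - x2) = 2 * sigma_u^2
            var_within = 0.5 * np.mean((x_rep1 - x_rep2) ** 2)
            var_total = np.var(np.concatenate([x_rep1, x_rep2]), ddof=1)
            lam = 1.0 - var_within / var_total   # reliability ratio
            return beta_obs / lam                # beta_true ~ beta_obs / lambda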

  1. Cosmological implications of quantum corrections and higher-derivative extension

    NASA Astrophysics Data System (ADS)

    Chialva, Diego; Mazumdar, Anupam

    2015-02-01

    We discuss the challenges for the early universe cosmology from quantum corrections, and in particular higher-derivative terms, in the gravitational and inflaton sectors of the models. The work is divided in two parts. In the first one we review the already well-known issues due to quantum corrections to the inflaton potential, in particular focusing on chaotic/slow-roll single-field models. We will point out some issues concerning the proposed mechanisms to cope with the corrections, and also argue how the presence of higher-derivative corrections could be problematic for those mechanisms. In the second part we will more directly focus on higher-derivative corrections. We will show how, in order to discuss a number of high-energy phenomena relevant to inflation (such as its actual onset) one has to deal with energy scales where the derivative expansion breaks down, presenting problems such as quantum vacuum instability and ghosts. To discuss such phenomena in the convenient framework of the effective theory, one must then abandon the derivative expansion and resort to the full nonlocal formulation of the theory, which is in fact equivalent to re-integrating back the relevant physics, but with the benefit of using a more compact single-field formalism. Finally, we will briefly discuss possible advantages offered by the presence of higher derivatives and a nonlocal theory to build better controlled UV models of inflation.

  2. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images.

    PubMed

    Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
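
    A minimal sketch of the core fitting step, reduced to a constant linear distortion (the full method also models hysteresis and creep, which are nonlinear in time): fit an affine map from measured feature positions to their ideal lattice sites, then apply it to correct image coordinates.

        import numpy as np

        def affine_correction(measured, ideal):
            """Fit an affine distortion model to feature positions and invert it.

            measured : (N, 2) feature positions found in the image
            ideal    : (N, 2) corresponding ideal lattice positions
            Returns a function mapping image coordinates to corrected ones.
            """
            ones = np.ones((measured.shape[0], 1))
            M = np.hstack([measured, ones])                  # [x, y, 1]
            T, *_ = np.linalg.lstsq(M, ideal, rcond=None)    # measured -> ideal
            return lambda pts: np.hstack(
                [pts, np.ones((pts.shape[0], 1))]) @ T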

  3. Induction log responses to layered, dipping, and anisotropic formations: Induction log shoulder-bed corrections to anisotropic formations and the effect of shale anisotropy in thinly laminated sand/shale sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagiwara, Teruhiko

    1996-12-31

    Induction log responses to layered, dipping, and anisotropic formations are examined analytically. The analytical model is especially helpful in understanding induction log responses to thinly laminated binary formations, such as sand/shale sequences, that exhibit macroscopically anisotropic resistivity. Two applications of the analytical model are discussed. In one application we examine special induction log shoulder-bed corrections for use when thin anisotropic beds are encountered. It is known that thinly laminated sand/shale sequences act as macroscopically anisotropic formations. Hydrocarbon-bearing formations also act as macroscopically anisotropic formations when they consist of alternating layers of different grain-size distributions. When such formations are thick, induction logs accurately read the macroscopic conductivity, from which the hydrocarbon saturation in the formations can be computed. When the laminated formations are not thick, proper shoulder-bed corrections (or thin-bed corrections) should be applied to obtain the true macroscopic formation conductivity and to estimate the hydrocarbon saturation more accurately. The analytical model is used to calculate the thin-bed effect and to evaluate the shoulder-bed corrections. We show that the formation resistivity, and hence the hydrocarbon saturation, are greatly overestimated when the anisotropy effect is not accounted for and conventional shoulder-bed corrections are applied to the log responses from such laminated formations.

  4. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images

    NASA Astrophysics Data System (ADS)

    Yothers, Mitchell P.; Browder, Aaron E.; Bumm, Lloyd A.

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.

  5. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases.

  6. Advanced corrections for InSAR using GPS and numerical weather models

    NASA Astrophysics Data System (ADS)

    Foster, J. H.; Cossu, F.; Amelung, F.; Businger, S.; Cherubini, T.

    2016-12-01

    The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting Interferometric Synthetic Aperture Radar's (InSAR) potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We present preliminary results from an investigation into the application of GPS and numerical weather models for generating tropospheric correction fields. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii and an even higher, 300 m resolution grid over Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate information on atmospheric heterogeneity from the GPS data into the models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. This work will produce best-practice recommendations for the use of weather models for InSAR correction, and inform efforts to design a global strategy for the NISAR mission, for both low-latency and definitive atmospheric correction products.

  7. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in it. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distributions of all protons and deuterons to that of primary protons only. The method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems, and L_sec was evaluated along the beam axis. The correction factor indicates a high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. Good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model to take into account secondary protons and deuterons along the pencil beam axis.

  8. Adaptive correction procedure for TVL1 image deblurring under impulse noise

    NASA Astrophysics Data System (ADS)

    Bai, Minru; Zhang, Xiongjun; Shao, Qianqian

    2016-08-01

    For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.

  9. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head.

    PubMed

    Beard, Brian B; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2006-06-05

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position.

  10. Final Report on the Development of a Model Education and Training System for Inmates in Federal Correctional Institutions, to Federal Prison Industries, Incorporated, U.S. Department of Justice.

    ERIC Educational Resources Information Center

    Hitt, William D.; Agostino, Norman R.

    This study to develop an education and training (E&T) system for inmates in Federal correctional institutions described and evaluated existing E&T systems and needs at Milan, Michigan, and Terre Haute, Indiana; formulated an E&T model; and made specific recommendations for implementation of each point in the model. A systems analysis approach was…

  11. Simulation of Ultra-Small MOSFETs Using a 2-D Quantum-Corrected Drift-Diffusion Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Yu, Zhiping; Dutton, Robert W.; Ancona, Mario G.; Saini, Subhash (Technical Monitor)

    1998-01-01

    We describe an electronic transport model and an implementation approach that respond to the challenges of device modeling for gigascale integration. We use the density-gradient (DG) transport model, which adds tunneling and quantum smoothing of carrier density profiles to the drift-diffusion model. We present the current implementation of the DG model in PROPHET, a partial differential equation solver developed by Lucent Technologies. This implementation approach permits rapid development and enhancement of models, as well as run-time modifications and model switching. We show that even in typical bulk transport devices such as P-N diodes and BJTs, DG quantum effects can significantly modify the I-V characteristics. Quantum effects are shown to be even more significant in small, surface transport devices, such as sub-0.1 micron MOSFETs. In thin-oxide MOS capacitors, we find that quantum effects may reduce gate capacitance by 25% or more. The inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements. Significant quantum corrections also occur in the I-V characteristics of short-channel MOSFETs due to the gate capacitance correction.

  12. Impact and Implementation of Higher-Order Ionospheric Effects on Precise GNSS Applications

    NASA Astrophysics Data System (ADS)

    Hadas, T.; Krypiak-Gregorczyk, A.; Hernández-Pajares, M.; Kaplon, J.; Paziewski, J.; Wielgosz, P.; Garcia-Rigo, A.; Kazmierski, K.; Sosnica, K.; Kwasniak, D.; Sierny, J.; Bosy, J.; Pucilowski, M.; Szyszko, R.; Portasiak, K.; Olivares-Pulido, G.; Gulyaeva, T.; Orus-Perez, R.

    2017-11-01

    High precision Global Navigation Satellite Systems (GNSS) positioning and time transfer require correcting signal delays, in particular higher-order ionospheric (I2+) terms. We present a consolidated model to correct second- and third-order terms, geometric bending and differential STEC bending effects in GNSS data. The model has been implemented in an online service correcting observations from submitted RINEX files for I2+ effects. We performed GNSS data processing with and without including I2+ corrections, in order to investigate the impact of I2+ corrections on GNSS products. We selected three time periods representing different ionospheric conditions. We used GPS and GLONASS observations from a global network and two regional networks in Poland and Brazil. We estimated satellite orbits, satellite clock corrections, Earth rotation parameters, troposphere delays, horizontal gradients, and receiver positions using global GNSS solution, Real-Time Kinematic (RTK), and Precise Point Positioning (PPP) techniques. The satellite-related products captured most of the impact of I2+ corrections, with the magnitude up to 2 cm for clock corrections, 1 cm for the along- and cross-track orbit components, and below 5 mm for the radial component. The impact of I2+ on troposphere products turned out to be insignificant in general. I2+ corrections had limited influence on the performance of ambiguity resolution and the reliability of RTK positioning. Finally, we found that I2+ corrections caused a systematic shift in the coordinate domain that was time- and region-dependent and reached up to -11 mm for the north component of the Brazilian stations during the most active ionospheric conditions.

  13. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios, and a cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots, and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods applied to extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
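
    A minimal sketch of the standard empirical quantile mapping (the QMα baseline, not the new QMβ variants): map each new model value through the historical model CDF onto the observed quantiles. Note the clamping at the ends of the calibration range, which is exactly the kind of tail behaviour the QMβ variants are designed to improve.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_new):
            """Empirical (nonparametric) quantile mapping bias correction."""
            mq = np.sort(model_hist)
            oq = np.sort(obs_hist)
            # Probability of each new value under the historical model CDF
            p = np.searchsorted(mq, model_new) / float(mq.size)
            probs = (np.arange(oq.size) + 0.5) / oq.size
            # np.interp clamps beyond the calibration range (no extrapolation)
            return np.interp(p, probs, oq)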

  14. A Realization of Bias Correction Method in the GMAO Coupled System

    NASA Technical Reports Server (NTRS)

    Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max

    2018-01-01

    Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases, and the deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to correcting model biases in CGCM modeling. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g., extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common among other current models, our approach and findings are applicable to these other models as well.
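
    The core idea - subtracting a climatology of short-term forecast drift from the model tendency at every step - can be sketched as follows. The function and array names are illustrative, not the GMAO implementation:

        import numpy as np

        def step_with_drift_correction(state, tendency, drift_climatology,
                                       day_of_year, dt):
            """Advance the model one step with an empirical tendency correction.

            drift_climatology[d] holds the mean short-term (here 3-h) forecast
            drift for calendar day d, estimated offline from a history of
            forecast-minus-analysis differences; dividing by the drift window
            turns it into a tendency increment that opposes the drift.
            """
            window = 3.0 * 3600.0                       # 3-h drift window, s
            correction = -drift_climatology[day_of_year] / window
            return state + (tendency + correction) * dt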

  15. Atlantic Meridional Overturning Circulation Influence on North Atlantic Sector Surface Air Temperature and its Predictability in the Kiel Climate Model

    NASA Astrophysics Data System (ADS)

    Latif, M.

    2017-12-01

    We investigate the influence of the Atlantic Meridional Overturning Circulation (AMOC) on North Atlantic sector surface air temperature (SAT) in two multi-millennial control integrations of the Kiel Climate Model (KCM). One model version employs a freshwater flux correction over the North Atlantic, while the other does not. A clear influence of the AMOC on North Atlantic sector SAT is simulated only in the corrected model, which depicts much reduced upper-ocean salinity and temperature biases in comparison to the uncorrected model. Further, the model with much reduced biases exhibits significantly enhanced multiyear SAT predictability in the North Atlantic sector relative to the uncorrected model. The enhanced SAT predictability in the corrected model is due to a stronger and more variable AMOC and its enhanced influence on North Atlantic sea surface temperature (SST). Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SST and exhibit smaller SAT predictability over the North Atlantic sector.

  16. Comparing Error Correction Procedures for Children Diagnosed with Autism

    ERIC Educational Resources Information Center

    Townley-Cochran, Donna; Leaf, Justin B.; Leaf, Ronald; Taubman, Mitchell; McEachin, John

    2017-01-01

    The purpose of this study was to examine the effectiveness of two error correction (EC) procedures: modeling alone and the use of an error statement plus modeling. Utilizing an alternating treatments design nested into a multiple baseline design across participants, we sought to evaluate and compare the effects of these two EC procedures used to…

  17. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
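
    The additive structure described here (a base-atmosphere correction plus analytical terms compensating parameter deviations) reduces to a few lines. In a real system the sensitivity coefficients would come from the offline radiative-transfer runs; all names below are illustrative:

        def surface_temperature(t_brightness, base_correction,
                                sensitivities, deviations):
            """First-order surface-temperature estimate from brightness temperature.

            T_s ~ T_b + dT_base + sum_i (dDT/dp_i) * delta_p_i: dT_base corrects
            for the reference ('basic') atmosphere, and each term of the sum
            compensates the deviation of one atmospheric or surface parameter
            from its base-model value.
            """
            extra = sum(s * d for s, d in zip(sensitivities, deviations))
            return t_brightness + base_correction + extra

    Evaluating such precomputed relations instead of running the full radiative transfer per measurement is what yields the severalfold cost reduction noted above.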

  18. Correction of aeroheating-induced intensity nonuniformity in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Li; Yan, Luxin; Zhao, Hui; Dai, Xiaobing; Zhang, Tianxu

    2016-05-01

    Aeroheating-induced intensity nonuniformity severely degrades the performance of an infrared (IR) imaging system in high-speed flight. In this paper, we propose a new approach to the correction of intensity nonuniformity in IR images. The basic assumption is that the low-frequency intensity bias is additive and smoothly varying, so that it can be modeled as a bivariate polynomial and estimated using an isotropic total variation (TV) model. A half-quadratic penalty method is applied to the isotropic form of the TV discretization, and an alternating minimization algorithm is adopted to solve the optimization model. Experimental results on simulated and real aerothermal images show that the proposed correction method can effectively improve IR image quality.
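
    As a rough illustration of the bias model, the sketch below fits a smooth bivariate-polynomial surface to an image by plain least squares and returns it as the estimated additive bias. The paper itself estimates the polynomial inside an isotropic TV minimization solved by half-quadratic splitting and alternating minimization, which this sketch does not reproduce:

        import numpy as np

        def polynomial_bias_surface(image, degree=2):
            """Estimate a smooth additive intensity bias as a 2D polynomial."""
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            x, y = xx.ravel() / w, yy.ravel() / h   # normalised coordinates
            cols = [x**i * y**j for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
            A = np.stack(cols, axis=1)
            coef, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
            return (A @ coef).reshape(h, w)

        # corrected = image - polynomial_bias_surface(image)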

  19. Low-energy effective action in two-dimensional SQED: a two-loop analysis

    NASA Astrophysics Data System (ADS)

    Samsonov, I. B.

    2017-07-01

    We study two-loop quantum corrections to the low-energy effective actions in N=(2,2) and N=(4,4) SQED on the Coulomb branch. In the latter model, the low-energy effective action is described by a generalized Kähler potential which depends on both chiral and twisted chiral superfields. We demonstrate that this generalized Kähler potential is one-loop exact and corresponds to the N=(4,4) sigma-model with torsion presented by Roček, Schoutens and Sevrin [1]. In the N=(2,2) SQED, the effective Kähler potential is not protected against higher-loop quantum corrections. The two-loop quantum corrections to this potential and the corresponding sigma-model metric are explicitly found.

  20. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
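
    To make the recursive flavour of such corrections concrete, here is a toy multi-exponential lag correction. The trap/release model below is our own charge-conserving simplification (a fraction of each frame's signal is trapped per pool and released exponentially), not the paper's calibrated IRF; the coefficients would come from step-response fits like those described above:

        import numpy as np

        def correct_lag(frames, trap_fractions, release_rates):
            """Recursive multi-exponential lag correction for detector frames.

            Model: a fraction a_k of each frame's true signal x[n] is caught
            in trap pool k and released with per-frame retention r_k, so
                y[n]   = (1 - sum_k a_k) * x[n] + sum_k (1 - r_k) * S_k[n-1]
                S_k[n] = r_k * S_k[n-1] + a_k * x[n]
            Inverting this frame by frame recovers x (requires sum_k a_k < 1).
            """
            a = np.asarray(trap_fractions)           # e.g. 4 pools, as in the paper
            r = np.asarray(release_rates)            # r_k = exp(-dt / tau_k)
            S = np.zeros(a.shape + frames[0].shape)  # stored charge per pool
            gain = 1.0 - a.sum()
            corrected = []
            for y in frames:
                released = ((1.0 - r)[:, None, None] * S).sum(axis=0)
                x = (y - released) / gain
                S = r[:, None, None] * S + a[:, None, None] * x
                corrected.append(x)
            return corrected

    Making a and r functions of exposure intensity, while keeping the stored-charge estimate S consistent across frames, is the essence of the NLCSC extension described above.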

  1. Comparison and Analysis of Geometric Correction Models of Spaceborne SAR

    PubMed Central

    Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong

    2016-01-01

    Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted extensive studies on geolocation models, but little work has addressed which models are suitable for the geometric correction of SAR images over different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined, and a solution table recommending a suitable model for users was obtained. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat-terrain and mountain-terrain SAR images as well as two large-area images. Geolocation accuracies of the models for SAR images of different terrain were computed and analyzed. The comparisons show that the RD model was accurate but the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy, below one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973

  2. [Atmospheric correction of visible-infrared band FY-3A/MERSI data based on 6S model].

    PubMed

    Wu, Yong-Li; Luan, Qing; Tian, Guo-Zhen

    2011-06-01

    Based on observation data from meteorological stations in Taiyuan City and its surrounding areas of Shanxi Province, the atmospheric parameters for the 6S model were supplied, and atmospheric correction of visible-infrared band (250 m resolution) FY-3A/MERSI data was conducted. After atmospheric correction, the dynamic range of the visible-infrared band FY-3A/MERSI data was widened, reflectance increased, the histogram peak was higher, and the distribution histogram was smoother. In the meantime, the threshold value of the NDVI data reflecting vegetation condition increased, and its peak was higher and closer to the real data. Moreover, the colour-composite image of the corrected data showed more abundant information; its brightness increased, its contrast was enhanced, and the information it conveyed was closer to reality.

  3. Wind Tunnel Corrections for High Angle of Attack Models,

    DTIC Science & Technology

    1981-02-01

    [Scanned-document extraction residue; only fragments are recoverable: contents entries "Maquettes en soufflerie" ("Models in the Wind Tunnel") by X. Vaucheret; "Activities on Wind Tunnel Corrections" by H. Holst; "A Review of Research at NLR on Wind Tunnel..."; and a figure fragment comparing corrected and uncorrected balance data (apparently Re = 2.13 x 10^6, M = 0.230; "Fig. 11 Corrected").]

  4. Reduction of CMIP5 models bias using Cumulative Distribution Function transform and impact on crops yields simulations across West Africa.

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu

    2017-04-01

    Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before their output can be used in impact studies. Several bias correction methods have been developed over the years, increasingly using more complex statistical methods. The aim of this work is to show the interest of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method for reducing the bias of 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield prediction by the end of the 21st century. We apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared with the outputs obtained with the ISIMIP method, showing that the quality of the correction is strongly related to the reference data. Second, we validate the bias correction method with agronomic simulations (SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data, and they also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy M, Dingkuhn M, Vaksmann M and Heinemann A B 2008: Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
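
    The CDFt transform itself can be sketched compactly: the estimated future observed CDF is F_of(x) = F_oh(F_gh^{-1}(F_gf(x))), and the future model values are then quantile-mapped onto it. The implementation below approximates the CDFs by interpolation on a quantile grid; it is a sketch of the published definition, not the authors' code:

        import numpy as np

        def cdft_correct(obs_hist, mod_hist, mod_fut, n=1001):
            """CDF-transform (CDFt-style) correction of future model values."""
            mod_fut = np.asarray(mod_fut, float)
            p = np.linspace(0.005, 0.995, n)
            q_oh = np.quantile(obs_hist, p)    # F_oh^{-1}
            q_gh = np.quantile(mod_hist, p)    # F_gh^{-1}
            q_gf = np.quantile(mod_fut, p)     # F_gf^{-1}
            # F_of^{-1}(p): solve F_oh(F_gh^{-1}(F_gf(z))) = p for z
            p_gh_at_oh = np.interp(q_oh, q_gh, p)   # F_gh at F_oh^{-1}(p)
            q_of = np.interp(p_gh_at_oh, p, q_gf)   # back through F_gf^{-1}
            # quantile-map each future value onto the estimated future CDF
            u = np.interp(mod_fut, q_gf, p)
            return np.interp(u, p, q_of)

    When the future and historical model distributions coincide, this reduces to standard quantile mapping onto the observations, which is a useful sanity check on the construction.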

  5. Correcting Biases in a lower resolution global circulation model with data assimilation

    NASA Astrophysics Data System (ADS)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation, based on the stochastic forcing of a model. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is added directly inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice-ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher-resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct these random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field while using a custom correlation length; it also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts so as not to have currents perpendicular to the coast. The randomly generated stochastic forcings are then injected directly into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
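
    A minimal sketch of the stream-function construction follows, with smoothed white noise standing in for the Diva-generated field and a rectangular domain standing in for real coastlines; parameter names and values are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def velocity_perturbation(ny, nx, dy, dx, sigma=8, amplitude=0.05, seed=0):
            """Non-divergent random velocity perturbation from a stream function.

            u = -dpsi/dy, v = dpsi/dx is divergence-free by construction;
            zeroing psi on the boundary keeps the flow parallel to the (here
            rectangular) 'coast'.
            """
            rng = np.random.default_rng(seed)
            psi = gaussian_filter(rng.standard_normal((ny, nx)), sigma)
            psi *= amplitude / np.abs(psi).max()
            psi[0, :] = psi[-1, :] = 0.0     # no flow across domain edges
            psi[:, 0] = psi[:, -1] = 0.0
            u = -np.gradient(psi, dy, axis=0)    # zonal perturbation
            v = np.gradient(psi, dx, axis=1)     # meridional perturbation
            return u, v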

  6. Understanding climatological, instantaneous and reference VTEC maps, its variability, its relation to STEC and its assimilation by VTEC models

    NASA Astrophysics Data System (ADS)

    Orus, R.; Prieto-Cerdeira, R.

    2012-12-01

    As the next solar maximum approaches, forecast for late 2013, it is a good opportunity to study ionospheric behaviour in such conditions and how this behaviour can be estimated and corrected by existing climatological models - e.g., NeQuick, International Reference Ionosphere (IRI) - as well as GNSS-driven models such as Klobuchar, NeQuick Galileo, SBAS MOPS (EGNOS and WAAS corrections) and Near Real Time Global Ionospheric Maps (GIMs) or regional maps computed by different institutions. In this framework, technological advances are increasing the computational and radio-frequency channel capabilities of low-cost receivers embedded in handheld devices (such as mobile phones, tablets, trekking watches, cameras, etc.). This may enable the active use of received ionospheric data or correction parameters from different data sources. The study is centred on understanding the ionosphere, focusing on its impact on the position error of low-cost single-frequency receivers. It tests optimal ways to take advantage of a large amount of real-time or near-real-time ionospheric information and ways to combine various corrections in order to reach a better navigation solution. In this context, real-time vTEC estimates from EGNOS or WAAS, or near-real-time GIMs, are used to feed the standard GPS single-frequency ionospheric correction model (Klobuchar) and obtain enhanced ionospheric corrections with minor changes to the navigation software. This is done by using a Taylor expansion over the 8 coefficients sent by GPS. Moreover, the same datasets are assimilated into NeQuick, for broadcast coefficients as well as for grid assimilation. As a side product, electron density profiles could be estimated in near real time with data assimilated from different ionospheric sources. Finally, ionospheric delay estimation for multi-constellation receivers could benefit from a common and more accurate ionospheric model, reducing the position error due to the ionosphere. A performance study of the different models for GNSS navigation will therefore be presented for different ionospheric conditions and different sources for the model adjustment, keeping the real-time capability of the receivers.

  7. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically; but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
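
    The kind of bias correction applied to such forcing timeseries can be as simple as removing the mean bias per calendar month, additively for temperature and multiplicatively for precipitation. A sketch (our own simplification, not necessarily the authors' exact scheme):

        import numpy as np

        def monthly_bias_correct(series, months, obs, obs_months,
                                 multiplicative=False):
            """Remove the mean monthly bias from a daily forcing timeseries.

            Additive shifts suit temperature; multiplicative factors suit
            precipitation (they cannot produce negative amounts).
            """
            series, months = np.asarray(series, float), np.asarray(months)
            obs, obs_months = np.asarray(obs, float), np.asarray(obs_months)
            out = series.copy()
            for m in range(1, 13):
                sel, osel = months == m, obs_months == m
                if multiplicative:
                    out[sel] *= obs[osel].mean() / max(series[sel].mean(), 1e-9)
                else:
                    out[sel] += obs[osel].mean() - series[sel].mean()
            return out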

  8. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope ’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yifan; Apai, Dániel; Schneider, Glenn

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is proved to be able to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.

  9. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system and applied on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal in the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
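
    A wafer-level fit in a low-order Zernike basis is a short least-squares problem. The sketch below uses the first few (unnormalized) Zernike terms and illustrative names; it is not the APC system's actual implementation:

        import numpy as np

        def zernike_basis(x, y):
            """First few Zernike polynomials on the unit disk: piston,
            tilt x, tilt y, defocus, and the two astigmatism orientations."""
            r2 = x**2 + y**2
            return np.stack([np.ones_like(x), x, y,
                             2 * r2 - 1, x**2 - y**2, 2 * x * y], axis=1)

        def fit_wafer_terms(x, y, overlay):
            """Least-squares wafer-level overlay model in a Zernike basis.

            x, y are measurement positions normalised to the unit disk
            (wafer radius = 1); overlay is one measured misregistration
            component. Orthogonality of the basis on the disk limits the
            collinearity between terms, per the stability argument above.
            """
            A = zernike_basis(np.asarray(x, float), np.asarray(y, float))
            coef, *_ = np.linalg.lstsq(A, np.asarray(overlay, float), rcond=None)
            return coef, A @ coef   # coefficients and fitted values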

  10. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Zhou, Yifan; Apai, Dániel; Lew, Ben W. P.; Schneider, Glenn

    2017-06-01

    The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is proved to be able to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
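
    A toy version of the charge-trapping picture helps see why the first orbit shows a 'ramp': early frames lose charge to empty traps, and the deficit relaxes as traps fill and release. The populations and parameters below are invented for illustration, not the paper's fitted detector values:

        import numpy as np

        def ramp_profile(flux, n_frames, dt, trap_pops):
            """Observed counts per frame under constant illumination.

            Each trap population is (n_traps, trapping efficiency per unit
            flux, release timescale). Charge captured by filling traps is
            missing from early frames and returns as traps saturate, which
            produces the hook-shaped ramp systematic.
            """
            trapped = np.zeros(len(trap_pops))
            observed = []
            for _ in range(n_frames):
                signal = flux * dt
                for i, (n_traps, eff, tau) in enumerate(trap_pops):
                    free = n_traps - trapped[i]
                    caught = min(eff * flux * dt * free / n_traps, free)
                    released = trapped[i] * (1 - np.exp(-dt / tau))
                    trapped[i] += caught - released
                    signal += released - caught
                observed.append(signal)
            return np.array(observed)

        # e.g. ramp = ramp_profile(100.0, 300, 1.0, [(500, 0.02, 2e4), (200, 0.05, 3e3)])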

  11. 75 FR 31332 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model EMB-120, -120ER...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... correcting the fuel quantity indication system; as applicable. Compliance (f) You are responsible for having... correcting the fuel quantity indication system; as applicable. The MCAI does not provide a corrective action... unsafe condition as: It has been found that some fuel quantity probes may fail during the airplane life...

  12. 21 CFR 806.20 - Records of corrections and removals not required to be reported.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICES; REPORTS OF CORRECTIONS AND... known, and the intended use of the device. (2) The model, catalog, or code number of the device and the...) Each device manufacturer or importer who initiates a correction or removal of a device that is not...

  13. 21 CFR 806.20 - Records of corrections and removals not required to be reported.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICES; REPORTS OF CORRECTIONS AND... known, and the intended use of the device. (2) The model, catalog, or code number of the device and the...) Each device manufacturer or importer who initiates a correction or removal of a device that is not...

  14. Initial Correction versus Negative Marking in Multiple Choice Examinations

    ERIC Educational Resources Information Center

    Van Hecke, Tanja

    2015-01-01

    Optimal assessment tools should measure in a limited time the knowledge of students in a correct and unbiased way. A method for automating the scoring is multiple choice scoring. This article compares scoring methods from a probabilistic point of view by modelling the probability to pass: the number right scoring, the initial correction (IC) and…

  15. 75 FR 31282 - Airworthiness Directives; Airbus Model A319-100, A320-200, A321-100, and A321-200 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... published in the Federal Register on May 12, 2006. The error resulted in an incorrect component maintenance... related investigative and corrective actions if necessary. DATES: This correction is effective June 3... wall-mounted cabin attendant seat, and related investigative and corrective actions if necessary. As...

  16. Coaching, Not Correcting: An Alternative Model for Minority Students

    ERIC Educational Resources Information Center

    Dresser, Rocío; Asato, Jolynn

    2014-01-01

    The debate on the role of oral corrective feedback or "repair" in English instruction settings has been going on for over 30 years. Some educators believe that oral grammar correction is effective because they have noticed that students who learned a set of grammar rules were more likely to use them in real life communication (Krashen,…

  17. Innovation in prediction planning for anterior open bite correction.

    PubMed

    Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf

    2015-05-01

    This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformities. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.

  18. The Effect of Guessing on Item Reliability under Answer-Until-Correct Scoring

    ERIC Educational Resources Information Center

    Kane, Michael; Moloney, James

    1978-01-01

    The answer-until-correct (AUC) procedure requires that examinees respond to a multi-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)

  19. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  20. Experiences in the Family Health Strategy: demands and vulnerabilities in the territory.

    PubMed

    Pinto, Antonio Germane Alves; Jorge, Maria Salete Bessa; Marinho, Mirna Neyara Alexandre de Sá Barreto; Vidal, Emery Ciana Figueirêdo; Aquino, Priscila de Souza; Vidal, Eglídia Carla Figueirêdo

    2017-01-01

    To understand the daily demands placed on the Family Health Strategy in the team's clinical practice and the social vulnerabilities of the community territory. A qualitative study, in a critical-reflexive perspective, conducted with two Family Health Strategy teams in the city of Fortaleza, State of Ceará, Brazil. The participants were 22 users and 19 health professionals from the basic health network. Data from the interviews and observation were analyzed under the assumptions of critical hermeneutics. We highlight the everyday suffering and struggles that were revealed, the influence of social determinants on health, the psychosocial demands, and the limits and possibilities of everyday clinical practice. Clinical care must recognize people's perceptions and living conditions through listening and health promotion in the community.

  1. Mental health in the Family Health Strategy as perceived by health professionals.

    PubMed

    Souza, Jacqueline de; Almeida, Letícia Yamawaka de; Luis, Margarita Antonia Villar; Nievas, Andreia Fernanda; Veloso, Tatiana Maria Coelho; Barbosa, Sara Pinto; Giacon, Bianca Cristina Ciccone; Assad, Francine Baltazar

    2017-01-01

    to analyze the management of mental health needs in primary care as perceived by Family Health Strategy professionals. This was a qualitative, descriptive, exploratory study developed within the coverage area of five family health teams; the participants were five nurses, five coordinators, and 17 community health agents. The data were collected using observation, group interviews, individual semi-structured interviews, and focus groups. Content analysis was conducted using text-analysis software, and interpretation was based on the corresponding analytical structures. Numerous and challenging mental health demands occur in this setting, for which the teams identified care resources; however, they also indicated difficulties, especially related to the operationalization and integration of such resources. There is a need for a care network sensitive to mental health demands, better coordinated and more effectively managed.

  2. Measurement of sediment and crustal thickness corrected RDA for 2D profiles at rifted continental margins: Applications to the Iberian, Gulf of Aden and S Angolan margins

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick

    2014-05-01

    Subsidence analysis of sedimentary basins and rifted continental margins requires a correction for the anomalous uplift or subsidence arising from mantle dynamic topography. Whilst different global model predictions of mantle dynamic topography may give a broadly similar pattern at long wavelengths, they differ substantially in the predicted amplitude and at shorter wavelengths. As a consequence the accuracy of predicted mantle dynamic topography is not sufficiently good to provide corrections for subsidence analysis. Measurements of present day anomalous subsidence, which we attribute to mantle dynamic topography, have been made for three rifted continental margins; offshore Iberia, the Gulf of Aden and southern Angola. We determine residual depth anomaly (RDA), corrected for sediment loading and crustal thickness variation for 2D profiles running from unequivocal oceanic crust across the continental ocean boundary onto thinned continental crust. Residual depth anomalies (RDA), corrected for sediment loading using flexural backstripping and decompaction, have been calculated by comparing observed and age predicted oceanic bathymetries at these margins. Age predicted bathymetric anomalies have been calculated using the thermal plate model predictions from Crosby & McKenzie (2009). Non-zero sediment corrected RDAs may result from anomalous oceanic crustal thickness with respect to the global average or from anomalous uplift or subsidence. Gravity anomaly inversion incorporating a lithosphere thermal gravity anomaly correction and sediment thickness from 2D seismic reflection data has been used to determine Moho depth, calibrated using seismic refraction, and oceanic crustal basement thickness. Crustal basement thicknesses derived from gravity inversion together with Airy isostasy have been used to correct for variations of crustal thickness from a standard oceanic thickness of 7km. The 2D profiles of RDA corrected for both sediment loading and non-standard crustal thickness provide a measurement of anomalous uplift or subsidence which we attribute to mantle dynamic topography. We compare our sediment and crustal thickness corrected RDA analysis results with published predictions of mantle dynamic topography from global models.
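
    The crustal-thickness step reduces to Airy isostasy: crust thicker than the 7 km oceanic reference produces a predictable uplift that must be removed from the sediment-corrected RDA. A sketch with typical assumed densities:

        RHO_WATER, RHO_CRUST, RHO_MANTLE = 1030.0, 2850.0, 3300.0  # kg/m^3, assumed

        def crustal_thickness_correction(crustal_thickness_m, ref_thickness_m=7000.0):
            """Airy-isostatic uplift for non-standard oceanic crustal thickness.

            Predicted uplift = (t_c - t_ref)(rho_m - rho_c)/(rho_m - rho_w);
            removing it from the sediment-corrected RDA leaves the residual
            attributed to mantle dynamic topography.
            """
            return ((crustal_thickness_m - ref_thickness_m)
                    * (RHO_MANTLE - RHO_CRUST) / (RHO_MANTLE - RHO_WATER))

        # rda_fully_corrected = rda_sediment_corrected - crustal_thickness_correction(tc)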

  3. Parameter Sensitivity Study of the Wall Interference Correction System (WICS)

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit

    2001-01-01

    An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45, due to a limitation in tunnel calibration data. A study to assess output sensitivity to the aerodynamic parameters of Reynolds number and Mach number was conducted on this code to further ensure quality during the correction process. In addition, this paper includes an investigation into possible corrections due to a semispan test technique using a non-metric standoff and an improvement to the standard data rejection algorithm.

  4. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable for modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight direction. To correct these biases, we developed a code correction model that estimates the BeiDou-satellite-induced biases as linear piece-wise functions for different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without the code bias corrections applied. Orbit precision statistics indicate that these code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors (3D root mean square) were reduced from 150.6 to 56.3 cm.
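
    The satellite-group correction described above amounts to a piece-wise linear function of elevation applied to the raw pseudorange. A sketch with invented node values (the real ones come from the calibration step, and the near-field grid term is omitted):

        import numpy as np

        # Illustrative node values (metres) for one satellite group.
        ELEV_NODES_DEG = np.array([0., 10., 20., 30., 40., 50., 60., 70., 80., 90.])
        BIAS_NODES_M   = np.array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.1, 0.2, 0.3, 0.4, 0.4])

        def code_bias_correction(elevation_deg):
            """Piece-wise linear elevation-dependent code bias, interpolated
            between node values estimated in calibration."""
            return np.interp(elevation_deg, ELEV_NODES_DEG, BIAS_NODES_M)

        # corrected_pseudorange = raw_pseudorange - code_bias_correction(elev)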

  5. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements

    PubMed Central

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-01-01

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable for modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight direction. To correct these biases, we developed a code correction model that estimates the BeiDou-satellite-induced biases as linear piece-wise functions for different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without the code bias corrections applied. Orbit precision statistics indicate that these code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors (3D root mean square) were reduced from 150.6 to 56.3 cm. PMID:29076998

  6. Correction of terrestrial LiDAR intensity channel using Oren-Nayar reflectance model: An application to lithological differentiation

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellan, Antonio; Humair, Florian; Matasci, Battista; Derron, Marc-Henri; Jaboyedoff, Michel

    2016-03-01

    Ground-based LiDAR has been traditionally used for surveying purposes via 3D point clouds. In addition to XYZ coordinates, an intensity value is also recorded by LiDAR devices. The intensity of the backscattered signal can be a significant source of information for various applications in geosciences. Previous attempts to account for the scattering of the laser signal are usually modelled using a perfect diffuse reflection. Nevertheless, experience on natural outcrops shows that rock surfaces do not behave as perfect diffuse reflectors. The geometry (or relief) of the scanned surfaces plays a major role in the recorded intensity values. Our study proposes a new terrestrial LiDAR intensity correction, which takes into consideration the range, the incidence angle and the geometry of the scanned surfaces. The proposed correction equation combines the classical radar equation for LiDAR with the bidirectional reflectance distribution function of the Oren-Nayar model. It is based on the idea that the surface geometry can be modelled by a relief of multiple micro-facets. This model is constrained by only one tuning parameter: the standard deviation of the slope angle distribution (σslope) of micro-facets. Firstly, a series of tests have been carried out in laboratory conditions on a 2 m2 board covered by black/white matte paper (perfect diffuse reflector) and scanned at different ranges and incidence angles. Secondly, other tests were carried out on rock blocks of different lithologies and surface conditions. Those tests demonstrated that the non-perfect diffuse reflectance of rock surfaces can be practically handled by the proposed correction method. Finally, the intensity correction method was applied to a real case study, with two scans of the carbonate rock outcrop of the Dents-du-Midi (Swiss Alps), to improve the lithological identification for geological mapping purposes. After correction, the intensity values are proportional to the intrinsic material reflectance and are independent from range, incidence angle and scanned surface geometry. The corrected intensity values significantly improve the material differentiation.
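
    For a monostatic LiDAR the emitter and receiver directions coincide, so the Oren-Nayar factor collapses to a single-angle expression. Below is a sketch of a range- and roughness-normalized intensity along the lines described above; the reference range and the reduction of the radar equation to a bare 1/R^2 term are our simplifications:

        import numpy as np

        def oren_nayar_factor(theta, sigma_slope):
            """Monostatic Oren-Nayar reflectance factor (incidence = viewing
            angle, zero relative azimuth); theta in radians, away from grazing."""
            s2 = sigma_slope ** 2
            A = 1.0 - 0.5 * s2 / (s2 + 0.33)
            B = 0.45 * s2 / (s2 + 0.09)
            return np.cos(theta) * (A + B * np.sin(theta) * np.tan(theta))

        def corrected_intensity(i_raw, rng, theta, sigma_slope, ref_range=10.0):
            """Normalise raw intensity for spherical spreading and for
            rough-surface scattering, so it tracks intrinsic reflectance.
            sigma_slope is the single tuning parameter: the micro-facet
            slope-angle spread."""
            return i_raw * (rng / ref_range) ** 2 / oren_nayar_factor(theta, sigma_slope)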

  7. Wafer hotspot prevention using etch aware OPC correction

    NASA Astrophysics Data System (ADS)

    Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao

    2016-03-01

    As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming more essential to achieve the technology shrink requirements. Recently, Optical Proximity Correction (OPC) technology has proposed simultaneous correction of multiple mask patterns to enable multiple-patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch-shrink response for all patterns independent of their proximity, which is not sufficient for the full prevention of inter-exposure hot-spots, for example color-to-color spacing violations or via coverage/enclosure failures after etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach to etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet our proposed approach more accurately corrects post-etch defects too.

  8. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    NASA Astrophysics Data System (ADS)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-07-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multiyear Kepler photometry. We explore the internal systematics in the stellar properties associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from (i) the inclusion of the diffusion of helium and heavy elements; (ii) the uncertainty in the solar metallicity mixture; and (iii) different surface correction methods used in optimization/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5 per cent, 0.8 per cent, 2.1 per cent, and 16 per cent in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7 per cent in mean density, 0.5 per cent in radius, 1.4 per cent in mass, and 6.7 per cent in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ˜1 per cent, ˜1 per cent, ˜2 per cent, and ˜8 per cent in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
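
    Ball & Gizon's two-term correction mentioned above models the surface offset as δν = [a₋₁(ν/ν_ac)⁻¹ + a₃(ν/ν_ac)³]/I, with I the normalised mode inertia and ν_ac the acoustic cutoff frequency, so the coefficients follow from linear least squares. A sketch with illustrative names:

        import numpy as np

        def two_term_surface_fit(nu_obs, nu_mod, inertia, nu_ac):
            """Fit the two-term surface correction to observed-minus-model
            frequency differences; returns (a_-1, a_3) and the corrected
            model frequencies."""
            nu_obs, nu_mod = np.asarray(nu_obs, float), np.asarray(nu_mod, float)
            x = nu_mod / nu_ac
            A = np.stack([x ** -1 / inertia, x ** 3 / inertia], axis=1)
            coef, *_ = np.linalg.lstsq(A, nu_obs - nu_mod, rcond=None)
            return coef, nu_mod + A @ coef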

  9. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are the two main dissipation processes in quasi brittle materials. Frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while crack growth induces material damage. The main difficulty of modeling is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but there are so far no analytical solutions, even for simple loading paths, against which such algorithms can be validated. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are examined. The first involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient in guaranteeing systematic numerical convergence.

  10. Coarse-grained modeling of polyethylene melts: Effect on dynamics

    DOE PAGES

    Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya; ...

    2017-05-23

    The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: the pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and a slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.
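
    One IBI iteration is a one-line potential update, V_{i+1}(r) = V_i(r) + kT ln[g_i(r)/g_target(r)], and the pressure correction discussed above is commonly imposed as a weak linear 'ramp' that vanishes at the cutoff. A sketch (the ramp form is a standard choice in the CG literature, not necessarily the exact one used in the paper):

        import numpy as np

        def ibi_update(V, r, g_cg, g_target, kT, ramp_amplitude=0.0):
            """One iterative-Boltzmann-inversion update of a CG pair potential.

            V, g_cg, g_target are tabulated on the radial grid r. A negative
            ramp_amplitude makes the tail more attractive and lowers the CG
            pressure toward the atomistic reference.
            """
            g_cg = np.clip(g_cg, 1e-8, None)        # avoid log(0) where g vanishes
            g_target = np.clip(g_target, 1e-8, None)
            V_new = V + kT * np.log(g_cg / g_target)
            V_new += ramp_amplitude * (1.0 - r / r[-1])  # linear ramp, zero at cutoff
            return V_new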

  11. Coarse-grained modeling of polyethylene melts: Effect on dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, Brandon L.; Salerno, K. Michael; Agrawal, Anupriya

    The distinctive viscoelastic behavior of polymers results from a coupled interplay of motion on multiple length and time scales. Capturing the broad time and length scales of polymer motion remains a challenge. Using polyethylene (PE) as a model macromolecule, we construct coarse-grained (CG) models of PE with three to six methyl groups per CG bead and probe two critical aspects of the technique: the pressure corrections required after iterative Boltzmann inversion (IBI) to generate CG potentials that match the pressure of reference fully atomistic melt simulations, and the transferability of CG potentials across temperatures. While IBI produces nonbonded pair potentials that give excellent agreement between the atomistic and CG pair correlation functions, the resulting pressure for the CG models is large compared with the pressure of the atomistic system. We find that correcting the potential to match the reference pressure leads to nonbonded interactions with much deeper minima and a slightly smaller effective bead diameter. However, simulations with potentials generated by IBI and pressure-corrected IBI result in similar mean-square displacements (MSDs) and stress autocorrelation functions G(t) for PE melts. While the time rescaling factor required to match CG and atomistic models is the same for pressure- and non-pressure-corrected CG models, it strongly depends on temperature. Furthermore, transferability was investigated by comparing the MSDs and stress autocorrelation functions for potentials developed at different temperatures.

  12. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections.

    PubMed

    Junk, J; Ulber, B; Vidal, S; Eickermann, M

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the level of pest infestation. Based on long-term field surveys and meteorological observations, it was possible to calibrate an existing model describing the migration of the pest insect Ceutorhynchus napi. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models were satisfactory. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.
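
    The two correction methods named above can be sketched in a few lines; the reference-period observations and model output are assumed to be given as arrays. The linear temperature correction here is the simplest (mean-shift) variant, and the quantile mapping uses the empirical reference-period quantiles.

    ```python
    import numpy as np

    def linear_correction(t_model, t_model_ref, t_obs_ref):
        """Air temperature: remove the mean reference-period bias."""
        return t_model + (t_obs_ref.mean() - t_model_ref.mean())

    def quantile_mapping(p_model, p_model_ref, p_obs_ref):
        """Precipitation: empirical quantile-quantile transfer function."""
        q = np.linspace(0.0, 100.0, 101)
        return np.interp(p_model,
                         np.percentile(p_model_ref, q),
                         np.percentile(p_obs_ref, q))
    ```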

  13. Assessing climate change impacts on the rape stem weevil, Ceutorhynchus napi Gyll., based on bias- and non-bias-corrected regional climate change projections

    NASA Astrophysics Data System (ADS)

    Junk, J.; Ulber, B.; Vidal, S.; Eickermann, M.

    2015-11-01

    Agricultural production is directly affected by projected increases in air temperature and changes in precipitation. A multi-model ensemble of regional climate change projections indicated shifts towards higher air temperatures and changing precipitation patterns during the summer and winter seasons up to the year 2100 for the region of Goettingen (Lower Saxony, Germany). A second major controlling factor of agricultural production is the level of pest infestation. Based on long-term field surveys and meteorological observations, it was possible to calibrate an existing model describing the migration of the pest insect Ceutorhynchus napi. To assess the impacts of climate on pests under projected changing environmental conditions, we combined the results of regional climate models with the phenological model to describe the crop invasion of this species. In order to reduce systematic differences between the output of the regional climate models and observational data sets, two different bias correction methods were applied: a linear correction for air temperature and a quantile mapping approach for precipitation. Only the results derived from the bias-corrected output of the regional climate models were satisfactory. An earlier onset, as well as a prolongation of the possible time window for the immigration of Ceutorhynchus napi, was projected by the majority of the ensemble members.

  14. Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions.

    PubMed

    Risen, Jane L

    2016-03-01

    Traditionally, research on superstition and magical thinking has focused on people's cognitive shortcomings, but superstitions are not limited to individuals with mental deficits. Even smart, educated, emotionally stable adults have superstitions that are not rational. Dual process models--such as the corrective model advocated by Kahneman and Frederick (2002, 2005), which suggests that System 1 generates intuitive answers that may or may not be corrected by System 2--are useful for illustrating why superstitious thinking is widespread, why particular beliefs arise, and why they are maintained even though they are not true. However, to understand why superstitious beliefs are maintained even when people know they are not true requires that the model be refined. It must allow for the possibility that people can recognize--in the moment--that their belief does not make sense, but act on it nevertheless. People can detect an error, but choose not to correct it, a process I refer to as acquiescence. The first part of the article will use a dual process model to understand the psychology underlying magical thinking, highlighting features of System 1 that generate magical intuitions and features of the person or situation that prompt System 2 to correct them. The second part of the article will suggest that we can improve the model by decoupling the detection of errors from their correction and recognizing acquiescence as a possible System 2 response. I suggest that refining the theory will prove useful for understanding phenomena outside of the context of magical thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved.)

  15. Adaptive cyclic physiologic noise modeling and correction in functional MRI.

    PubMed

    Beall, Erik B

    2010-03-30

    Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity of fMRI and functional connectivity analyses. We show a dramatic improvement on current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of the concurrently measured breathing and cardiac cycles. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise; however, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). Our hypothesis is that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
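
    The traditional model referred to above expands the noise in Fourier harmonics of the measured cardiac and respiratory phases. A minimal sketch of building such regressors and projecting them out of one voxel time series follows; the phase arrays are assumed precomputed from the physiologic recordings, and the harmonic order is illustrative.

    ```python
    import numpy as np

    def phase_regressors(phase, order=2):
        """Fourier-series columns cos(k*phase), sin(k*phase), k = 1..order."""
        return np.column_stack([f(k * phase)
                                for k in range(1, order + 1)
                                for f in (np.cos, np.sin)])

    def remove_physio_noise(ts, cardiac_phase, resp_phase):
        """Regress the harmonic noise model out of one voxel time series."""
        X = np.column_stack([np.ones_like(ts),
                             phase_regressors(cardiac_phase),
                             phase_regressors(resp_phase)])
        beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
        return ts - X[:, 1:] @ beta[1:]      # keep the mean, subtract the noise fit
    ```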

  16. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
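
    The ill-conditioned estimation step described above can be sketched as a regularized nonnegative least-squares problem: the wedge measurements b relate to the unknown spectrum w through a matrix A of polychromatic attenuation factors, and Tikhonov rows implement the regularization whose weight the authors select by cross-validation (here lam is a fixed illustrative value; A and b are assumed given).

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def estimate_spectrum(A, b, lam=1e-2):
        """min ||A w - b||^2 + lam ||w||^2  subject to  w >= 0."""
        n = A.shape[1]
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])   # Tikhonov regularization rows
        b_aug = np.concatenate([b, np.zeros(n)])
        w, _ = nnls(A_aug, b_aug)                          # nonnegativity constraint
        return w / w.sum()                                 # normalize to a unit-area spectrum
    ```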

  17. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss the accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate the impact of specific types of particle-physics uncertainties on prospects for model selection.

  18. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-29

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either "heavy" or "light" mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss the accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate the impact of specific types of particle-physics uncertainties on prospects for model selection.

  19. Sex determination of the Acadian Flycatcher using discriminant analysis

    USGS Publications Warehouse

    Wilson, R.R.

    1999-01-01

    I used five morphometric variables from 114 individuals captured in Arkansas to develop a discriminant model to predict the sex of Acadian Flycatchers (Empidonax virescens). Stepwise discriminant function analyses selected wing chord and tail length as the most parsimonious subset of variables for discriminating sex. This two-variable model correctly classified 80% of females and 97% of males used to develop the model. Validation of the model using 19 individuals from Louisiana and Virginia resulted in 100% correct classification of males and females. This model provides criteria for sexing monomorphic Acadian Flycatchers during the breeding season and possibly during the winter.
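
    For illustration, the two-variable discriminant model translates directly into a standard linear discriminant analysis; the measurement values below are made up, not the Arkansas data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical wing chord and tail length measurements (mm) and sexes
    X = np.array([[72.1, 55.3], [68.0, 51.2], [74.5, 57.0],
                  [66.9, 50.1], [73.2, 56.1], [67.5, 49.8]])
    y = np.array(["M", "F", "M", "F", "M", "F"])

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.predict([[70.0, 53.5]]))   # predicted sex for a new capture
    ```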

  20. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal, we use a cellular automata-based model in which each individual vector can move to other grid cells by a random walk. Our model can also represent an immunity factor for each grid cell. We simulate the model to evaluate its correctness. Based on the simulations, we conclude that the model behaves correctly; however, it still needs realistic parameter values to match real data.
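
    A sketch of the cellular-automata dispersal stage: each vector performs a random walk on the grid, and a per-cell immunity factor scales local mortality, loosely following the description above. Grid size, population, and the immunity values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, STEPS = 50, 100
    pos = rng.integers(0, N, size=(200, 2))          # 200 mosquitoes on an N x N grid
    immunity = rng.uniform(0.0, 0.2, size=(N, N))    # hypothetical per-cell mortality factor

    for _ in range(STEPS):
        pos = (pos + rng.integers(-1, 2, size=pos.shape)) % N   # random walk, wrapped edges
        alive = rng.random(len(pos)) > immunity[pos[:, 0], pos[:, 1]]
        pos = pos[alive]                                         # immunity-driven deaths
    ```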

  1. Region of validity of the finite-temperature Thomas-Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey; Levashov, Pavel

    We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high computational accuracy is achieved by using a special approach to the solution of the boundary problem and to numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.

  2. GPS/INS Sensor Fusion Using GPS Wind up Model

    NASA Technical Reports Server (NTRS)

    Williamson, Walton R. (Inventor)

    2013-01-01

    A method of stabilizing an inertial navigation system (INS) includes the steps of: receiving data from an inertial navigation system; and receiving a finite number of carrier phase observables using at least one GPS receiver from a plurality of GPS satellites; calculating a phase wind up correction; correcting at least one of the finite number of carrier phase observables using the phase wind up correction; and calculating a corrected IMU attitude or velocity or position using the corrected at least one of the finite number of carrier phase observables; and performing a step selected from the steps consisting of recording, reporting, or providing the corrected IMU attitude or velocity or position to another process that uses the corrected IMU attitude or velocity or position. A GPS stabilized inertial navigation system apparatus is also described.

  3. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by failure processes that allow a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
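
    A sketch of the kind of bathtub-shaped hazard invoked above, using the exponentiated Weibull member of the generalized Weibull family (for shapes with k < 1 and alpha*k > 1 the hazard first decreases, then rises again). The parameter values are illustrative, not fitted values from the paper.

    ```python
    import numpy as np

    def exp_weibull_hazard(t, lam=1.0, k=0.5, alpha=3.0):
        """Hazard f/(1-F) of the exponentiated Weibull distribution."""
        z = (t / lam) ** k
        F = (1.0 - np.exp(-z)) ** alpha                       # CDF = G(t)^alpha
        f = (alpha * k / lam) * (t / lam) ** (k - 1.0) * np.exp(-z) \
            * (1.0 - np.exp(-z)) ** (alpha - 1.0)             # density
        return f / (1.0 - F)

    t = np.linspace(0.01, 5.0, 500)
    h = exp_weibull_hazard(t)   # decreases (aftershocks), then rises again (foreshocks)
    ```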

  4. Ionospheric Correction of D-InSAR Using Split-Spectrum Technique and 3D Ionosphere Model in Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Guo, L.; Wu, J. J.; Chen, Q.; Song, S.

    2014-12-01

    In Differential Interferometric Synthetic Aperture Radar (D-InSAR), atmospheric effects, from both the troposphere and the ionosphere, are among the dominant sources of error in most interferograms and greatly reduce the accuracy of deformation monitoring. In recent years, tropospheric correction in InSAR data processing, especially of the zenith wet delay (ZWD), has been widely investigated and can be efficiently suppressed. We therefore focused our study on ionospheric correction using two different methods: the split-spectrum technique and the NeQuick model, a three-dimensional electron density model. We processed ALOS PALSAR images of Wenchuan and compared the InSAR surface deformation after ionospheric correction by the two approaches with ground GPS subsidence observations, in order to validate the split-spectrum method and the NeQuick model and to discuss the performance and feasibility of external data versus InSAR itself in eliminating the InSAR ionospheric effect.
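
    The split-spectrum technique rests on the fact that the interferometric phase splits into a non-dispersive part scaling with frequency and an ionospheric part scaling with its inverse, phi(f) = a*f + b/f. Sub-band interferograms at two center frequencies give two equations in (a, b). A sketch, with all inputs assumed precomputed and the numerical values purely illustrative:

    ```python
    def iono_phase(phi_low, phi_high, f_low, f_high, f0):
        """Dispersive (ionospheric) phase at the carrier f0 from two sub-band phases."""
        a = (f_low * phi_low - f_high * phi_high) / (f_low ** 2 - f_high ** 2)
        b = f_low * phi_low - a * f_low ** 2    # ionospheric coefficient: phi_iono = b / f
        return b / f0

    # L-band-like sub-bands (Hz) and phases (radians); values are illustrative
    print(iono_phase(phi_low=1.20, phi_high=1.00,
                     f_low=1.25e9, f_high=1.29e9, f0=1.27e9))
    ```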

  5. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models.

    PubMed

    Beard, Brian B; Kainz, Wolfgang

    2004-10-13

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.

  6. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    PubMed Central

    Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.

    2014-01-01

    We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488
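
    The SSP read rule is easy to state in code: a worker at clock c may serve a read from its local cache only if the cached copy is no older than c - s for staleness bound s; otherwise it must refresh from the server. A minimal sketch (class and method names are illustrative, not the paper's API):

    ```python
    class SSPCache:
        """Worker-side cache enforcing the Stale Synchronous Parallel read rule."""
        def __init__(self, server, staleness):
            self.server, self.s = server, staleness
            self.values, self.stamps = {}, {}

        def read(self, key, clock):
            # Serve from cache only if the copy is at most `s` clocks old.
            if self.stamps.get(key, -10**9) < clock - self.s:
                self.values[key] = self.server.fetch(key)   # blocking refresh
                self.stamps[key] = clock
            return self.values[key]
    ```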

  7. Parton distribution functions with QED corrections in the valon model

    NASA Astrophysics Data System (ADS)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD⊗QED DGLAP evolution equations in the framework of the "valon" model at the next-to-leading-order QCD and the leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the newly released CT14QED global fit code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] over a wide range of x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. In addition, we proposed a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  8. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models

    PubMed Central

    Beard, Brian B; Kainz, Wolfgang

    2004-01-01

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601

  9. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  10. Preclinical modeling highlights the therapeutic potential of hematopoietic stem cell gene editing for correction of SCID-X1.

    PubMed

    Schiroli, Giulia; Ferrari, Samuele; Conway, Anthony; Jacob, Aurelien; Capo, Valentina; Albano, Luisa; Plati, Tiziana; Castiello, Maria C; Sanvito, Francesca; Gennery, Andrew R; Bovolenta, Chiara; Palchaudhuri, Rahul; Scadden, David T; Holmes, Michael C; Villa, Anna; Sitia, Giovanni; Lombardo, Angelo; Genovese, Pietro; Naldini, Luigi

    2017-10-11

    Targeted genome editing in hematopoietic stem/progenitor cells (HSPCs) is an attractive strategy for treating immunohematological diseases. However, the limited efficiency of homology-directed editing in primitive HSPCs constrains the yield of corrected cells and might affect the feasibility and safety of clinical translation. These concerns need to be addressed in stringent preclinical models and overcome by developing more efficient editing methods. We generated a humanized X-linked severe combined immunodeficiency (SCID-X1) mouse model and evaluated the efficacy and safety of hematopoietic reconstitution from limited input of functional HSPCs, establishing thresholds for full correction upon different types of conditioning. Unexpectedly, conditioning before HSPC infusion was required to protect the mice from lymphoma developing when transplanting small numbers of progenitors. We then designed a one-size-fits-all IL2RG (interleukin-2 receptor common γ-chain) gene correction strategy and, using the same reagents suitable for correction of human HSPC, validated the edited human gene in the disease model in vivo, providing evidence of targeted gene editing in mouse HSPCs and demonstrating the functionality of the IL2RG-edited lymphoid progeny. Finally, we optimized editing reagents and protocol for human HSPCs and attained the threshold of IL2RG editing in long-term repopulating cells predicted to safely rescue the disease, using clinically relevant HSPC sources and highly specific zinc finger nucleases or CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 (CRISPR-associated protein 9). Overall, our work establishes the rationale and guiding principles for clinical translation of SCID-X1 gene editing and provides a framework for developing gene correction for other diseases. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. Higher-order corrections to the effective potential close to the jamming transition in the perceptron model

    NASA Astrophysics Data System (ADS)

    Altieri, Ada

    2018-01-01

    In view of the results achieved in a previous related work [A. Altieri, S. Franz, and G. Parisi, J. Stat. Mech. (2016) 093301], 10.1088/1742-5468/2016/09/093301, regarding a Plefka-like expansion of the free energy up to second order in the perceptron model, we improve the computation here by focusing on the role of third-order corrections. The perceptron model is a simple example of a constraint satisfaction problem, falling in the same universality class as hard spheres near jamming and hence allowing us to get exact results in high dimensions for more complex settings. Our method enables us to define an effective potential (or Thouless-Anderson-Palmer free energy), namely a coarse-grained functional that depends on the generalized forces and the effective gaps between particles. The analysis of the third-order corrections to the effective potential reveals that, albeit irrelevant in a mean-field framework in the thermodynamic limit, they might instead play a fundamental role when considering finite-size effects. We also study the typical behavior of the generalized forces and show that two kinds of corrections can occur. The first contribution arises because the system is analyzed at a finite distance from jamming, while the second is due to finite-size corrections. We nevertheless show that third-order corrections in the perturbative expansion vanish in the jamming limit, both for the potential and for the generalized forces, in agreement with the isostaticity argument proposed by Wyart and coworkers. Finally, we analyze the relevant scaling solutions emerging close to the jamming line, which define a crossover regime connecting the control parameters of the model to an effective temperature.

  12. Higher-order corrections to the effective potential close to the jamming transition in the perceptron model.

    PubMed

    Altieri, Ada

    2018-01-01

    In view of the results achieved in a previous related work [A. Altieri, S. Franz, and G. Parisi, J. Stat. Mech. (2016) 093301], 10.1088/1742-5468/2016/09/093301, regarding a Plefka-like expansion of the free energy up to second order in the perceptron model, we improve the computation here by focusing on the role of third-order corrections. The perceptron model is a simple example of a constraint satisfaction problem, falling in the same universality class as hard spheres near jamming and hence allowing us to get exact results in high dimensions for more complex settings. Our method enables us to define an effective potential (or Thouless-Anderson-Palmer free energy), namely a coarse-grained functional that depends on the generalized forces and the effective gaps between particles. The analysis of the third-order corrections to the effective potential reveals that, albeit irrelevant in a mean-field framework in the thermodynamic limit, they might instead play a fundamental role when considering finite-size effects. We also study the typical behavior of the generalized forces and show that two kinds of corrections can occur. The first contribution arises because the system is analyzed at a finite distance from jamming, while the second is due to finite-size corrections. We nevertheless show that third-order corrections in the perturbative expansion vanish in the jamming limit, both for the potential and for the generalized forces, in agreement with the isostaticity argument proposed by Wyart and coworkers. Finally, we analyze the relevant scaling solutions emerging close to the jamming line, which define a crossover regime connecting the control parameters of the model to an effective temperature.

  13. Primordial power spectrum features and consequences

    NASA Astrophysics Data System (ADS)

    Goswami, G.

    2014-03-01

    The present Cosmic Microwave Background (CMB) temperature and polarization anisotropy data are consistent not only with a power-law scalar primordial power spectrum (PPS) with a small running but also with a scalar PPS having very sharp features. This has motivated inflationary models with such sharp features. Recently, even the possibility of having nulls in the power spectrum (at certain scales) has been considered. The existence of these nulls has been shown in linear perturbation theory. What is the effect of higher-order corrections on such nulls? Inspired by this question, we have attempted to calculate quantum radiative corrections to the Fourier transform of the 2-point function in a toy field theory and address the issue of how these corrections to the power spectrum behave in models in which the tree-level power spectrum has a sharp dip (but not a null). In particular, we have considered the possibility of the relative enhancement of radiative corrections in a model in which the tree-level spectrum goes through a dip in power at a certain scale. The mode functions of the field (whose power spectrum is to be evaluated) are chosen such that they undergo the kind of dynamics that leads to a sharp dip in the tree-level power spectrum. Next, we have considered the situation in which this field has quartic self-interactions, and found the one-loop correction in a suitably chosen renormalization scheme. Thus, we have attempted to answer the following key question in the context of this toy model (which is as important in the realistic case): in the chosen renormalization scheme, can quantum radiative corrections be enhanced relative to the tree-level power spectrum at scales at which sharp dips appear in the tree-level spectrum?

  14. Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey

    2009-01-01

    This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…

  15. A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman

    2016-01-01

    The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…

  16. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS; Gneiting et al., 2005), and Bayesian Model Averaging (BMA; Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, a simple bias correction of the best model is commonly much more rewarding than using raw multimodel forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
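
    A minimal, least-squares flavored sketch of EMOS-style post-processing: regress observations on the ensemble members over a training window and fit a constant predictive variance from the residuals. Full EMOS (Gneiting et al., 2005) also links the variance to the ensemble spread and fits by minimum CRPS; that refinement is omitted here.

    ```python
    import numpy as np

    def emos_fit(ens_train, obs_train):
        """ens_train: (n_days, n_members) forecasts; obs_train: (n_days,) observations."""
        X = np.column_stack([np.ones(len(obs_train)), ens_train])
        beta, *_ = np.linalg.lstsq(X, obs_train, rcond=None)
        sigma2 = np.mean((obs_train - X @ beta) ** 2)   # constant predictive variance
        return beta, sigma2

    def emos_mean(ens_new, beta):
        """Calibrated forecast mean for one new day of ensemble values."""
        return np.concatenate([[1.0], ens_new]) @ beta
    ```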

  17. Temperature Data Assimilation with Salinity Corrections: Validation for the NSIPP Ocean Data Assimilation System in the Tropical Pacific Ocean, 1993-1998

    NASA Technical Reports Server (NTRS)

    Troccoli, Alberto; Rienecker, Michele M.; Keppenne, Christian L.; Johnson, Gregory C.

    2003-01-01

    The NASA Seasonal-to-Interannual Prediction Project (NSIPP) has developed an ocean data assimilation system to initialize the quasi-isopycnal ocean model used in our experimental coupled-model forecast system. Initial tests of the system have focused on the assimilation of temperature profiles in an optimal interpolation framework. It is now recognized that correcting temperature only often introduces spurious water masses. The resulting density distribution can be statically unstable and can also have a detrimental impact on the velocity distribution. Several simple schemes have been developed to try to correct these deficiencies. Here the salinity field is corrected by a scheme which assumes that the temperature-salinity relationship of the model background is preserved during the assimilation. The scheme was first introduced for a z-level model by Troccoli and Haines (1999). A large set of subsurface observations of salinity and temperature is used to cross-validate two data assimilation experiments run for the 6-year period 1993-1998. In these two experiments only subsurface temperature observations are used, but in one case the salinity field is also updated whenever temperature observations are available.
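
    A sketch of the water-mass-preserving idea: convert the assimilation's temperature increment into a salinity increment through the local dS/dT slope of the background profiles, so the background T-S relationship is retained (after Troccoli and Haines, 1999; the guard against near-isothermal layers is a practical assumption, not part of the cited scheme).

    ```python
    import numpy as np

    def salinity_increment(T_bg, S_bg, dT):
        """Salinity increment implied by the background T-S relationship."""
        dT_dz = np.gradient(T_bg)
        dT_dz = np.where(np.abs(dT_dz) < 1e-3, np.nan, dT_dz)  # skip near-isothermal layers
        dS_dT = np.gradient(S_bg) / dT_dz
        return np.nan_to_num(dS_dT * dT)                        # zero increment where undefined
    ```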

  18. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
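
    The key qualitative point is reproducible in a few lines at the single-qubit level (no encoding): a systematic rotation by ε composes coherently over n cycles, so the flip probability grows like sin²(nε/2) ≈ (nε/2)², while the Pauli-twirled channel accumulates only linearly. A sketch:

    ```python
    import numpy as np

    eps, n = 0.01, np.arange(1, 2001)
    p_coherent = np.sin(n * eps / 2.0) ** 2              # composed rotation exp(-i n eps X / 2)
    p_flip = np.sin(eps / 2.0) ** 2                      # per-cycle Pauli (twirled) flip rate
    p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p_flip) ** n)    # n-fold composition of the flip channel
    # p_coherent grows ~ n^2 while p_pauli grows ~ n: the Pauli model
    # underestimates the accumulated error of a systematic rotation.
    ```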

  19. BPM CALIBRATION INDEPENDENT LHC OPTICS CORRECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calaga, R.; Tomas, R.; Giovannozzi, M.

    2007-06-25

    The tight mechanical aperture of the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for the operation of high-intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. The correction studies reveal that BPM-calibration-independent and model-independent observables are key ingredients for accomplishing optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.

  20. Correcting Satellite Image Derived Surface Model for Atmospheric Effects

    NASA Technical Reports Server (NTRS)

    Emery, William; Baldwin, Daniel

    1998-01-01

    This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite from a series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the short-wave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal, since the variations are on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including short-wave radiation through thin cirrus clouds. The original proposal was written for a three-year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S short-wave radiation models, developed at NASA/Goddard and tested extensively with data from the AVHRR instrument. Although the 6S model was a successor to the 5S and slightly more advanced, the 5S was selected because outputs from the individual components comprising the short-wave radiation budget were more easily separated. The separation was necessary since both the 5S and 6S did not include geometrical corrections for terrain, a fundamental constituent of the BMPE algorithm. The 5S correction code was incorporated into the BMPE algorithm and many sensitivity studies were performed.

  1. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimate of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
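
    An illustrative two-stage fit in the spirit described above: estimate the model weight W from wind-off balance forces at several pitch angles, then reuse the estimate in the fit of the center-of-gravity coordinates from the pitching moment. The axis conventions and moment equation are simplified assumptions, not the paper's exact equations, and the data are synthetic.

    ```python
    import numpy as np

    theta = np.radians([-10.0, -5.0, 0.0, 5.0, 10.0])        # wind-off pitch angles
    W_true, x_cg_true, z_cg_true = 100.0, 0.020, 0.005       # synthetic "truth"
    FA = -W_true * np.sin(theta)                             # axial force readings
    FN = W_true * np.cos(theta)                              # normal force readings
    MY = W_true * (x_cg_true * np.cos(theta) + z_cg_true * np.sin(theta))  # pitching moment

    # Stage 1: scalar least squares for W from [FA; FN] = W * [-sin(theta); cos(theta)]
    g = np.concatenate([-np.sin(theta), np.cos(theta)])
    W_hat = g @ np.concatenate([FA, FN]) / (g @ g)

    # Stage 2: least squares for (x_cg, z_cg) using W_hat as a fixed input
    X = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
    (x_cg, z_cg), *_ = np.linalg.lstsq(X, MY, rcond=None)
    ```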

  2. Calibrating the Griggs Apparatus Using Experiments Performed at the Quartz-Coesite Transition

    NASA Astrophysics Data System (ADS)

    Heilbronner, R.; Stunitz, H.; Richter, B.

    2015-12-01

    The Griggs deformation apparatus is increasingly used for shear experiments. The tested material is placed on a 45° pre-cut between two forcing blocks. During the experiment, the axial displacement, load, temperature, and confining pressure are recorded as a function of time. From these records, stress, strain, and other mechanical data can be calculated - provided the machine is calibrated. Experimentalists are well aware that calibrating a Griggs apparatus is not easy. The stiffness correction accounts for the elastic extension of the rig as load is applied to the sample. An 'area correction' accounts for the decreasing overlap of the forcing blocks as slip along the pre-cut progresses. Other corrections are sometimes used to account for machine specific behaviour. While the rig stiffness can be measured very accurately, the area correction involves model assumptions. Depending on the choice of the model, the calculated stresses may vary by as much as 100 MPa. Also, while the assumptions appear to be theoretically valid, in practice they tend to over-correct the data, yielding strain hardening curves even in cases where constant flow stress or weakening is expected. Using the results of experiments on quartz gouge at the quartz-coesite transition (see Richter et al. this conference), we are now able to improve and constrain our corrections. We introduce an elastic salt correction based on the assumption that the confining pressure is increased as the piston advances and reduces the volume in the confining medium. As the compressibility of salt is low, the correction is significant and increases with strain. Applying this correction, the strain hardening artefact introduced by the area correction can be counter-balanced. Using a combination of area correction and salt correction we can now reproduce strain weakening, for which there is evidence in samples where coesite transforms back to quartz.
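
    The 'area correction' mentioned above can be made concrete: for a 45° pre-cut in a cylindrical sample of radius r, the contact surface is an ellipse with semi-axes r*sqrt(2) and r, and slip s along the cut leaves only the overlap of the two shifted faces to carry load. A purely geometric, grid-based sketch (the paper's salt correction is a separate, elastic effect):

    ```python
    import numpy as np

    def overlap_area(r, s, n=801):
        """Load-bearing overlap of the two 45-degree cut faces after slip s."""
        a, b = r * np.sqrt(2.0), r                      # ellipse semi-axes along / across dip
        x = np.linspace(-a, a + s, n)                   # cover both shifted ellipses
        y = np.linspace(-b, b, n)
        X, Y = np.meshgrid(x, y)
        in1 = (X / a) ** 2 + (Y / b) ** 2 <= 1.0        # fixed face
        in2 = ((X - s) / a) ** 2 + (Y / b) ** 2 <= 1.0  # face displaced by s along the cut
        cell = (x[1] - x[0]) * (y[1] - y[0])
        return np.count_nonzero(in1 & in2) * cell

    corrected_stress = lambda load, r, s: load / overlap_area(r, s)
    print(overlap_area(5.0, 0.0) / (np.pi * 5.0 * 5.0 * np.sqrt(2.0)))  # ~1 at zero slip
    ```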

  3. Simulating an underwater vehicle self-correcting guidance system with Simulink

    NASA Astrophysics Data System (ADS)

    Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe

    2008-09-01

    Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means to analyze the simulation results. A simulation structure based on Simulink that deals with both issues is proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed and encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML and, by processing the numerical results, the model was shown moving in a visual environment, giving more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.

  4. A method to preserve trends in quantile mapping bias correction of climate modeled temperature

    NASA Astrophysics Data System (ADS)

    Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-09-01

    Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized component and a residual component relative to the modeled reference-period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows the preservation of the originally modeled long-term signal in the mean, the standard deviation, and the higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).
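
    A stripped-down sketch of the trend-preserving idea (not the paper's full algorithm): remove the modeled long-term signal, quantile-map the remainder against reference-period observations, then add the modeled signal back unchanged.

    ```python
    import numpy as np

    def qm(x, model_ref, obs_ref):
        """Empirical quantile mapping trained on the reference period."""
        q = np.linspace(0.0, 100.0, 101)
        return np.interp(x, np.percentile(model_ref, q), np.percentile(obs_ref, q))

    def correct_preserving_signal(t_future, t_model_ref, t_obs_ref):
        signal = t_future.mean() - t_model_ref.mean()       # modeled long-term change
        return qm(t_future - signal, t_model_ref, t_obs_ref) + signal
    ```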

  5. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs to hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations exactly. Identifying local climate scenarios for impact analysis implies defining more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods, we consider the statistical approach of Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields, the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed, based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistence of daily observations for the reference period. The PRP parameters are then forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity, and duration needed to apply the PRP scheme are considered among those in the STARDEX collection of extreme indices.
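
    The PRP scheme itself is compact: Poisson storm arrivals, each a rectangular pulse with exponentially distributed duration and intensity, aggregated here to a daily series. The parameter values are illustrative; in the method above they come from the Q-Q-corrected model output.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    LAM, MU_D, MU_I, DAYS = 0.01, 6.0, 2.0, 365   # storms/hour, mean hours, mean mm/h, horizon

    hours = DAYS * 24
    rain = np.zeros(hours)
    t = rng.exponential(1.0 / LAM)                # first Poisson arrival
    while t < hours:
        dur = rng.exponential(MU_D)               # rectangular pulse duration
        inten = rng.exponential(MU_I)             # rectangular pulse intensity
        i0, i1 = int(t), min(hours, int(np.ceil(t + dur)))
        rain[i0:i1] += inten
        t += rng.exponential(1.0 / LAM)           # next inter-arrival time
    daily = rain.reshape(DAYS, 24).sum(axis=1)    # aggregate to the daily series
    ```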

  6. Surface-wave amplitude analysis for array data with non-linear waveform fitting: Toward high-resolution attenuation models of the upper mantle

    NASA Astrophysics Data System (ADS)

    Hamada, K.; Yoshizawa, K.

    2013-12-01

    Anelastic attenuation of seismic waves provides valuable information on temperature and water content in the Earth's mantle. While seismic velocity models have been investigated by many researchers, anelastic attenuation (or Q) models have yet to be investigated in detail, mainly due to the intrinsic difficulties and uncertainties in the amplitude analysis of observed seismic waveforms. To increase the horizontal resolution of surface wave attenuation models on a regional scale, we have developed a new method of fully non-linear waveform fitting to measure inter-station phase velocities and amplitude ratios simultaneously, using the Neighborhood Algorithm (NA) as a global optimizer. The model parameter space (perturbations of phase speed and amplitude ratio) is explored to fit two observed waveforms on a common great-circle path by perturbing both the phase and the amplitude of the fundamental-mode surface waves. This method has been applied to USArray waveform data from 2007 to 2008, and a large number of inter-station amplitude and phase-speed measurements have been collected in a period range from 20 to 200 seconds. We have constructed preliminary phase speed and attenuation models using the observed phase and amplitude data, with careful consideration of the effects of elastic focusing and of station correction factors for the amplitude data. The phase velocity models correlate well with conventional tomographic results for North America on a large scale, e.g., a significant slow-velocity anomaly in the volcanic regions of the western United States. The preliminary surface-wave attenuation results achieved a better variance reduction when the amplitude data were inverted for attenuation models in conjunction with corrections for receiver factors. We have also taken into account the amplitude correction for elastic focusing based on geometrical ray theory, but its effect on the final model is somewhat limited, and our attenuation model shows anti-correlation with the phase velocity models; i.e., lower attenuation is found in slower-velocity areas, which cannot readily be explained by temperature effects alone. Some earlier global-scale studies (e.g., Dalton et al., JGR, 2006) indicated that ray-theoretical focusing corrections on amplitude data tend to eliminate such anti-correlation of phase speed and attenuation, but this seems not to work sufficiently well for our regional-scale model, which is affected by stronger velocity gradients than global-scale models. Thus, the elastic focusing effects estimated from ray theory may be underestimated in our regional-scale studies. More rigorous ways of estimating the focusing corrections, as well as data selection criteria for amplitude measurements, are required to achieve high-resolution attenuation models on regional scales in the future.

  7. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (loess smoother) for adjusting the QT interval for differences in heart rate, with improved fit over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious, non-treated beagle dogs were used as the study material. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction formulas) with reference to Akaike's Information Criterion (AIC). Residuals were visually assessed. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Whereas the parametric models fitted poorly, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is more flexible than the parametric methods. The mathematical fit of the linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
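
    For reference, the two parametric corrections compared above, together with a loess fit of a synthetic QT-RR cloud; the data and smoothing fraction are illustrative, not the study's measurements.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    bazett = lambda qt, rr: qt / np.sqrt(rr)          # QTc = QT / RR^(1/2), RR in s
    fridericia = lambda qt, rr: qt / rr ** (1.0 / 3)  # QTc = QT / RR^(1/3)

    rng = np.random.default_rng(2)
    rr = rng.uniform(0.4, 1.2, 240)                      # 240 beats, RR in seconds
    qt = 250.0 * rr ** 0.35 + rng.normal(0.0, 5.0, 240)  # synthetic QT (ms) vs RR
    fit = lowess(qt, rr, frac=0.5)                       # columns: sorted RR, smoothed QT
    ```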

  8. Skin Friction and Transition Location Measurement on Supersonic Transport Models

    NASA Technical Reports Server (NTRS)

    Kennelly, Robert A., Jr.; Goodsell, Aga M.; Olsen, Lawrence E. (Technical Monitor)

    2000-01-01

    Flow visualization techniques were used to obtain both qualitative and quantitative skin friction and transition location data in wind tunnel tests performed on two supersonic transport models at Mach 2.40. Oil-film interferometry was useful for verifying boundary layer transition, but careful monitoring of model surface temperatures and systematic examination of the effects of tunnel start-up and shutdown transients will be required to achieve high levels of accuracy for skin friction measurements. A more common technique, use of a subliming solid to reveal transition location, was employed to correct drag measurements to a standard condition of all-turbulent flow on the wing. These corrected data were then analyzed to determine the additional correction required to account for the effect of the boundary layer trip devices.

  9. A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less)

    NASA Astrophysics Data System (ADS)

    Brell, Courtney G.

    2016-01-01

    We propose a family of local CSS stabilizer codes as possible candidates for self-correcting quantum memories in 3D. The construction is inspired by the classical Ising model on a Sierpinski carpet fractal, which acts as a classical self-correcting memory. Our models are naturally defined on fractal subsets of a 4D hypercubic lattice with Hausdorff dimension less than 3. Though this does not imply that these models can be realized with local interactions in ℝ³, we also discuss this possibility. The X and Z sectors of the code are dual to one another, and we show that there exists a finite-temperature phase transition associated with each of these sectors, providing evidence that the system may robustly store quantum information at finite temperature.

  10. An experimental study of wall adaptation and interference assessment using Cauchy integral formula

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1991-01-01

    This paper summarizes the results of an experimental study of combined wall adaptation and residual interference assessment using the Cauchy integral formula. The experiments were conducted on a supercritical airfoil model in the solid flexible-wall test section of the Langley 0.3-m Transonic Cryogenic Tunnel. The ratio of model chord to test-section height was about 0.7. The method worked satisfactorily in reducing the blockage interference and demonstrated that correcting for blockage effects at high model incidences is essential for determining high-lift characteristics correctly. The studies show that the method has the potential to reduce the residual interference to considerably low levels. However, corrections to the blockage and upwash velocity gradients may still be required for the final adapted wall shapes.

  11. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  12. Modeling the physisorption of graphene on metals

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Tang, Hong; Patra, Abhirup; Bhattarai, Puskar; Perdew, John P.

    2018-04-01

    Many processes of technological and fundamental importance occur on surfaces, and adsorption is among the most studied of these phenomena. However, it presents a great challenge to conventional density functional theory. Starting from Lifshitz-Zaremba-Kohn second-order perturbation theory, here we develop a long-range van der Waals (vdW) correction for the physisorption of graphene on metals. Importantly, the model includes the quadrupole-surface interaction and screening effects. The results show that, when the vdW correction is combined with the Perdew-Burke-Ernzerhof functional, it yields adsorption energies in good agreement with the random-phase approximation, significantly improving upon other vdW methods. We also find that, compared with the leading-order interaction, the higher-order quadrupole-surface correction accounts for about 25% of the total vdW correction, underscoring the importance of the higher-order term.

  13. Probing Planckian Corrections at the Horizon Scale with LISA Binaries

    NASA Astrophysics Data System (ADS)

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-01

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  14. Reply to "Comment on `Simple improvements to classical bubble nucleation models'"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kyoko K.; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2016-08-01

    We reply to the Comment by Schmelzer and Baidakov [Phys. Rev. E 94, 026801 (2016), 10.1103/PhysRevE.94.026801]. They suggest that a more modern approach than the classic description by Tolman is necessary to model the surface tension of curved interfaces. Therefore, we now consider the higher-order Helfrich correction rather than the simpler first-order Tolman correction. Using a recent parametrization of the Helfrich correction provided by Wilhelmsen et al. [J. Chem. Phys. 142, 064706 (2015), 10.1063/1.4907588], we test this description against measurements from our simulations and find better agreement than the pure Tolman description offers. Our analyses suggest that corrections of order higher than second are necessary for small bubbles with radius ≲1 nm. In addition, we respond to other minor criticism of our results.

  15. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents a numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids obeying the power-law constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation can lead to large errors, which depend on the magnitudes of the kinetic energy and friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds numbers.
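
    As a worked illustration of what the correction factor does (using one common convention, not necessarily the paper's definition), the impact-minus-static pressure can be written dp = C·ρ·v²/2, with C → 1 in the inviscid Bernoulli limit and C departing from 1 at low Reynolds number; the sketch below simply inverts this relation for hypothetical values of C.

    ```python
    # Sketch: recover velocity from a Pitot reading with a correction factor C,
    # defined here via dp = C * rho * v**2 / 2 (one common convention).
    import numpy as np

    def velocity_from_pitot(dp, rho, C=1.0):
        """Invert dp = C * rho * v^2 / 2 for v."""
        return np.sqrt(2.0 * dp / (C * rho))

    rho = 1000.0   # kg/m^3
    dp = 40.0      # Pa, measured impact-minus-static pressure
    for C in (1.00, 1.15, 1.30):   # hypothetical low-Re correction factors
        print(f"C = {C:.2f} -> v = {velocity_from_pitot(dp, rho, C):.4f} m/s")
    ```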

  16. Probing Planckian Corrections at the Horizon Scale with LISA Binaries.

    PubMed

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-23

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  17. A new dynamical downscaling approach with GCM bias corrections and spectral nudging

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Yang, Zong-Liang

    2015-04-01

    To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31-year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first except that spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging, where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to the other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of the GCM bias corrections throughout the RCM domain rather than limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both GCM and RCM biases.
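
    The mean-and-variance adjustment described above reduces, per variable and climatological period, to rescaling the GCM series toward the reanalysis climatology. A minimal sketch with synthetic data (in practice the adjustment is applied by calendar month and should preserve the GCM's long-term trend):

    ```python
    # Rescale a GCM series so its climatological mean and variance match
    # a reanalysis series (core arithmetic of the bias correction above).
    import numpy as np

    def correct_gcm(gcm, reanalysis):
        """Return GCM values adjusted to reanalysis mean/variance."""
        return (gcm - gcm.mean()) / gcm.std() * reanalysis.std() + reanalysis.mean()

    rng = np.random.default_rng(1)
    rea = rng.normal(288.0, 5.0, 31 * 365)   # reanalysis temperature (K)
    gcm = rng.normal(290.5, 7.0, 31 * 365)   # biased GCM temperature (K)
    adj = correct_gcm(gcm, rea)
    print(f"before: mean={gcm.mean():.2f}, std={gcm.std():.2f}")
    print(f"after:  mean={adj.mean():.2f}, std={adj.std():.2f}")
    ```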

  18. ECHO: A reference-free short-read error correction algorithm

    PubMed Central

    Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.

    2011-01-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need for a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO improves on the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. It is also shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625

  19. Adaptive topographic mass correction for satellite gravity and gravity gradient data

    NASA Astrophysics Data System (ADS)

    Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen

    2014-05-01

    Subsurface modelling with gravity data requires a reliable topographic mass correction, a mandatory step that has been standard practice for decades. However, the original methods were developed for local terrestrial surveys and therefore often include defaults such as a correction area limited to 167 km around an observation point, resampling of topography depending on the distance to the station, or disregard of the curvature of the Earth. New satellite gravity data (e.g., GOCE) can be used for large-scale lithospheric modelling with gravity data: the investigation areas can span thousands of kilometres, and the measurements are located at the flight height of the satellite (e.g., ~250 km for GOCE). The standard definition of the correction area and of the grid spacing around an observation point was not developed for stations at these heights or areas of these dimensions, which calls for a re-evaluation of the defaults used for topographic correction. We have developed an algorithm that resamples the topography using an adaptive approach: instead of resampling topography depending on the distance to the station, the grids are resampled depending on their influence at the station. The only value the user has to define is the desired accuracy of the topographic correction; it is not necessary to define a grid spacing or a limited correction area. Furthermore, the algorithm calculates the topographic mass response with spherically shaped polyhedral bodies. We show examples for local and global gravity datasets, compare the results of the topographic mass correction to existing approaches, and provide suggestions on how satellite gravity and gradient data should be corrected.
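
    The adaptive idea can be sketched as a recursive refinement: approximate each topography tile by a point mass and subdivide it only while refinement still changes its contribution at the station by more than a user-set tolerance. The sketch below is a toy flat-Earth version (the actual algorithm uses spherically shaped polyhedral bodies), and all numbers are illustrative.

    ```python
    # Adaptive topographic correction sketch: point-mass tiles, refined only
    # where subdivision changes the station value by more than 'tol' (per tile).
    import numpy as np

    G = 6.674e-11  # m^3 kg^-1 s^-2

    def gz_tile(x0, x1, y0, y1, h, rho, station):
        """Downward attraction of one tile, point-mass approximation."""
        mass = rho * (x1 - x0) * (y1 - y0) * h
        centre = np.array([(x0 + x1) / 2, (y0 + y1) / 2, h / 2])
        d = station - centre
        r = np.linalg.norm(d)
        return G * mass * d[2] / r**3   # positive when mass is below station

    def adaptive_gz(x0, x1, y0, y1, h, rho, station, tol):
        coarse = gz_tile(x0, x1, y0, y1, h, rho, station)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [(x0, xm, y0, ym), (xm, x1, y0, ym),
                 (x0, xm, ym, y1), (xm, x1, ym, y1)]
        fine = sum(gz_tile(*q, h, rho, station) for q in quads)
        if abs(fine - coarse) < tol:
            return fine
        return sum(adaptive_gz(*q, h, rho, station, tol) for q in quads)

    # 100 km x 100 km, 1 km thick slab of 2670 kg/m^3; station 10 km up
    station = np.array([0.0, 0.0, 10e3])
    gz = adaptive_gz(-50e3, 50e3, -50e3, 50e3, 1000.0, 2670.0, station, 1e-8)
    print(f"{gz / 1e-5:.2f} mGal")   # close to the infinite-slab ~112 mGal
    ```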

  20. Role Models without Guarantees: Corrective Representations and the Cultural Politics of a Latino Male Teacher in the Borderlands

    ERIC Educational Resources Information Center

    Singh, Michael V.

    2018-01-01

    In recent years mentorship has become a popular 'solution' for struggling boys of color and has led to the recruitment of more male of color teachers. While not arguing against the merits of mentorship, this article critiques what the author deems 'corrective representations.' Corrective representations are the imagined embodiment of proper and…

  1. An "Extended Care" Community Corrections Model for Seriously Mentally Ill Offenders

    ERIC Educational Resources Information Center

    Sabbatine, Raymond

    2007-01-01

    Our system fills every bed before it is built. Supply and demand have never had a better relationship. We need to refocus our public policy upon a correctional system that heals itself by helping those it serves to heal themselves through service to others even less fortunate. Let us term this new paradigm a "Correctional Cooperative," where…

  2. Item Reliabilities for a Family of Answer-Until-Correct (AUC) Scoring Rules.

    ERIC Educational Resources Information Center

    Kane, Michael T.; Moloney, James M.

    The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…

  3. A Study of IR Loss Correction Methodologies for Commercially Available Pyranometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Chuck; Andreas, Afshin; Augustine, John

    2017-03-24

    This presentation provides a high-level overview of a study of IR loss correction methodologies for commercially available pyranometers. The IR Loss Correction Study investigates how various correction methodologies perform for several makes and models of commercially available pyranometers in common use, both when operated in ventilators with DC fans and without ventilators, as they are typically calibrated.

  4. Study of the Influence of Age in 18F-FDG PET Images Using a Data-Driven Approach and Its Evaluation in Alzheimer's Disease.

    PubMed

    Jiang, Jiehui; Sun, Yiwu; Zhou, Hucheng; Li, Shaoping; Huang, Zhemin; Wu, Ping; Shi, Kuangyu; Zuo, Chuantao; Alzheimer's Disease Neuroimaging Initiative

    2018-01-01

    The 18F-FDG PET scan is one of the most frequently used neural imaging scans. However, the influence of age has proven to be the greatest interfering factor in many clinical dementia diagnoses based on 18F-FDG PET images, since radiologists encounter difficulties when deciding whether the abnormalities in specific regions correlate with normal aging, disease, or both. In the present paper, the authors aimed to define specific brain regions and determine an age-correction mathematical model. A data-driven approach was used based on 255 healthy subjects. The inferior frontal gyrus, the left medial part and the left medial orbital part of the superior frontal gyrus, the right insula, the left anterior cingulate, the left median cingulate and paracingulate gyri, and the bilateral superior temporal gyri were found to have a strong negative correlation with age. For evaluation, the age-correction model was applied to 262 healthy subjects and 50 AD subjects selected from the ADNI database, and partial correlations between mean SUVR and three clinical measures were computed before and after age correction. All correlation coefficients improved significantly after the age correction. The proposed model was effective in the age correction of both healthy and AD subjects.
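
    A minimal sketch of a regression-based age correction of this kind (synthetic data; the slope, reference age, and region-averaging are hypothetical, and the paper's model is region-specific):

    ```python
    # Learn the SUVR-age trend in healthy controls, then remove the age
    # component relative to a reference age.
    import numpy as np

    rng = np.random.default_rng(2)
    age_hc = rng.uniform(40, 85, 255)
    suvr_hc = 1.30 - 0.004 * age_hc + rng.normal(0, 0.03, age_hc.size)  # synthetic

    slope, intercept = np.polyfit(age_hc, suvr_hc, 1)   # negative slope with age

    def age_correct(suvr, age, ref_age=65.0):
        """Shift SUVR to the value expected at ref_age along the control trend."""
        return suvr - slope * (age - ref_age)

    print(f"slope = {slope:.5f} SUVR/yr")
    print(age_correct(np.array([1.05, 1.10]), np.array([80.0, 50.0])))
    ```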

  5. Phase I Flow and Transport Model Document for Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada National Security Site, Nye County, Nevada, Revision 1 with ROTCs 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Robert

    The Underground Test Area (UGTA) Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, in the northeast part of the Nevada National Security Site (NNSS), requires environmental corrective action activities to assess contamination resulting from underground nuclear testing. These activities are necessary to comply with the UGTA corrective action strategy (referred to as the UGTA strategy). The corrective action investigation phase of the UGTA strategy requires the development of groundwater flow and contaminant transport models whose purpose is to identify the lateral and vertical extent of contaminant migration over the next 1,000 years. In particular, the goal is to calculate the contaminant boundary, which is defined as a probabilistic model-forecast perimeter and a lower hydrostratigraphic unit (HSU) boundary that delineate the possible extent of radionuclide-contaminated groundwater from underground nuclear testing. Because of structural uncertainty in the contaminant boundary, a range of potential contaminant boundaries was forecast, resulting in an ensemble of contaminant boundaries. The contaminant boundary extent is determined by the volume of groundwater that has at least a 5 percent chance of exceeding the radiological standards of the Safe Drinking Water Act (SDWA) (CFR, 2012).

  6. Study of the integration of wind tunnel and computational methods for aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Browne, Lindsey E.; Ashby, Dale L.

    1989-01-01

    A study was conducted to determine the effectiveness of using a low-order panel code to estimate wind tunnel wall corrections. The corrections were found by two computations: the first included the test model and the surrounding wind tunnel walls, while in the second the wind tunnel walls were removed. The difference between the force and moment coefficients obtained in these two cases allowed the determination of the wall corrections. The technique was verified by matching the test-section wall-pressure signature from a wind tunnel test with the signature predicted by the panel code. To prove the viability of the technique, two cases were considered. The first was a two-dimensional high-lift wing with a flap that was tested in the 7- by 10-foot wind tunnel at NASA Ames Research Center. The second was a 1/32-scale model of the F/A-18 aircraft, which was tested in the low-speed wind tunnel at San Diego State University. The panel code used was PMARC (Panel Method Ames Research Center). Results of this study indicate that the proposed wind tunnel wall correction method is comparable to other methods and that it inherently includes the corrections due to model blockage and wing lift.

  7. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems widely used in telecommunication networks. They operate at frequencies where the transmitted power is mainly disturbed by precipitation, so signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising since MWLs cover about 20% of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to the additional attenuation caused by antenna wetting. To reduce this wet antenna effect (WAE), both empirically and physically based models have been suggested for correcting MWL observations a posteriori. However, it is challenging to calibrate these models, because the wet-antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep the antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation; the theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded period, the total bias caused by the WAE was 0.74 dB, which shielding reduced to 0.39 dB for the horizontal polarization (vertical: from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective, as it reduced the bias of the unshielded periods to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to the shielded periods reduces the bias even more, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Further, the model-based corrections do not correctly capture the nature of the WAE but more likely provide only an empirical correction. This claim is supported by the fact that a detailed analysis of particular events reveals that the performance of both antenna shielding and model-based correction differs substantially from event to event. Further investigations based on direct observation of antenna wetting and other environmental variables are needed to identify the nature of the attenuation bias more properly. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., 10.1109/LGRS.2012.2236074.
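
    A toy sketch of the quantity analysed above: the wet-antenna attenuation is estimated as measured path attenuation minus the theoretical attenuation computed from the disdrometer drop size distributions, and a crude constant-offset correction is then applied. This illustrative arithmetic is far simpler than the dynamic model of Schleiss et al. (2013), and all values below are made up.

    ```python
    # WAE = measured attenuation - DSD-derived theoretical attenuation,
    # followed by a crude constant-offset correction (illustrative only).
    import numpy as np

    a_measured = np.array([0.0, 1.2, 3.5, 6.1, 2.0])   # dB, from MWL power levels
    a_theory   = np.array([0.0, 0.6, 2.8, 5.5, 1.5])   # dB, from disdrometer DSDs

    wae = a_measured - a_theory
    print("per-interval WAE estimate (dB):", wae)

    # Correct new measurements by subtracting a capped wet-antenna offset.
    offset = np.median(wae[a_theory > 0])              # crude calibration
    a_corrected = np.maximum(a_measured - offset, 0.0)
    print("corrected attenuation (dB):", a_corrected)
    ```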

  8. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    NASA Astrophysics Data System (ADS)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed to obtain the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space-target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized to calculate the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than the traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
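
    The dual-frequency principle admits a short worked example. To first order, the ionosphere adds a group-range error of about 40.3·TEC/f² (SI units), so two range measurements of the same target at adjacent frequencies determine the TEC along the path and hence the corrected range. The frequencies and values below are illustrative, not those of the P-band experiment.

    ```python
    # First-order dual-frequency ionospheric correction:
    #   r_i = R + K * TEC / f_i**2  ->  solve for TEC and R from (r1, r2).
    K = 40.3  # m^3 s^-2

    def correct_range(r1, f1, r2, f2):
        """Return (TEC along path, ionosphere-free range)."""
        tec = (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
        return tec, r1 - K * tec / f1**2

    # Synthetic check: true range 1000 km, TEC = 5e17 el/m^2
    f1, f2 = 430e6, 450e6
    R, tec_true = 1.0e6, 5.0e17
    r1 = R + K * tec_true / f1**2
    r2 = R + K * tec_true / f2**2
    tec, r0 = correct_range(r1, f1, r2, f2)
    print(f"TEC = {tec:.3e} el/m^2, corrected range error = {r0 - R:.3e} m")
    ```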

  9. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they allow only global correction. The retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that needs only raw data as input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D; in vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves image quality, and our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction for nonrigid motion without guidance from external motion sensors.

  10. Compensation of long-range process effects on photomasks by design data correction

    NASA Astrophysics Data System (ADS)

    Schneider, Jens; Bloecker, Martin; Ballhorn, Gerd; Belic, Nikola; Eisenmann, Hans; Keogan, Danny

    2002-12-01

    CD requirements for advanced photomasks are becoming very demanding at the 100-nm node and below; the ITRS roadmap requires CD uniformities below 10 nm for the most critical layers. To reach this goal, statistical as well as systematic CD contributions must be minimized. Here, we focus on the reduction of systematic CD variations across the mask that may be caused by process effects, e.g., dry-etch loading. We address this topic by compensating for such effects via design data correction, analogous to proximity correction. Dry-etch loading is modeled by Gaussian convolution of pattern densities, and data correction is done geometrically by edge shifting. As the effect amplitude is on the order of 10 nm, this can only be done on e-beam writers with small address grids, to avoid large CD steps in the design data. We present modeling and correction results for special mask patterns with very strong pattern-density variations, showing that the compensation method is able to reduce CD nonuniformity by 50-70%, depending on pattern details. The data correction itself is done with a new module developed especially to compensate long-range effects, which fits nicely into the common data-flow environment.
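
    The modeling step described above, Gaussian convolution of pattern density followed by a geometric edge bias, can be sketched as follows (the gain k and the interaction range are hypothetical; real flows calibrate them against measured CD data):

    ```python
    # Model long-range etch loading as a Gaussian convolution of pattern
    # density, then convert the local loading deviation into an edge bias.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    density = np.zeros((512, 512))
    density[100:200, 100:400] = 1.0        # dense block
    density[350:360, 250:260] = 1.0        # isolated feature

    loading = gaussian_filter(density, sigma=50)    # smoothed pattern density

    k = 10.0                                        # nm per unit loading (hypothetical)
    edge_bias_nm = -k * (loading - loading.mean())  # pre-compensation per location
    print(edge_bias_nm.min(), edge_bias_nm.max())
    ```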

  11. Topside correction of IRI by global modeling of ionospheric scale height using COSMIC radio occultation data

    NASA Astrophysics Data System (ADS)

    Wu, M. J.; Guo, P.; Fu, N. F.; Xu, T. L.; Xu, X. S.; Jin, H. L.; Hu, X. G.

    2016-06-01

    The ionospheric scale height is one of the most significant ionospheric parameters, containing information about the ion and electron temperatures and the dynamics of the upper ionosphere. In this paper, an empirical orthogonal function (EOF) analysis method is applied to all the ionospheric radio occultations of GPS/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) from 2007 to 2011 to reconstruct a global ionospheric scale height model. This monthly median model has a spatial resolution of 5° in geomagnetic latitude (-87.5° to 87.5°) and a temporal resolution of 2 h in local time. The EOF analysis preserves the characteristics of the scale height quite well in its geomagnetic latitudinal, annual, seasonal, and diurnal variations. In comparison with COSMIC measurements from 2012, the reconstructed model shows reasonable accuracy. To improve the topside model of the International Reference Ionosphere (IRI), we adopted the scale height model in the Bent topside model by applying a scale factor q as an additional constraint. With the factor q acting in the exponential profile of the topside ionosphere, the IRI scale height is forced to equal the precise COSMIC measurements; in this way, the IRI topside profile can be brought closer to the realistic density profiles. An internal quality check of this approach is carried out by comparing realistic COSMIC measurements with IRI both with and without the correction. In general, the initial IRI model overestimates the topside electron density to some extent, and with the correction introduced by the COSMIC scale height model, the deviation of vertical total electron content (VTEC) between them is reduced. Furthermore, independent validation with Global Ionospheric Maps VTEC implies a reasonable improvement in the IRI VTEC with the topside model correction.

  12. Photometric Uncertainties

    NASA Astrophysics Data System (ADS)

    Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon

    2018-01-01

    The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid that will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric models of Bennu (including Lommel-Seeliger, ROLO, McEwen, Minnaert, and Akimov) with OCAM and OVIRS during the Detailed Survey mission phase; the models developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present an analysis of the errors in the photometric corrections. Based on our test data sets, we find: (1) the model uncertainties are correct only when calculated with the covariance matrix, because the parameters are highly correlated; (2) there is no evidence that any single parameter dominates in any model; (3) the model error and the data error contribute comparably to the final correction error; (4) tests of the uncertainty module on synthetic and real data sets show that model performance depends on data coverage and data quality, giving us a better understanding of how the different models behave in different cases; (5) the Lommel-Seeliger model is more reliable than the others, perhaps because the simulated data are based on it, although the test on real data (SPDIF) also shows a slight advantage for Lommel-Seeliger; ROLO is not reliable for calculating Bond albedo, the uncertainty of the McEwen model is large in most cases, and Akimov behaves unphysically on the SOPIE 1 data; (6) we therefore use Lommel-Seeliger as our default choice, a conclusion based mainly on our tests on the SOPIE data and IPDIF.

  13. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE PAGES

    Burgess, C. P.; Holman, R.; Tasinato, G.

    2016-01-26

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift, and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.

  14. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgess, C. P.; Holman, R.; Tasinato, G.

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift, and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.

  15. Precise predictions of H2O line shapes over a wide pressure range using simulations corrected by a single measurement

    NASA Astrophysics Data System (ADS)

    Ngo, N. H.; Nguyen, H. T.; Tran, H.

    2018-03-01

    In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For this purpose, we use the partially correlated speed-dependent Keilson-Storer (pcsdKS) model, whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account collision-induced velocity-changes effects, the speed dependences of the collisional line width and shift, and the correlation between velocity and internal-state changes. For each considered transition, the model is corrected using a parameter deduced from its broadening coefficient measured at a single pressure. The corrected pcsdKS model is then used to simulate spectra over a wide pressure range. Direct comparisons of the corrected pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O at various pressures, from 0.1 to 1.2 atm, show very good agreement. Their maximum differences are in most cases well below 1%, much smaller than the residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes under various pressure conditions, and thus the simulated spectra can be used to deduce refined line-shape parameters to complete spectroscopic databases in the absence of relevant experimental values.

  16. Cell-model prediction of the melting of a Lennard-Jones solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.

  17. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation-minus-background departures to estimate the observation bias. This technique does not distinguish between the background error, the forward-operator error, and the observation error, so all of these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, nonlinearity in the calibration, and the antenna pattern) and show the limitations of variational bias correction in estimating these errors.
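
    The non-separability argument can be made concrete with a toy experiment: the mean of the observation-minus-background (O-B) departures estimates the sum of the instrument and forward-operator biases (plus any background bias), so a variational scheme attributing the whole mean to "observation bias" conflates the sources. All numbers below are synthetic.

    ```python
    # Toy O-B diagnostic: instrument bias and forward-operator bias are
    # inseparable in the departure statistics that bias correction relies on.
    import numpy as np

    rng = np.random.default_rng(3)
    truth = rng.normal(250.0, 3.0, 5000)                    # true brightness temps (K)
    obs  = truth + 0.40 + rng.normal(0, 0.5, truth.size)    # instrument bias +0.40 K
    hofx = truth - 0.15 + rng.normal(0, 0.3, truth.size)    # forward-op bias -0.15 K
    # 'hofx' plays the role of H(x_b): the background mapped to obs space.

    o_minus_b = obs - hofx
    print(f"estimated 'observation bias' = {o_minus_b.mean():+.3f} K "
          f"(instrument +0.400 K, forward operator -0.150 K)")
    # The estimate converges to +0.55 K: the two sources cannot be separated.
    ```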

  18. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale-model measurements reveal errors resulting from tail correction that assumes constant quadratic growth of the reflection density; using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission, and scale-model measurements accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures; multiple-order scattering is shown, theoretically and experimentally, to be inaccurate due to its neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain the scale-model measurements, which confirm the relation of coupling strength to the unperturbed pressure distribution on the coupling surfaces. Measurements in Bass Hall exhibit the effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models and the predictions for coupling apertures all agree with measurements.

  19. Modifying the affective behavior of preschoolers with autism using in-vivo or video modeling and reinforcement contingencies.

    PubMed

    Gena, Angeliki; Couloura, Sophia; Kymissis, Effie

    2005-10-01

    The purpose of this study was to modify the affective behavior of three preschoolers with autism in home settings and in the context of play activities, and to compare the effects of video modeling to the effects of in-vivo modeling in teaching these children contextually appropriate affective responses. A multiple-baseline design across subjects, with a return to baseline condition, was used to assess the effects of treatment that consisted of reinforcement, video modeling, in-vivo modeling, and prompting. During training trials, reinforcement in the form of verbal praise and tokens was delivered contingent upon appropriate affective responding. Error correction procedures differed for each treatment condition. In the in-vivo modeling condition, the therapist used modeling and verbal prompting. In the video modeling condition, video segments of a peer modeling the correct response and verbal prompting by the therapist were used as corrective procedures. Participants received treatment in three categories of affective behavior--sympathy, appreciation, and disapproval--and were presented with a total of 140 different scenarios. The study demonstrated that both treatments--video modeling and in-vivo modeling--systematically increased appropriate affective responding in all response categories for the three participants. Additionally, treatment effects generalized across responses to untrained scenarios, the child's mother, new therapists, and time.

  20. Benchmark model correction of monitoring system based on Dynamic Load Test of Bridge

    NASA Astrophysics Data System (ADS)

    Shi, Jing-xian; Fan, Jiang

    2018-03-01

    Structural health monitoring (SHM) is an active field of research aimed at assessing bridge safety and reliability, an assessment that must be carried out on the basis of an accurate finite element simulation. A bridge finite element model simplifies the structural section forms, support conditions, material properties, and boundary conditions on the basis of the design and construction drawings, and from it the calculation model and its results are obtained. However, a finite element model established according to the design documents and specifications cannot fully reflect the true state of the bridge, so the model must be updated to obtain a more accurate one. Taking the Da-guan river crossing of the Ma-Zhao highway in Yunnan Province as the background, a dynamic load test was performed, and the impact coefficient of the theoretical model of the bridge was found to differ considerably from the coefficient obtained in the actual test. The calculation model was therefore adjusted, according to the actual conditions, to reproduce the correct frequencies of the bridge; the revised impact coefficient showed that the modified finite element model is closer to the real state, providing a basis for the correction of the finite element model.

  1. Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.

    PubMed

    Okubo, T; Shibata, H; Takishima, T

    1983-07-01

    By means of a mathematical model, we have studied how to correct for the delay time of the gas analyzer when calculating the anatomical dead space of the lung using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns: constant, exponential, and sinusoidal. The results indicate that a time correction equal to the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimate of the anatomical dead space. A smaller time correction, e.g., the lag time only or the lag time plus 50% of the response time, gives an overestimation, and a larger correction results in underestimation. The magnitude of the error depends on the flow pattern and flow rate. The time correction in this study applies only to the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.
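
    A minimal numerical sketch of the recommended correction: shift the analyzer trace earlier by (lag time + time constant) before pairing concentration with expired volume. Because lag plus time constant matches the centroid delay of a first-order response, areas (and hence Fowler dead-space volumes) are aligned correctly, even though the 50% crossing lands slightly early. All parameters below are illustrative.

    ```python
    # Align a lagged, first-order-smoothed analyzer trace with expired volume
    # by advancing it by t_lag + tau.
    import numpy as np

    def align_concentration(t, conc_measured, t_lag, tau):
        """Advance the analyzer trace by t_lag + tau."""
        return np.interp(t + t_lag + tau, t, conc_measured)

    dt = 0.01
    t = np.arange(0.0, 3.0, dt)
    true_conc = np.where(t > 0.5, 5.0, 0.0)   # % CO2 front at the airway, 0.5 s
    t_lag, tau = 0.20, 0.10                   # analyzer lag and time constant

    # Simulate the analyzer: pure transport delay + first-order smoothing
    delayed = np.interp(t - t_lag, t, true_conc, left=0.0)
    measured = np.zeros_like(t)
    for i in range(1, t.size):
        measured[i] = measured[i - 1] + dt / tau * (delayed[i] - measured[i - 1])

    aligned = align_concentration(t, measured, t_lag, tau)
    print("50% crossing (true / measured / aligned):",
          t[np.argmax(true_conc >= 2.5)],
          t[np.argmax(measured >= 2.5)],
          t[np.argmax(aligned >= 2.5)])
    ```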

  2. Iterative universal state selective correction for the Brillouin-Wigner multireference coupled-cluster theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banik, Subrata; Ravichandran, Lalitha; Brabec, Jiri

    2015-03-21

    As a further development of the previously introduced a posteriori Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011); Brabec et al., J. Chem. Phys. 136, 124102 (2012)], we suggest an iterative form of the USS correction by means of correcting the effective Hamiltonian matrix elements. We also formulate USS corrections via the left Bloch equations. The convergence of the USS corrections with excitation level towards the FCI limit is also investigated. Various forms of the USS and of the simplified diagonal USSD corrections at the SD and SD(T) levels are numerically assessed on several model systems and on the ozone and tetramethyleneethane molecules. It is shown that the iterative USS correction can successfully replace the previously developed a posteriori BWCC size-extensivity correction, while not being sensitive to intruder states; it also performs well in other cases where the a posteriori correction fails, such as the asymmetric vibration mode of ozone.

  3. Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2008-01-01

    The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters, resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and to compare the model performance to the current method. Using the 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006), whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.
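
    The structure of such a predictive model can be sketched as follows: regress log10 E. coli on the explanatory variables, then turn a new prediction into an exceedance probability for the 235 col/100 mL standard using the residual spread. The coefficients and data below are synthetic, not the Beach 2 model.

    ```python
    # Regression-based exceedance probability for a bathing-water standard.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    n = 120
    turb = rng.lognormal(2.0, 0.6, n)              # turbidity (NTU)
    birds = rng.poisson(15, n).astype(float)       # bird count
    wave = rng.uniform(0.0, 0.8, n)                # wave height (m)
    log_ecoli = (0.8 + 1.1 * np.log10(turb) + 0.01 * birds + 0.9 * wave
                 + rng.normal(0, 0.35, n))         # synthetic training data

    X = np.column_stack([np.ones(n), np.log10(turb), birds, wave])
    beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)
    sigma = np.std(log_ecoli - X @ beta)           # residual spread

    def exceedance_prob(turb, birds, wave, threshold=235.0):
        pred = beta @ np.array([1.0, np.log10(turb), birds, wave])
        return norm.sf((np.log10(threshold) - pred) / sigma)

    print(f"P(exceed) = {exceedance_prob(10.0, 10, 0.2):.2f}")
    ```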

  4. Parsimonious estimation of the Wechsler Memory Scale, Fourth Edition demographically adjusted index scores: immediate and delayed memory.

    PubMed

    Miller, Justin B; Axelrod, Bradley N; Schutte, Christian

    2012-01-01

    The recent release of the Wechsler Memory Scale, Fourth Edition contains many improvements from theoretical and administration perspectives, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations, given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of the demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with the observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than the prorated models. Regression-based models proved to be more accurate than prorated estimates, with less dispersion around the observed values, particularly when three-subtest regression models were used. Overall, the present research strongly supports estimating demographically corrected index scores on the WMS-IV in clinical practice, with adequate performance from arithmetically prorated models and stronger performance from regression models.

  5. The impact of missing trauma data on predicting massive transfusion

    PubMed Central

    Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.

    2013-01-01

    INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥ 10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for the correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects, of whom 297 (24%) received MT. Missing-data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). The proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges of correct classification per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper and lower bounds for the percentage of correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514

  6. Climate projections and extremes in dynamically downscaled CMIP5 model outputs over the Bengal delta: a quartile based bias-correction approach with new gridded data

    NASA Astrophysics Data System (ADS)

    Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat

    2017-11-01

    In the era of global warming, insight into the future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we have conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathway (RCP) scenarios. We have applied an advanced bias-correction approach to five RCM simulations in order to project the future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We have also generated a new gridded product that performs better in capturing observed climatic extremes than existing products. The bias-correction approach provided a notable improvement in capturing precipitation extremes as well as the mean climate. The majority of the projected multi-model RCMs indicate an increase in rainfall, although one model shows contrary results during the 2080s (2071-2100). The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and that average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases in precipitation and temperature over the hilly areas are higher than in other parts of the country. Overall, the projected extremes of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rain days will increase in future years. The severity of summer daytime temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense-rainfall-prone northeastern region of the country creates the possibility of devastating flash floods with harmful impacts on agriculture. Moreover, the effect of the bias correction, as seen in the probable changes of both bias-corrected and uncorrected extremes, should be considered in future policy making.
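
    A quartile/quantile-based bias correction of the kind referred to above is commonly implemented as quantile mapping: each model value is replaced by the observed value at the same empirical quantile of the historical period. A minimal sketch with synthetic rainfall (real applications work per season or month and handle wet-day frequency separately):

    ```python
    # Quantile mapping: map model values through the historical
    # model-to-observation quantile relation.
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_new):
        q = np.linspace(0, 100, 101)
        mq = np.percentile(model_hist, q)
        oq = np.percentile(obs_hist, q)
        return np.interp(model_new, mq, oq)

    rng = np.random.default_rng(5)
    obs_hist = rng.gamma(2.0, 8.0, 10000)       # observed daily rain (mm)
    model_hist = rng.gamma(2.5, 5.0, 10000)     # biased RCM rain
    model_future = rng.gamma(2.5, 5.5, 10000)   # future RCM rain
    corrected = quantile_map(model_hist, obs_hist, model_future)
    print(f"hist obs mean {obs_hist.mean():.1f}, raw future mean "
          f"{model_future.mean():.1f}, corrected future mean {corrected.mean():.1f}")
    ```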

  7. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time grows with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment before the OPC iterations are carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The radial basis function network for this system was optimized with a genetic algorithm, an artificial-intelligence optimization method with a high probability of finding the global optimum. In preliminary results, the required iterations were reduced from five to two for a simple dumbbell-shaped layout.
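
    As a rough illustration of the idea, the sketch below fits an RBF mapping from segment characteristics to converged edge shifts and queries it for new segments. The feature names and training data are invented for illustration; a real flow would train on archived OPC results.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Training data: segment characteristics -> converged OPC edge shift.
    # Hypothetical features: local pattern density, segment length, and
    # distance to the nearest neighboring edge (all normalized to [0, 1]).
    rng = np.random.default_rng(1)
    features = rng.uniform(0.0, 1.0, size=(500, 3))
    edge_shift = (0.4 * features[:, 0] - 0.2 * features[:, 1] ** 2
                  + 0.1 * np.sin(6 * features[:, 2]))   # stand-in for real results

    rbf = RBFInterpolator(features, edge_shift, kernel='gaussian', epsilon=3.0)

    # New layout: predict an initial edge shift per segment, then let the
    # model-based OPC loop refine it, starting closer to convergence.
    new_segments = rng.uniform(0.0, 1.0, size=(10, 3))
    initial_guess = rbf(new_segments)
    ```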

  8. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

    This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
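
    For context, a typical model-fitting form (one of several in this family, not necessarily the exact method used in the paper) is the Coats-Redfern equation,

    ```latex
    \ln\frac{g(\alpha)}{T^{2}} \;=\; \ln\!\left[\frac{AR}{\beta E}\left(1-\frac{2RT}{E}\right)\right]\;-\;\frac{E}{RT},
    ```

    where g(α) is the assumed reaction model, β the heating rate, and R the gas constant. Because almost any choice of g(α) yields an acceptably linear plot of the left-hand side against 1/T for a single curve, each model returns its own apparent E, which is exactly why curves recorded under multiple heating schedules are needed to pin the kinetics down.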

  9. Phase I Transport Model of Corrective Action Units 101 and 102: Central and Western Pahute Mesa, Nevada Test Site, Nye County, Nevada with Errata Sheet 1, 2, 3, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greg Ruskauff

    2009-02-01

    As prescribed in the Pahute Mesa Corrective Action Investigation Plan (CAIP) (DOE/NV, 1999) and Appendix VI of the Federal Facility Agreement and Consent Order (FFACO) (1996, as amended February 2008), the ultimate goal of transport analysis is to develop stochastic predictions of a contaminant boundary at a specified level of uncertainty. However, because of the significant uncertainty of the model results, the primary goal of this report was modified through mutual agreement between the DOE and the State of Nevada to assess the primary model components that contribute to this uncertainty and to postpone defining the contaminant boundary until additional model refinement is completed. Therefore, the role of this analysis has been to understand the behavior of radionuclide migration in the Pahute Mesa (PM) Corrective Action Unit (CAU) model and to define, both qualitatively and quantitatively, the sensitivity of such behavior to (flow) model conceptualization and (flow and transport) parameterization.

  10. Plasma versus Drude Modeling of the Casimir Force: Beyond the Proximity Force Approximation

    NASA Astrophysics Data System (ADS)

    Hartmann, Michael; Ingold, Gert-Ludwig; Neto, Paulo A. Maia

    2017-07-01

    We calculate the Casimir force and its gradient between a spherical and a planar gold surface. Significant numerical improvements allow us to extend the range of accessible parameters into the experimental regime. We compare our numerically exact results with those obtained within the proximity force approximation (PFA) employed in the analysis of all Casimir force experiments reported in the literature so far. Special attention is paid to the difference between the Drude model and the dissipationless plasma model at zero frequency. It is found that the correction to PFA is too small to explain the discrepancy between the experimental data and the PFA result based on the Drude model. However, it turns out that for the plasma model, the corrections to PFA lie well outside the experimental bound obtained by probing the variation of the force gradient with the sphere radius [D. E. Krause et al., Phys. Rev. Lett. 98, 050403 (2007), 10.1103/PhysRevLett.98.050403]. The corresponding corrections based on the Drude model are significantly smaller but still in violation of the experimental bound for small distances between plane and sphere.

  11. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head

    PubMed Central

    Beard, Brian B.; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2018-01-01

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also, the larger (adult) head produced a statistically significantly higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position. PMID:29515260

  12. Towards Compensation Correctness in Interactive Systems

    NASA Astrophysics Data System (ADS)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.

  13. Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States

    Treesearch

    Michael J. Erickson; Brian A. Colle; Joseph J. Charney

    2012-01-01

    The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....

  14. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of the TGD and DCB correction models is demonstrated by combining theory with practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, the authors investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combined SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7% in the N, E, and U components, respectively, after the differential code bias correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E, and U components, respectively, after the Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combined SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
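
    For single-frequency users, the broadcast TGD enters the satellite clock correction in a standard way (documented in IS-GPS-200 for GPS): the broadcast clock refers to the ionosphere-free combination, so an L1-only user subtracts TGD and an L2-only user subtracts γ·TGD. A minimal sketch for GPS, with illustrative numbers only:

    ```python
    C = 299792458.0              # speed of light, m/s
    F_L1, F_L2 = 1575.42e6, 1227.60e6
    GAMMA = (F_L1 / F_L2) ** 2   # frequency-squared ratio

    def clock_correction(dt_sv, tgd, freq='L1'):
        """Satellite clock offset (s) as seen by a single-frequency user."""
        if freq == 'L1':
            return dt_sv - tgd
        if freq == 'L2':
            return dt_sv - GAMMA * tgd
        return dt_sv             # ionosphere-free users apply no TGD

    # Corrected pseudorange: add the range-equivalent clock correction.
    pseudorange = 2.1234567e7            # metres, illustrative
    dt_sv, tgd = 1.2e-4, 4.65e-9         # seconds, illustrative
    corrected = pseudorange + C * clock_correction(dt_sv, tgd, 'L1')
    ```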

  15. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of the TGD and DCB correction models is demonstrated by combining theory with practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, the authors investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combined SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7% in the N, E, and U components, respectively, after the differential code bias correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E, and U components, respectively, after the Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combined SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.

  16. Proximal metatarsal osteotomies: a comparative geometric analysis conducted on sawbone models.

    PubMed

    Nyska, Meir; Trnka, Hans-Jörg; Parks, Brent G; Myerson, Mark S

    2002-10-01

    We evaluated the change in position of the first metatarsal head using a three-dimensional digitizer on sawbone models. Crescentic, closing wedge, oblique shaft (Ludloff 8 degrees and 16 degrees), reverse oblique shaft (Mau 8 degrees and 16 degrees), rotational "Z" (Scarf), and proximal chevron osteotomies were performed and secured using 3-mm screws. The 16 degrees Ludloff provided the most lateral shift (9.5 mm) and angular correction (14.5 degrees) but also produced the most elevation (1.4 mm) and shortening (2.9 mm). The 8 degrees Ludloff provided lateral and angular corrections similar to those of the crescentic and closing wedge osteotomies with less elevation and shortening. Because the displacement osteotomies (Scarf, proximal chevron) provided less angular correction, the same lateral displacement, and less shortening than the basilar angular osteotomies, based upon this model they can be more reliably used for a patient with a mild to moderate deformity, a short first metatarsal, or an intermediate deformity with a large distal metatarsal articular angle. These results can serve as recommendations for selecting the optimal osteotomy with which to correct a given deformity.

  17. Efficacy of a Process Improvement Intervention on Delivery of HIV Services to Offenders: A Multisite Trial

    PubMed Central

    Shafer, Michael S.; Dembo, Richard; del Mar Vega-Debién, Graciela; Pankow, Jennifer; Duvall, Jamieson L.; Belenko, Steven; Frisman, Linda K.; Visher, Christy A.; Pich, Michele; Patterson, Yvonne

    2014-01-01

    Objectives. We tested a modified Network for the Improvement of Addiction Treatment (NIATx) process improvement model to implement improved HIV services (prevention, testing, and linkage to treatment) for offenders under correctional supervision. Methods. As part of the Criminal Justice Drug Abuse Treatment Studies, Phase 2, the HIV Services and Treatment Implementation in Corrections study conducted 14 cluster-randomized trials in 2011 to 2013 at 9 US sites, where one correctional facility received training in HIV services and coaching in a modified NIATx model and the other received only HIV training. The outcome measure was the odds of successful delivery of an HIV service. Results. The results were significant at the .05 level, and the point estimate for the odds ratio was 2.14. Although overall the results were heterogeneous, the experiments that focused on implementing HIV prevention interventions had a 95% confidence interval that exceeded the no-difference point. Conclusions. Our results demonstrate that a modified NIATx process improvement model can effectively implement improved rates of delivery of some types of HIV services in correctional environments. PMID:25322311

  18. Revisit to the RXTE and ASCA Data for GRO J1655-40: Effects of Radiative Transfer in Corona and Color Hardening in the Disk

    NASA Technical Reports Server (NTRS)

    Zhang, S. Nan; Zhang, Xiaoling; Wu, Xuebing; Yao, Yangsen; Sun, Xuejun; Xu, Haiguang; Cui, Wei; Chen, Wan; Harmon, B. A.; Robinson, C. R.

    1999-01-01

    The results of spectral modeling of the data for a series of RXTE observations and four ASCA observations of GRO J1655-40 are presented. The thermal Comptonization model is used instead of the power-law model for the hard component of the two-component continuum spectra. The previously reported dramatic variations of the apparent inner disk radius of GRO J1655-40 during its outburst may be due to the inverse Compton scattering in the hot corona. A procedure is developed for making the radiative transfer correction to the fitting parameters from RXTE data and a more stable inner disk radius is obtained. A practical process of determining the color correction (hardening) factor from observational data is proposed and applied to the four ASCA observations of GRO J1655-40. We found that the color correction factor may vary significantly between different observations and the finally corrected physical inner disk radius remains reasonably stable over a large range of luminosity and spectral states.

  19. Model-based MPC enables curvilinear ILT using either VSB or multi-beam mask writers

    NASA Astrophysics Data System (ADS)

    Pang, Linyong; Takatsukasa, Yutetsu; Hara, Daisuke; Pomerantsev, Michael; Su, Bo; Fujimura, Aki

    2017-07-01

    Inverse Lithography Technology (ILT) is becoming the choice for Optical Proximity Correction (OPC) of advanced technology nodes in IC design and production. Multi-beam mask writers promise significant mask writing time reduction for complex ILT style masks. Before multi-beam mask writers become the mainstream working tools in mask production, VSB writers will continue to be the tool of choice to write both curvilinear ILT and Manhattanized ILT masks. To enable VSB mask writers for complex ILT style masks, model-based mask process correction (MB-MPC) is required to do the following: (1) make reasonable corrections for complex edges for those features that exhibit relatively large deviations from both curvilinear ILT and Manhattanized ILT designs; (2) control and manage both Edge Placement Errors (EPE) and shot count; and (3) assist in easing the migration to future multi-beam mask writers and serve as an effective backup solution during the transition. In this paper, a solution meeting all those requirements, MB-MPC with GPU acceleration, will be presented. One model calibration per process allows accurate correction regardless of the target mask writer.

  20. Efficacy of a process improvement intervention on delivery of HIV services to offenders: a multisite trial.

    PubMed

    Pearson, Frank S; Shafer, Michael S; Dembo, Richard; Del Mar Vega-Debién, Graciela; Pankow, Jennifer; Duvall, Jamieson L; Belenko, Steven; Frisman, Linda K; Visher, Christy A; Pich, Michele; Patterson, Yvonne

    2014-12-01

    We tested a modified Network for the Improvement of Addiction Treatment (NIATx) process improvement model to implement improved HIV services (prevention, testing, and linkage to treatment) for offenders under correctional supervision. As part of the Criminal Justice Drug Abuse Treatment Studies, Phase 2, the HIV Services and Treatment Implementation in Corrections study conducted 14 cluster-randomized trials in 2011 to 2013 at 9 US sites, where one correctional facility received training in HIV services and coaching in a modified NIATx model and the other received only HIV training. The outcome measure was the odds of successful delivery of an HIV service. The results were significant at the .05 level, and the point estimate for the odds ratio was 2.14. Although overall the results were heterogeneous, the experiments that focused on implementing HIV prevention interventions had a 95% confidence interval that exceeded the no-difference point. Our results demonstrate that a modified NIATx process improvement model can effectively implement improved rates of delivery of some types of HIV services in correctional environments.

  1. A measurement theory of illusory conjunctions.

    PubMed

    Prinzmetal, William; Ivry, Richard B; Beck, Diane; Shimizu, Naomi

    2002-04-01

    Illusory conjunctions refer to the incorrect perceptual combination of correctly perceived features, such as color and shape. Research on the phenomenon has been hampered by the lack of a measurement theory that accounts for guessing features, as well as the incorrect combination of correctly perceived features. Recently, several investigators have suggested using multinomial models as a tool for measuring feature integration. The authors examined the adequacy of these models in 2 experiments by testing whether model parameters reflect changes in stimulus factors. In a third experiment, confidence ratings were used as a tool for testing the model. Multinomial models accurately reflected both variations in stimulus factors and observers' trial-by-trial confidence ratings.

  2. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
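
    For the multivariate normal distributions involved here, the Kullback-Leibler information has a standard closed form (a textbook result, quoted for context rather than taken from the report):

    ```latex
    \mathrm{KL}\!\left(\mathcal{N}(\mu_1,\Sigma_1)\,\middle\|\,\mathcal{N}(\mu_2,\Sigma_2)\right)
    = \tfrac{1}{2}\!\left[\operatorname{tr}\!\left(\Sigma_2^{-1}\Sigma_1\right)
    + (\mu_2-\mu_1)^{\mathsf{T}}\Sigma_2^{-1}(\mu_2-\mu_1)
    - k + \ln\frac{\det\Sigma_2}{\det\Sigma_1}\right],
    ```

    where k is the dimension. A sequential procedure of the kind described picks the next experiment to maximize the expected value of this quantity between the competing models' predictive distributions.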

  3. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.

  4. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K(sub 1) correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  5. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    PubMed Central

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599

  6. Cas9-nickase-mediated genome editing corrects hereditary tyrosinemia in rats.

    PubMed

    Shao, Yanjiao; Wang, Liren; Guo, Nana; Wang, Shengfei; Yang, Lei; Li, Yajing; Wang, Mingsong; Yin, Shuming; Han, Honghui; Zeng, Li; Zhang, Ludi; Hui, Lijian; Ding, Qiurong; Zhang, Jiqin; Geng, Hongquan; Liu, Mingyao; Li, Dali

    2018-05-04

    Hereditary tyrosinemia type I (HTI) is a metabolic genetic disorder caused by mutation of fumarylacetoacetate hydrolase (FAH). Because of the accumulation of toxic metabolites, HTI causes severe liver cirrhosis, liver failure, and even hepatocellular carcinoma. HTI is an ideal model for gene therapy, and several strategies have been shown to ameliorate HTI symptoms in animal models. Although CRISPR/Cas9-mediated genome editing is able to correct the Fah mutation in mouse models, WT Cas9 induces numerous undesired mutations that have raised safety concerns for clinical applications. To develop a new method for gene correction with high fidelity, we generated a Fah mutant rat model to investigate whether Cas9 nickase (Cas9n)-mediated genome editing can efficiently correct the Fah mutation. First, we confirmed that Cas9n rarely induces indels at both on-target and off-target sites in cell lines. Using WT Cas9 as a positive control, we delivered Cas9n and the repair donor template/single guide (sg)RNA through adenoviral vectors into HTI rats. Analyses of the initial genome editing efficiency indicated that only WT Cas9, but not Cas9n, causes indels at the on-target site in the liver tissue. After receiving either Cas9n- or WT Cas9-mediated gene correction therapy, HTI rats gained weight steadily and survived. Fah-expressing hepatocytes occupied over 95% of the liver tissue 9 months after the treatment. Moreover, CRISPR/Cas9-mediated gene therapy prevented the progression of liver cirrhosis, a phenotype that could not be recapitulated in the HTI mouse model. These results strongly suggest that Cas9n-mediated genome editing is a valuable and safe gene therapy strategy for this genetic disease. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
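
    A heavily simplified sketch of the detail-injection idea follows. This is not the published MMA algorithm, which estimates a local, voxel-wise model between modalities; the mixing weight alpha and the global subband scaling below are invented for illustration, and the volumes are random stand-ins.

    ```python
    import numpy as np
    import pywt

    def inject_anatomical_detail(pet, anat, wavelet='db2', alpha=0.5):
        """Toy 3-D wavelet detail injection: blend a fraction of the PET
        high-frequency subbands with rescaled anatomical detail."""
        cp = pywt.dwtn(pet, wavelet)       # single-level 3-D decomposition
        ca = pywt.dwtn(anat, wavelet)
        for key in cp:
            if key != 'aaa':               # 'aaa' = approximation subband
                scale = np.std(cp[key]) / (np.std(ca[key]) + 1e-12)
                cp[key] = (1 - alpha) * cp[key] + alpha * scale * ca[key]
        return pywt.idwtn(cp, wavelet)

    pet = np.random.rand(32, 32, 32)       # low-resolution functional volume
    anat = np.random.rand(32, 32, 32)      # co-registered anatomical volume
    corrected = inject_anatomical_detail(pet, anat)
    ```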

  8. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  9. Effects of vibration on inertial wind-tunnel model attitude measurement devices

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen

    1994-01-01

    Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.

  10. A parametric and non-parametric metamodeling approach for the bias-correction of Satellite Rainfall Estimates using rain gauge measurements. Cases of study: Magdalena Basin (Colombia), Imperial Basin (Chile) and Paraiba do Sul (Brazil).

    NASA Astrophysics Data System (ADS)

    Rebolledo Coy, M. A.; Villanueva, O. M. B.; Bartz-Beielstein, T.; Ribbe, L.

    2017-12-01

    Rainfall measurement plays an important role in the understanding and modeling of the water cycle. However, the assessment of data-scarce regions using common rain gauge information cannot be done using a straightforward approach. Some of the main problems concerning rainfall assessment are the lack of a sufficiently dense grid of ground stations in extensive areas and the unstable spatial accuracy of Satellite Rainfall Estimates (SREs). Following previous works on SRE analysis and bias correction, we generate an ensemble model that corrects the bias error on a seasonal and yearly basis using six different state-of-the-art SREs (TRMM 3B42RT, TRMM 3B42v7, PERSIANN-CDR, CHIRPSv2, CMORPH and MSWEPv1.2) in a point-to-pixel approach for the studied period (2003-2015). Three different basins are evaluated: the Magdalena in Colombia, the Imperial in Chile and the Paraiba do Sul in Brazil. Using Gaussian process regression and Bayesian robust regression, we model the behavior of the ground stations and evaluate the goodness-of-fit using the modified Kling-Gupta efficiency (KGE'). Following this evaluation, the models are re-fitted by taking into account the error distribution at each point, and the corresponding KGE' is evaluated again. Both models were specified in the probabilistic language Stan. To improve the efficiency of the Gaussian model, a clustering of the data was implemented. We also compared the performance of both models in terms of uncertainty and stability against the raw input, concluding that both models represent the study areas better. The results show that the error displays an exponential behavior on days where precipitation was present, which allows the models to be corrected according to the observed rainfall values. The seasonal evaluations also show improved performance relative to the yearly evaluations. The use of bias-corrected SREs for hydrologic purposes in data-scarce regions is highly recommended in order to merge the point values from the ground measurements and the spatial distribution of rainfall from the satellite estimates.
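
    The modified Kling-Gupta efficiency used for the goodness-of-fit evaluation has a compact form (Kling et al., 2012); a minimal implementation:

    ```python
    import numpy as np

    def kge_prime(sim, obs):
        """Modified Kling-Gupta efficiency: 1 is a perfect fit; correlation,
        bias ratio and variability ratio are penalized jointly."""
        r = np.corrcoef(sim, obs)[0, 1]                  # linear correlation
        beta = sim.mean() / obs.mean()                   # bias ratio
        gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # CV ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
    ```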

  11. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    NASA Astrophysics Data System (ADS)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

    We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the COSMO-SkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate 900 m spatial resolution atmospheric models covering the Big Island of Hawaii and an even finer, 300 m resolution grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale meteorological processes allows us to assess under what conditions the technique works most effectively. This work will produce best-practice recommendations for the use of weather models for InSAR correction, and inform efforts to design a global strategy for the NISAR mission, for both low-latency and definitive atmospheric correction products.

  12. Cryosat-2 and Sentinel-3 tropospheric corrections: their evaluation over rivers and lakes

    NASA Astrophysics Data System (ADS)

    Fernandes, Joana; Lázaro, Clara; Vieira, Telmo; Restano, Marco; Ambrózio, Américo; Benveniste, Jérôme

    2017-04-01

    In the scope of the Sentinel-3 Hydrologic Altimetry PrototypE (SHAPE) project, the errors that presently affect the tropospheric corrections, i.e., the dry and wet tropospheric corrections (DTC and WTC, respectively), given in satellite altimetry products are evaluated over inland water regions. These errors arise because both corrections, which are functions of altitude, are usually computed with respect to an incorrect altitude reference. Several regions of interest (ROI) where CryoSat-2 (CS-2) is operating in SAR/SAR-In modes were selected for this evaluation. In this study, results for the Danube River, the Amazon Basin, lakes Vanern and Titicaca, and the Caspian Sea, using Level 1B CS-2 data, are shown. The DTC and WTC have been compared to those derived from the ECMWF operational model and computed at different altitude references: (i) the ECMWF orography; (ii) the ACE2 (Altimeter Corrected Elevations 2) and GWD-LR (Global Width Database for Large Rivers) global digital elevation models; and (iii) the mean lake level, derived from Envisat mission data, or the river profile derived in the scope of the SHAPE project by AlongTrack (ATK) using Jason-2 data. Whenever GNSS data are available in an ROI, a GNSS-derived WTC was also generated and used for comparison. Overall, results show that the tropospheric corrections present in CS-2 L1B products are provided at the level of the ECMWF orography, which can depart from the mean lake level or river profile by hundreds of metres. Therefore, the use of the model orography introduces errors in the corrections. To mitigate these errors, both the DTC and WTC should be provided at the mean river profile/lake level. For example, for the Caspian Sea, with a mean level of -27 m, the tropospheric corrections provided in CS-2 products were computed at mean sea level (zero level), leading to a systematic error in the corrections. In case a mean lake level is not available, it can easily be determined from satellite altimetry. In the absence of a mean river profile, both of the aforementioned DEMs, considered better altimetric surfaces than the ECMWF orography, can be used. When using the model orography, systematic errors of up to 3-5 cm are found in the DTC for most of the selected regions, which can induce significant errors in, e.g., the determination of mean river profiles or lake level time series. For the Danube River, larger DTC errors of up to 10 cm, due to terrain characteristics, can appear. For the WTC, which has higher spatial variability, model errors of magnitude 1-3 cm are expected over inland waters. In the Danube region, the comparison of GNSS- and ECMWF-derived WTC has shown that the error in the WTC computed at orography level can be up to 3 cm. WTC errors of this magnitude have been found for all ROI. Although globally small, these errors are systematic and must be corrected prior to the generation of CS-2 Level 2 products. Once the corrections are computed at the mean river profile and mean lake level, the results show that they have an accuracy better than 1 cm. This analysis is currently being extended to S3 data, and the first results are shown.
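
    The size of the reference-height effect on the DTC is easy to reproduce with the standard Saastamoinen/Davis zenith hydrostatic delay and a simple standard-atmosphere pressure transfer. This is a sketch with illustrative values; operational processing would use model pressure fields rather than the simplified pressure-height relation below.

    ```python
    import numpy as np

    def dtc(pressure_hpa, lat_deg, height_m):
        """Zenith dry tropospheric correction (m, negative) from surface
        pressure via the Saastamoinen/Davis formula."""
        phi = np.radians(lat_deg)
        denom = 1.0 - 0.00266 * np.cos(2 * phi) - 0.28e-6 * height_m
        return -0.0022768 * pressure_hpa / denom

    def pressure_at(p0_hpa, h0_m, h_m):
        """Standard-atmosphere pressure transfer between heights (a
        simplification used here for illustration only)."""
        return p0_hpa * (1.0 - 2.2558e-5 * (h_m - h0_m)) ** 5.2559

    # Model orography says 0 m, but the Caspian Sea surface is near -27 m:
    p_surface = pressure_at(1013.25, 0.0, -27.0)
    bias = dtc(p_surface, 42.0, -27.0) - dtc(1013.25, 42.0, 0.0)
    print(f"DTC error from wrong reference height: {bias * 100:.2f} cm")
    ```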

  13. Deterministic figure correction of piezoelectrically adjustable slumped glass optics

    NASA Astrophysics Data System (ADS)

    DeRoo, Casey T.; Allured, Ryan; Cotroneo, Vincenzo; Hertz, Edward; Marquez, Vanessa; Reid, Paul B.; Schwartz, Eric D.; Vikhlinin, Alexey A.; Trolier-McKinstry, Susan; Walker, Julian; Jackson, Thomas N.; Liu, Tianning; Tendulkar, Mohit

    2018-01-01

    Thin x-ray optics with high angular resolution (≤ 0.5 arcsec) over a wide field of view enable the study of a number of astrophysically important topics and feature prominently in Lynx, a next-generation x-ray observatory concept currently under NASA study. In an effort to address this technology need, piezoelectrically adjustable, thin mirror segments capable of figure correction after mounting and on-orbit are under development. We report on the fabrication and characterization of an adjustable cylindrical slumped glass optic. This optic has realized 100% piezoelectric cell yield and employs lithographically patterned traces and anisotropic conductive film connections to address the piezoelectric cells. In addition, the measured responses of the piezoelectric cells are found to be in good agreement with finite-element analysis models. While the optic as manufactured is outside the range of absolute figure correction, simulated corrections using the measured responses of the piezoelectric cells are found to improve 5 to 10 arcsec mirrors to 1 to 3 arcsec [half-power diameter (HPD), single reflection at 1 keV]. Moreover, a measured relative figure change which would correct the figure of a representative slumped glass piece from 6.7 to 1.2 arcsec HPD is empirically demonstrated. We employ finite-element analysis-modeled influence functions to understand the current frequency limitations of the correction algorithm employed and identify a path toward achieving subarcsecond corrections.
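
    The figure-correction step itself is, at heart, a least-squares problem over the measured influence functions. A minimal sketch follows; the dimensions, voltage limits, and data are illustrative stand-ins, not instrument values.

    ```python
    import numpy as np

    # Each column of A holds the measured surface response (influence
    # function) of one piezoelectric cell, flattened; e is the measured
    # figure error map, flattened.
    rng = np.random.default_rng(2)
    n_pix, n_cells = 4096, 112
    A = rng.normal(size=(n_pix, n_cells))   # stand-in influence functions
    e = rng.normal(size=n_pix)              # stand-in figure error

    # Voltages that minimize the residual figure error in a least-squares
    # sense; rcond truncates poorly constrained modes (cf. the frequency
    # limitations of the correction discussed above).
    v, *_ = np.linalg.lstsq(A, -e, rcond=1e-3)
    v = np.clip(v, -10.0, 10.0)             # respect the actuator voltage range

    residual = e + A @ v
    print(np.std(e), np.std(residual))
    ```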

  14. Optical control of the Advanced Technology Solar Telescope.

    PubMed

    Upton, Robert

    2006-08-10

    The Advanced Technology Solar Telescope (ATST) is an off-axis Gregorian astronomical telescope design. The ATST is expected to be subject to thermal and gravitational effects that result in misalignments of its mirrors and warping of its primary mirror. These effects require active, closed-loop correction to maintain its as-designed diffraction-limited optical performance. The simulation and modeling of the ATST with a closed-loop correction strategy are presented. The correction strategy is derived from the linear mathematical properties of two Jacobian, or influence, matrices that map the ATST rigid-body (RB) misalignments and primary mirror figure errors to wavefront sensor (WFS) measurements. The two Jacobian matrices also quantify the sensitivities of the ATST to RB and primary mirror figure perturbations. The modeled active correction strategy results in a decrease of the rms wavefront error averaged over the field of view (FOV) from 500 to 19 nm, subject to 10 nm rms WFS noise. This result is obtained utilizing nine WFSs distributed in the FOV with a 300 nm rms astigmatism figure error on the primary mirror. Correction of the ATST RB perturbations is demonstrated for an optimum subset of three WFSs with corrections improving the ATST rms wavefront error from 340 to 17.8 nm. In addition to the active correction of the ATST, an analytically robust sensitivity analysis that can be generally extended to a wider class of optical systems is presented.
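
    A hedged sketch of such a Jacobian-based closed loop is shown below; the dimensions, gain, and noise level are illustrative only, not ATST design values.

    ```python
    import numpy as np

    # A Jacobian J maps rigid-body perturbations x to WFS measurements s;
    # each cycle estimates x from noisy measurements via the pseudo-inverse
    # and applies a damped correction.
    rng = np.random.default_rng(3)
    n_wfs_terms, n_rb = 9 * 12, 10     # e.g. 9 WFSs x 12 wavefront terms (hypothetical)
    J = rng.normal(size=(n_wfs_terms, n_rb))
    J_pinv = np.linalg.pinv(J)

    x = rng.normal(size=n_rb)          # initial misalignment state
    gain = 0.6                         # damped integrator for stability
    for cycle in range(10):
        s = J @ x + 0.01 * rng.normal(size=n_wfs_terms)  # WFS readout + noise
        x = x - gain * (J_pinv @ s)                      # apply correction
    print(np.linalg.norm(x))           # residual misalignment
    ```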

  15. An improved standardization procedure to remove systematic low frequency variability biases in GCM simulations

    NASA Astrophysics Data System (ADS)

    Mehrotra, Rajeshwar; Sharma, Ashish

    2012-12-01

    The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for low- as well as higher-order moment biases in the GCM-derived variables across selected multiple timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that three- to five-iteration RNBCs are the most effective in removing distributional and persistence-related biases across the timescales considered.
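
    A toy two-level version of the nesting idea (daily standardization within calendar months, then rescaling of monthly means) is sketched below. The published NBC/RNBC additionally corrects lag-1 autocorrelation and iterates the nesting; everything here, including the synthetic data, is for illustration.

    ```python
    import numpy as np
    import pandas as pd

    def nested_bias_correction(gcm, obs):
        """Toy two-level nested bias correction of a daily GCM series."""
        out = gcm.copy()
        # Daily level: match mean and std month by month.
        for m in range(1, 13):
            g, o = gcm[gcm.index.month == m], obs[obs.index.month == m]
            out[out.index.month == m] = (g - g.mean()) / g.std() * o.std() + o.mean()
        # Monthly level: rescale monthly means toward the observed distribution.
        gm, om = out.resample('MS').mean(), obs.resample('MS').mean()
        target = (gm - gm.mean()) / gm.std() * om.std() + om.mean()
        factor = (target / gm).reindex(out.index, method='ffill')
        return out * factor

    idx = pd.date_range('1961-01-01', '1990-12-31', freq='D')
    rng = np.random.default_rng(4)
    obs = pd.Series(rng.gamma(2.0, 2.0, len(idx)), idx)   # synthetic observations
    gcm = pd.Series(rng.gamma(2.0, 3.0, len(idx)), idx)   # synthetic biased GCM
    corrected = nested_bias_correction(gcm, obs)
    ```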

  16. A Bayesian model to correct underestimated 3-D wind speeds from sonic anemometers increases turbulent components of the surface energy balance

    Treesearch

    John M. Frank; William J. Massman; Brent E. Ewers

    2016-01-01

    Sonic anemometers are the principal instruments in micrometeorological studies of turbulence and ecosystem fluxes. Common designs underestimate vertical wind measurements because they lack a correction for transducer shadowing, with no consensus on a suitable correction. We reanalyze a subset of data collected during field experiments in 2011 and 2013 featuring two or...

  17. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    NASA Astrophysics Data System (ADS)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    Partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in the residual correction. The modified Cressman weight performs better than the Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.

  18. Linear Regression Quantile Mapping (RQM) - A new approach to bias correction with consistent quantile trends

    NASA Astrophysics Data System (ADS)

    Passow, Christian; Donner, Reik

    2017-04-01

    Quantile mapping (QM) is an established concept that allows one to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959-2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015. A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
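
    The combination of quantile regression with quantile mapping can be sketched with statsmodels' quantile regression. The mapping step below is a simplified reading of the RQM idea, not the authors' exact algorithm, and the data are synthetic.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def fit_trended_quantiles(y, t, taus):
        """Linear quantile regression of y on time for each level tau,
        returning the predicted time-dependent quantile curves."""
        X = sm.add_constant(t)
        return {tau: sm.QuantReg(y, X).fit(q=tau).predict(X) for tau in taus}

    # Toy data: observations and a biased, differently trended model series.
    rng = np.random.default_rng(5)
    t = np.arange(2000)
    obs = 10 + 0.002 * t + rng.normal(0, 1.0, t.size)
    mod = 12 + 0.004 * t + rng.normal(0, 1.5, t.size)

    taus = np.linspace(0.05, 0.95, 19)
    q_obs = fit_trended_quantiles(obs, t, taus)
    q_mod = fit_trended_quantiles(mod, t, taus)

    # Correction: find the time-dependent model quantile closest to each
    # value, then replace it with the observed quantile at the same level.
    corrected = np.empty_like(mod)
    for i, v in enumerate(mod):
        levels = np.array([q_mod[tau][i] for tau in taus])
        k = np.argmin(np.abs(levels - v))
        corrected[i] = q_obs[taus[k]][i] + (v - levels[k])  # keep residual offset
    ```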

  19. Assimilation of drifters' trajectories in velocity fields from coastal radar and model via the Lagrangian assimilation algorithm LAVA.

    NASA Astrophysics Data System (ADS)

    Berta, Maristella; Bellomo, Lucio; Griffa, Annalisa; Gatimu Magaldi, Marcello; Marmain, Julien; Molcard, Anne; Taillandier, Vincent

    2013-04-01

    The Lagrangian assimilation algorithm LAVA (LAgrangian Variational Analysis) is customized for coastal areas in the framework of the TOSCA (Tracking Oil Spills & Coastal Awareness network) Project, to improve the response to maritime accidents in the Mediterranean Sea. LAVA assimilates drifters' trajectories in velocity fields that may come from either coastal radars or numerical models. In the present study, LAVA is applied to the coastal area in front of Toulon (France). Surface currents are available from a WERA radar network (2 km spatial resolution, every 20 minutes) and from the GLAZUR model (1/64° spatial resolution, every hour). The cluster of drifters considered consists of 7 buoys, transmitting every 15 minutes for a period of 5 days. Three assimilation cases are considered: i) correction of the radar velocity field, ii) correction of the model velocity field and iii) reconstruction of the velocity field from drifters only. It is found that drifters' trajectories compare well with the ones obtained by the radar, and the correction to the radar velocity field is therefore minimal. In contrast, observed and numerical trajectories separate rapidly, and the correction to the model velocity field is substantial. For the reconstruction from drifters only, the velocity fields obtained are similar to the radar ones, but limited to the neighborhood of the drifter paths.

  20. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
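
    A hedged sketch of the joint estimation idea follows: measurements y are modeled as y_i = c_i (M q)_i, where M holds transport-model source-receptor coefficients, q is the release rate, and c_i is a per-measurement model correction factor. The paper solves a regularized version of this with alternating minimization; the simple scheme below (with invented data and an ad hoc shrinkage toward 1) only conveys the loop structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_meas, n_src = 40, 1
    M = rng.uniform(0.1, 1.0, size=(n_meas, n_src))   # transport coefficients
    q_true = np.array([5.0])
    c_true = rng.uniform(0.7, 1.3, n_meas)            # per-measurement model biases
    y = c_true * (M @ q_true) + 0.01 * rng.normal(size=n_meas)

    q = np.ones(n_src)
    c = np.ones(n_meas)
    for _ in range(50):
        # Step 1: release rate given the current correction coefficients.
        A = c[:, None] * M
        q, *_ = np.linalg.lstsq(A, y, rcond=None)
        # Step 2: coefficients given the release rate, shrunk toward 1 so
        # they act as corrections rather than free fits (closed form of
        # minimizing (y - c*p)^2 + (c - 1)^2 per measurement).
        pred = M @ q
        c = (y * pred + 1.0) / (pred ** 2 + 1.0)
    print(q, c[:5])
    ```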

  1. Footprint-weighted tile approach for a spruce forest and a nearby patchy clearing using the ACASA model

    NASA Astrophysics Data System (ADS)

    Gatzsche, Kathrin; Babel, Wolfgang; Falge, Eva; Pyles, Rex David; Tha Paw U, Kyaw; Raabe, Armin; Foken, Thomas

    2018-05-01

    The ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) model, with a higher-order closure for tall vegetation, has already been successfully tested and validated for homogeneous spruce forests. The aim of this paper is to test the model using a footprint-weighted tile approach for a clearing with a heterogeneous underlying surface. The comparison with flux data shows good agreement for the footprint-aggregated tile approach of the model. However, the results of a comparison with a tile approach based on the mean land-use classification of the clearing are not significantly different; it is assumed that the footprint model is not accurate enough to separate small-scale heterogeneities. All measured fluxes are corrected by forcing the energy balance closure of the test data, either by maintaining the measured Bowen ratio or by attributing the residual to the sensible and latent heat fluxes according to their contributions to the buoyancy flux (see the sketch below). The comparison with the model, in which the energy balance is closed, shows that the buoyancy correction fits the measured data better for Bowen ratios > 1.5. For lower Bowen ratios, the correction probably lies between the two methods, but the amount of available data was too small to draw a conclusion. Under an assumption of similarity between water and carbon dioxide fluxes, no correction of the net ecosystem exchange is necessary for Bowen ratios > 1.5.
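 
    A minimal sketch of the two closure corrections, assuming fluxes in W m^-2. The Bowen-ratio version is the standard one; the buoyancy-flux attribution is reduced here to a single assumed weighting factor rather than the published partitioning:

        def close_bowen(Rn, G, H, LE):
            """Distribute the residual so the measured Bowen ratio H/LE is kept."""
            res = Rn - G - H - LE
            H_c = H + res * H / (H + LE)      # assumes H + LE != 0
            LE_c = LE + res * LE / (H + LE)
            return H_c, LE_c

        def close_buoyancy(Rn, G, H, LE, f=1.0):
            """Attribute the residual preferentially to H; f is an assumed
            weighting standing in for the buoyancy-flux partitioning."""
            res = Rn - G - H - LE
            return H + f * res, LE + (1 - f) * res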

  2. OMV: A simplified mathematical model of the orbital maneuvering vehicle

    NASA Technical Reports Server (NTRS)

    Teoh, W.

    1984-01-01

    A model of the orbital maneuvering vehicle (OMV) is presented which contains several simplifications. A set of hand-controller signals may be used to control the motion of the OMV. Model verification is carried out using a sequence of tests in which the dynamic variables generated by the model are compared, whenever possible, with the corresponding analytical variables. The results of the tests show conclusively that the present model behaves correctly. Further, this model interfaces properly with the state vector transformation module (SVX) developed previously. Correct command sequences are generated by the OMV and SVX system, and these command sequences can be used to drive the flat-floor simulation system at MSFC.

  3. Reopen parameter regions in two-Higgs doublet models

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2018-01-01

    The stability of the electroweak potential is a very important constraint on models of new physics. At the moment, it is standard for two-Higgs doublet models (THDMs) and for singlet or triplet extensions of the standard model to perform these checks at tree level. However, these models are often studied in the presence of very large couplings, so radiative corrections to the potential can be expected to be important. We study these effects using the example of the type-II THDM and find that loop corrections can revive more than 50% of the phenomenologically viable points that are ruled out by the tree-level vacuum stability checks. Similar effects are expected for other extensions of the standard model.
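 
    Such loop-level checks rest on the standard one-loop (Coleman-Weinberg) correction to the scalar potential, which in Landau gauge and the MS-bar scheme reads (our rendering of the textbook formula, not an equation quoted from the paper):

        V_{\rm eff}(\phi) = V_{\rm tree}(\phi)
            + \sum_i \frac{n_i}{64\pi^2}\, m_i^4(\phi)
              \left[ \ln\frac{m_i^2(\phi)}{\mu^2} - c_i \right]

    where the sum runs over field-dependent masses m_i(phi), n_i counts degrees of freedom (negative for fermions), and c_i = 3/2 (5/6 for gauge bosons). Large quartic couplings enter as m_i^4 ~ lambda^2 phi^4, which is why these corrections can overturn the tree-level stability verdict.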

  4. Radiative corrections to elastic proton-electron scattering measured in coincidence

    NASA Astrophysics Data System (ADS)

    Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.

    2017-05-01

    The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of the interaction. These model-independent radiative corrections arise from the emission of virtual photons and of real soft and hard photons, as well as from vacuum polarization. We analyze an experimental setup in which both final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated, and numerical results are presented.

  5. Communication: An efficient and accurate perturbative correction to initiator full configuration interaction quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Blunt, Nick S.

    2018-06-01

    We present a perturbative correction within initiator full configuration interaction quantum Monte Carlo (i-FCIQMC). In the existing i-FCIQMC algorithm, a significant number of spawned walkers are discarded due to the initiator criteria. Here we show that these discarded walkers have a form that allows the calculation of a second-order Epstein-Nesbet correction, which may be accumulated in a trivial and inexpensive manner yet substantially improves i-FCIQMC results. The correction is applied to the Hubbard model, the uniform electron gas, and molecular systems.
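 
    Schematically, the correction accumulated from the discarded spawns is the second-order Epstein-Nesbet expression (our rendering, with \mathcal{D} denoting the set of determinants rejected by the initiator criterion; not an equation quoted from the paper):

        E^{(2)}_{\rm EN} = \sum_{a \in \mathcal{D}}
            \frac{\left| \langle D_a | \hat{H} | \Psi_0 \rangle \right|^2}
                 {E_0 - \langle D_a | \hat{H} | D_a \rangle}

    where the numerator is built from exactly the spawning amplitude onto D_a that i-FCIQMC would otherwise throw away, so it can be tallied during propagation at negligible extra cost.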

  6. Boomwhackers and End-Pipe Corrections

    NASA Astrophysics Data System (ADS)

    Ruiz, Michael J.

    2014-02-01

    End-pipe corrections seldom come to mind as a suitable topic for an introductory physics lab. Yet, the end-pipe correction formula can be verified in an engaging and inexpensive lab that requires only two supplies: plastic-tube toys called boomwhackers and a meterstick. This article describes a lab activity in which students model data from plastic tubes to arrive at the end-correction formula for an open pipe. Students also learn the basic mathematics behind the musical scale, and come to appreciate the importance of end-pipe physics in the engineering design of toy musical tubes.
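 
    A minimal numerical check of the open-open tube relation the students fit; the 0.61 r end correction per open end and the example dimensions are assumed, illustrative values:

        def fundamental(L, d, v=343.0, k=0.61):
            """Fundamental frequency (Hz) of an open-open tube of length L (m)
            and inner diameter d (m); v is the speed of sound (m/s) and k the
            end-correction factor per open end."""
            r = d / 2
            return v / (2 * (L + 2 * k * r))

        # e.g. an assumed 0.63 m tube with a 35 mm bore:
        print(round(fundamental(0.63, 0.035)))   # ~263 Hz, near middle C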

  7. Experimental Determination of Jet Boundary Corrections for Airfoil Tests in Four Open Wind Tunnel Jets of Different Shapes

    NASA Technical Reports Server (NTRS)

    Knight, Montgomery; Harris, Thomas A

    1931-01-01

    This experimental investigation was conducted primarily for the purpose of obtaining a method of correcting to free air conditions the results of airfoil force tests in four open wind tunnel jets of different shapes. Tests were also made to determine whether the jet boundaries had any appreciable effect on the pitching moments of a complete airplane model. Satisfactory corrections for the effect of the boundaries of the various jets were obtained for all the airfoils tested, the span of the largest being 0.75 of the jet width. The corrections for angle of attack were, in general, larger than those for drag. The boundaries had no appreciable effect on the pitching moments of either the airfoils or the complete airplane model. Increasing turbulence appeared to increase the minimum drag and maximum lift and to decrease the pitching moment.
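 
    The classical form of the correction that such tests calibrate is the Glauert/Prandtl jet-boundary formula; the sketch below applies it with the theoretical delta of about -0.125 for a circular open jet (an assumed value, since the report's purpose is precisely to determine delta for each jet shape):

        import math

        def to_free_air(alpha_deg, CL, CD, S, C, delta=-0.125):
            """S: model wing area; C: jet cross-section area.
            Returns angle of attack (deg) and drag coefficient corrected
            to free-air conditions."""
            dalpha = delta * (S / C) * CL        # radians
            dCD = delta * (S / C) * CL**2
            return alpha_deg + math.degrees(dalpha), CD + dCD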

  8. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order estimates of the forced eccentricity and secular frequency. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
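 
    For reference, the first-order estimates that such corrective factors multiply are commonly written, for the coplanar restricted problem with binary semimajor axis a_B, eccentricity e_B, mass m_B, stellar mass M_* and planet mean motion n (our rendering, not quoted from the paper), as:

        e_{\rm F} = \frac{5}{4}\,\frac{a}{a_{\rm B}}\,\frac{e_{\rm B}}{1-e_{\rm B}^{2}},
        \qquad
        g = \frac{3}{4}\, n\, \frac{m_{\rm B}}{M_{\star}}
            \left(\frac{a}{a_{\rm B}}\right)^{3}
            \left(1-e_{\rm B}^{2}\right)^{-3/2}.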

  9. TeV scale dark matter and electroweak radiative corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-08-15

    Recent anomalies in cosmic-ray data, namely from the PAMELA Collaboration, can be interpreted in terms of TeV-scale decaying/annihilating dark matter. We analyze the impact of radiative corrections coming from the electroweak sector of the standard model on the spectrum of the final products at the interaction point. As an example, we consider virtual one-loop corrections and real gauge boson emission in the case of a very heavy vector boson annihilating into fermions. We find electroweak corrections that are relevant, but not as big as sometimes found in the literature; we relate this mismatch to the issue of gauge invariance. At scales much higher than the symmetry breaking scale, one-loop electroweak effects are so big that eventually higher orders/resummations have to be considered: we advocate the inclusion of these effects in parton shower Monte Carlo models aiming at the description of TeV-scale physics.

  10. Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.

    PubMed

    Zhao, Wenyi; Zhang, Chao

    2008-07-01

    We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE), required for focal-plane-array sensors to obtain clean, enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity-correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and, in the presence of subpixel motion, simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out, in principle, perfect super-resolution by exploiting the available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
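 
    The sketch below isolates one ingredient of registration-based NUC, assuming the hard part (parametric motion estimation and warping to a common scene grid) is already done: given a scene prediction per frame, the per-pixel gain and offset reduce to a pixelwise least-squares line fit. Everything here is illustrative, not the paper's algorithm:

        import numpy as np

        def estimate_gain_offset(frames, scenes):
            """frames, scenes: (K, H, W) arrays; scenes holds the registered
            scene prediction for each frame (e.g. a warped running mean).
            Returns per-pixel gain g and offset b with frames ~ g*scenes + b."""
            s_mean = scenes.mean(0)
            o_mean = frames.mean(0)
            cov = ((scenes - s_mean) * (frames - o_mean)).mean(0)
            var = ((scenes - s_mean) ** 2).mean(0)
            g = cov / np.maximum(var, 1e-12)
            b = o_mean - g * s_mean
            return g, b

        def correct_frame(frame, g, b):
            return (frame - b) / np.maximum(g, 1e-6)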

  11. Corrections for the effects of significant wave height and attitude on Geosat radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Hancock, D. W., III

    1990-01-01

    Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that the Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.

  12. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, not finite-size corrections. We do this by introducing a new numerical algorithm that requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
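 
    The memory-light idea, generating site states only when the growing unfrozen region first queries them, can be sketched as follows. The culling rule used here (a frozen site joins once it has at least m unfrozen neighbours, with no later re-checks) is a bootstrap-percolation-style stand-in, much simpler than the actual Kob-Andersen/Fredrickson-Andersen rules:

        import random
        from collections import deque

        def grow_cluster(rho, m=2, max_sites=10**6, seed=0):
            """Grow the unfrozen region from a single seed site; sites are
            assigned frozen (probability rho) lazily, on first query, so no
            full lattice is ever stored."""
            rng = random.Random(seed)
            state = {(0, 0): True}            # True = intrinsically unfrozen
            unfrozen = {(0, 0)}
            queue = deque([(0, 0)])
            while queue and len(unfrozen) < max_sites:
                x, y = queue.popleft()
                for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if nb not in state:
                        state[nb] = rng.random() > rho
                    if nb not in unfrozen:
                        n_unf = sum((x2, y2) in unfrozen for x2, y2 in
                                    ((nb[0]+1, nb[1]), (nb[0]-1, nb[1]),
                                     (nb[0], nb[1]+1), (nb[0], nb[1]-1)))
                        if state[nb] or n_unf >= m:
                            unfrozen.add(nb)
                            queue.append(nb)
            return len(unfrozen)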

  13. Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements

    NASA Technical Reports Server (NTRS)

    Buehrle, R. D.; Young, C. P., Jr.

    1995-01-01

    This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests when the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows significant bias error in the model attitude measurement can occur and is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.

  14. Sensitivity of atmospheric correction to loading and model of the aerosol

    NASA Astrophysics Data System (ADS)

    Bassani, Cristiana; Braga, Federica; Bresciani, Mariano; Giardino, Claudia; Adamo, Maria; Ananasso, Cristina; Alberotanza, Luigi

    2013-04-01

    The physically-based atmospheric correction requires knowledge of the atmospheric conditions during remote data acquisition [Guanter et al., 2007; Gao et al., 2009; Kotchenova et al., 2009; Bassani et al., 2010]. The propagation of solar radiation in the visible and near-infrared atmospheric window depends on aerosol scattering. The extinction of the solar beam is governed by the aerosol loading, expressed by the aerosol optical thickness @550nm (AOT) [Kaufman et al., 1997; Vermote et al., 1997; Kotchenova et al., 2008; Kokhanovsky et al., 2010], and by the aerosol model. Recently, the atmospheric correction of hyperspectral data has been shown to be sensitive to the micro-physical and optical characteristics of the aerosol [Bassani et al., 2012]. Within the framework of the CLAM-PHYM (Coasts and Lake Assessment and Monitoring by PRISMA HYperspectral Mission) project, funded by the Italian Space Agency (ASI), the role of the aerosol model in the accuracy of the atmospheric correction of hyperspectral images acquired over water targets is investigated. In this work, the results of the atmospheric correction of HICO (Hyperspectral Imager for the Coastal Ocean) images acquired over the Northern Adriatic Sea in the Mediterranean are presented. The atmospheric correction is performed by an algorithm specifically developed for the HICO sensor, based on the equation presented in [Vermote et al., 1997; Bassani et al., 2010] and on the latest generation of the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code [Kotchenova et al., 2008; Vermote et al., 2009]. The sensitivity analysis of the atmospheric correction of HICO data is performed with respect to the aerosol optical and micro-physical properties used to define the aerosol model; in particular, a variable mixture of the four basic components (dust-like, oceanic, water-soluble, and soot) is considered. The water reflectance obtained from the atmospheric correction with a variable aerosol model and fixed aerosol loading is then compared. The results highlight the need to define both aerosol characteristics, loading and model, when simulating the radiative field of the atmosphere for an accurate atmospheric correction of hyperspectral data, improving the accuracy of the retrieved surface reflectance over water, a dark target. In conclusion, the aerosol model plays a crucial role in an accurate physically-based atmospheric correction of hyperspectral data over water. The PRISMA mission provides valuable opportunities to study aerosols and their radiative effects on hyperspectral data.

    Bibliography: Guanter, L.; Estellés, V.; Moreno, J. Spectral calibration and atmospheric correction of ultra-fine spectral and spatial resolution remote sensing data. Application to CASI-1500 data. Remote Sens. Environ. 2007, 109, 54-65. Gao, B.-C.; Montes, M.J.; Davis, C.O.; Goetz, A.F.H. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens. Environ. 2009, 113, S17-S24. Bassani, C.; Cavalli, R.M.; Pignatti, S. Aerosol optical retrieval and surface reflectance from airborne remote sensing data over land. Sensors 2010, 10, 6421-6438. Kaufman, Y.J.; Tanré, D.; Gordon, H.R.; Nakajima, T.; Lenoble, J.; Frouin, R.; Grassl, H.; Herman, B.M.; King, M.; Teillet, P.M. Operational remote sensing of tropospheric aerosol over land from EOS moderate resolution imaging spectroradiometer. J. Geophys. Res. 1997, 102(D14), 17051-17067. Vermote, E.F.; Tanré, D.; Deuzé, J.L.; Herman, M.; Morcrette, J.J. Second simulation of the satellite signal in the solar spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675-686. Kotchenova, S.Y.; Vermote, E.F.; Levy, R.; Lyapustin, A. Radiative transfer codes for atmospheric correction and aerosol retrieval: Intercomparison study. Appl. Optics 2008, 47, 2215-2226. Kokhanovsky, A.A.; Deuzé, J.L.; Diner, D.J.; Dubovik, O.; Ducos, F.; Emde, C.; Garay, M.J.; Grainger, R.G.; Heckel, A.; Herman, M.; Katsev, I.L.; Keller, J.; Levy, R.; North, P.R.J.; Prikhach, A.S.; Rozanov, V.V.; Sayer, A.M.; Ota, Y.; Tanré, D.; Thomas, G.E.; Zege, E.P. The inter-comparison of major satellite aerosol retrieval algorithms using simulated intensity and polarization characteristics of reflected light. Atmos. Meas. Tech. 2010, 3, 909-932. Bassani, C.; Cavalli, R.M.; Antonelli, P. Influence of aerosol and surface reflectance variability on hyperspectral observed radiance. Atmos. Meas. Tech. 2012, 5, 1193-1203. Vermote, E.F.; Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2009, 113, D23.
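 
    Such corrections ultimately invert the standard Lambertian coupling of surface and atmosphere; every atmospheric quantity below (path reflectance, transmittances, spherical albedo) is computed by a 6S-type code for the assumed aerosol model and AOT@550nm, which is exactly where the sensitivity studied above enters. A minimal sketch of the inversion step, with illustrative names:

        def surface_reflectance(rho_toa, rho_path, tg, td, tu, s):
            """rho_toa: top-of-atmosphere reflectance; rho_path: atmospheric
            path reflectance; tg: total gaseous transmittance; td, tu: downward
            and upward scattering transmittances; s: atmospheric spherical
            albedo. Returns the Lambertian surface reflectance."""
            y = (rho_toa / tg - rho_path) / (td * tu)
            return y / (1.0 + s * y)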

  15. 3D magnetotelluric inversion system with static shift correction and theoretical assessment in oil and gas exploration

    NASA Astrophysics Data System (ADS)

    Dong, H.; Kun, Z.; Zhang, L.

    2015-12-01

    This magnetotelluric (MT) system combines static shift correction with 3D inversion. The correction method is based on 3D forward-modeling studies and field tests. The static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, zero-cost computer processing step that avoids additional field work and indoor processing, with good results shown in Figure 1a-e. Figure 1a shows a normal model (I) without any local heterogeneity. Figure 1b shows a static-shifted model (II) with two local heterogeneous bodies (10 and 1000 ohm.m). Figure 1c is the inversion result (A) for the synthetic data generated from model I. Figure 1d is the inversion result (B) for the static-shifted data generated from model II. Figure 1e is the inversion result (C) for the static-shifted data from model II, but with static shift correction. The results show that the correction method is useful. The 3D inversion algorithm improves on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001): we added a frequency-based parallel structure, improved the computational efficiency, reduced the memory footprint, added topographic and marine factors, and added geological and geophysical constraints, so the 3D inversion can run even on a pad device with high efficiency and accuracy. An application example of theoretical assessment in oil and gas exploration is shown in Figure 1f-i. The synthetic geophysical model consists of five layers (from top downwards): shale, limestone, gas, oil, and water-bearing limestone, overlying a basement rock. Figures 1f-g show the 3D model and its central profile. Figure 1h shows the central section of the 3D inversion; the results reproduce the synthetic model with little difference. Figure 1i shows that the seismic waveform reflects the interfaces of every layer overall, but the relative positions of the interfaces in two-way travel time vary, and the interface between limestone and oil at the sides of the section is not imaged. Thus 3D MT can compensate for deficiencies of the seismic results, such as false in-phase axes and multiple waves.
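 
    Outside an inversion, the static shift is usually pictured as a frequency-independent multiplicative factor on apparent resistivity that leaves the impedance phase untouched. A toy stand-alone correction (an illustration, not this system's inversion-embedded method) levels the high-frequency part of a sounding against a local reference:

        import numpy as np

        def static_shift_correct(rho_app, rho_ref, n_hf=5):
            """rho_app, rho_ref: apparent resistivities vs frequency, highest
            frequencies first. Returns the corrected curve and the estimated
            frequency-independent shift factor."""
            shift = np.median(np.log10(rho_app[:n_hf]) - np.log10(rho_ref[:n_hf]))
            return rho_app / 10**shift, 10**shift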

  16. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united-atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
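 
    For orientation, the quantities being compared reduce to textbook relations between solvation free energies and partition/distribution coefficients; the sketch below (assumed forms, not code from the paper) shows the transfer-free-energy route to log P and a simple monoprotic ionization correction from log P to log D:

        import math

        RT = 8.314462618e-3 * 298.15   # kJ/mol at 25 C

        def log_p(dG_water, dG_cyclohexane):
            """Solvation free energies in kJ/mol; positive logP favours
            cyclohexane over water."""
            return (dG_water - dG_cyclohexane) / (RT * math.log(10))

        def log_d(logp, pka, ph=7.4, acid=True):
            """Neutral-species-only correction for a monoprotic acid or base."""
            delta = (ph - pka) if acid else (pka - ph)
            return logp - math.log10(1.0 + 10.0**delta)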

  17. An effective drift correction for dynamical downscaling of decadal global climate predictions

    NASA Astrophysics Data System (ADS)

    Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen

    2018-04-01

    Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem prevails for multivariate issues and, especially, when the output of the global prediction system shall be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.
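 
    A plain-SVD sketch of the multivariate drift removal (the paper's EOF analysis is adapted to a very large data matrix spanning variables and levels; this compact version is an assumption for illustration):

        import numpy as np

        def remove_leading_eof(X):
            """X: (time, space*variables) anomaly-ready matrix. Removes the
            reconstruction of the leading EOF, here identified with the drift."""
            Xm = X - X.mean(0)
            U, s, Vt = np.linalg.svd(Xm, full_matrices=False)
            drift = s[0] * np.outer(U[:, 0], Vt[0])   # leading-EOF reconstruction
            return Xm - drift + X.mean(0)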

  18. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united-atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  19. Robust, open-source removal of systematics in Kepler data

    NASA Astrophysics Data System (ADS)

    Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.

    2017-10-01

    We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for Kepler prime-mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and of injecting any additional noise into the corrected light curves, while keeping astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
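 
    The core detrending step can be sketched as ridge regression onto the co-trending basis vectors, i.e., the MAP estimate under a Gaussian shrinkage prior; ARC2 itself adds discontinuity repair and a fuller Bayesian treatment, so treat this as schematic:

        import numpy as np

        def cotrend(flux, cbvs, lam=1e2):
            """flux: (N,) light curve; cbvs: (N, K) co-trending basis vectors;
            lam: shrinkage strength (larger = weights pulled harder to zero).
            Returns the detrended light curve and the basis-vector weights."""
            B = cbvs
            w = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ flux)
            return flux - B @ w, w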

  20. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    PubMed

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
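 
    A schematic of the interleaved bias-estimation step, with a Gaussian-smoothed ratio of local patch means standing in for the paper's thin-plate-spline interpolation (all parameter values are illustrative assumptions):

        import numpy as np
        from scipy.ndimage import uniform_filter, gaussian_filter

        def estimate_bias(subject, template, patch=15, sigma=30.0):
            """Ratio of local patch means between the subject image and the
            (deformed) template, smoothed into a slowly varying bias field."""
            num = uniform_filter(subject.astype(float), patch)
            den = uniform_filter(template.astype(float), patch)
            ratio = num / np.maximum(den, 1e-6)
            return gaussian_filter(ratio, sigma)

        def correct_intensities(subject, bias):
            return subject / np.maximum(bias, 1e-6)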
